mirror of
https://github.com/gluster/glusterdocs.git
synced 2026-02-05 15:47:01 +01:00
Adding release notes to master
Signed-off-by: shravantc <shravantc@ymail.com>
13
mkdocs.yml
@@ -133,6 +133,18 @@ pages:
    - Distributed Geo Replication: Features/distributed-geo-rep.md
    - libgf Changelog: Features/libgfchangelog.md
    - meta xlator: Features/meta.md
    - Release Notes:
        - index: release-notes/index.md
        - 3.7.1: release-notes/3.7.1.md
        - 3.7.0: release-notes/3.7.0.md
        - geo-rep in 3.7: release-notes/geo-rep-in-3.7.md
        - 3.6.3: release-notes/3.6.3.md
        - 3.6.0: release-notes/3.6.0.md
        - 3.5.4: release-notes/3.5.4.md
        - 3.5.3: release-notes/3.5.3.md
        - 3.5.2: release-notes/3.5.2.md
        - 3.5.1: release-notes/3.5.1.md
        - 3.5.0: release-notes/3.5.0.md
    - Feature Planning:
        - index: Feature Planning/index.md
        - New Feature Template: Feature Planning/Feature Template.md
@@ -147,7 +159,6 @@ pages:
    - stat-xattr-cache: Feature Planning/GlusterFS 4.0/stat-xattr-cache.md
    - Code Generation: Feature Planning/GlusterFS 4.0/code-generation.md
    - Volgen rewrite: Feature Planning/GlusterFS 4.0/volgen-rewrite.md

    - Feature Planning 3.7:
        - index: Feature Planning/GlusterFS 3.7/index.md
        - Small File Performance: Feature Planning/GlusterFS 3.7/Small File Performance.md

149
release-notes/3.5.0.md
Normal file
@@ -0,0 +1,149 @@

## Major Changes and Features

Documentation about major changes and features is also included in the `doc/features/` directory of the GlusterFS repository.

### AFR_CLI_enhancements

AFR reporting via the CLI has been improved. This feature provides a coherent
mechanism to present heal status, information, and the associated logs.
This makes the end user more aware of healing status and provides statistics.

### File_Snapshot

This feature provides the ability to take snapshots of files in GlusterFS.
File snapshots are supported on files of the QCOW2/QED format.

This feature adds better integration with OpenStack Cinder and, in general,
the ability to take snapshots of files (typically VM images).

For more information refer [here](../Features/file-snapshot.md).

### gfid-access

This feature adds a new translator designed to provide direct access
to files in GlusterFS using their GFIDs.

For more information refer [here](../Features/gfid-access.md).

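As a sketch of how this might be used (the server, volume name and GFID below are illustrative placeholders, not taken from the original text), a client mounts the volume with the auxiliary GFID mount option and then addresses a file through the virtual `.gfid` directory:

```shell
# Hypothetical example: mount a volume with the aux-gfid-mount option.
mount -t glusterfs -o aux-gfid-mount server1:/myvol /mnt/myvol

# Access a file directly by its GFID through the virtual .gfid directory.
cat /mnt/myvol/.gfid/11118443-1894-4273-9340-4b212fa1c0e4
```
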
### Prevent NFS restart on Volume change

Earlier, any volume change (volume option, volume start, volume stop, volume
delete, brick add, etc.) required a restart of the NFS server.

With this feature, it is no longer required to restart the NFS server, thereby
providing better usability with no disruption of NFS connections.

### Features/Quota_Scalability

This feature provides support for up to 65536 quota configurations per volume.

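For illustration, each per-directory limit set through the quota CLI is one such quota configuration; the volume and directory names below are placeholders:

```shell
# Hypothetical example: enable quota and add per-directory limits
# (each limit-usage entry counts as one quota configuration).
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /projects/a 10GB
gluster volume quota myvol limit-usage /projects/b 5GB

# List the configured limits and current usage.
gluster volume quota myvol list
```
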
### readdir_ahead

This feature provides read-ahead support for directories to improve sequential
directory read performance.

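The translator is toggled per volume; a minimal sketch with a placeholder volume name:

```shell
# Hypothetical example: enable directory read-ahead on a volume.
gluster volume set myvol performance.readdir-ahead on
```
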
### zerofill

The zerofill feature allows creation of pre-allocated, zeroed-out files on
GlusterFS volumes by offloading the zeroing to the server and/or storage
(storage offloads use SCSI WRITESAME). This achieves quick creation of
pre-allocated, zeroed-out VM disk images using server/storage offloads.

For more information refer [here](../Features/zerofill.md).

### Brick_Failure_Detection

This feature attempts to identify storage/file system failures and disable
the failed brick without disrupting the rest of the node's operation.

It adds a health-checker that periodically checks the status of the
filesystem (which implies checking the functioning of the storage hardware).

For more information refer [here](../Features/brick-failure-detection.md).

### Changelog based distributed geo-replication

New, improved geo-replication which makes use of all the nodes in the master volume.
Unlike the previous version of geo-replication, where all changes were detected and
synced on a single node in the master volume, each node of the master volume now
participates in geo-replication.

Change detection - geo-rep now makes use of the changelog xlator to detect the set
of files which need to be synced. The changelog xlator runs per brick and, when
enabled, records each fop that modifies a file. geo-rep consumes the journals
created by this xlator and syncs the files identified as 'changed' to the slave.

Distributed nature - Each of the nodes takes the responsibility of syncing the data
present on that node. In the case of a replicated volume, one of the replicas will be
'Active'ly syncing the data, while the other one is 'Passive'.

Syncing method - Apart from rsync, there is now a tar+ssh syncing method, which can
be leveraged by workloads with a large number of small files.

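As a sketch of the workflow described above (master/slave names are placeholders), a distributed geo-replication session is created and started with the geo-replication CLI, and the tar+ssh syncing method is selected with the `use_tarssh` option:

```shell
# Hypothetical example: create and start a distributed geo-rep session.
gluster volume geo-replication mastervol slavehost::slavevol create push-pem
gluster volume geo-replication mastervol slavehost::slavevol start

# Switch the syncing method to tar+ssh for small-file heavy workloads.
gluster volume geo-replication mastervol slavehost::slavevol config use_tarssh true
```
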
### Improved block device translator

This feature provides a translator to use logical volumes to store VM images
and expose them as files to QEMU/KVM.

The volume group is represented as a directory and the logical volumes as files.

### Remove brick CLI Change

The remove-brick CLI earlier removed the brick forcefully (without data migration)
when called without any arguments. This mode of 'remove-brick', without any
arguments, has been deprecated.

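With the no-argument forceful mode deprecated, a brick is instead removed with data migration through a start/status/commit sequence; the volume and brick names below are placeholders:

```shell
# Hypothetical example: remove a brick with data migration.
gluster volume remove-brick myvol server1:/bricks/b1 start
gluster volume remove-brick myvol server1:/bricks/b1 status

# Once migration has completed, finalize the removal.
gluster volume remove-brick myvol server1:/bricks/b1 commit
```
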
### Experimental Features

The following features are experimental with this release:

- RDMA-connection manager (RDMA-CM)
- support for the NUFA translator
- disk-encryption
- On-Wire Compression + Decompression [CDC]

## Minor Improvements:

- Old graphs are cleaned up by FUSE clients

- New command "volume status tasks" introduced to track asynchronous tasks like rebalance and remove-brick

- glfs_readdir(), glfs_readdirplus(), glfs_fallocate(), glfs_discard() APIs support added in libgfapi

- Per-client RPC throttling added in the rpc server

- Communication between cli and glusterd happens over a unix domain socket

- Information on connected NFS clients is persistent across NFS restarts

- Hardlink creation failures with SMB addressed

- Non-local clients function with nufa volumes

- Configurable option added to mount.glusterfs to use kernel-readdirp with the fuse client

- AUTH support for exported nfs sub-directories added

### Known Issues:

- The following configuration changes are necessary for qemu and samba
  integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. Edit `/etc/glusterfs/glusterd.vol` to contain this line:
       `option rpc-auth-allow-insecure on`

  After 1), restarting the volume is necessary.
  After 2), restarting glusterd is necessary.

- RDMA connection manager needs IPoIB for connection establishment. More
  details can be found [here](../Features/rdmacm.md).

- For Block Device translator based volumes the open-behind translator on the
  client side needs to be disabled.

- libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to
  hang, as reported [here](http://lists.gnu.org/archive/html/gluster-devel/2014-04/msg00179.html).
  The workaround is NOT to call glfs_fini for error cases encountered before a successful
  glfs_init.

108
release-notes/3.5.1.md
Normal file
@@ -0,0 +1,108 @@

## Release Notes for GlusterFS 3.5.1

This is mostly a bugfix release. The [Release Notes for 3.5.0](./3.5.0.md)
contain a listing of all the new features that were added.

There are two notable changes that are not merely bug fixes or documentation
additions:

1. A new volume option `server.manage-gids` has been added.
   This option should be used when users of a volume are in more than
   approximately 93 groups (Bug [1096425](https://bugzilla.redhat.com/1096425)).
2. Duplicate Request Cache for NFS has now been disabled by default; this may
   reduce performance for certain workloads, but improves the overall stability
   and memory footprint for most users.

### Bugs Fixed:

* [765202](https://bugzilla.redhat.com/765202): lgetxattr called with invalid keys on the bricks
* [833586](https://bugzilla.redhat.com/833586): inodelk hang from marker_rename_release_newp_lock
* [859581](https://bugzilla.redhat.com/859581): self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
* [986429](https://bugzilla.redhat.com/986429): Backupvolfile server option should work internal to GlusterFS framework
* [1039544](https://bugzilla.redhat.com/1039544): [FEAT] "gluster volume heal info" should list the entries that actually required to be healed.
* [1046624](https://bugzilla.redhat.com/1046624): Unable to heal symbolic Links
* [1046853](https://bugzilla.redhat.com/1046853): AFR : For every file self-heal there are warning messages reported in glustershd.log file
* [1063190](https://bugzilla.redhat.com/1063190): Volume was not accessible after server side quorum was met
* [1064096](https://bugzilla.redhat.com/1064096): The old Python Translator code (not Glupy) should be removed
* [1066996](https://bugzilla.redhat.com/1066996): Using sanlock on a gluster mount with replica 3 (quorum-type auto) leads to a split-brain
* [1071191](https://bugzilla.redhat.com/1071191): [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with open(), seek(), write()
* [1078061](https://bugzilla.redhat.com/1078061): Need ability to heal mismatching user extended attributes without any changelogs
* [1078365](https://bugzilla.redhat.com/1078365): New xlators are linked as versioned .so files, creating <xlator>.so.0.0.0
* [1086743](https://bugzilla.redhat.com/1086743): Add documentation for the Feature: RDMA-connection manager (RDMA-CM)
* [1086748](https://bugzilla.redhat.com/1086748): Add documentation for the Feature: AFR CLI enhancements
* [1086749](https://bugzilla.redhat.com/1086749): Add documentation for the Feature: Exposing Volume Capabilities
* [1086750](https://bugzilla.redhat.com/1086750): Add documentation for the Feature: File Snapshots in GlusterFS
* [1086751](https://bugzilla.redhat.com/1086751): Add documentation for the Feature: gfid-access
* [1086752](https://bugzilla.redhat.com/1086752): Add documentation for the Feature: On-Wire Compression/Decompression
* [1086754](https://bugzilla.redhat.com/1086754): Add documentation for the Feature: Quota Scalability
* [1086755](https://bugzilla.redhat.com/1086755): Add documentation for the Feature: readdir-ahead
* [1086756](https://bugzilla.redhat.com/1086756): Add documentation for the Feature: zerofill API for GlusterFS
* [1086758](https://bugzilla.redhat.com/1086758): Add documentation for the Feature: Changelog based parallel geo-replication
* [1086760](https://bugzilla.redhat.com/1086760): Add documentation for the Feature: Write Once Read Many (WORM) volume
* [1086762](https://bugzilla.redhat.com/1086762): Add documentation for the Feature: BD Xlator - Block Device translator
* [1086766](https://bugzilla.redhat.com/1086766): Add documentation for the Feature: Libgfapi
* [1086774](https://bugzilla.redhat.com/1086774): Add documentation for the Feature: Access Control List - Version 3 support for Gluster NFS
* [1086781](https://bugzilla.redhat.com/1086781): Add documentation for the Feature: Eager locking
* [1086782](https://bugzilla.redhat.com/1086782): Add documentation for the Feature: glusterfs and oVirt integration
* [1086783](https://bugzilla.redhat.com/1086783): Add documentation for the Feature: qemu 1.3 - libgfapi integration
* [1088848](https://bugzilla.redhat.com/1088848): Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
* [1089054](https://bugzilla.redhat.com/1089054): gf-error-codes.h is missing from source tarball
* [1089470](https://bugzilla.redhat.com/1089470): SMB: Crash on brick process during compile kernel.
* [1089934](https://bugzilla.redhat.com/1089934): list dir with more than N files results in Input/output error
* [1091340](https://bugzilla.redhat.com/1091340): Doc: Add glfs_fini known issue to release notes 3.5
* [1091392](https://bugzilla.redhat.com/1091392): glusterfs.spec.in: minor/nit changes to sync with Fedora spec
* [1095256](https://bugzilla.redhat.com/1095256): Excessive logging from self-heal daemon, and bricks
* [1095595](https://bugzilla.redhat.com/1095595): Stick to IANA standard while allocating brick ports
* [1095775](https://bugzilla.redhat.com/1095775): Add support in libgfapi to fetch volume info from glusterd.
* [1095971](https://bugzilla.redhat.com/1095971): Stopping/Starting a Gluster volume resets ownership
* [1096040](https://bugzilla.redhat.com/1096040): AFR : self-heal-daemon not clearing the change-logs of all the sources after self-heal
* [1096425](https://bugzilla.redhat.com/1096425): i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
* [1099878](https://bugzilla.redhat.com/1099878): Need support for handle based Ops to fetch/modify extended attributes of a file
* [1101647](https://bugzilla.redhat.com/1101647): gluster volume heal volname statistics heal-count not giving desired output.
* [1102306](https://bugzilla.redhat.com/1102306): license: xlators/features/glupy dual license GPLv2 and LGPLv3+
* [1103413](https://bugzilla.redhat.com/1103413): Failure in gf_log_init reopening stderr
* [1104592](https://bugzilla.redhat.com/1104592): heal info may give Success instead of transport end point not connected when a brick is down.
* [1104915](https://bugzilla.redhat.com/1104915): glusterfsd crashes while doing stress tests
* [1104919](https://bugzilla.redhat.com/1104919): Fix memory leaks in gfid-access xlator.
* [1104959](https://bugzilla.redhat.com/1104959): Dist-geo-rep : some of the files not accessible on slave after the geo-rep sync from master to slave.
* [1105188](https://bugzilla.redhat.com/1105188): Two instances each, of brick processes, glusterfs-nfs and quotad seen after glusterd restart
* [1105524](https://bugzilla.redhat.com/1105524): Disable nfs.drc by default
* [1107937](https://bugzilla.redhat.com/1107937): quota-anon-fd-nfs.t fails spuriously
* [1109832](https://bugzilla.redhat.com/1109832): I/O fails for glusterfs 3.4 AFR clients accessing servers upgraded to glusterfs 3.5
* [1110777](https://bugzilla.redhat.com/1110777): glusterfsd OOM - using all memory when quota is enabled

### Known Issues:

- The following configuration changes are necessary for qemu and samba
  integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary

       ~~~
       gluster volume stop <volname>
       gluster volume start <volname>
       ~~~

    3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

       ~~~
       option rpc-auth-allow-insecure on
       ~~~

    4. restarting glusterd is necessary

       ~~~
       service glusterd restart
       ~~~

  More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt) page.

- For Block Device translator based volumes the open-behind translator on the
  client side needs to be disabled.

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will cause the client to
  hang, as [reported by QEMU developers](https://bugs.launchpad.net/bugs/1308542).
  The workaround is NOT to call `glfs_fini` for error cases encountered before a successful
  `glfs_init`. Follow [Bug 1091335](https://bugzilla.redhat.com/1091335) to get informed when a
  release is made available that contains a final fix.

- After enabling `server.manage-gids`, the volume needs to be stopped and
  started again to have the option enabled in the brick processes:

  ~~~
  gluster volume stop <volname>
  gluster volume start <volname>
  ~~~

68
release-notes/3.5.2.md
Normal file
@@ -0,0 +1,68 @@

## Release Notes for GlusterFS 3.5.2

This is mostly a bugfix release. The [Release Notes for 3.5.0](./3.5.0.md) and [3.5.1](./3.5.1.md) contain a listing of all the new features that were added and bugs fixed.

### Bugs Fixed:

- [1096020](https://bugzilla.redhat.com/1096020): NFS server crashes in _socket_read_vectored_request
- [1100050](https://bugzilla.redhat.com/1100050): Can't write to quota enable folder
- [1103050](https://bugzilla.redhat.com/1103050): nfs: reset command does not alter the result for nfs options earlier set
- [1105891](https://bugzilla.redhat.com/1105891): features/gfid-access: stat on .gfid virtual directory return EINVAL
- [1111454](https://bugzilla.redhat.com/1111454): creating symlinks generates errors on stripe volume
- [1112111](https://bugzilla.redhat.com/1112111): Self-heal errors with "afr crawl failed for child 0 with ret -1" while performing rolling upgrade.
- [1112348](https://bugzilla.redhat.com/1112348): [AFR] I/O fails when one of the replica nodes go down
- [1112659](https://bugzilla.redhat.com/1112659): Fix inode leaks in gfid-access xlator
- [1112980](https://bugzilla.redhat.com/1112980): NFS subdir authentication doesn't correctly handle multi-(homed,protocol,etc) network addresses
- [1113007](https://bugzilla.redhat.com/1113007): nfs-utils should be installed as dependency while installing glusterfs-server
- [1113403](https://bugzilla.redhat.com/1113403): Excessive logging in quotad.log of the kind 'null client'
- [1113749](https://bugzilla.redhat.com/1113749): client_t clienttable cliententries are never expanded when all entries are used
- [1113894](https://bugzilla.redhat.com/1113894): AFR : self-heal of few files not happening when a AWS EC2 Instance is back online after a restart
- [1113959](https://bugzilla.redhat.com/1113959): Spec %post server does not wait for the old glusterd to exit
- [1114501](https://bugzilla.redhat.com/1114501): Dist-geo-rep : deletion of files on master, geo-rep fails to propagate to slaves.
- [1115369](https://bugzilla.redhat.com/1115369): Allow the usage of the wildcard character '*' to the options "nfs.rpc-auth-allow" and "nfs.rpc-auth-reject"
- [1115950](https://bugzilla.redhat.com/1115950): glfsheal: Improve the way in which we check the presence of replica volumes
- [1116672](https://bugzilla.redhat.com/1116672): Resource cleanup doesn't happen for clients on servers after disconnect
- [1116997](https://bugzilla.redhat.com/1116997): mounting a volume over NFS (TCP) with MOUNT over UDP fails
- [1117241](https://bugzilla.redhat.com/1117241): backport 'gluster volume status --xml' issues
- [1120151](https://bugzilla.redhat.com/1120151): Glustershd memory usage too high
- [1124728](https://bugzilla.redhat.com/1124728): SMB: CIFS mount fails with the latest glusterfs rpm's

### Known Issues:

- The following configuration changes are necessary for 'qemu' and 'samba vfs
  plugin' integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary

       ~~~
       gluster volume stop <volname>
       gluster volume start <volname>
       ~~~

    3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

       ~~~
       option rpc-auth-allow-insecure on
       ~~~

    4. restarting glusterd is necessary

       ~~~
       service glusterd restart
       ~~~

  More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt) page.

- For Block Device translator based volumes the open-behind translator on the
  client side needs to be disabled:

  ~~~
  gluster volume set <volname> performance.open-behind disabled
  ~~~

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will cause the client to
  hang, as reported [here](http://lists.gnu.org/archive/html/gluster-devel/2014-04/msg00179.html).
  The workaround is NOT to call `glfs_fini` for error cases encountered before a successful
  `glfs_init`.

- If the `/var/run/gluster` directory does not exist, enabling quota will likely fail ([Bug 1117888](https://bugzilla.redhat.com/show_bug.cgi?id=1117888)).

81
release-notes/3.5.3.md
Normal file
@@ -0,0 +1,81 @@

## Release Notes for GlusterFS 3.5.3

This is a bugfix release. The [Release Notes for 3.5.0](./3.5.0.md),
[3.5.1](./3.5.1.md) and [3.5.2](./3.5.2.md) contain a listing of all the new
features that were added and bugs fixed in the GlusterFS 3.5 stable release.

### Bugs Fixed:

- [1081016](https://bugzilla.redhat.com/1081016): glusterd needs xfsprogs and e2fsprogs packages
- [1100204](https://bugzilla.redhat.com/1100204): brick failure detection does not work for ext4 filesystems
- [1126801](https://bugzilla.redhat.com/1126801): glusterfs logrotate config file pollutes global config
- [1129527](https://bugzilla.redhat.com/1129527): DHT :- data loss - file is missing on renaming same file from multiple client at same time
- [1129541](https://bugzilla.redhat.com/1129541): [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists"
- [1132391](https://bugzilla.redhat.com/1132391): NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
- [1133949](https://bugzilla.redhat.com/1133949): Minor typo in afr logging
- [1136221](https://bugzilla.redhat.com/1136221): The memories are exhausted quickly when handle the message which has multi fragments in a single record
- [1136835](https://bugzilla.redhat.com/1136835): crash on fsync
- [1138922](https://bugzilla.redhat.com/1138922): DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
- [1139103](https://bugzilla.redhat.com/1139103): DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
- [1139170](https://bugzilla.redhat.com/1139170): DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
- [1139245](https://bugzilla.redhat.com/1139245): vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
- [1140338](https://bugzilla.redhat.com/1140338): rebalance is not resulting in the hash layout changes being available to nfs client
- [1140348](https://bugzilla.redhat.com/1140348): Renaming file while rebalance is in progress causes data loss
- [1140549](https://bugzilla.redhat.com/1140549): DHT: Rebalance process crash after add-brick and `rebalance start' operation
- [1140556](https://bugzilla.redhat.com/1140556): Core: client crash while doing rename operations on the mount
- [1141558](https://bugzilla.redhat.com/1141558): AFR : "gluster volume heal <volume_name> info" prints some random characters
- [1141733](https://bugzilla.redhat.com/1141733): data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back
- [1142052](https://bugzilla.redhat.com/1142052): Very high memory usage during rebalance
- [1142614](https://bugzilla.redhat.com/1142614): files with open fd's getting into split-brain when bricks goes offline and comes back online
- [1144315](https://bugzilla.redhat.com/1144315): core: all brick processes crash when quota is enabled
- [1145000](https://bugzilla.redhat.com/1145000): Spec %post server does not wait for the old glusterd to exit
- [1147156](https://bugzilla.redhat.com/1147156): AFR client segmentation fault in afr_priv_destroy
- [1147243](https://bugzilla.redhat.com/1147243): nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"
- [1149857](https://bugzilla.redhat.com/1149857): Option transport.socket.bind-address ignored
- [1153626](https://bugzilla.redhat.com/1153626): Sizeof bug for allocation of memory in afr_lookup
- [1153629](https://bugzilla.redhat.com/1153629): AFR : excessive logging of "Non blocking entrylks failed" in glfsheal log file.
- [1153900](https://bugzilla.redhat.com/1153900): Enabling Quota on existing data won't create pgfid xattrs
- [1153904](https://bugzilla.redhat.com/1153904): self heal info logs are filled with messages reporting ENOENT while self-heal is going on
- [1155073](https://bugzilla.redhat.com/1155073): Excessive logging in the self-heal daemon after a replace-brick
- [1157661](https://bugzilla.redhat.com/1157661): GlusterFS allows insecure SSL modes

### Known Issues:

- The following configuration changes are necessary for 'qemu' and 'samba vfs
  plugin' integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary

       ~~~
       gluster volume stop <volname>
       gluster volume start <volname>
       ~~~

    3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

       ~~~
       option rpc-auth-allow-insecure on
       ~~~

    4. restarting glusterd is necessary

       ~~~
       service glusterd restart
       ~~~

  More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt) page.

- For Block Device translator based volumes the open-behind translator on the
  client side needs to be disabled:

  ~~~
  gluster volume set <volname> performance.open-behind disabled
  ~~~

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will cause the client to
  hang, as reported [here](http://lists.gnu.org/archive/html/gluster-devel/2014-04/msg00179.html).
  The workaround is NOT to call `glfs_fini` for error cases encountered before a successful
  `glfs_init`. This is being tracked in [Bug 1134050](https://bugzilla.redhat.com/1134050) for
  glusterfs-3.5 and [Bug 1093594](https://bugzilla.redhat.com/1093594) for mainline.

- If the `/var/run/gluster` directory does not exist, enabling quota will likely
  fail ([Bug 1117888](https://bugzilla.redhat.com/show_bug.cgi?id=1117888)).

76
release-notes/3.5.4.md
Normal file
@@ -0,0 +1,76 @@

## Release Notes for GlusterFS 3.5.4

This is a bugfix release. The [Release Notes for 3.5.0](./3.5.0.md),
[3.5.1](./3.5.1.md), [3.5.2](./3.5.2.md) and [3.5.3](./3.5.3.md) contain a listing of
all the new features that were added and bugs fixed in the GlusterFS 3.5 stable
release.

### Bugs Fixed:

- [1092037](https://bugzilla.redhat.com/1092037): Issues reported by Cppcheck static analysis tool
- [1101138](https://bugzilla.redhat.com/1101138): meta-data split-brain prevents entry/data self-heal of dir/file respectively
- [1115197](https://bugzilla.redhat.com/1115197): Directory quota does not apply on it's sub-directories
- [1159968](https://bugzilla.redhat.com/1159968): glusterfs.spec.in: deprecate *.logrotate files in dist-git in favor of the upstream logrotate files
- [1160711](https://bugzilla.redhat.com/1160711): libgfapi: use versioned symbols in libgfapi.so for compatibility
- [1161102](https://bugzilla.redhat.com/1161102): self heal info logs are filled up with messages reporting split-brain
- [1162150](https://bugzilla.redhat.com/1162150): AFR gives EROFS when fop fails on all subvolumes when client-quorum is enabled
- [1162226](https://bugzilla.redhat.com/1162226): bulk remove xattr should not fail if removexattr fails with ENOATTR/ENODATA
- [1162230](https://bugzilla.redhat.com/1162230): quota xattrs are exposed in lookup and getxattr
- [1162767](https://bugzilla.redhat.com/1162767): DHT: Rebalance- Rebalance process crash after remove-brick
- [1166275](https://bugzilla.redhat.com/1166275): Directory fd leaks in index translator
- [1168173](https://bugzilla.redhat.com/1168173): Regression tests fail in quota-anon-fs-nfs.t
- [1173515](https://bugzilla.redhat.com/1173515): [HC] - mount.glusterfs fails to check return of mount command.
- [1174250](https://bugzilla.redhat.com/1174250): Glusterfs outputs a lot of warnings and errors when quota is enabled
- [1177339](https://bugzilla.redhat.com/1177339): entry self-heal in 3.5 and 3.6 are not compatible
- [1177928](https://bugzilla.redhat.com/1177928): Directories not visible anymore after add-brick, new brick dirs not part of old bricks
- [1184528](https://bugzilla.redhat.com/1184528): Some newly created folders have root ownership although created by unprivileged user
- [1186121](https://bugzilla.redhat.com/1186121): tar on a gluster directory gives message "file changed as we read it" even though no updates to file in progress
- [1190633](https://bugzilla.redhat.com/1190633): self-heal-algorithm with option "full" doesn't heal sparse files correctly
- [1191006](https://bugzilla.redhat.com/1191006): Building argp-standalone breaks nightly builds on Fedora Rawhide
- [1192832](https://bugzilla.redhat.com/1192832): log files get flooded when removexattr() can't find a specified key or value
- [1200764](https://bugzilla.redhat.com/1200764): [AFR] Core dump and crash observed during disk replacement case
- [1202675](https://bugzilla.redhat.com/1202675): Perf: readdirp in replicated volumes causes performance degrade
- [1211841](https://bugzilla.redhat.com/1211841): glusterfs-api.pc versioning breaks QEMU
- [1222150](https://bugzilla.redhat.com/1222150): readdirp return 64bits inodes even if enable-ino32 is set

### Known Issues:

- The following configuration changes are necessary for 'qemu' and 'samba vfs
  plugin' integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary

        ~~~
        gluster volume stop <volname>
        gluster volume start <volname>
        ~~~

    3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

        ~~~
        option rpc-auth-allow-insecure on
        ~~~

    4. restarting glusterd is necessary

        ~~~
        service glusterd restart
        ~~~

  More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt) page.
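
Taken together, the four steps above amount to one short admin session; the following is only a sketch (the volume name is a placeholder, and the `glusterd.vol` edit is left as a manual step because the option must be placed inside the volume management block):

```shell
# Sketch of the four steps above as one session (run as root).
gluster volume set <volname> server.allow-insecure on
gluster volume stop <volname>
gluster volume start <volname>
# Manually add this line inside the management block of
# /etc/glusterfs/glusterd.vol before restarting glusterd:
#   option rpc-auth-allow-insecure on
service glusterd restart
```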

- For Block Device translator based volumes open-behind translator at the
  client side needs to be disabled.

        gluster volume set <volname> performance.open-behind disable

- libgfapi clients calling `glfs_fini` before a successful `glfs_init` will cause the client to
  hang as reported [here](http://lists.gnu.org/archive/html/gluster-devel/2014-04/msg00179.html).
  The workaround is NOT to call `glfs_fini` for error cases encountered before a successful
  `glfs_init`. This is being tracked in [Bug 1134050](https://bugzilla.redhat.com/1134050) for
  glusterfs-3.5 and [Bug 1093594](https://bugzilla.redhat.com/1093594) for mainline.

- If the `/var/run/gluster` directory does not exist, enabling quota will likely
  fail ([Bug 1117888](https://bugzilla.redhat.com/show_bug.cgi?id=1117888)).
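
If the directory is simply missing, a plausible workaround (our assumption from the bug report, not an official fix) is to create it before enabling quota:

```shell
# Create the runtime directory gluster expects, then enable quota.
mkdir -p /var/run/gluster
gluster volume quota <volname> enable
```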
132
release-notes/3.6.0.md
Normal file
@@ -0,0 +1,132 @@

## Major Changes and Features

Documentation about major changes and features is also included in the `doc/features/` directory of GlusterFS repository.

### Volume Snapshot

Volume snapshot provides a point-in-time copy of a GlusterFS volume. The snapshot is an online operation and hence filesystem data continues to be available for the clients while the snapshot is being taken.

For more information refer [here](../Feature Planning/GlusterFS 3.6/Gluster Volume Snapshot.md).

### User Serviceable Snapshots

User Serviceable Snapshots provides the ability for users to access snapshots of GlusterFS volumes without administrative intervention.

For more information refer [here](../Feature Planning/GlusterFS 3.6/Gluster User Serviceable Snapshots.md).

### Erasure Coding

The new disperse translator provides the ability to perform erasure coding across nodes.

For more information refer [here](../Feature Planning/GlusterFS 3.6/disperse.md).

### Granular locking support for management operations

Glusterd now holds a volume lock to support parallel management operations on different volumes.

### Journaling enhancements (changelog xlator)

Introduction of a history API to consume journal records persisted by the changelog translator. With this API, it is no longer required to perform an expensive filesystem crawl to identify changes. Geo-replication makes use of this (on [re]start), thereby optimizing remote replication for purges, hardlinks, etc.

### Better Support for bricks with heterogeneous sizes

Prior to 3.6, bricks with heterogeneous sizes were treated as equal regardless of size, and would have been assigned an equal share of files. From 3.6, assignment of files to bricks will take into account the sizes of the bricks.

### Improved SSL support

GlusterFS 3.6 provides better support to enable SSL on both management and data connections. This feature is currently being consumed by the GlusterFS native driver in OpenStack Manila.

### Better peer identification

GlusterFS 3.6 improves peer identification. GlusterD will no longer complain when a mixture of FQDNs, shortnames and IP addresses is used. Changes done for this improvement have also laid down a base for improving multi-network support in GlusterFS.

### Meta translator

The meta translator provides a virtual interface for viewing the internal state of translators.

### Improved synchronous replication support (AFRv2)

The replication translator (AFR) in GlusterFS 3.6 has undergone a complete rewrite (http://review.gluster.org/#/c/6010/) and is referred to as AFRv2.

From a user point of view, there is no change in the replication behaviour, but there are some caveats to be noted from an admin point of view:

- Lookups do not trigger meta-data and data self-heals anymore. They only trigger entry self-heals. Data and meta-data are healed by the self-heal daemon only.

- Bricks in a replica set do not mark any pending changelog extended attributes for themselves during pre-op or post-op. They only mark them for the other bricks in the replica set.

    For example, in a replica 2 volume, `trusted.afr.<volname>-client-0` for brick-0 and `trusted.afr.<volname>-client-1` for brick-1 will always be `0x000000000000000000000000`.

- If the post-op changelog update does not complete successfully on a brick, a `trusted.afr.dirty` extended attribute is set on that brick.
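
As an illustration of the caveats above, the AFR changelog extended attributes can be inspected directly on a brick with `getfattr` (the brick path below is a made-up placeholder; run on the brick server as root):

```shell
# Dump the trusted.afr changelog xattrs of a file's copy on a brick,
# in hex, to see pending/dirty markers.
getfattr -d -m trusted.afr -e hex /bricks/brick0/path/to/file
```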

### Barrier translator

The barrier translator allows file operations to be temporarily 'paused' on GlusterFS bricks, which is needed for performing consistent snapshots of a GlusterFS volume.

For more information, see [here](../Feature Planning/GlusterFS 3.6/Server-side Barrier feature.md).

### Remove brick moves data by default

Prior to 3.6, the `volume remove-brick <volname>` CLI would remove the brick from the volume without performing any data migration. Now the default behavior has been changed to perform data migration when this command is issued. Removing a brick without data migration can now be performed through the `volume remove-brick <volname> force` interface.

### Experimental Features

The following features are experimental with this release:

- support for rdma volumes.
- support for NUFA translator.
- disk-encryption
- On-Wire Compression + Decompression [CDC]

### Porting Status

- NetBSD and FreeBSD support is experimental, but regression tests suggest that it is close to being fully supported. Please make sure you use the latest NetBSD code from -current or netbsd-7 branches.

- OSX support is in an alpha state. More testing will help in maturing this support.

## Minor Improvements:

- Introduction of `server.anonuid` and `server.anongid` options for root squashing

- Root squashing doesn't happen for clients in the trusted storage pool

- Memory accounting of glusterfs processes has been enabled by default

- The Gluster/NFS server now has support for setting access permissions on volumes with wildcard IP-addresses and IP-address/subnet (CIDR notation). More details and examples are in the [commit message](http://review.gluster.org/7485).

- More preparation for better integration with the [nfs-ganesha](http://nfs-ganesha.github.com/) user-space NFS-server. The changes are mostly related to the handle-based functions in `libgfapi.so`.

- A new logging framework that can suppress repetitive log messages and provide a dictionary of messages has been added. A few translators have now been integrated with the framework. More translators are expected to integrate with it in upcoming minor & major releases.

### Known Issues:

- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

    1. `gluster volume set <volname> server.allow-insecure on`

    2. Edit `/etc/glusterfs/glusterd.vol` to contain this line:
       `option rpc-auth-allow-insecure on`

    Post 1, restarting the volume would be necessary:

        # gluster volume stop <volname>
        # gluster volume start <volname>

    Post 2, restarting glusterd would be necessary:

        # service glusterd restart

- For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled.

- Renames happening on a file that is being migrated during rebalance will fail.

- Dispersed volumes do not work with the self-heal daemon. Self-healing is only activated when a damaged file or directory is accessed. Forcing a full self-heal, or replacing a brick, requires traversing the file system from a mount point. This is the recommended command to do so:

        find <mount> -d -exec getfattr -h -n test {} \;

- Quota on dispersed volumes is not correctly computed, allowing users to store more data than specified. A workaround for this problem is to define a smaller quota based on this formula:

        Q' = Q / (N - R)

    Where Q is the desired quota value, Q' is the new quota value to use, N is the number of bricks per disperse set, and R is the redundancy.
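
A quick worked example of the formula above (the numbers are made up for illustration): with a desired quota of 400 GB on a 4+2 disperse set (N = 6 bricks, R = 2 redundancy), the value to actually configure would be 400 / (6 - 2) = 100 GB.

```shell
# Hypothetical values: desired quota Q (GB), N bricks per disperse set,
# R redundancy bricks.
Q=400
N=6
R=2
QPRIME=$(( Q / (N - R) ))
echo "$QPRIME"   # prints 100
```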

### Upgrading to 3.6.X

Before upgrading to the 3.6 version of gluster from 3.4.x or 3.5.x, please take a look at the following link:
[Upgrade Gluster to 3.6](../Upgrade-Guide/upgrade_to_3.6.md)

81
release-notes/3.6.3.md
Normal file
@@ -0,0 +1,81 @@

## Release Notes for GlusterFS 3.6.3

This is a bugfix release. The [Release Notes for 3.6.0](./3.6.0.md) contain a listing of
all the new features that were added and bugs fixed in the GlusterFS 3.6 stable
release.

### Bugs Fixed:

- [1187526](https://bugzilla.redhat.com/1187526): Disperse volume mounted through NFS doesn't list any files/directories
- [1188471](https://bugzilla.redhat.com/1188471): When the volume is in stopped state/all the bricks are down mount of the volume hangs
- [1201484](https://bugzilla.redhat.com/1201484): glusterfs-3.6.2 fails to build on Ubuntu Precise: 'RDMA_OPTION_ID_REUSEADDR' undeclared
- [1202212](https://bugzilla.redhat.com/1202212): Performance enhancement for RDMA
- [1189023](https://bugzilla.redhat.com/1189023): Directories not visible anymore after add-brick, new brick dirs not part of old bricks
- [1202673](https://bugzilla.redhat.com/1202673): Perf: readdirp in replicated volumes causes performance degrade
- [1203081](https://bugzilla.redhat.com/1203081): Entries in indices/xattrop directory not removed appropriately
- [1203648](https://bugzilla.redhat.com/1203648): Quota: Build ancestry in the lookup
- [1199936](https://bugzilla.redhat.com/1199936): readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed when NFS is disabled
- [1200297](https://bugzilla.redhat.com/1200297): cli crashes when listing quota limits with xml output
- [1201622](https://bugzilla.redhat.com/1201622): Convert quota size from n-to-h order before using it
- [1194141](https://bugzilla.redhat.com/1194141): AFR : failure in self-heald.t
- [1201624](https://bugzilla.redhat.com/1201624): Spurious failure of tests/bugs/quota/bug-1038598.t
- [1194306](https://bugzilla.redhat.com/1194306): Do not count files which did not need index heal in the first place as successfully healed
- [1200258](https://bugzilla.redhat.com/1200258): Quota: features.quota-deem-statfs is "on" even after disabling quota.
- [1165938](https://bugzilla.redhat.com/1165938): Fix regression test spurious failures
- [1199577](https://bugzilla.redhat.com/1199577): mount.glusterfs uses /dev/stderr and fails if the device does not exist
- [1197598](https://bugzilla.redhat.com/1197598): NFS logs are filled with system.posix_acl_access messages
- [1188066](https://bugzilla.redhat.com/1188066): logging improvements in marker translator
- [1191537](https://bugzilla.redhat.com/1191537): With afrv2 + ext4, lookups on directories with large offsets could result in duplicate/missing entries
- [1165129](https://bugzilla.redhat.com/1165129): libgfapi: use versioned symbols in libgfapi.so for compatibility
- [1179136](https://bugzilla.redhat.com/1179136): glusterd: Gluster rebalance status returns failure
- [1176756](https://bugzilla.redhat.com/1176756): glusterd: remote locking failure when multiple synctask transactions are run
- [1188064](https://bugzilla.redhat.com/1188064): log files get flooded when removexattr() can't find a specified key or value
- [1192522](https://bugzilla.redhat.com/1192522): index heal doesn't continue crawl on self-heal failure
- [1193970](https://bugzilla.redhat.com/1193970): Fix spurious ssl-authz.t regression failure (backport)
- [1138897](https://bugzilla.redhat.com/1138897): NetBSD port
- [1184527](https://bugzilla.redhat.com/1184527): Some newly created folders have root ownership although created by unprivileged user
- [1181977](https://bugzilla.redhat.com/1181977): gluster vol clear-locks vol-name path kind all inode return IO error in a disperse volume
- [1159471](https://bugzilla.redhat.com/1159471): rename operation leads to core dump
- [1173528](https://bugzilla.redhat.com/1173528): Change in volume heal info command output
- [1186119](https://bugzilla.redhat.com/1186119): tar on a gluster directory gives message "file changed as we read it" even though no updates to file in progress
- [1183716](https://bugzilla.redhat.com/1183716): Force replace-brick lead to the persistent write(use dd) return Input/output error
- [1178590](https://bugzilla.redhat.com/1178590): Enable quota(default) leads to heal directory's xattr failed.
- [1182490](https://bugzilla.redhat.com/1182490): Internal ec xattrs are allowed to be modified
- [1187547](https://bugzilla.redhat.com/1187547): self-heal-algorithm with option "full" doesn't heal sparse files correctly
- [1174170](https://bugzilla.redhat.com/1174170): Glusterfs outputs a lot of warnings and errors when quota is enabled
- [1212684](https://bugzilla.redhat.com/1212684): GlusterD segfaults when started with management SSL

### Known Issues:

- The following configuration changes are necessary for 'qemu' and 'samba vfs
  plugin' integration with libgfapi to work seamlessly:

    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary

        ~~~
        gluster volume stop <volname>
        gluster volume start <volname>
        ~~~

    3. Edit `/etc/glusterfs/glusterd.vol` to contain this line:

        ~~~
        option rpc-auth-allow-insecure on
        ~~~

    4. restarting glusterd is necessary

        ~~~
        service glusterd restart
        ~~~

  More details are also documented in the Gluster Wiki on the [Libgfapi with qemu libvirt](http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt) page.

- For Block Device translator based volumes open-behind translator at the
  client side needs to be disabled.

        gluster volume set <volname> performance.open-behind disable

166
release-notes/3.7.0.md
Normal file
@@ -0,0 +1,166 @@

## Release Notes for GlusterFS 3.7.0

## Major Changes and Features

Documentation about major changes and features is included in the [`doc/features/` directory](https://github.com/gluster/glusterdocs/tree/release-3.7.0-1/doc/Features) of GlusterFS repository.

### Bitrot Detection

Bitrot detection is a technique used to identify an “insidious” type of disk error where data is silently corrupted with no indication from the disk to the
storage software layer that an error has occurred. When bitrot detection is enabled on a volume, gluster performs signing of all files/objects in the volume and scrubs data periodically for signature verification. All anomalies observed will be noted in log files.

For more information, refer [here](../Feature Planning/GlusterFS 3.7/BitRot.md).

### Multi threaded epoll for performance improvements

Gluster 3.7 introduces multiple threads to dequeue and process more requests from epoll queues. This improves performance by processing more I/O requests. Workloads that involve read/write operations on a lot of small files can benefit from this enhancement.

For more information refer [here](../Feature Planning/GlusterFS 3.7/Small File Performance.md).
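
The number of epoll threads is tunable per volume; a hedged sketch (option names and values are assumptions based on the 3.7 multi-threaded epoll work, so verify them against your release):

```shell
# Raise the number of event (epoll) threads on clients and bricks.
# The value 4 is only illustrative; tune per workload.
gluster volume set <volname> client.event-threads 4
gluster volume set <volname> server.event-threads 4
```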

### Volume Tiering [Experimental]

Policy based tiering for placement of files. This feature will serve as a foundational piece for building support for data classification.

For more information refer [here](../Feature Planning/GlusterFS 3.7/Data Classification.md).

Volume Tiering is marked as an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.

### Trashcan

This feature will enable administrators to temporarily store deleted files from Gluster volumes for a specified time period.

For more information refer [here](../Feature Planning/GlusterFS 3.7/Trash.md).

### Efficient Object Count and Inode Quota Support

This improvement enables an easy mechanism to retrieve the number of objects per directory or volume. The count of objects/files within a directory hierarchy is stored as an extended attribute of the directory, which can be queried to retrieve the count.

For more information refer [here](../Feature Planning/GlusterFS 3.7/Object Count.md).

This feature has been utilized to add support for inode quotas.

For more details about inode quotas, refer [here](../Features/quota-object-count.md).

### Pro-active Self healing for Erasure Coding

Gluster 3.7 adds pro-active self healing support for erasure coded volumes.

### Exports and Netgroups Authentication for NFS

This feature adds Linux-style exports & netgroups authentication to the native NFS server. This enables administrators to restrict access to specific clients & netgroups for volume/sub-directory NFSv3 exports.

For more information refer [here](../Feature Planning/GlusterFS 3.7/Exports and Netgroups Authentication.md).

### GlusterFind

GlusterFind is a new tool that provides a mechanism to monitor data events within a volume. Detection of events like modified files is made easier without having to traverse the entire volume.

For more information refer [here](../GlusterFS Tools/glusterfind.md).

### Rebalance Performance Improvements

Rebalance and remove brick operations in Gluster get a performance boost by speeding up identification of files needing movement and a multi-threaded mechanism to move all such files.

For more information refer [here](../Feature Planning/GlusterFS 3.7/Improve Rebalance Performance.md).

### NFSv4 and pNFS support

Gluster 3.7 supports export of volumes through NFSv4, NFSv4.1 and pNFS. This support is enabled via NFS Ganesha. Infrastructure changes done in Gluster 3.7 to support this feature include:

- Addition of upcall infrastructure for cache invalidation.
- Support for lease locks and delegations.
- Support for enabling Ganesha through Gluster CLI.
- Corosync and pacemaker based implementation providing resource monitoring and failover to accomplish NFS HA.

For more information refer to the links below:

- [NFS Ganesha Integration](../Features/glusterfs_nfs-ganesha_integration.md)
- [Upcall Infrastructure](../Features/upcall.md)
- [Gluster CLI for NFS Ganesha](../Feature Planning/GlusterFS 3.7/Gluster CLI for NFS Ganesha.md)
- [High Availability for NFS Ganesha](../Feature Planning/GlusterFS 3.7/HA for Ganesha.md)
- [pNFS support for Gluster](../Features/mount_gluster_volume_using_pnfs.md)

pNFS support for Gluster volumes and NFSv4 delegations are in beta for this release. Infrastructure changes to support lease locks and NFSv4 delegations are targeted for a 3.7.x minor release.

### Snapshot Scheduling

With this enhancement, administrators can schedule volume snapshots.

For more information, see [here](../Feature Planning/GlusterFS 3.7/Scheduling of Snapshot.md).

### Snapshot Cloning

Volume snapshots can now be cloned to create a new writeable volume.

For more information, see [here](../Feature Planning/GlusterFS 3.7/Clone of Snapshot.md).
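
For illustration, cloning typically looks like the following sketch (snapshot, clone, and volume names are placeholders; the clone is created in the stopped state, hence the final start):

```shell
# Take a snapshot, clone it into a new writeable volume, and start it.
gluster snapshot create snap1 <volname>
gluster snapshot clone clone-vol snap1
gluster volume start clone-vol
```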

### Sharding [Experimental]

Sharding addresses the problem of fragmentation of space within a volume. This feature adds support for files that are larger than the size of an individual brick. Sharding works by chunking files into blobs of a configurable size.

For more information, see [here](../Feature Planning/GlusterFS 3.7/Sharding xlator.md).

Sharding is an experimental feature for this release. It is expected to be fully supported in a 3.7.x minor release.
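
A sketch of enabling the feature (option names and the 64MB block size are assumptions based on the 3.7 shard xlator; verify against your release before use):

```shell
# Enable sharding on a volume and choose the shard (chunk) size.
gluster volume set <volname> features.shard on
gluster volume set <volname> features.shard-block-size 64MB
```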

### RCU in glusterd

Thread synchronization and critical section access has been improved by introducing userspace RCU in glusterd.

### Arbiter Volumes

Arbiter volumes are 3 way replicated volumes where the 3rd brick of the replica is automatically configured as an arbiter. The 3rd brick contains only metadata, which provides network partition tolerance and prevents split-brains from happening.

For more information, see [here](../Features/afr-arbiter-volumes.md).
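
A hedged creation sketch (host names and brick paths below are placeholders we made up): the `replica 3 arbiter 1` form makes the third listed brick the metadata-only arbiter.

```shell
# Create a replica 3 volume whose third brick is the arbiter.
gluster volume create <volname> replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb
```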

### Better split-brain resolution

Split-brain resolution can now also be driven by users without administrative intervention.

For more information, see the 'Resolution of split-brain from the mount point' section [here](../Features/heal-info-and-split-brain-resolution.md).

### Geo-replication improvements

There have been several improvements in geo-replication for stability and performance. For more details, see [here](./geo-rep-in-3.7.md).

### Minor Improvements

* Message ID based logging has been added for several translators.
* Quorum support for reads.
* Snapshot names contain timestamps by default. Subsequent access to the snapshots should be done by the name listed in `gluster snapshot list`.
* Support for `gluster volume get <volname>` added.
* libgfapi has added handle based functions to get/set POSIX ACLs based on common libacl structures.

### Known Issues

* Enabling Bitrot on volumes with more than 2 bricks on a node is known to cause problems.
* Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.
* The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

    ~~~
    # gluster volume set <volname> server.allow-insecure on
    ~~~

    Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on`

    After the first change, restarting the volume would be necessary:

    ~~~
    # gluster volume stop <volname>
    # gluster volume start <volname>
    ~~~

    After the second change, restarting glusterd would be necessary:

    ~~~
    # service glusterd restart
    ~~~

    or

    ~~~
    # systemctl restart glusterd
    ~~~

### Upgrading to 3.7.0

Instructions for upgrading from previous versions of GlusterFS are maintained on [this wiki page](../Upgrade-Guide/Upgrade to 3.7.md).
98
release-notes/3.7.1.md
Normal file
@@ -0,0 +1,98 @@
|
||||
## Release Notes for GlusterFS 3.7.1
|
||||
|
||||
This is a bugfix release. The [Release Notes for 3.7.0](./3.7.0.md), contain a
|
||||
listing of all the new features that were added.
|
||||
|
||||
```Note: Enabling Bitrot on volumes with more than 2 bricks on a node works with this release. ```
|
||||
|
||||
### Bugs Fixed
|
||||
|
||||
- [1212676](http://bugzilla.redhat.com/1212676): NetBSD port
|
||||
- [1218863](http://bugzilla.redhat.com/1218863): `ls' on a directory which has files with mismatching gfid's does not list anything
|
||||
- [1219782](http://bugzilla.redhat.com/1219782): Regression failures in tests/bugs/snapshot/bug-1112559.t
|
||||
- [1221000](http://bugzilla.redhat.com/1221000): detach-tier status emulates like detach-tier stop
|
||||
- [1221470](http://bugzilla.redhat.com/1221470): dHT rebalance: Dict_copy log messages when running rebalance on a dist-rep volume
|
||||
- [1221476](http://bugzilla.redhat.com/1221476): Data Tiering:rebalance fails on a tiered volume
|
||||
- [1221477](http://bugzilla.redhat.com/1221477): The tiering feature requires counters.
|
||||
- [1221503](http://bugzilla.redhat.com/1221503): DHT Rebalance : Misleading log messages for linkfiles
|
||||
- [1221507](http://bugzilla.redhat.com/1221507): NFS-Ganesha: ACL should not be enabled by default
|
||||
- [1221534](http://bugzilla.redhat.com/1221534): rebalance failed after attaching the tier to the volume.
|
||||
- [1221967](http://bugzilla.redhat.com/1221967): Do not allow detach-tier commands on a non tiered volume
|
||||
- [1221969](http://bugzilla.redhat.com/1221969): tiering: use sperate log/socket/pid file for tiering
|
||||
- [1222198](http://bugzilla.redhat.com/1222198): Fix nfs/mount3.c build warnings reported in Koji
|
||||
- [1222750](http://bugzilla.redhat.com/1222750): non-root geo-replication session goes to faulty state, when the session is started
|
||||
- [1222869](http://bugzilla.redhat.com/1222869): [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6
|
||||
- [1223215](http://bugzilla.redhat.com/1223215): gluster volume status fails with locking failed error message
|
||||
- [1223286](http://bugzilla.redhat.com/1223286): [geo-rep]: worker died with "ESTALE" when performed rm -rf on a directory from mount of master volume
|
||||
- [1223644](http://bugzilla.redhat.com/1223644): [geo-rep]: With tarssh the file is created at slave but it doesnt get sync
|
||||
- [1224100](http://bugzilla.redhat.com/1224100): [geo-rep]: Even after successful sync, the DATA counter did not reset to 0
|
||||
- [1224241](http://bugzilla.redhat.com/1224241): gfapi: zero size issue in glfs_h_acl_set()
|
||||
- [1224292](http://bugzilla.redhat.com/1224292): peers connected in the middle of a transaction are participating in the transaction
|
||||
- [1224647](http://bugzilla.redhat.com/1224647): [RFE] Provide hourly scrubbing option
|
||||
- [1224650](http://bugzilla.redhat.com/1224650): SIGNING FAILURE Error messages are poping up in the bitd log
|
||||
- [1224894](http://bugzilla.redhat.com/1224894): Quota: spurious failures with quota testcases
|
||||
- [1225077](http://bugzilla.redhat.com/1225077): Fix regression test spurious failures
|
||||
- [1225279](http://bugzilla.redhat.com/1225279): Different client can not execute "for((i=0;i<1000;i++));do ls -al;done" in a same directory at the sametime
|
||||
- [1225318](http://bugzilla.redhat.com/1225318): glusterd could crash in remove-brick-status when local remove-brick process has just completed
|
||||
- [1225320](http://bugzilla.redhat.com/1225320): ls command failed with features.read-only on while mounting ec volume.
|
||||
- [1225331](http://bugzilla.redhat.com/1225331): [geo-rep] stop-all-gluster-processes.sh fails to stop all gluster processes
|
||||
- [1225543](http://bugzilla.redhat.com/1225543): [geo-rep]: snapshot creation timesout even if geo-replication is in pause/stop/delete state
|
||||
- [1225552](http://bugzilla.redhat.com/1225552): [Backup]: Unable to create a glusterfind session
|
||||
- [1225709](http://bugzilla.redhat.com/1225709): [RFE] Move signing trigger mechanism to [f]setxattr()
|
||||
- [1225743](http://bugzilla.redhat.com/1225743): [AFR-V2] - afr_final_errno() should treat op_ret > 0 also as success
|
||||
- [1225796](http://bugzilla.redhat.com/1225796): Spurious failure in tests/bugs/disperse/bug-1161621.t
|
||||
- [1225919](http://bugzilla.redhat.com/1225919): Log EEXIST errors in DEBUG level in fops MKNOD and MKDIR
|
||||
- [1225922](http://bugzilla.redhat.com/1225922): Sharding - Skip update of block count and size for directories in readdirp callback
|
||||
- [1226024](http://bugzilla.redhat.com/1226024): cli/tiering:typo errors in tiering
|
||||
- [1226029](http://bugzilla.redhat.com/1226029): I/O's hanging on tiered volumes (NFS)
|
||||
- [1226032](http://bugzilla.redhat.com/1226032): glusterd crashed on the node when tried to detach a tier after restoring data from the snapshot.
|
||||
- [1226117](http://bugzilla.redhat.com/1226117): [RFE] Return proper error codes in case of snapshot failure
|
||||
- [1226120](http://bugzilla.redhat.com/1226120): [Snapshot] Do not run scheduler if ovirt scheduler is running
|
||||
- [1226139](http://bugzilla.redhat.com/1226139): Implement MKNOD fop in bit-rot.
|
||||
- [1226146](http://bugzilla.redhat.com/1226146): BitRot :- bitd is not signing Objects if more than 3 bricks are present on same node
|
||||
- [1226153](http://bugzilla.redhat.com/1226153): Quota: Do not allow set/unset of quota limit in heterogeneous cluster
|
||||
- [1226629](http://bugzilla.redhat.com/1226629): bug-973073.t fails spuriously
|
||||
- [1226853](http://bugzilla.redhat.com/1226853): Volume start fails when glusterfs is source compiled with GCC v5.1.1
|
||||
|
||||
### Known Issues
|
||||
|
||||
- [1227677](http://bugzilla.redhat.com/1227677): Glusterd crashes and cannot start after rebalance
- [1227656](http://bugzilla.redhat.com/1227656): Glusterd dies when adding new brick to a distributed volume and converting to replicated volume
- [1210256](http://bugzilla.redhat.com/1210256): gluster volume info --xml gives back incorrect typrStr in xml
- [1212842](http://bugzilla.redhat.com/1212842): tar on a glusterfs mount displays "file changed as we read it" even though the file was not changed
- [1220347](http://bugzilla.redhat.com/1220347): Read operation on a file which is in split-brain condition is successful
- [1213352](http://bugzilla.redhat.com/1213352): nfs-ganesha: HA issue, the iozone process is not moving ahead, once the nfs-ganesha is killed
- [1220270](http://bugzilla.redhat.com/1220270): nfs-ganesha: Rename fails while executing Cthon general category test
- [1214169](http://bugzilla.redhat.com/1214169): glusterfsd crashed while rebalance and self-heal were in progress
- [1221941](http://bugzilla.redhat.com/1221941): glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3
- [1225809](http://bugzilla.redhat.com/1225809): [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done
- [1225940](http://bugzilla.redhat.com/1225940): DHT: lookup-unhashed feature breaks runtime compatibility with older client versions

- Addition of bricks dynamically to cold or hot tiers in a tiered volume is not supported.

- The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

~~~
# gluster volume set <volname> server.allow-insecure on
~~~

Edit `/etc/glusterfs/glusterd.vol` to contain this line: `option rpc-auth-allow-insecure on`

After the first change, restarting the volume is necessary:

~~~
# gluster volume stop <volname>
# gluster volume start <volname>
~~~

After the second change, restarting glusterd is necessary:

~~~
# service glusterd restart
~~~

or

~~~
# systemctl restart glusterd
~~~

211
release-notes/geo-rep-in-3.7.md
Normal file
@@ -0,0 +1,211 @@

### Improved Node fail-over issues handling by using Gluster Meta Volume

In a replica pair, one Geo-rep worker should be Active and all the
other replica workers Passive. When the Active worker goes down, a
Passive worker becomes Active. In previous releases this logic was
based on node-uuid; it is now based on a lock file in the Meta
Volume. Active/Passive can now be decided more accurately, and
scenarios with multiple Active workers are minimized.

Geo-rep also works without a Meta Volume, so this feature is backward
compatible. By default the config option `use_meta_volume` is false;
the feature can be turned on by setting the geo-rep config option
`use_meta_volume` to true. Without this feature, Geo-rep works as it
did in previous releases.

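Assuming an existing Geo-rep session, the option can be enabled with a
command along these lines (the master volume, slave host, and slave
volume names below are placeholders):

```shell
# Sketch: enable the meta-volume based Active/Passive lock for an
# existing session; <master-vol> and <slave-host>::<slave-vol> are
# placeholders for your session names.
gluster volume geo-replication <master-vol> <slave-host>::<slave-vol> \
    config use_meta_volume true
```
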

Issues if `use_meta_volume` is turned off:

1. Multiple workers can become Active and participate in syncing,
   duplicating effort and bringing in all the issues related to
   concurrent execution.

2. Fail-over works only at the node level; if a brick process goes
   down while the node stays alive, fail-over to a passive worker
   will not happen and syncing is delayed.

3. Brick placement in the replica 3 case involves difficult,
   documented steps; for example, the first brick of each replica set
   should not be placed on the same node.

4. Changelogs are consumed from a previously failed node when it
   comes back, which may lead to issues like delayed syncing and data
   inconsistencies in the case of renames.

**Fixes**: [1196632](https://bugzilla.redhat.com/show_bug.cgi?id=1196632),
[1217939](https://bugzilla.redhat.com/show_bug.cgi?id=1217939)

### Improved Historical Changelogs consumption

Support for consuming Historical Changelogs was introduced in previous
releases; with this release it is more stable and improved. Use of the
filesystem crawl is minimized and limited to the initial sync. In
previous releases, a node reboot or a brick process going down was
treated as Changelog breakage, and Geo-rep fell back to XSync for that
duration. With this release, a Changelog session is considered broken
only if Changelog is turned off; all other scenarios are considered
safe.

This feature is also required by glusterfind.

**Fixes**: [1217944](https://bugzilla.redhat.com/show_bug.cgi?id=1217944)

### Improved Status and Checkpoint

Status has received many improvements: it now shows accurate details
of session info, user info, the slave node to which each master node
is connected, last synced time, etc. Initializing time is reduced, and
the status changes as soon as the Geo-rep workers are ready (in
previous releases the Initializing state lasted 60 seconds).

**Fixes**: [1212410](https://bugzilla.redhat.com/show_bug.cgi?id=1212410)

### Worker Restart improvements

Workers going down and coming back is very common in Geo-rep, for
reasons such as network failure or a slave node going down. When a
worker comes back up, it has to reprocess changelogs because it died
before updating the last synced time. The batch size is now optimized
so that the amount of reprocessing is minimized.

**Fixes**: [1210965](https://bugzilla.redhat.com/show_bug.cgi?id=1210965)

### Improved RENAME handling

When a renamed file's name hashes to a different brick, that brick's
changelog records the RENAME, while the rest of the fops like CREATE
and DATA are recorded on the first brick. Each per-brick Geo-rep
worker syncs data to the Slave Volume independently, so these
operations could be applied out of order and the Master and Slave
Volumes would become inconsistent. With the help of the DHT team,
RENAMEs are now recorded on the same brick where CREATE and DATA are
recorded.

**Fixes**: [1141379](https://bugzilla.redhat.com/show_bug.cgi?id=1141379)

### Syncing xattrs and acls

Syncing both xattrs and acls to the Slave cluster is now supported.
Either can be disabled by setting the config options `sync-xattrs` or
`sync-acls` to false.

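For example, xattr syncing could be disabled with a command along
these lines (the session names below are placeholders):

```shell
# Sketch: turn off xattr syncing for a session; <master-vol> and
# <slave-host>::<slave-vol> are placeholders for your session names.
gluster volume geo-replication <master-vol> <slave-host>::<slave-vol> \
    config sync-xattrs false
```
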

**Fixes**: [1187021](https://bugzilla.redhat.com/show_bug.cgi?id=1187021),
[1196690](https://bugzilla.redhat.com/show_bug.cgi?id=1196690)

### Identifying Entry failures

Logging improvements identify the exact reason for entry failures:
GFID conflicts, I/O errors, etc. Safe errors are no longer logged in
the mount logs on the Slave; they are post-processed, and only genuine
errors are logged in the Master logs.

**Fixes**: [1207115](https://bugzilla.redhat.com/show_bug.cgi?id=1207115),
[1210562](https://bugzilla.redhat.com/show_bug.cgi?id=1210562)

### Improved rm -rf issues handling

Successive deletes and creates had issues; handling of these issues
has been improved (not completely fixed, since it depends on open
issues in DHT).

**Fixes**: [1211037](https://bugzilla.redhat.com/show_bug.cgi?id=1211037)

### Non root Geo-replication simplified

Manual editing of the Glusterd vol file is simplified by the
introduction of the `gluster system:: mountbroker` command.


**Fixes**: [1136312](https://bugzilla.redhat.com/show_bug.cgi?id=1136312)

### Logging Rsync performance on request basis

Rsync performance can be evaluated by enabling a config option. Once
enabled, Geo-rep records rsync performance in the log file, which can
be post-processed to extract meaningful metrics.

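The option might be enabled along these lines; the option name
`log-rsync-performance` and the session names are assumptions here:

```shell
# Sketch: enable rsync performance logging for a session;
# <master-vol> and <slave-host>::<slave-vol> are placeholders, and the
# option name may differ between releases.
gluster volume geo-replication <master-vol> <slave-host>::<slave-vol> \
    config log-rsync-performance true
```
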

**Fixes**: [764827](https://bugzilla.redhat.com/show_bug.cgi?id=764827)

### Initial sync issues due to upper limit comparison during Filesystem Crawl

Bug fix: wrong logic in the XSync change detection has been corrected.
An upper limit was applied during the XSync crawl, so Geo-rep XSync
missed many files on the assumption that Changelog would take care of
them; but Changelog does not have complete details of files created
before Geo-replication was enabled.

When rsync/tarssh fails, Geo-rep is now capable of identifying safe
errors and continues syncing by ignoring those issues. For example,
rsync fails to sync a file that was deleted on the master during the
sync; this can be ignored, since the file is unlinked and there is no
need to sync it.

**Fixes**: [1200733](https://bugzilla.redhat.com/show_bug.cgi?id=1200733)

### Changelog failures and Brick failures handling

When a brick process went down, or on any Changelog exception, the
Geo-rep worker fell back to the XSync crawl. This was bad, since
XSync fails to identify deletes and renames. This is now prevented:
the worker goes Faulty and waits for the brick process to come back.

**Fixes**: [1202649](https://bugzilla.redhat.com/show_bug.cgi?id=1202649)

### Archive Changelogs in working directory after processing

Changelogs are now archived after processing, and empty changelogs are
no longer generated when no data is available. This is a great
improvement in terms of reducing inode consumption on the brick.

**Fixes**: [1169331](https://bugzilla.redhat.com/show_bug.cgi?id=1169331)

### Virtual xattr to trigger sync

Historical Changelogs are used when a Geo-rep worker restarts, so
only `SETATTR` is recorded when a file is touched. In previous
versions, re-triggering the sync of a file meant stopping Geo-rep,
touching the files, and starting Geo-rep again. Touch no longer helps,
since it records only `SETATTR`; instead, a virtual xattr is
introduced to re-trigger the sync, with no Geo-rep restart required.

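A hedged illustration of re-triggering the sync of a single file; the
xattr name and file path below are assumptions and may differ between
releases:

```shell
# Assumed xattr name and example path, not verified against a live
# cluster: setting the virtual xattr on a file re-queues it for sync.
setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master/path/to/file
```
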

**Fixes**: [1176934](https://bugzilla.redhat.com/show_bug.cgi?id=1176934)

### SSH Keys overwrite issues during Geo-rep create

Parallel creates, or creating multiple Geo-rep sessions, overwrote the
pem keys written by the first session, leading to connectivity issues
when Geo-rep was started. This is now fixed.

**Fixes**: [1183229](https://bugzilla.redhat.com/show_bug.cgi?id=1183229)

### Ownership sync improvements

Geo-rep was failing to sync ownership information from the Master
cluster to the Slave cluster; this is now fixed.

**Fixes**: [1104954](https://bugzilla.redhat.com/show_bug.cgi?id=1104954)

### Slave node failover handling improvements

When a slave node goes down, the Master worker connected to that
node's brick goes Faulty. It now tries to connect to another slave
node instead of waiting for the original slave node to come back.

**Fixes**: [1151412](https://bugzilla.redhat.com/show_bug.cgi?id=1151412)

### Support of ssh keys custom location

Geo-rep create used to fail if ssh `authorized_keys` were configured
in a non-standard location instead of the default
`$HOME/.ssh/authorized_keys`; this is now supported.

**Fixes**: [1181117](https://bugzilla.redhat.com/show_bug.cgi?id=1181117)

21
release-notes/index.md
Normal file
@@ -0,0 +1,21 @@

Release Notes
-------------

### GlusterFS 3.7 release notes

- [3.7.1](./3.7.1.md)
- [3.7.0](./3.7.0.md)
- [geo-rep in 3.7](./geo-rep-in-3.7.md)

### GlusterFS 3.6 release notes

- [3.6.3](./3.6.3.md)
- [3.6.0](./3.6.0.md)

### GlusterFS 3.5 release notes

- [3.5.4](./3.5.4.md)
- [3.5.3](./3.5.3.md)
- [3.5.2](./3.5.2.md)
- [3.5.1](./3.5.1.md)
- [3.5.0](./3.5.0.md)