# Release notes for Gluster 3.10.2

This is a bugfix release. The release notes for [3.10.0](3.10.0.md) and
[3.10.1](3.10.1.md) contain a listing of all the new features that were added
and bugs fixed in the GlusterFS 3.10 stable release.

## Major changes, features and limitations addressed in this release

1. Many brick multiplexing and nfs-ganesha+ha bugs have been addressed.
2. Rebalance and remove-brick operations have been disabled for sharded volumes
   to prevent data corruption.

## Major issues

1. Expanding a gluster volume that is sharded may cause file corruption

    - Sharded volumes are typically used for VM images; if such volumes are
      expanded or possibly contracted (i.e., add/remove bricks and rebalance),
      there are reports of VM images getting corrupted. A quick way to check
      whether sharding is enabled on a volume is shown in the sketch after
      this list.
    - Status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
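
Given the issue above, it can help to confirm whether sharding is enabled on a
volume before attempting add-brick, remove-brick, or rebalance operations. Below
is a minimal sketch using the gluster CLI; the volume name `myvol` is only a
placeholder.

```sh
# Query the sharding option directly; a value of "on" means the volume is sharded.
# "myvol" is a placeholder volume name.
gluster volume get myvol features.shard

# Alternatively, look for "features.shard: on" among the volume's reconfigured options.
gluster volume info myvol | grep -i shard
```

If sharding is enabled, avoid add-brick, remove-brick, and rebalance on that
volume until the bug referenced above is resolved.
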
## Bugs addressed

A total of 63 patches have been merged, addressing 46 bugs:

- [#1437854](https://bugzilla.redhat.com/1437854): Spellcheck issues reported during Debian build
- [#1425726](https://bugzilla.redhat.com/1425726): Stale export entries in ganesha.conf after executing "gluster nfs-ganesha disable"
- [#1427079](https://bugzilla.redhat.com/1427079): [Ganesha] : unexport fails if export configuration file is not present
- [#1440148](https://bugzilla.redhat.com/1440148): common-ha (debian/ubuntu): ganesha-ha.sh has a hard-coded /usr/libexec/ganesha...
- [#1443478](https://bugzilla.redhat.com/1443478): RFE: Support to update NFS-Ganesha export options dynamically
- [#1443490](https://bugzilla.redhat.com/1443490): [Nfs-ganesha] Refresh config fails when ganesha cluster is in failover mode.
- [#1441474](https://bugzilla.redhat.com/1441474): synclocks don't work correctly under contention
- [#1449002](https://bugzilla.redhat.com/1449002): [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
- [#1438813](https://bugzilla.redhat.com/1438813): Segmentation fault when creating a qcow2 with qemu-img
- [#1438423](https://bugzilla.redhat.com/1438423): [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
- [#1444540](https://bugzilla.redhat.com/1444540): rm -rf \<dir\> returns ENOTEMPTY even though ls on the mount point returns no files
- [#1446227](https://bugzilla.redhat.com/1446227): Incorrect and redundant logs in the DHT rmdir code path
- [#1447608](https://bugzilla.redhat.com/1447608): Don't allow rebalance/fix-layout operation on sharding enabled volumes till dht+sharding bugs are fixed
- [#1448864](https://bugzilla.redhat.com/1448864): Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
- [#1443349](https://bugzilla.redhat.com/1443349): [Eventing]: Unrelated error message displayed when path specified during a 'webhook-test/add' is missing a schema
- [#1441576](https://bugzilla.redhat.com/1441576): [geo-rep]: rsync should not try to sync internal xattrs
- [#1441927](https://bugzilla.redhat.com/1441927): [geo-rep]: Worker crashes with [Errno 16] Device or resource busy: '.gfid/00000000-0000-0000-0000-000000000001/dir.166 while renaming directories
- [#1401877](https://bugzilla.redhat.com/1401877): [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared\_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
- [#1425723](https://bugzilla.redhat.com/1425723): nfs-ganesha volume export file remains stale in shared\_storage\_volume when volume is deleted
- [#1427759](https://bugzilla.redhat.com/1427759): nfs-ganesha: Incorrect error message returned when disable fails
- [#1438325](https://bugzilla.redhat.com/1438325): Need to improve remove-brick failure message when the brick process is down.
- [#1438338](https://bugzilla.redhat.com/1438338): glusterd is setting replicate volume property over disperse volume or vice versa
- [#1438340](https://bugzilla.redhat.com/1438340): glusterd is not validating for allowed values while setting "cluster.brick-multiplex" property
- [#1441476](https://bugzilla.redhat.com/1441476): Glusterd crashes when restarted with many volumes
- [#1444128](https://bugzilla.redhat.com/1444128): [BrickMultiplex] gluster command not responding and .snaps directory is not visible after executing snapshot related command
- [#1445260](https://bugzilla.redhat.com/1445260): [GANESHA] Volume start and stop having ganesha enable on it,turns off cache-invalidation on volume
- [#1445408](https://bugzilla.redhat.com/1445408): gluster volume stop hangs
- [#1449934](https://bugzilla.redhat.com/1449934): Brick Multiplexing :- resetting a brick bring down other bricks with same PID
- [#1435779](https://bugzilla.redhat.com/1435779): Inode ref leak on anonymous reads and writes
- [#1440278](https://bugzilla.redhat.com/1440278): [GSS] NFS Sub-directory mount not working on solaris10 client
- [#1450378](https://bugzilla.redhat.com/1450378): GNFS crashed while taking lock on a file from 2 different clients having same volume mounted from 2 different servers
- [#1449779](https://bugzilla.redhat.com/1449779): quota: limit-usage command failed with error " Failed to start aux mount"
- [#1450564](https://bugzilla.redhat.com/1450564): glfsheal: crashed(segfault) with disperse volume in RDMA
- [#1443501](https://bugzilla.redhat.com/1443501): Don't wind post-op on a brick where the fop phase failed.
- [#1444892](https://bugzilla.redhat.com/1444892): When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st\_size value.
- [#1449169](https://bugzilla.redhat.com/1449169): Multiple bricks WILL crash after TCP port probing
- [#1440805](https://bugzilla.redhat.com/1440805): Update rfc.sh to check Change-Id consistency for backports
- [#1443010](https://bugzilla.redhat.com/1443010): snapshot: snapshots appear to be failing with respect to secure geo-rep slave
- [#1445209](https://bugzilla.redhat.com/1445209): snapshot: Unable to take snapshot on a geo-replicated volume, even after stopping the session
- [#1444773](https://bugzilla.redhat.com/1444773): explicitly specify executor to be bash for tests
- [#1445407](https://bugzilla.redhat.com/1445407): remove bug-1421590-brick-mux-reuse-ports.t
- [#1440742](https://bugzilla.redhat.com/1440742): Test files clean up for tier during 3.10
- [#1448790](https://bugzilla.redhat.com/1448790): [Tiering]: High and low watermark values when set to the same level, is allowed
- [#1435942](https://bugzilla.redhat.com/1435942): Enabling parallel-readdir causes dht linkto files to be visible on the mount,
- [#1437763](https://bugzilla.redhat.com/1437763): File-level WORM allows ftruncate() on read-only files
- [#1439148](https://bugzilla.redhat.com/1439148): Parallel readdir on Gluster NFS displays less number of dentries