mirror of https://github.com/gluster/glusterdocs.git synced 2026-02-05 15:47:01 +01:00

Update 3.10.x minor release notes to the docs

This update carries the 3.10.2-5 release notes

Signed-off-by: ShyamsundarR <srangana@redhat.com>
ShyamsundarR
2017-08-16 14:34:55 -04:00
parent 1f64dd83e7
commit 999458b7d7
6 changed files with 207 additions and 0 deletions


@@ -106,6 +106,10 @@ pages:
- 3.11.2: release-notes/3.11.2.md
- 3.11.1: release-notes/3.11.1.md
- 3.11.0: release-notes/3.11.0.md
- 3.10.5: release-notes/3.10.5.md
- 3.10.4: release-notes/3.10.4.md
- 3.10.3: release-notes/3.10.3.md
- 3.10.2: release-notes/3.10.2.md
- 3.10.1: release-notes/3.10.1.md
- 3.10.0: release-notes/3.10.0.md
- 3.9.0: release-notes/3.9.0.md

release-notes/3.10.2.md Normal file

@@ -0,0 +1,72 @@
# Release notes for Gluster 3.10.2
This is a bugfix release. The release notes for [3.10.0](3.10.0.md) and
[3.10.1](3.10.1.md)
contain a listing of all the new features that were added and
bugs fixed in the GlusterFS 3.10 stable release.
## Major changes, features and limitations addressed in this release
1. Many brick multiplexing and nfs-ganesha+HA bugs have been addressed.
2. Rebalance and remove brick operations have been disabled for sharded volumes
to prevent data corruption.
## Major issues
1. Expanding a gluster volume that is sharded may cause file corruption
    - Sharded volumes are typically used for VM images. If such volumes are
      expanded or possibly contracted (i.e. add/remove bricks and rebalance),
      there are reports of VM images getting corrupted.
    - The status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
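Since the corruption above only affects sharded volumes, it can help to confirm whether sharding is enabled before running an add-brick or rebalance. A minimal sketch of that check, assuming a hypothetical volume name `myvol`; on a real node the input would come from `gluster volume get myvol features.shard`, and here the command output is passed in as a string so the logic is self-contained:

```shell
#!/bin/sh
# Decide whether a volume has sharding enabled before add-brick/rebalance.
# On a live cluster you would capture the real output, e.g.:
#   out="$(gluster volume get myvol features.shard)"
is_sharded() {
    # $1: output of `gluster volume get VOL features.shard`
    printf '%s\n' "$1" | grep -q 'features\.shard[[:space:]]*on'
}

# Hypothetical sample output for a sharded volume:
out="features.shard                          on"
if is_sharded "$out"; then
    echo "sharded: avoid add/remove-brick + rebalance (see bug 1426508)"
fi
```

If the check reports the volume as sharded, hold off on expansion or contraction until the tracked bug is resolved.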
## Bugs addressed
A total of 63 patches have been merged, addressing 46 bugs:
- [#1437854](https://bugzilla.redhat.com/1437854): Spellcheck issues reported during Debian build
- [#1425726](https://bugzilla.redhat.com/1425726): Stale export entries in ganesha.conf after executing "gluster nfs-ganesha disable"
- [#1427079](https://bugzilla.redhat.com/1427079): [Ganesha] : unexport fails if export configuration file is not present
- [#1440148](https://bugzilla.redhat.com/1440148): common-ha (debian/ubuntu): ganesha-ha.sh has a hard-coded /usr/libexec/ganesha...
- [#1443478](https://bugzilla.redhat.com/1443478): RFE: Support to update NFS-Ganesha export options dynamically
- [#1443490](https://bugzilla.redhat.com/1443490): [Nfs-ganesha] Refresh config fails when ganesha cluster is in failover mode.
- [#1441474](https://bugzilla.redhat.com/1441474): synclocks don't work correctly under contention
- [#1449002](https://bugzilla.redhat.com/1449002): [Brick Multiplexing] : Bricks for multiple volumes going down after glusterd restart and not coming back up after volume start force
- [#1438813](https://bugzilla.redhat.com/1438813): Segmentation fault when creating a qcow2 with qemu-img
- [#1438423](https://bugzilla.redhat.com/1438423): [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
- [#1444540](https://bugzilla.redhat.com/1444540): rm -rf \<dir\> returns ENOTEMPTY even though ls on the mount point returns no files
- [#1446227](https://bugzilla.redhat.com/1446227): Incorrect and redundant logs in the DHT rmdir code path
- [#1447608](https://bugzilla.redhat.com/1447608): Don't allow rebalance/fix-layout operation on sharding enabled volumes till dht+sharding bugs are fixed
- [#1448864](https://bugzilla.redhat.com/1448864): Seeing error "Failed to get the total number of files. Unable to estimate time to complete rebalance" in rebalance logs
- [#1443349](https://bugzilla.redhat.com/1443349): [Eventing]: Unrelated error message displayed when path specified during a 'webhook-test/add' is missing a schema
- [#1441576](https://bugzilla.redhat.com/1441576): [geo-rep]: rsync should not try to sync internal xattrs
- [#1441927](https://bugzilla.redhat.com/1441927): [geo-rep]: Worker crashes with [Errno 16] Device or resource busy: '.gfid/00000000-0000-0000-0000-000000000001/dir.166 while renaming directories
- [#1401877](https://bugzilla.redhat.com/1401877): [GANESHA] Symlinks from /etc/ganesha/ganesha.conf to shared\_storage are created on the non-ganesha nodes in 8 node gluster having 4 node ganesha cluster
- [#1425723](https://bugzilla.redhat.com/1425723): nfs-ganesha volume export file remains stale in shared\_storage\_volume when volume is deleted
- [#1427759](https://bugzilla.redhat.com/1427759): nfs-ganesha: Incorrect error message returned when disable fails
- [#1438325](https://bugzilla.redhat.com/1438325): Need to improve remove-brick failure message when the brick process is down.
- [#1438338](https://bugzilla.redhat.com/1438338): glusterd is setting replicate volume property over disperse volume or vice versa
- [#1438340](https://bugzilla.redhat.com/1438340): glusterd is not validating for allowed values while setting "cluster.brick-multiplex" property
- [#1441476](https://bugzilla.redhat.com/1441476): Glusterd crashes when restarted with many volumes
- [#1444128](https://bugzilla.redhat.com/1444128): [BrickMultiplex] gluster command not responding and .snaps directory is not visible after executing snapshot related command
- [#1445260](https://bugzilla.redhat.com/1445260): [GANESHA] Volume start and stop having ganesha enable on it,turns off cache-invalidation on volume
- [#1445408](https://bugzilla.redhat.com/1445408): gluster volume stop hangs
- [#1449934](https://bugzilla.redhat.com/1449934): Brick Multiplexing :- resetting a brick bring down other bricks with same PID
- [#1435779](https://bugzilla.redhat.com/1435779): Inode ref leak on anonymous reads and writes
- [#1440278](https://bugzilla.redhat.com/1440278): [GSS] NFS Sub-directory mount not working on solaris10 client
- [#1450378](https://bugzilla.redhat.com/1450378): GNFS crashed while taking lock on a file from 2 different clients having same volume mounted from 2 different servers
- [#1449779](https://bugzilla.redhat.com/1449779): quota: limit-usage command failed with error " Failed to start aux mount"
- [#1450564](https://bugzilla.redhat.com/1450564): glfsheal: crashed(segfault) with disperse volume in RDMA
- [#1443501](https://bugzilla.redhat.com/1443501): Don't wind post-op on a brick where the fop phase failed.
- [#1444892](https://bugzilla.redhat.com/1444892): When either killing or restarting a brick with performance.stat-prefetch on, stat sometimes returns a bad st\_size value.
- [#1449169](https://bugzilla.redhat.com/1449169): Multiple bricks WILL crash after TCP port probing
- [#1440805](https://bugzilla.redhat.com/1440805): Update rfc.sh to check Change-Id consistency for backports
- [#1443010](https://bugzilla.redhat.com/1443010): snapshot: snapshots appear to be failing with respect to secure geo-rep slave
- [#1445209](https://bugzilla.redhat.com/1445209): snapshot: Unable to take snapshot on a geo-replicated volume, even after stopping the session
- [#1444773](https://bugzilla.redhat.com/1444773): explicitly specify executor to be bash for tests
- [#1445407](https://bugzilla.redhat.com/1445407): remove bug-1421590-brick-mux-reuse-ports.t
- [#1440742](https://bugzilla.redhat.com/1440742): Test files clean up for tier during 3.10
- [#1448790](https://bugzilla.redhat.com/1448790): [Tiering]: High and low watermark values when set to the same level, is allowed
- [#1435942](https://bugzilla.redhat.com/1435942): Enabling parallel-readdir causes dht linkto files to be visible on the mount
- [#1437763](https://bugzilla.redhat.com/1437763): File-level WORM allows ftruncate() on read-only files
- [#1439148](https://bugzilla.redhat.com/1439148): Parallel readdir on Gluster NFS displays less number of dentries

release-notes/3.10.3.md Normal file

@@ -0,0 +1,39 @@
# Release notes for Gluster 3.10.3
This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
[3.10.1](3.10.1.md) and [3.10.2](3.10.2.md)
contain a listing of all the new features that were added and
bugs fixed in the GlusterFS 3.10 stable release.
## Major changes, features and limitations addressed in this release
1. No Major changes
## Major issues
1. Expanding a gluster volume that is sharded may cause file corruption
    - Sharded volumes are typically used for VM images. If such volumes are
      expanded or possibly contracted (i.e. add/remove bricks and rebalance),
      there are reports of VM images getting corrupted.
    - The status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
2. Brick multiplexing is being tested and fixed aggressively but we still have a
few crashes and memory leaks to fix.
## Bugs addressed
A total of 18 patches have been merged, addressing 13 bugs:
- [#1450053](https://bugzilla.redhat.com/1450053): [GANESHA] Adding a node to existing cluster failed to start pacemaker service on new node
- [#1450773](https://bugzilla.redhat.com/1450773): Quota: After upgrade from 3.7 to higher version , gluster quota list command shows "No quota configured on volume repvol"
- [#1450934](https://bugzilla.redhat.com/1450934): [New] - Replacing an arbiter brick while I/O happens causes vm pause
- [#1450947](https://bugzilla.redhat.com/1450947): Autoconf leaves unexpanded variables in path names of non-shell-script text files
- [#1451371](https://bugzilla.redhat.com/1451371): crash in dht\_rmdir\_do
- [#1451561](https://bugzilla.redhat.com/1451561): AFR returns the node uuid of the same node for every file in the replica
- [#1451587](https://bugzilla.redhat.com/1451587): cli xml status of detach tier broken
- [#1451977](https://bugzilla.redhat.com/1451977): Add logs to identify whether disconnects are voluntary or due to network problems
- [#1451995](https://bugzilla.redhat.com/1451995): Log message shows error code as success even when rpc fails to connect
- [#1453056](https://bugzilla.redhat.com/1453056): [DHt] : segfault in dht\_selfheal\_dir\_setattr while running regressions
- [#1453087](https://bugzilla.redhat.com/1453087): Brick Multiplexing: On reboot of a node Brick multiplexing feature lost on that node as multiple brick processes get spawned
- [#1456682](https://bugzilla.redhat.com/1456682): tierd listens to a port.
- [#1457054](https://bugzilla.redhat.com/1457054): glusterfs client crash on io-cache.so(\_\_ioc\_page\_wakeup+0x44)

release-notes/3.10.4.md Normal file

@@ -0,0 +1,39 @@
# Release notes for Gluster 3.10.4
This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
[3.10.1](3.10.1.md), [3.10.2](3.10.2.md) and [3.10.3](3.10.3.md)
contain a listing of all the new features that were added and
bugs fixed in the GlusterFS 3.10 stable release.
## Major changes, features and limitations addressed in this release
1. No Major changes
## Major issues
1. Expanding a gluster volume that is sharded may cause file corruption
    - Sharded volumes are typically used for VM images. If such volumes are
      expanded or possibly contracted (i.e. add/remove bricks and rebalance),
      there are reports of VM images getting corrupted.
    - The status of this bug can be tracked here: [#1426508](https://bugzilla.redhat.com/1426508)
2. Brick multiplexing is being tested and fixed aggressively but we still have a
few crashes and memory leaks to fix.
3. Another rebalance-related bug is being worked on: [#1467010](https://bugzilla.redhat.com/1467010)
## Bugs addressed
A total of 18 patches have been merged, addressing 13 bugs:
- [#1457732](https://bugzilla.redhat.com/1457732): "split-brain observed [Input/output error]" error messages in samba logs during parallel rm -rf
- [#1459760](https://bugzilla.redhat.com/1459760): Glusterd segmentation fault in '\_Unwind\_Backtrace' while running peer probe
- [#1460649](https://bugzilla.redhat.com/1460649): posix-acl: Whitelist virtual ACL xattrs
- [#1460914](https://bugzilla.redhat.com/1460914): Rebalance estimate time sometimes shows negative values
- [#1460993](https://bugzilla.redhat.com/1460993): Revert CLI restrictions on running rebalance in VM store use case
- [#1461019](https://bugzilla.redhat.com/1461019): [Ganesha] : Grace period is not being adhered to on RHEL 7.4; Clients continue running IO even during grace.
- [#1462080](https://bugzilla.redhat.com/1462080): [Bitrot]: Inconsistency seen with 'scrub ondemand' - fails to trigger scrub
- [#1463623](https://bugzilla.redhat.com/1463623): [Ganesha]Bricks got crashed while running posix compliance test suit on V4 mount
- [#1463641](https://bugzilla.redhat.com/1463641): [Ganesha] Ganesha service failed to start on new node added in existing ganesha cluster
- [#1464078](https://bugzilla.redhat.com/1464078): with AFR now making both nodes to return UUID for a file will result in georep consuming more resources
- [#1466852](https://bugzilla.redhat.com/1466852): assorted typos and spelling mistakes from Debian lintian
- [#1466863](https://bugzilla.redhat.com/1466863): dht_rename_lock_cbk crashes in upstream regression test
- [#1467269](https://bugzilla.redhat.com/1467269): Heal info shows incorrect status

release-notes/3.10.5.md Normal file

@@ -0,0 +1,49 @@
# Release notes for Gluster 3.10.5
This is a bugfix release. The release notes for [3.10.0](3.10.0.md),
[3.10.1](3.10.1.md), [3.10.2](3.10.2.md), [3.10.3](3.10.3.md) and [3.10.4](3.10.4.md)
contain a listing of all the new features that were added and
bugs fixed in the GlusterFS 3.10 stable release.
## Major changes, features and limitations addressed in this release
**No Major changes**
## Major issues
1. Expanding a gluster volume that is sharded may cause file corruption
    - Sharded volumes are typically used for VM images. If such volumes are
      expanded or possibly contracted (i.e. add/remove bricks and rebalance),
      there are reports of VM images getting corrupted.
    - The last known cause for corruption,
      [#1467010](https://bugzilla.redhat.com/show_bug.cgi?id=1467010),
      has a fix in this release. As further testing is still in progress, the issue
      is retained as a major issue.
2. Brick multiplexing is being tested and fixed aggressively but we still have a
few crashes and memory leaks to fix.
## Bugs addressed
Bugs addressed since release-3.10.4 are listed below.
- [#1467010](https://bugzilla.redhat.com/1467010): Fd based fops fail with EBADF on file migration
- [#1468126](https://bugzilla.redhat.com/1468126): disperse seek does not correctly handle the end of file
- [#1468198](https://bugzilla.redhat.com/1468198): [Geo-rep]: entry failed to sync to slave with ENOENT error
- [#1470040](https://bugzilla.redhat.com/1470040): packaging: Upgrade glusterfs-ganesha sometimes fails to semanage ganesha_use_fusefs
- [#1470488](https://bugzilla.redhat.com/1470488): gluster volume status --xml fails when there are 100 volumes
- [#1471028](https://bugzilla.redhat.com/1471028): glusterfs process leaking memory when error occurs
- [#1471612](https://bugzilla.redhat.com/1471612): metadata heal not happening despite having an active sink
- [#1471870](https://bugzilla.redhat.com/1471870): cthon04 can cause segfault in gNFS/NLM
- [#1471917](https://bugzilla.redhat.com/1471917): [GANESHA] Ganesha setup creation fails due to selinux blocking some services required for setup creation
- [#1472446](https://bugzilla.redhat.com/1472446): packaging: save ganesha config files in (/var)/run/gluster/shared_storage/nfs-ganesha
- [#1473129](https://bugzilla.redhat.com/1473129): dht/rebalance: Improve rebalance crawl performance
- [#1473132](https://bugzilla.redhat.com/1473132): dht/cluster: rebalance/remove-brick should honor min-free-disk
- [#1473133](https://bugzilla.redhat.com/1473133): dht/cluster: rebalance/remove-brick should honor min-free-disk
- [#1473134](https://bugzilla.redhat.com/1473134): The rebal-throttle setting does not work as expected
- [#1473136](https://bugzilla.redhat.com/1473136): rebalance: Allow admin to change thread count for rebalance
- [#1473137](https://bugzilla.redhat.com/1473137): dht: Make throttle option "normal" value uniform across dht_init and dht_reconfigure
- [#1473140](https://bugzilla.redhat.com/1473140): Fix on demand file migration from client
- [#1473141](https://bugzilla.redhat.com/1473141): cluster/dht: Fix hardlink migration failures
- [#1475638](https://bugzilla.redhat.com/1475638): [Scale] : Client logs flooded with "inode context is NULL" error messages
- [#1476212](https://bugzilla.redhat.com/1476212): [geo-rep]: few of the self healed hardlinks on master did not sync to slave
- [#1478498](https://bugzilla.redhat.com/1478498): scripts: invalid test in S32gluster_enable_shared_storage.sh
- [#1478499](https://bugzilla.redhat.com/1478499): packaging: /var/lib/glusterd/options should be %config(noreplace)
- [#1480594](https://bugzilla.redhat.com/1480594): nfs process crashed in "nfs3_getattr"


@@ -9,6 +9,10 @@ Release Notes
### GlusterFS 3.10 release notes
- [3.10.5](./3.10.5.md)
- [3.10.4](./3.10.4.md)
- [3.10.3](./3.10.3.md)
- [3.10.2](./3.10.2.md)
- [3.10.1](./3.10.1.md)
- [3.10.0](./3.10.0.md)