Currently, to GET/DELETE any block volume we mount every block hosting
volume, readdir each of them, and loop through the entries to find the
block hosting volume that the block belongs to. This approach is not scalable.
Hence, we keep a list of all the block volumes present in a block hosting
volume in that volume's metadata. This way, GET/DELETE walks through the
volume metadata instead of performing a readdir.
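As a rough illustration of the idea (not the actual glusterd2 code; the type
and field names below are hypothetical), the block hosting volume's metadata
carries the list of block volumes it hosts, so GET/DELETE becomes a metadata
lookup instead of a mount-and-readdir across every hosting volume:

```go
package main

import "fmt"

// BlockHostingVolume is a hypothetical stand-in for the hosting volume's
// metadata; the real structure used by glusterd2 may differ.
type BlockHostingVolume struct {
	Name         string
	BlockVolumes []string // names of block volumes hosted on this volume
}

// findHostingVolume walks the hosting volumes' metadata instead of mounting
// and readdir-ing each one.
func findHostingVolume(hosts []BlockHostingVolume, block string) (string, bool) {
	for _, h := range hosts {
		for _, b := range h.BlockVolumes {
			if b == block {
				return h.Name, true
			}
		}
	}
	return "", false
}

func main() {
	hosts := []BlockHostingVolume{
		{Name: "hostvol1", BlockVolumes: []string{"blk1", "blk2"}},
		{Name: "hostvol2", BlockVolumes: []string{"blk3"}},
	}
	if name, ok := findHostingVolume(hosts, "blk3"); ok {
		fmt.Println("block blk3 is hosted on", name)
	}
}
```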
Signed-off-by: Poornima G <pgurusid@redhat.com>
The implementation is an extension to brick multiplexing. The max-bricks-per-process
limit is enforced by counting the entries in pmap corresponding to a particular port.
If a brick needs to be multiplexed, the generalised set of steps is as follows
(a code sketch follows the case list below) -
- look into all started volumes with the same options as the current volume (one by one)
- traverse through all local bricks of the target volume
- find the port of each local brick by using its path
- look at the number of entries in pmap corresponding to that port, i.e. count the bricks already attached to it
- if the number of bricks attached to the port is less than the max-bricks-per-process constraint
- then we have our target brick; otherwise repeat from step 1 until all target volumes are covered
- since the current volume is not considered part of the started volumes list, if we still don't have a target brick from any of the
started volumes then
- look for a target brick in the current volume
- if even the current volume doesn't have a target brick then
- start a separate glusterfsd.
Cases handled in this PR and the approach followed -
1- If no started volume is already present
- check the current volume for a target brick
- if a target brick is not found, start a separate glusterfsd
2- If at least one started volume is present
- if a target volume is found among the started volumes
* if a target brick is found, attach to that brick's process
* if a target brick is not found, look through the other target volumes
* if a target brick is still not found, look into the current volume
* if a target brick is still not found, start a separate glusterfsd process
- if no target volume is found
* look into the current volume for any target bricks
* if a target brick is not found, start a separate glusterfsd
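A minimal sketch of the selection flow described above, with hypothetical
types and helpers standing in for glusterd2's volume structures and pmap
registry (the options-compatibility check for started volumes is elided):

```go
package main

import "fmt"

// Hypothetical stand-ins for glusterd2's volume, brick, and pmap structures.
type Brick struct {
	Path string
	Port int
}

type Volume struct {
	Name        string
	LocalBricks []Brick
}

// bricksOnPort stands in for counting the pmap entries registered on a port.
func bricksOnPort(pmap map[int][]string, port int) int {
	return len(pmap[port])
}

// findTargetBrick walks the started volumes (assumed here to be pre-filtered
// for compatible options), then the current volume, and picks the first local
// brick whose process still has room under maxBricksPerProcess. If nothing
// qualifies, the caller starts a separate glusterfsd.
func findTargetBrick(started []Volume, current Volume,
	pmap map[int][]string, maxBricksPerProcess int) (*Brick, bool) {

	candidates := append(append([]Volume{}, started...), current)
	for _, vol := range candidates {
		for i := range vol.LocalBricks {
			b := &vol.LocalBricks[i]
			if bricksOnPort(pmap, b.Port) < maxBricksPerProcess {
				return b, true // attach to this brick's process
			}
		}
	}
	return nil, false // no target brick found: start a separate glusterfsd
}

func main() {
	pmap := map[int][]string{49152: {"/bricks/b1", "/bricks/b2"}}
	started := []Volume{{Name: "vol1", LocalBricks: []Brick{{Path: "/bricks/b1", Port: 49152}}}}
	current := Volume{Name: "vol2", LocalBricks: []Brick{{Path: "/bricks/b3", Port: 49153}}}
	if b, ok := findTargetBrick(started, current, pmap, 3); ok {
		fmt.Println("attach to the process serving", b.Path)
	}
}
```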
Signed-off-by: Vishal Pandey <vpandey@redhat.com>
- Added default group profiles for each volume type
(replicate, disperse, distribute)
- Included perf xlators in the volfile template and disabled them in the
default profiles, so that these xlators can be enabled/disabled if required.
- Included "features/shard" xlator in client volfile(Fixes: #954)
- Enabled self heal by default for replicate and disperse volumes.
- On glusterd2 start/restart, group profiles are saved in
`$workdir/templates/profiles.json` (Ex:
`/var/lib/glusterd2/templates/profiles.json`); see the sketch after this list
- To modify profile defaults, update the respective default profile and
restart glusterd2. (Note: options of already created volumes will not
change; this only applies to new volumes)
- All perf xlators are disabled by default till we decide the best
defaults.
- Disabled set/reset of default option profiles.
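A hedged sketch of how the saved profiles file might be read back on restart,
assuming an illustrative schema; the actual structure glusterd2 writes to
profiles.json may differ:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Profile is an illustrative shape for a default group profile; the real
// schema stored in profiles.json may differ.
type Profile struct {
	Name    string            `json:"name"`
	Options map[string]string `json:"options"`
}

// loadProfiles reads the profiles file saved under the glusterd2 workdir,
// e.g. /var/lib/glusterd2/templates/profiles.json.
func loadProfiles(path string) ([]Profile, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var profiles []Profile
	if err := json.Unmarshal(data, &profiles); err != nil {
		return nil, err
	}
	return profiles, nil
}

func main() {
	profiles, err := loadProfiles("/var/lib/glusterd2/templates/profiles.json")
	if err != nil {
		fmt.Println("could not load profiles:", err)
		return
	}
	for _, p := range profiles {
		fmt.Println(p.Name, "has", len(p.Options), "options")
	}
}
```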
Signed-off-by: Aravinda VK <avishwan@redhat.com>
MountLocalBricks() and MountVolumeBricks() are called during GD2 startup.
Both methods are responsible for mounting the bricks of all the volumes in the store.
MountLocalBricks() exits as soon as it hits an issue while mounting any volume's localBricks,
so the localBricks of the remaining volumes (which are yet to be processed) are skipped.
MountVolumeBricks() exits as soon as any one brick of that volume fails to mount,
so the other localBricks of that volume (which are yet to be processed) are skipped.
FIX-
Modify both methods to not exit on an error, but log the error and continue mounting the remaining volumes and localBricks.
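The fix amounts to the usual log-and-continue pattern; a minimal sketch with
hypothetical type and function names:

```go
package main

import (
	"errors"
	"log"
)

// Hypothetical stand-ins for glusterd2's volume and brick types.
type Brick struct{ Path string }
type Volume struct {
	Name        string
	LocalBricks []Brick
}

// mountBrick is a placeholder for the real mount call.
func mountBrick(b Brick) error {
	if b.Path == "" {
		return errors.New("empty brick path")
	}
	return nil
}

// mountAllVolumes shows the post-fix behaviour: an error while mounting one
// brick is logged and the loops continue, so the remaining bricks and volumes
// are still processed instead of being skipped by an early return.
func mountAllVolumes(volumes []Volume) {
	for _, vol := range volumes {
		for _, brick := range vol.LocalBricks {
			if err := mountBrick(brick); err != nil {
				log.Printf("volume %s: failed to mount brick %q: %v (continuing)",
					vol.Name, brick.Path, err)
				continue
			}
		}
	}
}

func main() {
	mountAllVolumes([]Volume{
		{Name: "vol1", LocalBricks: []Brick{{Path: "/bricks/b1"}, {Path: ""}}},
		{Name: "vol2", LocalBricks: []Brick{{Path: "/bricks/b2"}}},
	})
}
```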
Signed-off-by: Vishal Pandey <vpandey@redhat.com>
Brick volfile generation is added during snapshot activate, and volfile
deletion is added during snapshot deactivate.
Also fixed client volfile regeneration of the snapshot volume during
snapshot restore undo.
Signed-off-by: Aravinda VK <avishwan@redhat.com>
Quota enable and disable are implemented as a volume set option.
The daemon start and stop are invoked using the actor
framework.
Updates: #421
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
volume create and volume expand:
- added an additional flag to create the brick directory if it does not
exist (see the sketch below)
- exposed the flags for volume create and expand in glustercli
- added e2e test cases for the flags in volume create and volume expand
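A hedged sketch of the behaviour behind the brick-directory flag; the function
and flag names here are assumptions, not necessarily what glustercli exposes:

```go
package main

import (
	"fmt"
	"os"
)

// prepareBrickPath creates the brick directory only when explicitly requested
// via the new flag; otherwise a missing directory is still reported as an
// error, matching the previous behaviour.
func prepareBrickPath(path string, createBrickDir bool) error {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		if !createBrickDir {
			return fmt.Errorf("brick directory %q does not exist", path)
		}
		return os.MkdirAll(path, 0755)
	}
	return nil
}

func main() {
	if err := prepareBrickPath("/tmp/bricks/b1", true); err != nil {
		fmt.Println(err)
	}
}
```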
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
Setting cluster wide options like cluster-op-version and
cluster.brick-multiplex does not require volfile changes. It makes
sense to handle such cluster wide options separately from the
traditional volume set/get interface. Added a new cluster
object instance that contains a map of <option:value> for cluster
wide options.
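A minimal sketch of such a cluster object, with illustrative field names:

```go
package main

import "fmt"

// Cluster is an illustrative cluster-scope object holding options that do
// not require volfile regeneration, keyed by option name.
type Cluster struct {
	ID      string
	Options map[string]string
}

// SetOption records a cluster wide option such as cluster.brick-multiplex
// or cluster-op-version without touching any volume's volfiles.
func (c *Cluster) SetOption(name, value string) {
	if c.Options == nil {
		c.Options = make(map[string]string)
	}
	c.Options[name] = value
}

func main() {
	c := &Cluster{ID: "cluster-1"}
	c.SetOption("cluster.brick-multiplex", "on")
	fmt.Println(c.Options)
}
```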
Issue: #493
Signed-off-by: Samikshan Bairagya <samikshan@gmail.com>
This patch adds the following support:
1. Enable bitrot: enables the bitrot-stub xlator and starts bitd and the scrubber
2. Disable bitrot: disables the bitrot-stub xlator and stops bitd and the scrubber
Updates: #431
Signed-off-by: Kotresh HR <khiremat@redhat.com>