The block host volume still exists even after all the blocks on it are
deleted. Manually deleting the block host volume then needs an additional
unmounting step. With this patch we auto-delete the block host volume
when it has no blocks left.
The availability check of the block host volume's free space is not done
within the lock, so there is a possibility that the available space has
changed by the time we decide to create the volume. This patch also fixes
that race condition.
Signed-off-by: Poornima G <pgurusid@redhat.com>
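Conceptually the fix keeps both the free-space check and the remaining-blocks check under the same lock; a minimal sketch with stand-in types (the real code works against the gd2 store and mounts, not an in-memory struct):

```go
package blockvol

import (
	"errors"
	"sync"
)

// blockHostVol is an illustrative stand-in for a block hosting volume.
type blockHostVol struct {
	mu     sync.Mutex
	freeMB uint64
	blocks map[string]uint64
}

func newBlockHostVol(freeMB uint64) *blockHostVol {
	return &blockHostVol{freeMB: freeMB, blocks: map[string]uint64{}}
}

// createBlock checks the available space and reserves it under the same lock,
// so the free size cannot change between the check and the decision to create.
func (b *blockHostVol) createBlock(name string, sizeMB uint64) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	if sizeMB > b.freeMB {
		return errors.New("insufficient space on block hosting volume")
	}
	b.freeMB -= sizeMB
	b.blocks[name] = sizeMB
	return nil
}

// deleteBlock removes a block and reports whether the now-empty hosting volume
// should be unmounted and auto-deleted by the caller.
func (b *blockHostVol) deleteBlock(name string) (deleteBHV bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.freeMB += b.blocks[name]
	delete(b.blocks, name)
	return len(b.blocks) == 0
}
```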
Implement a glustercli command to disable and delete the current tracing
configuration on the cluster. The changes include a gd2 transaction that
first deletes the trace configuration from the store on one node and then,
in a subsequent step, clears the in-memory trace configuration on all nodes.
closes #1368
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
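A minimal sketch of the disable flow, using stand-in interfaces and a hypothetical store key rather than the real gd2 transaction framework and etcd client:

```go
package tracedisable

// store and node are illustrative stand-ins for the gd2 store and peers.
type store interface {
	Delete(key string) error
}

type node interface {
	ClearTraceConfig() error
}

const traceKey = "config/trace" // hypothetical store key

// disableTracing first removes the persisted trace configuration from the
// store (done on the node that received the request), then clears the
// in-memory configuration on every node in the cluster.
func disableTracing(s store, nodes []node) error {
	if err := s.Delete(traceKey); err != nil {
		return err
	}
	for _, n := range nodes {
		if err := n.ClearTraceConfig(); err != nil {
			return err
		}
	}
	return nil
}
```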
Implement a glustercli command to update the current tracing status on the
cluster. All trace config options are passed as flags to the command. If
an option is not passed, the existing value for that option is retained.
closes #1368
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
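A minimal sketch of the retain-if-not-passed behaviour, with illustrative field names; nil pointers mark flags the user did not supply:

```go
package traceupdate

// TraceConfig holds the stored options (illustrative names).
type TraceConfig struct {
	JaegerEndpoint       string
	JaegerAgentEndpoint  string
	JaegerSampler        int
	JaegerSampleFraction float64
}

// TraceUpdateReq uses pointers so that a nil field means "flag not passed".
type TraceUpdateReq struct {
	JaegerEndpoint       *string
	JaegerAgentEndpoint  *string
	JaegerSampler        *int
	JaegerSampleFraction *float64
}

// merge applies the request on top of the existing configuration, keeping the
// existing value for every option that was not passed on the command line.
func merge(existing TraceConfig, req TraceUpdateReq) TraceConfig {
	if req.JaegerEndpoint != nil {
		existing.JaegerEndpoint = *req.JaegerEndpoint
	}
	if req.JaegerAgentEndpoint != nil {
		existing.JaegerAgentEndpoint = *req.JaegerAgentEndpoint
	}
	if req.JaegerSampler != nil {
		existing.JaegerSampler = *req.JaegerSampler
	}
	if req.JaegerSampleFraction != nil {
		existing.JaegerSampleFraction = *req.JaegerSampleFraction
	}
	return existing
}
```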
Implement a glustercli command to get the current tracing status on the
cluster. The tracing info is read from the store and presented to the
user in table format, with fields such as Status, Jaeger Endpoints, Sampler
type and Sample fraction. For example:
+------------------------+----------------------------+
| TRACE OPTION | VALUE |
+------------------------+----------------------------+
| Status | enabled |
| Jaeger Endpoint | http://192.168.122.1:14268 |
| Jaeger Agent Endpoint | http://192.168.122.1:6831 |
| Jaeger Sampler | 2 (Probabilistic) |
| Jaeger Sample Fraction | 0.99 |
+------------------------+----------------------------+
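A table like the one above can be rendered with a plain two-column table writer; a small sketch assuming the github.com/olekukonko/tablewriter API:

```go
package main

import (
	"os"

	"github.com/olekukonko/tablewriter"
)

func main() {
	// Two-column layout matching the output shown above.
	table := tablewriter.NewWriter(os.Stdout)
	table.SetHeader([]string{"Trace option", "Value"})
	table.Append([]string{"Status", "enabled"})
	table.Append([]string{"Jaeger Endpoint", "http://192.168.122.1:14268"})
	table.Append([]string{"Jaeger Agent Endpoint", "http://192.168.122.1:6831"})
	table.Append([]string{"Jaeger Sampler", "2 (Probabilistic)"})
	table.Append([]string{"Jaeger Sample Fraction", "0.99"})
	table.Render()
}
```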
Add "trace enable" e2e test cases. The tests also exercise the
"trace status" request.
closes #1368
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Implement the undo step for the trace enable transaction. This
step removes the trace configuration from the store if it was
written.
closes #1368
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
This commit implements a gd2 plugin that allows management of tracing
operations across the cluster. This change-set implements the server-side
handling of the request to enable tracing on all gd2 nodes. The
pre-condition for executing this transaction is that there must not be any
existing trace configuration in etcd. The steps
involved in the transaction are:
1. The node receiving the request validates the passed tracing configuration,
2. The node stores the tracing configuration in etcd, and
3. The in-memory trace configuration is set on all nodes.
Failure in steps 2 and 3 will result in the undo logic restoring the
previous configuration both in memory and in etcd.
closes #1368
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
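A condensed sketch of the three steps plus the undo path, with stand-in interfaces and a hypothetical store key instead of the real gd2 transaction framework and etcd client:

```go
package traceenable

import "errors"

type TraceConfig struct {
	JaegerEndpoint      string
	JaegerAgentEndpoint string
}

// store and node stand in for the etcd-backed store and the cluster peers.
type store interface {
	Exists(key string) (bool, error)
	Put(key string, cfg TraceConfig) error
	Delete(key string) error
}

type node interface {
	ApplyTraceConfig(TraceConfig) error
}

const traceKey = "config/trace" // hypothetical store key

// enableTracing validates the request, persists it once, then applies it on
// all nodes. A failure after the store write triggers the undo path, which
// removes the stored configuration so the cluster returns to its prior state.
func enableTracing(s store, nodes []node, cfg TraceConfig) error {
	// Pre-condition: no trace configuration may already exist in the store.
	exists, err := s.Exists(traceKey)
	if err != nil {
		return err
	}
	if exists {
		return errors.New("trace configuration already exists")
	}
	// Step 1: validate the passed configuration.
	if cfg.JaegerEndpoint == "" || cfg.JaegerAgentEndpoint == "" {
		return errors.New("invalid trace configuration")
	}
	// Step 2: store the configuration.
	if err := s.Put(traceKey, cfg); err != nil {
		return err
	}
	// Step 3: set the in-memory configuration on all nodes.
	for _, n := range nodes {
		if err := n.ApplyTraceConfig(cfg); err != nil {
			_ = s.Delete(traceKey) // undo: remove the stored configuration
			return err
		}
	}
	return nil
}
```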
Currently, to GET/DELETE any block volume we mount all the block hosting
volumes, readdir each of them, and loop through the entries to find the
block hosting volume the block belongs to. This approach is not scalable.
Hence, in the metadata of the block hosting volume, we keep a list of all
the block volumes present in that hosting volume. This way, GET/DELETE walks
through the volume metadata rather than doing a readdir.
Signed-off-by: Poornima G <pgurusid@redhat.com>
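A minimal sketch of the metadata bookkeeping, assuming an illustrative metadata key and JSON encoding (the actual key name and encoding in gd2 may differ):

```go
package blockmeta

import "encoding/json"

// blockListKey is an illustrative metadata key on the block hosting volume.
const blockListKey = "block-volumes"

// addBlock records a newly created block in the hosting volume's metadata map.
func addBlock(metadata map[string]string, block string) error {
	var blocks []string
	if v, ok := metadata[blockListKey]; ok && v != "" {
		if err := json.Unmarshal([]byte(v), &blocks); err != nil {
			return err
		}
	}
	blocks = append(blocks, block)
	b, err := json.Marshal(blocks)
	if err != nil {
		return err
	}
	metadata[blockListKey] = string(b)
	return nil
}

// hostsBlock answers GET/DELETE lookups from the metadata alone, with no mount
// or readdir of the hosting volume.
func hostsBlock(metadata map[string]string, block string) bool {
	var blocks []string
	if err := json.Unmarshal([]byte(metadata[blockListKey]), &blocks); err != nil {
		return false
	}
	for _, b := range blocks {
		if b == block {
			return true
		}
	}
	return false
}
```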
When the block host cluster options are set to values other
than the defaults, block volume creation fails on the first
attempt but succeeds on subsequent attempts. This is because
the block volume create request was initialized before the
cluster options were read. This patch changes the order of
those two operations to fix the issue.
Signed-off-by: Poornima G <pgurusid@redhat.com>
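An illustrative sketch of the corrected ordering, with stand-in types: the cluster options are read first, and only then is the create request initialized from them:

```go
package blockreq

// clusterOptions and createReq are stand-ins for the real gd2 types.
type clusterOptions struct {
	BlockHostingVolumeSize    uint64
	BlockHostingVolumeReplica int
}

type createReq struct {
	Size    uint64
	Replica int
}

// newCreateReq reads the cluster options before building the request, so that
// non-default values take effect on the very first attempt.
func newCreateReq(readClusterOptions func() (clusterOptions, error)) (createReq, error) {
	opts, err := readClusterOptions()
	if err != nil {
		return createReq{}, err
	}
	return createReq{
		Size:    opts.BlockHostingVolumeSize,
		Replica: opts.BlockHostingVolumeReplica,
	}, nil
}
```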
If a volume is cloned from another volume, the bricks of the cloned volume
belong to the same LV thinpool as the original volume. So before
removing the thinpool, a check was made to confirm that the number of LVs in
that thinpool is zero. This check caused a hang when parallel
volume delete commands were issued.
With this PR, the LV count check is removed; instead, the failure of the
thinpool delete is captured and handled gracefully.
This PR also adds support for gracefully deleting the volume if the LV or
thinpool was already deleted by a previously failed transaction or a manual delete.
Signed-off-by: Aravinda VK <avishwan@redhat.com>
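A sketch of the graceful handling, with an injected lvremove wrapper; the error texts matched here are approximate and only illustrate the idea:

```go
package lvmutils

import "strings"

// deleteThinPool attempts the thinpool removal via the supplied lvremove
// wrapper and interprets the failure instead of counting LVs beforehand.
func deleteThinPool(lvremove func() (output string, err error)) error {
	out, err := lvremove()
	switch {
	case err == nil:
		return nil
	case strings.Contains(out, "Failed to find logical volume"):
		// Already deleted by a previously failed transaction or a manual delete.
		return nil
	case strings.Contains(out, "dependent volume"):
		// A cloned volume still uses this thinpool; skip it now and let the
		// last volume sharing the pool remove it.
		return nil
	default:
		return err
	}
}
```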
- Moved the hosts parameter from a mandatory to an optional field in the
CreateBlockVolume method of the BlockProvider interface, since the hosts
field may not be required for other block providers such as loopback.
- Added a common function for updating the available hosting volume size
to avoid duplicated code.
Signed-off-by: Oshank Kumar <okumar@redhat.com>
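A rough sketch of the intended shape, with stand-in type and function names rather than the actual gd2 definitions:

```go
package blockprovider

// BlockVolumeReq carries the create parameters; Hosts is optional, since a
// provider such as loopback can ignore it entirely.
type BlockVolumeReq struct {
	Name  string
	Size  uint64
	Hosts []string // optional
}

type BlockVolume struct {
	Name          string
	Size          uint64
	HostingVolume string
}

type BlockProvider interface {
	CreateBlockVolume(req BlockVolumeReq) (BlockVolume, error)
	DeleteBlockVolume(name string) error
}

// UpdateHostingVolumeFreeSize is the single place where the available size of
// a hosting volume is adjusted; providers call it instead of duplicating the
// bookkeeping.
func UpdateHostingVolumeFreeSize(freeSize map[string]uint64, hostVol string, delta int64) {
	if delta < 0 {
		freeSize[hostVol] -= uint64(-delta)
		return
	}
	freeSize[hostVol] += uint64(delta)
}
```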
- Added the block volume provider name as a path parameter in the URL.
- The block provider will no longer be responsible for managing host volumes.
Signed-off-by: Oshank Kumar <okumar@redhat.com>
It is more user friendly to have total, free and used size fields in the
device response:
total size = total size of the device.
free size = size of the device available for volume creation.
used size = space used for volume creation.
Signed-off-by: Madhu Rajanna <mrajanna@redhat.com>
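An illustrative response shape (field and JSON tag names are assumptions, not the exact gd2 API):

```go
package deviceapi

// Info is a sketch of the device response with the three size fields.
type Info struct {
	Device    string `json:"device"`
	TotalSize uint64 `json:"total-size"` // total size of the device
	FreeSize  uint64 `json:"free-size"`  // size available for volume creation
	UsedSize  uint64 `json:"used-size"`  // space used for volume creation
}

// usedSize keeps the invariant total = free + used.
func usedSize(total, free uint64) uint64 {
	return total - free
}
```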
The category prefix is optional when setting volume options. For example,
to set replicate.eager-lock, we can pass `replicate.eager-lock` or
`cluster/replicate.eager-lock`. With this PR, options are always stored
with the category prefix.
Also fixed the issue of losing template variables when xlator default
options and volinfo.Options are loaded (Fixes: #1397).
Signed-off-by: Aravinda VK <avishwan@redhat.com>
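A minimal sketch of the normalization, with a small illustrative category map standing in for the real xlator registry:

```go
package options

import "strings"

// xlatorCategory is an illustrative subset of the xlator-to-category mapping.
var xlatorCategory = map[string]string{
	"replicate":    "cluster",
	"write-behind": "performance",
}

// normalizeOptionName accepts either "replicate.eager-lock" or
// "cluster/replicate.eager-lock" and always returns the name with the
// category prefix, which is the form stored after this change.
func normalizeOptionName(name string) string {
	if strings.Contains(name, "/") {
		return name // category prefix already present
	}
	xlator := strings.SplitN(name, ".", 2)[0]
	if category, ok := xlatorCategory[xlator]; ok {
		return category + "/" + name
	}
	return name
}
```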
- Added a group profile for transactional DB workloads
- Fixed the option names used in other group profiles
- Fixed the validation issues related to setting the options
glusterfs PR to support enable/disable of xlators:
https://review.gluster.org/21813
Fixes: #1250
Signed-off-by: Aravinda VK <avishwan@redhat.com>
If a device with the same name exists on a different peer, it is possible
to get the available size information of the device from that peer
instead of getting the information locally.
Signed-off-by: Aravinda VK <avishwan@redhat.com>
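A small sketch of distinguishing devices by peer as well as by name, so that size lookups target the intended peer (types and names are illustrative):

```go
package deviceutils

// deviceKey identifies a device by the peer it belongs to, not just its name,
// so /dev/vdb on one peer is never confused with /dev/vdb on another.
type deviceKey struct {
	PeerID string
	Name   string
}

type deviceInfo struct {
	AvailableSize uint64
}

// availableSize returns the free size of the named device on the given peer only.
func availableSize(devices map[deviceKey]deviceInfo, peerID, device string) (uint64, bool) {
	info, ok := devices[deviceKey{PeerID: peerID, Name: device}]
	return info.AvailableSize, ok
}
```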
- gsyncd path defaulted to /usr/libexec/glusterfs/gsyncd
- Fixes remote REST API auth issues
- Workaround to make it work with the marker xlator
Signed-off-by: Aravinda VK <avishwan@redhat.com>
The change is in the way we pass the xml flag to the glfsheal binary.
Previously it was passed as a plain 'xml'; after the change
https://review.gluster.org/#/c/glusterfs/+/21501/
it is passed as '--xml' to glfsheal.
Signed-off-by: Vishal Pandey <vpandey@redhat.com>
Moved lvm-related functions from `$SRC/glusterd2/snapshot/lvm`
and `$SRC/plugins/device/deviceutils/` to `$SRC/pkg/lvmutils`.
Also moved fs-related functions from `plugins/deviceutils` to
`$SRC/pkg/fsutils`.
Fixes: #1187
Signed-off-by: Aravinda VK <avishwan@redhat.com>
This patch creates the arbiter brick for the smart volume as per the
calculation:
brick size = 4 KB * ( size in KB of largest data brick in volume or
replica set / average file size in KB)
Signed-off-by: Hari Gowtham <hgowtham@redhat.com>
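A small sketch of that calculation (function and parameter names are illustrative); for example, a 1 TiB data brick with a 64 KB average file size gives 4 KB * 16777216 expected files = 64 GiB for the arbiter brick:

```go
package arbiter

// arbiterBrickSizeKB follows the formula above: 4 KB of arbiter space per
// expected file, where the file count is estimated from the largest data
// brick in the replica set and the average file size.
func arbiterBrickSizeKB(largestDataBrickKB, avgFileSizeKB uint64) uint64 {
	if avgFileSizeKB == 0 {
		return 0
	}
	expectedFiles := largestDataBrickKB / avgFileSizeKB
	return 4 * expectedFiles
}
```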
During a snapshot restore, we need to delete the parent
LVs if they are auto-provisioned or snapshot-provisioned.
Signed-off-by: Mohammed Rafi KC <rkavunga@redhat.com>
Correct the request and response for the methods GeoReplicationConfigGet
and GeoReplicationConfigSet. Fixed the code in plugins/georeplication/init.go
and regenerated the endpoints.md file.
Signed-off-by: Sidharth Anupkrishnan <sanupkri@redhat.com>