mirror of https://github.com/gluster/glusterdocs.git synced 2026-02-05 15:47:01 +01:00

Editorial fixes (#452)

* Small wording correction.
* Fix presentation of bulleted list
* Additional minor wording fix
* Complete command syntax to set volume options
This commit is contained in:
Geert Janssens
2019-01-28 05:47:22 +01:00
committed by Nigel Babu
parent 9f2c979b50
commit 5ded555381
3 changed files with 17 additions and 15 deletions

View File

@@ -22,7 +22,7 @@ available.
> **Note**
>
-> It is recommended that you to set server.allow-insecure option to ON if
+> It is recommended to set server.allow-insecure option to ON if
> there are too many bricks in each volume or if there are too many
> services which have already utilized all the privileged ports in the
> system. Turning this option ON allows ports to accept/reject messages
@@ -31,7 +31,7 @@ available.
Tune volume options using the following command:
-`# gluster volume set <VOLNAME>`
+`# gluster volume set <VOLNAME> <OPT-NAME> <OPT-VALUE>`
For example, to specify the performance cache size for test-volume:
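The example the hunk refers to is cut off by the diff context; as a sketch of the completed syntax, setting that option for a hypothetical volume named test-volume would look like this (the volume name and 256MB value are illustrative, not taken from this commit):

```shell
# Sketch only: "test-volume" and 256MB are illustrative values.
gluster volume set test-volume performance.cache-size 256MB
```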
@@ -501,7 +501,7 @@ share of files.
A fix-layout rebalance will only fix the layout changes and does not
migrate data. If you want to migrate the existing data,
-use `gluster volume rebalance start` command to rebalance data among
+use `gluster volume rebalance <volume> start` command to rebalance data among
the servers.
**To rebalance a volume to fix layout**
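As a hedged sketch of the two modes the hunk distinguishes, using a hypothetical volume name test-volume:

```shell
# Fix the layout only (no data is migrated):
gluster volume rebalance test-volume fix-layout start

# Fix the layout and migrate existing data among the servers:
gluster volume rebalance test-volume start

# Monitor progress:
gluster volume rebalance test-volume status
```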

View File

@@ -72,7 +72,7 @@ means each server will house a copy of the data.
important when you have more bricks.
It is possible (as of the most current release as of this writing, Gluster 3.3)
-to specify the bricks in a such a way that you would make both copies of the data reside on a
+to specify the bricks in such a way that you would make both copies of the data reside on a
single node. This would make for an embarrassing explanation to your
boss when your bulletproof, completely redundant, always on super
cluster comes to a grinding halt when a single point of failure occurs.
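The ordering pitfall described above can be sketched as follows; the hostnames and brick paths are hypothetical, and replica sets are formed from adjacent bricks in the order given on the command line:

```shell
# Correct: adjacent bricks alternate servers, so each replica pair
# spans two nodes (hostnames and paths are illustrative).
gluster volume create gv0 replica 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server1:/export/brick2 server2:/export/brick2

# Wrong: listing both of server1's bricks first would pair
# server1:/export/brick1 with server1:/export/brick2, putting
# both copies of that data on a single node.
```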

View File

@@ -75,17 +75,19 @@ this is accomplished without a centralized metadata server.
Most likely, yes. People use Gluster for all sorts of things. You are
encouraged to ask around in our IRC channel or Q&A forums to see if
anyone has tried something similar. That being said, there are a few
-places where Gluster is going to need more consideration than others. -
-Accessing Gluster from SMB/CIFS is often going to be slow by most
-peoples standards. If you only moderate access by users, then it most
-likely wont be an issue for you. On the other hand, adding enough
-Gluster servers into the mix, some people have seen better performance
-with us than other solutions due to the scale out nature of the
-technology - Gluster does not support so called “structured data”,
-meaning live, SQL databases. Of course, using Gluster to backup and
-restore the database would be fine - Gluster is traditionally better
-when using file sizes at of least 16KB (with a sweet spot around 128KB
-or so).
+places where Gluster is going to need more consideration than others.
+- Accessing Gluster from SMB/CIFS is often going to be slow by most
+people's standards. If you only have moderate access by users, then it most
+likely won't be an issue for you. On the other hand, adding enough
+Gluster servers into the mix, some people have seen better performance
+with us than other solutions due to the scale-out nature of the
+technology
+- Gluster does not support so-called “structured data”, meaning
+live, SQL databases. Of course, using Gluster to back up and
+restore the database would be fine
+- Gluster is traditionally better when using file sizes of at least 16KB
+(with a sweet spot around 128KB or so).
#### What is the cost and complexity required to set up a cluster?