diff --git a/docs/Administrator Guide/Managing Volumes.md b/docs/Administrator Guide/Managing Volumes.md
index 66e7306..df05cf0 100644
--- a/docs/Administrator Guide/Managing Volumes.md
+++ b/docs/Administrator Guide/Managing Volumes.md
@@ -22,7 +22,7 @@ available.
 
 > **Note**
 >
-> It is recommended that you to set server.allow-insecure option to ON if
+> It is recommended to set the server.allow-insecure option to ON if
 > there are too many bricks in each volume or if there are too many
 > services which have already utilized all the privileged ports in the
 > system. Turning this option ON allows ports to accept/reject messages
@@ -31,7 +31,7 @@ available.
 
 Tune volume options using the following command:
 
-`# gluster volume set ` 
+`# gluster volume set `
 
 For example, to specify the performance cache size for test-volume:
 
@@ -501,7 +501,7 @@ share of files.
 
 A fix-layout rebalance will only fix the layout changes and does not
 migrate data. If you want to migrate the existing data,
-use `gluster volume rebalance start` command to rebalance data among 
+use the `gluster volume rebalance start` command to rebalance data among
 the servers.
 
 **To rebalance a volume to fix layout**
diff --git a/docs/Install-Guide/Configure.md b/docs/Install-Guide/Configure.md
index 61a8c04..e4e2cb2 100644
--- a/docs/Install-Guide/Configure.md
+++ b/docs/Install-Guide/Configure.md
@@ -72,7 +72,7 @@ means each server will house a copy of the data.
 important when you have more bricks.
 
 It is possible (as of the most current release as of this writing, Gluster 3.3)
-to specify the bricks in a such a way that you would make both copies of the data reside on a
+to specify the bricks in such a way that you would make both copies of the data reside on a
 single node. This would make for an embarrassing explanation to your boss when your
 bulletproof, completely redundant, always on super cluster comes to a grinding halt
 when a single point of failure occurs.
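The commands touched by these patches can be exercised together. The following is a minimal sketch, assuming a hypothetical two-node trusted pool (`node1`, `node2`), a volume named `test-volume`, and brick paths under `/export` — substitute your own names:

```shell
# Allow clients to connect from unprivileged (>1024) ports, per the
# note about volumes with many bricks exhausting privileged ports
gluster volume set test-volume server.allow-insecure on

# Tune a volume option, e.g. the performance cache size
gluster volume set test-volume performance.cache-size 256MB

# Fix the layout only (no data is migrated) ...
gluster volume rebalance test-volume fix-layout start

# ... or also migrate existing data among the servers,
# then check progress
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status

# When creating a replica 2 volume, order the bricks so each replica
# pair spans both nodes; listing two bricks from the same node
# consecutively would place both copies on one machine (the single
# point of failure the Configure.md patch warns about)
gluster volume create gv0 replica 2 \
  node1:/export/brick1 node2:/export/brick1 \
  node1:/export/brick2 node2:/export/brick2
```

With `replica 2`, consecutive bricks in the argument list form a replica pair, which is why the bricks alternate between nodes above.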
diff --git a/docs/Install-Guide/Overview.md b/docs/Install-Guide/Overview.md
index 9d56eb7..0f6dd6b 100644
--- a/docs/Install-Guide/Overview.md
+++ b/docs/Install-Guide/Overview.md
@@ -75,17 +75,19 @@ this is accomplished without a centralized metadata server.
 
 Most likely, yes. People use Gluster for all sorts of things. You are
 encouraged to ask around in our IRC channel or Q&A forums to see if
 anyone has tried something similar. That being said, there are a few
-places where Gluster is going to need more consideration than others. 
-
-Accessing Gluster from SMB/CIFS is often going to be slow by most
-people’s standards. If you only moderate access by users, then it most
-likely won’t be an issue for you. On the other hand, adding enough
-Gluster servers into the mix, some people have seen better performance
-with us than other solutions due to the scale out nature of the
-technology - Gluster does not support so called “structured data”,
-meaning live, SQL databases. Of course, using Gluster to backup and
-restore the database would be fine - Gluster is traditionally better
-when using file sizes at of least 16KB (with a sweet spot around 128KB
-or so).
+places where Gluster is going to need more consideration than others.
+
+- Accessing Gluster from SMB/CIFS is often going to be slow by most
+  people’s standards. If you have only moderate access by users, it most
+  likely won’t be an issue for you. On the other hand, with enough
+  Gluster servers in the mix, some people have seen better performance
+  with us than with other solutions, due to the scale-out nature of the
+  technology.
+- Gluster does not support so-called “structured data”, meaning
+  live, SQL databases. Of course, using Gluster to back up and
+  restore a database would be fine.
+- Gluster is traditionally better when using file sizes of at least 16KB
+  (with a sweet spot around 128KB or so).
+
 #### What is the cost and complexity required to set up cluster?