mirror of https://github.com/gluster/glusterdocs.git synced 2026-02-05 15:47:01 +01:00

Update Setup-Bare-metal.md, Setup-AWS.md, Configure.md, GeoReplication.md (#635)

* Update Setup-Bare-metal.md

* Updated Set-AWS.md

* Added fdisk -l command in Configure.md

* Command + grammatical issues

* Added backticks around `<Secondary_host>` in GeoReplication.md file as it was considered as a tag by the md

Co-authored-by: root <root@aujjwal.remote.csb>
aujjwal-redhat
2021-01-28 09:48:28 +05:30
committed by GitHub
parent 769b8069a8
commit 4d49646b3d
4 changed files with 10 additions and 10 deletions

GeoReplication.md

@@ -166,7 +166,7 @@ disable that prefix,
 Create a geo-rep session between Primary and Secondary volume using the
 following command. The node in which this command is executed and the
-<Secondary_host> specified in the command should have password less ssh
+`<Secondary_host>` specified in the command should have password less ssh
 setup between them. The push-pem option actually uses the secret pem
 pub file created earlier and establishes geo-rep specific password
 less ssh between each node in Primary to each node of Secondary.
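The hunk above is truncated just before the command the paragraph refers to. As a sketch of what that command looks like, based on the Gluster geo-replication documentation (the volume names and host are placeholders, so treat the exact values as assumptions):

```console
# Create the geo-rep session from a Primary node; push-pem distributes
# the secret pem pub file to the Secondary nodes (placeholder names).
# gluster volume geo-replication primary_vol secondary_host::secondary_vol create push-pem
```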

Configure.md

@@ -29,7 +29,7 @@ that the first node has already been added.
 ### Partition the disk
-Assuming you have a empty disk at `/dev/sdb`:
+Assuming you have an empty disk at `/dev/sdb`: *(You can check the partitions on your system using* `fdisk -l`*)*
 ```console
 # fdisk /dev/sdb

Setup-Bare-metal.md

@@ -3,7 +3,7 @@
 ### Setup, Method 2 Setting up on physical servers
 To set up Gluster on physical servers, we recommend two servers of very
-modest specifications (2 CPUs, 2GB of RAM, 1GBE). Since we are dealing
+modest specifications (2 CPUs, 2GB of RAM, 1GBE). Since we are dealing
 with physical hardware here, keep in mind, what we are showing here is
 for testing purposes. In the end, remember that forces beyond your
 control (aka, your bosses boss...) can force you to take that the “just
@@ -28,20 +28,20 @@ practices we mentioned before:
 With the explosion of commodity hardware, you dont need to be a
 hardware expert these days to deploy a server. Although this is
 generally a good thing, it also means that paying attention to some
-important, performance impacting BIOS settings is commonly ignored. Several
+important, performance-impacting BIOS settings is commonly ignored. Several
 points that might cause issues when if you're unaware of them:
 - Most manufacturers enable power saving mode by default. This is a
 great idea for servers that do not have high-performance
-requirements. For the average storage server, the performance impact
+requirements. For the average storage server, the performance-impact
 of the power savings is not a reasonable tradeoff
 - Newer motherboards and processors have lots of nifty features!
 Enhancements in virtualization, newer ways of doing predictive
 algorithms and NUMA are just a few to mention. To be safe, many
 manufactures ship hardware with settings meant to work with as
 massive a variety of workloads and configurations as they have
-customers. One issue you could face is when you set up that blazing
-fast 10GBE card you were so thrilled about installing? In many
+customers. One issue you could face is when you set up that blazing-fast
+10GBE card you were so thrilled about installing? In many
 cases, it would end up being crippled by a default 1x speed put in
 place on the PCI-E bus by the motherboard.
@@ -68,5 +68,5 @@ resolved with a simple driver or firmware update. As often as not, these
 updates affect the two most critical pieces of hardware on a machine you
 want to use for networked storage - the RAID controller and the NIC's.
-Once you have setup the servers and installed the OS, you are ready to
+Once you have set up the servers and installed the OS, you are ready to
 move on to the [install](./Install.md) section.

Setup-AWS.md

@@ -37,7 +37,7 @@ Other notes:
 anyone is interested in this please let us know since we are always
 looking to write articles on the most requested features and
 questions.
-- Using EBS volumes and Elastic IPs is also recommended in
+- Using EBS volumes and Elastic IPs are also recommended in
 production. For testing, you can safely ignore these as long as you
 are aware that the data could be lost at any moment, so make sure
 your test deployment is just that, testing only.
@@ -52,7 +52,7 @@ Other notes:
 get Gluster running again using the default EC2 configuration. If a
 node is shut down, it can mean absolute loss of the node (depending
 on how you set things up). This is well beyond the scope of this
-document, but is discussed in any number of AWS related forums and
+document but is discussed in any number of AWS-related forums and
 posts. Since I found out the hard way myself (oh, so you read the
 manual every time?!), I thought it worth at least mentioning.