From 638e7c1149a4b0282db773f277c365f160165fc7 Mon Sep 17 00:00:00 2001
From: Evan Verworn
Date: Mon, 13 Jun 2016 13:13:58 -0400
Subject: [PATCH] clean up code examples

---
 Administrator Guide/Setting Up Volumes.md | 108 +++++++++++-----------
 1 file changed, 54 insertions(+), 54 deletions(-)

diff --git a/Administrator Guide/Setting Up Volumes.md b/Administrator Guide/Setting Up Volumes.md
index 9e7f62a..0289af3 100644
--- a/Administrator Guide/Setting Up Volumes.md
+++ b/Administrator Guide/Setting Up Volumes.md
@@ -14,86 +14,86 @@ start it before attempting to mount it.
 To create a thinly provisioned logical volume, proceed with the
 following steps:
 
-  1. Create a physical volume(PV) by using the pvcreate command.
-     For example:
+ 1. Create a physical volume (PV) by using the `pvcreate` command.
+    For example:
 
-     `pvcreate --dataalignment 1280K /dev/sdb`
+    `pvcreate --dataalignment 1280K /dev/sdb`
 
-     Here, /dev/sdb is a storage device.
-     Use the correct dataalignment option based on your device.
+    Here, /dev/sdb is a storage device.
+    Use the correct dataalignment option based on your device.
 
-     >**Note**
-     >
-     >The device name and the alignment value will vary based on the device you are using.
+    >**Note**
+    >
+    >The device name and the alignment value will vary based on the device you are using.
 
-  2. Create a Volume Group (VG) from the PV using the vgcreate command:
+ 2. Create a Volume Group (VG) from the PV using the `vgcreate` command:
     For example:
 
-     `vgcreate --physicalextentsize 128K gfs_vg /dev/sdb`
+    `vgcreate --physicalextentsize 128K gfs_vg /dev/sdb`
 
-     It is recommended that only one VG must be created from one storage device.
+    It is recommended that only one VG be created from one storage device.
 
-  3. Create a thin-pool using the following commands:
+ 3. Create a thin-pool using the following commands:
 
-     1. Create an LV to serve as the metadata device using the following command:
+    1. 
Create an LV to serve as the metadata device using the following command:
 
-        `lvcreate -L metadev_sz --name metadata_device_name VOLGROUP`
+       `lvcreate -L metadev_sz --name metadata_device_name VOLGROUP`
 
-        For example:
+       For example:
 
-        `lvcreate -L 16776960K --name gfs_pool_meta gfs_vg`
+       `lvcreate -L 16776960K --name gfs_pool_meta gfs_vg`
 
-     2. Create an LV to serve as the data device using the following command:
+    2. Create an LV to serve as the data device using the following command:
 
-        `lvcreate -L datadev_sz --name thin_pool VOLGROUP`
+       `lvcreate -L datadev_sz --name thin_pool VOLGROUP`
 
-        For example:
+       For example:
 
-        `lvcreate -L 536870400K --name gfs_pool gfs_vg`
+       `lvcreate -L 536870400K --name gfs_pool gfs_vg`
 
-     3. Create a thin pool from the data LV and the metadata LV using the following command:
+    3. Create a thin pool from the data LV and the metadata LV using the following command:
 
-        `lvconvert --chunksize STRIPE_WIDTH --thinpool VOLGROUP/thin_pool --poolmetadata VOLGROUP/metadata_device_name`
+       `lvconvert --chunksize STRIPE_WIDTH --thinpool VOLGROUP/thin_pool --poolmetadata VOLGROUP/metadata_device_name`
 
-        For example:
+       For example:
 
-        `lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool --poolmetadata gfs_vg/gfs_pool_meta`
+       `lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool --poolmetadata gfs_vg/gfs_pool_meta`
 
-        >**Note**
-        >
-        >By default, the newly provisioned chunks in a thin pool are zeroed to prevent data leaking between different block devices.
+       >**Note**
+       >
+       >By default, the newly provisioned chunks in a thin pool are zeroed to prevent data leaking between different block devices. This zeroing can be disabled for better performance with:
 
-        `lvchange --zero n VOLGROUP/thin_pool`
+       `lvchange --zero n VOLGROUP/thin_pool`
 
-        For example:
+       For example:
 
-        `lvchange --zero n gfs_vg/gfs_pool`
+       `lvchange --zero n gfs_vg/gfs_pool`
 
-  4. Create a thinly provisioned volume from the previously created pool using the lvcreate command:
+ 4. 
Create a thinly provisioned volume from the previously created pool using the lvcreate command:
 
-     For example:
+    For example:
 
-     `lvcreate -V 1G -T gfs_vg/gfs_pool -n gfs_lv`
+    `lvcreate -V 1G -T gfs_vg/gfs_pool -n gfs_lv`
 
-     It is recommended that only one LV should be created in a thin pool.
+    It is recommended that only one LV be created in a thin pool.
 
 Format bricks using the supported XFS configuration, mount the bricks,
 and verify the bricks are mounted correctly.
 
-  1. Run # mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 DEVICE to format the bricks to the supported XFS file system format. Here, DEVICE is the thin LV. The inode size is set to 512 bytes to accommodate for the extended attributes used by GlusterFS.
+ 1. Run `# mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 DEVICE` to format the bricks to the supported XFS file system format. Here, DEVICE is the thin LV. The inode size is set to 512 bytes to accommodate the extended attributes used by GlusterFS.
 
-     Run # mkdir /mountpoint to create a directory to link the brick to.
+    Run `# mkdir /mountpoint` to create a directory to link the brick to.
 
-     Add an entry in /etc/fstab:
+    Add an entry in /etc/fstab:
 
-     `/dev/gfs_vg/gfs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2`
+    `/dev/gfs_vg/gfs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2`
 
-     Run # mount /mountpoint to mount the brick.
+    Run `# mount /mountpoint` to mount the brick.
 
-     Run the df -h command to verify the brick is successfully mounted:
+    Run the `df -h` command to verify the brick is successfully mounted:
 
-     `# df -h
-     /dev/gfs_vg/gfs_lv 16G 1.2G 15G 7% /exp1`
+    `# df -h
+    /dev/gfs_vg/gfs_lv 16G 1.2G 15G 7% /exp1`
 
- Volumes of the following types can be created in your storage environment:
@@ -165,7 +165,7 @@ Format bricks using the supported XFS configuration, mount the bricks, and verif
     Creation of test-volume has been successful
     Please start the volume to access data.
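Taken together, the brick-preparation steps in the hunks above can be sketched as a single script. This is a non-authoritative sketch: the device name, the `gfs_vg`/`gfs_pool`/`gfs_lv` names, and the size and alignment values are the examples from the text and must be adapted to your own hardware.

```shell
#!/bin/sh
# Sketch of the thin-LV brick preparation steps above, collected into one
# script. /dev/sdb, gfs_vg, gfs_pool, and the 1280K/128K values are the
# example values from the text; adjust them for your device.
set -e

DEV=/dev/sdb        # backing storage device (example value)
MNT=/mountpoint     # brick mount point (example value)

pvcreate --dataalignment 1280K "$DEV"
vgcreate --physicalextentsize 128K gfs_vg "$DEV"

# Metadata LV and data LV, then converted into a thin pool
lvcreate -L 16776960K --name gfs_pool_meta gfs_vg
lvcreate -L 536870400K --name gfs_pool gfs_vg
lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool \
          --poolmetadata gfs_vg/gfs_pool_meta
lvchange --zero n gfs_vg/gfs_pool   # optional: disable zeroing of new chunks

# Thin LV, XFS with 512-byte inodes for GlusterFS xattrs, persistent mount
lvcreate -V 1G -T gfs_vg/gfs_pool -n gfs_lv
mkfs.xfs -f -i size=512 -n size=8192 -d su=128K,sw=10 /dev/gfs_vg/gfs_lv
mkdir -p "$MNT"
echo "/dev/gfs_vg/gfs_lv $MNT xfs rw,inode64,noatime,nouuid 1 2" >> /etc/fstab
mount "$MNT"
df -h "$MNT"
```

The script requires root and a real block device, so run it only on a test machine after reviewing each value.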
-##Creating Distributed Volumes
+## Creating Distributed Volumes
 
 In a distributed volume, files are spread randomly across the bricks
 in the volume. Use distributed volumes where you need to scale storage and
@@ -223,7 +223,7 @@ hardware/software layers.
 > Make sure you start your volumes before you try to mount them or
 > else client operations after the mount will hang.
 
-##Creating Replicated Volumes
+## Creating Replicated Volumes
 
 Replicated volumes create copies of files across multiple bricks in
 the volume. You can use replicated volumes in environments where
@@ -266,7 +266,7 @@ high-availability and high-reliability are critical.
 > Use the `force` option at the end of the command if you want to create the volume in this case.
 
-###Arbiter configuration for replica volumes
+### Arbiter configuration for replica volumes
 
 Arbiter volumes are replica 3 volumes where the 3rd brick acts as the arbiter brick.
 This configuration has mechanisms that prevent occurrence of split-brains.
@@ -278,7 +278,7 @@ More information about this configuration can be found at *Features : afr-arbite
 Note that the arbiter configuration for replica 3 can be used to create
 distributed-replicate volumes as well.
 
-##Creating Striped Volumes
+## Creating Striped Volumes
 
 Striped volumes stripe data across bricks in the volume. For best
 results, you should use striped volumes only in high concurrency
@@ -312,7 +312,7 @@ environments accessing very large files.
 > Make sure you start your volumes before you try to mount them or
 > else client operations after the mount will hang.
 
-##Creating Distributed Striped Volumes
+## Creating Distributed Striped Volumes
 
 Distributed striped volumes stripe files across two or more nodes in
 the cluster. For best results, you should use distributed striped
@@ -348,7 +348,7 @@ concurrency environments accessing very large files is critical.
 > Make sure you start your volumes before you try to mount them or
 > else client operations after the mount will hang.
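As a sketch of the volume types covered in the hunks above, the corresponding `gluster volume create` invocations look like the following. The `server1`/`server2`/`server3` hostnames, brick paths, and volume names are illustrative assumptions, not values from the patch.

```shell
# Distributed volume (files spread across bricks, no redundancy).
# Hostnames, paths, and volume names below are illustrative.
gluster volume create dist-volume transport tcp \
    server1:/exp1 server2:/exp2

# Two-way replicated volume:
gluster volume create rep-volume replica 2 transport tcp \
    server1:/exp3 server2:/exp4

# Replica 3 with the third brick acting as the arbiter:
gluster volume create arb-volume replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb
```

All three commands assume the servers are already part of one trusted storage pool.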
-##Creating Distributed Replicated Volumes
+## Creating Distributed Replicated Volumes
 
 Distributed replicated volumes distribute files across replicated bricks
 in the volume. You can use
 distributed replicated volumes in environments where the requirement is
@@ -406,7 +406,7 @@ environments.
 > Use the `force` option at the end of the command if you want to create the volume in this case.
 
-##Creating Distributed Striped Replicated Volumes
+## Creating Distributed Striped Replicated Volumes
 
 Distributed striped replicated volumes distribute striped data across
 replicated bricks in the cluster. For best results, you should use
@@ -450,7 +450,7 @@ Map Reduce workloads.
 > Use the `force` option at the end of the command if you want to create the volume in this case.
 
-##Creating Striped Replicated Volumes
+## Creating Striped Replicated Volumes
 
 Striped replicated volumes stripe data across replicated bricks in the
 cluster. For best results, you should use striped replicated volumes in
@@ -501,7 +501,7 @@ of this volume type is supported only for Map Reduce workloads.
 > Use the `force` option at the end of the command if you want to create the volume in this case.
 
-##Creating Dispersed Volumes
+## Creating Dispersed Volumes
 
 Dispersed volumes are based on erasure codes. They stripe the encoded data
 of files, with some redundancy added, across multiple bricks in the volume. You
@@ -619,7 +619,7 @@ a RMW cycle for many writes (of course this always depends on the use case).
 > Use the `force` option at the end of the command if you want to create the volume in this case.
 
-##Creating Distributed Dispersed Volumes
+## Creating Distributed Dispersed Volumes
 
 Distributed dispersed volumes are the equivalent of distributed replicated
 volumes, but using dispersed subvolumes instead of replicated ones.
@@ -656,7 +656,7 @@ volumes, but using dispersed subvolumes instead of replicated ones.
 > Use the `force` option at the end of the command if you want to create the volume in this case.
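The distributed-replicated and dispersed variants in the hunks above can likewise be sketched with `gluster volume create`; hostnames, brick paths, and volume names are again illustrative assumptions.

```shell
# Distributed replicated: 4 bricks with replica 2 form two replica pairs,
# and files are distributed across the pairs. (Illustrative hostnames/paths.)
gluster volume create dist-rep replica 2 transport tcp \
    server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4

# Dispersed: 3 bricks, 2 data + 1 redundancy, so any one brick may fail:
gluster volume create disp-volume disperse 3 redundancy 1 \
    server1:/bricks/d1 server2:/bricks/d2 server3:/bricks/d3

# Distributed dispersed: 6 bricks form two 2+1 dispersed subvolumes,
# with files distributed across the subvolumes:
gluster volume create dist-disp disperse 3 redundancy 1 \
    server1:/b1 server2:/b2 server3:/b3 server4:/b4 server5:/b5 server6:/b6
```

In each case the brick count must be a multiple of the replica or disperse count, which is how Gluster decides to build the distributed variant.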
-##Starting Volumes +## Starting Volumes You must start your volumes before you try to mount them. @@ -669,4 +669,4 @@ You must start your volumes before you try to mount them. For example, to start test-volume: # gluster volume start test-volume - Starting test-volume has been successful \ No newline at end of file + Starting test-volume has been successful
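Once created, a volume must be started before it can be mounted. A minimal end-to-end sketch, where the `server1` hostname and `/mnt/gluster` mount point are assumptions:

```shell
# Start the volume from the example above, confirm it, then mount it
# from a client. server1 and /mnt/gluster are illustrative values.
gluster volume start test-volume
gluster volume info test-volume

mkdir -p /mnt/gluster
mount -t glusterfs server1:/test-volume /mnt/gluster
```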