Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 21:46:22 +01:00

OSDOCS-1298: Separate Storage command and output blocks

Committed by: openshift-cherrypick-robot
Parent: 19e01ec1be
Commit: cf011be6ca
@@ -8,14 +8,14 @@
To set a StorageClass as the cluster-wide default, add
the following annotation to your StorageClass's metadata:

-[source.yaml]
+[source,yaml]
----
storageclass.kubernetes.io/is-default-class: "true"
----

For example:

-[source.yaml]
+[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
@@ -25,27 +25,27 @@ metadata:
...
----

This enables any Persistent Volume Claim (PVC) that does not specify a
specific volume to automatically be provisioned through the
default StorageClass.

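For reference, a claim picks up the default StorageClass simply by omitting `storageClassName`; a minimal sketch, with the claim name and size as illustrative placeholders:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # no storageClassName set, so the cluster-wide default is used
----
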
[NOTE]
====
The beta annotation `storageclass.beta.kubernetes.io/is-default-class` is
still working; however, it will be removed in a future release.
====

To set a StorageClass description, add the following annotation
to your StorageClass's metadata:

-[source.yaml]
+[source,yaml]
----
kubernetes.io/description: My StorageClass Description
----

For example:

-[source.yaml]
+[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass

@@ -29,8 +29,15 @@ rules:

. Add the ClusterRole to the ServiceAccount:
+
[source,terminal]
----
-$ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder
+$ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role>
----
+
.Example output
[source,terminal]
----
system:serviceaccount:kube-system:persistent-volume-binder
----

. Create the Azure File StorageClass:

@@ -13,9 +13,14 @@ StorageClass from `gp2` to `standard`.

. List the StorageClass:
+
[source,terminal]
----
$ oc get storageclass

----
+
.Example output
[source,terminal]
----
NAME            TYPE
gp2 (default)   kubernetes.io/aws-ebs <1>
standard        kubernetes.io/aws-ebs
@@ -26,6 +31,7 @@ standard kubernetes.io/aws-ebs
`storageclass.kubernetes.io/is-default-class` to `false` for the default
StorageClass:
+
[source,terminal]
----
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
----
@@ -33,15 +39,21 @@ $ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kube
. Make another StorageClass the default by adding or modifying the
annotation as `storageclass.kubernetes.io/is-default-class=true`.
+
[source,terminal]
----
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
----

. Verify the changes:
+
[source,terminal]
----
$ oc get storageclass

----
+
.Example output
[source,terminal]
----
NAME                 TYPE
gp2                  kubernetes.io/aws-ebs
standard (default)   kubernetes.io/aws-ebs

@@ -65,6 +65,7 @@ duplicate GIDs dispatched by the provisioner.
When heketi authentication is used, a Secret containing the admin key must
also exist.

[source,terminal]
----
oc create secret generic heketi-secret --from-literal=key=<password> -n <namespace> --type=kubernetes.io/glusterfs
----
@@ -72,6 +73,7 @@ oc create secret generic heketi-secret --from-literal=key=<password> -n <namespa
This results in the following configuration:

.heketi-secret.yaml
[source,yaml]
----
apiVersion: v1
kind: Secret

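The Secret definition above is truncated by the diff; a plausible full rendering of `heketi-secret.yaml`, assuming the admin key is stored base64-encoded under `data`, would be:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: <namespace>
data:
  key: <base64-encoded-password> # the admin key from the command above
type: kubernetes.io/glusterfs
----
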
@@ -39,6 +39,7 @@ spec:

. Create the object definition file that you saved in the previous step.
+
[source,terminal]
----
$ oc create -f my-csi-app.yaml
----

@@ -42,6 +42,7 @@ provisioned. Changing this value can result in data loss and Pod failure.

. Create the object definition file you saved in the previous step.
+
[source,terminal]
----
$ oc create -f cinder-persistentvolume.yaml
----

@@ -15,8 +15,13 @@ deployment configurations.

. Create a service account and add it to the SCC:
+
[source,terminal]
----
$ oc create serviceaccount <service_account>
----
+
[source,terminal]
----
$ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
----

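Workloads then opt in to the SCC by running under that service account; a minimal sketch of the relevant pod-template fragment, assuming a deployment-configuration-style template:

[source,yaml]
----
spec:
  template:
    spec:
      serviceAccountName: <service_account> # the account added to the SCC above
----
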
@@ -43,6 +43,7 @@ spec:
+
. Create the object you saved in the previous step by running the following command:
+
[source,terminal]
----
$ oc create -f pvc-clone.yaml
----
@@ -51,6 +52,7 @@ A new PVC `pvc-1-clone` is created.

. Verify that the volume clone was created and is ready by running the following command:
+
[source,terminal]
----
$ oc get pvc pvc-1-clone
----

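For reference, the verification command reports the clone as `Bound` once provisioning completes; a hypothetical output sketch using the `oc get pvc` column layout shown later in this commit, with the volume name, class, and age as placeholders:

[source,terminal]
----
NAME          STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc-1-clone   Bound    <volume-name>   1Gi        RWO            <storage-class>   2m
----
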
@@ -70,6 +70,7 @@ spec:
+
. Create the object you saved in the previous step by running the following command:
+
[source,terminal]
----
$ oc create -f pvc-manila.yaml
----
@@ -78,6 +79,7 @@ A new PVC is created.

. To verify that the volume was created and is ready, run the following command:
+
[source,terminal]
----
$ oc get pvc pvc-manila
----

@@ -63,12 +63,14 @@ When the Operator installation is finished, the Manila CSI driver is deployed on
.Verification steps
. Verify that the ManilaDriver CR was created successfully by entering the following command:
+
[source,terminal]
----
$ oc get all -n openshift-manila-csi-driver
----
+
-Example output:
+.Example output
+
[source,terminal]
----
NAME                                 READY   STATUS    RESTARTS   AGE
pod/csi-nodeplugin-nfsplugin-lzvpm   1/1     Running   0          18h
@@ -93,12 +95,14 @@ replicaset.apps/openstack-manila-csi-controllerplugin-7d4f5d985b 1 1

. Verify that the storage class was created successfully by entering the following command:
+
[source,terminal]
----
$ oc get storageclasses | grep -E "NAME|csi-manila-"
----
+
-Example output:
+.Example output
+
[source,terminal]
----
NAME              PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csi-manila-gold   manila.csi.openstack.org   Delete          Immediate           false                  18h

@@ -17,12 +17,26 @@ changes to the template.

* Create the MySQL template:
+
[source,terminal]
----
# oc new-app mysql-persistent
----
+
.Example output
[source,terminal]
----
--> Deploying template "openshift/mysql-persistent" to project default
...

----
+
[source,terminal]
----
# oc get pvc
----
+
.Example output
[source,terminal]
----
NAME    STATUS   VOLUME                                   CAPACITY
ACCESS MODES     STORAGECLASS                             AGE
mysql   Bound    kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi

@@ -41,6 +41,7 @@ deletionPolicy: Delete
+
. Create the object you saved in the previous step by entering the following command:
+
[source,terminal]
----
$ oc create -f volumesnapshotclass.yaml
----
@@ -68,6 +69,7 @@ spec:

. Create the object you saved in the previous step by entering the following command:
+
[source,terminal]
----
$ oc create -f volumesnapshot-dynamic.yaml
----
@@ -92,6 +94,7 @@ spec:

. Create the object you saved in the previous step by entering the following command:
+
[source,terminal]
----
$ oc create -f volumesnapshot-manual.yaml
----
@@ -101,6 +104,7 @@ After the snapshot has been created in the cluster, additional details about the

. To display details about the volume snapshot that was created, enter the following command:
+
[source,terminal]
----
$ oc describe volumesnapshot mysnap
----
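
For orientation, the readiness flag referenced in the next hunk appears under `Status` in the `oc describe volumesnapshot` output; a trimmed, hypothetical sketch with placeholder values:

[source,terminal]
----
Name:         mysnap
Status:
  Bound Volume Snapshot Content Name:  <snapcontent-name>
  Ready To Use:                        true
----
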
@@ -132,6 +136,7 @@ If the value is set to `false`, the snapshot was created. However, the storage b

. To verify that the volume snapshot was created, enter the following command:
+
[source,terminal]
----
$ oc get volumesnapshotcontent
----

@@ -44,6 +44,7 @@ spec:
. Create a PVC by entering the following command:
+
[source,terminal]
----
$ oc create -f pvc-restore.yaml
----
@@ -51,6 +52,7 @@ $ oc create -f pvc-restore.yaml
. Verify that the restored PVC has been created by entering the following command:
+
[source,terminal]
----
$ oc get pvc
----

@@ -36,6 +36,7 @@ A Pod that uses a hostPath volume must be referenced by manual (static) provisio

. Create the PV from the file:
+
[source,terminal]
----
$ oc create -f pv.yaml
----
@@ -59,6 +60,7 @@ spec:

. Create the PVC from the file:
+
[source,terminal]
----
$ oc create -f pvc.yaml
----

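A Pod then consumes the claim by name; a minimal sketch, with the Pod name, image, and mount path as placeholders:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: <pod-name>
spec:
  containers:
  - name: app
    image: <image>
    volumeMounts:
    - name: hostpath-volume
      mountPath: /data
  volumes:
  - name: hostpath-volume
    persistentVolumeClaim:
      claimName: <pvc-name> # the claim created from pvc.yaml
----
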
@@ -101,15 +101,21 @@ local volumes.
. Create the local volume resource in your {product-title} cluster, specifying
the file you just created:
+
[source,terminal]
----
$ oc create -f <local-volume>.yaml
----

. Verify that the provisioner was created, and that the corresponding DaemonSets were created:
+
[source,terminal]
----
$ oc get all -n local-storage

----
+
.Example output
[source,terminal]
----
NAME                                      READY   STATUS    RESTARTS   AGE
pod/local-disks-local-provisioner-h97hj   1/1     Running   0          46m
pod/local-disks-local-provisioner-j4mnn   1/1     Running   0          46m
@@ -138,9 +144,14 @@ count is `0`, it indicates that the label selectors were invalid.

. Verify that the PersistentVolumes were created:
+
[source,terminal]
----
$ oc get pv

----
+
.Example output
[source,terminal]
----
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m

@@ -15,6 +15,7 @@ The Local Storage Operator is not installed in {product-title} by default. Use t

. Create the `local-storage` project:
+
[source,terminal]
----
$ oc new-project local-storage
----
@@ -25,10 +26,12 @@ You might want to use the Local Storage Operator to create volumes on master and
+
To allow local storage creation on master and infrastructure nodes, add a toleration to the DaemonSet by entering the following commands:
+
[source,terminal]
----
$ oc patch ds local-storage-local-diskmaker -n local-storage -p '{"spec": {"template": {"spec": {"tolerations":[{"operator": "Exists"}]}}}}'
----
+
[source,terminal]
----
$ oc patch ds local-storage-local-provisioner -n local-storage -p '{"spec": {"template": {"spec": {"tolerations":[{"operator": "Exists"}]}}}}'
----
@@ -92,6 +95,7 @@ such as `local-storage.yaml`:

. Create the Local Storage Operator object by entering the following command:
+
[source,terminal]
----
$ oc apply -f local-storage.yaml
----
@@ -102,16 +106,28 @@ At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local St

.. Check that all the required Pods have been created:
+
[source,terminal]
----
$ oc -n local-storage get pods
----
+
.Example output
[source,terminal]
----
NAME                                      READY   STATUS    RESTARTS   AGE
local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m
----

.. Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the `local-storage` project:
+
[source,terminal]
----
$ oc get csvs -n local-storage
----
+
.Example output
[source,terminal]
----
NAME                                          DISPLAY         VERSION               REPLACES   PHASE
local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded
----

@@ -38,6 +38,7 @@ spec:
. Create the resource in the {product-title} cluster, specifying the file
you just created:
+
[source,terminal]
----
$ oc create -f <local-pod>.yaml
----

@@ -39,6 +39,7 @@ spec:
. Create the PVC in the {product-title} cluster, specifying the file
you just created:
+
[source,terminal]
----
$ oc create -f <local-pvc>.yaml
----

@@ -27,6 +27,7 @@ Deleting a PersistentVolume that is still in use can result in data loss or corr

.. Edit the cluster resource:
+
[source,terminal]
----
$ oc edit localvolume <name> -n local-storage
----
@@ -35,6 +36,7 @@ $ oc edit localvolume <name> -n local-storage

. Delete any PersistentVolumes created.
+
[source,terminal]
----
$ oc delete pv <pv-name>
----
@@ -42,18 +44,21 @@ $ oc delete pv <pv-name>
. Delete any symlinks on the node.
.. Create a debug pod on the node:
+
[source,terminal]
----
$ oc debug node/<node-name>
----

.. Change your root directory to the host:
+
[source,terminal]
----
$ chroot /host
----

.. Navigate to the directory containing the local volume symlinks.
+
[source,terminal]
----
$ cd /mnt/local-storage/<sc-name> <1>
----
@@ -61,6 +66,7 @@ $ cd /mnt/local-storage/<sc-name> <1>

.. Delete the symlink belonging to the removed device.
+
[source,terminal]
----
$ rm <symlink>
----

@@ -21,6 +21,7 @@ there might be indeterminate behavior if the Operator is uninstalled and reinsta

. Delete any local volume resources in the project:
+
[source,terminal]
----
$ oc delete localvolume --all --all-namespaces
----
@@ -41,12 +42,14 @@ $ oc delete localvolume --all --all-namespaces

. The PVs created by the Local Storage Operator will remain in the cluster until deleted. Once these volumes are no longer in use, delete them by running the following command:
+
[source,terminal]
----
$ oc delete pv <pv-name>
----

. Delete the `local-storage` project:
+
[source,terminal]
----
$ oc delete project local-storage
----

@@ -34,6 +34,7 @@ spec:

. Create the PersistentVolumeClaim from the file:
+
[source,terminal]
----
$ oc create -f pvc.yaml
----

@@ -17,12 +17,14 @@ To statically provision VMware vSphere volumes you must create the virtual machi

* Create using `vmkfstools`. Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume:
+
[source,terminal]
----
$ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk
----

* Create using `vmware-vdiskmanager`:
+
[source,terminal]
----
$ shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
----
@@ -58,6 +60,7 @@ Changing the value of the fsType parameter after the volume is formatted and pro

. Create the PersistentVolume from the file:
+
[source,terminal]
----
$ oc create -f pv1.yaml
----
@@ -85,6 +88,7 @@ spec:

. Create the PersistentVolumeClaim from the file:
+
[source,terminal]
----
$ oc create -f pvc1.yaml
----

@@ -40,10 +40,10 @@ spec:
<1> Updating `spec.resources.requests` to a larger amount will expand
the PVC.

-. Once the cloud provider object has finished resizing, the PVC is set to
-`FileSystemResizePending`. The following command is used to check
-the condition:
+. After the cloud provider object has finished resizing, the PVC is set to
+`FileSystemResizePending`. Check the condition by entering the following command:
+
[source,terminal]
----
$ oc describe pvc <pvc_name>
----

@@ -5,7 +5,7 @@
[id="create-azure-file-secret_{context}"]
= Create the Azure File share PersistentVolumeClaim

To create the PersistentVolumeClaim, you must first define a Secret that contains the Azure account and key. This Secret is used in the PersistentVolume definition, and will be referenced by the PersistentVolumeClaim for use in applications.

.Prerequisites

@@ -17,6 +17,7 @@ key, are available.

. Create a Secret that contains the Azure File credentials:
+
[source,terminal]
----
$ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ <1>
    --from-literal=azurestorageaccountkey=<storage-account-key> <2>

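The PersistentVolume definition then points at this Secret through the in-tree `azureFile` volume source; a hedged sketch, with the capacity, access mode, and share name as placeholders:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <pv-name>
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  azureFile:
    secretName: <secret-name> # the Secret created in the previous step
    shareName: <share-name>
    readOnly: false
----
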
@@ -14,11 +14,12 @@ provisioner.

. Create an `efs-provisioner` service account:
+
[source,terminal]
----
$ oc create serviceaccount efs-provisioner
----

-. Create a file, `clusterrole.yaml` that defines the necessary permissions:
+. Create a file, `clusterrole.yaml`, that defines the necessary permissions:
+
[source,yaml]
----
@@ -63,7 +64,7 @@ roleRef:
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
----
-<1> The namespace where the EFS provisioner pod will run. If the EFS
+<1> The namespace where the EFS provisioner Pod will run. If the EFS
provisioner is running in a namespace other than `default`, this value must
be updated.

@@ -100,12 +101,13 @@ roleRef:
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io
----
-<1> The namespace where the EFS provisioner pod will run. If the EFS
+<1> The namespace where the EFS provisioner Pod will run. If the EFS
provisioner is running in a namespace other than `default`, this value must
be updated.

. Create the resources inside the {product-title} cluster:
+
[source,terminal]
----
$ oc create -f clusterrole.yaml,clusterrolebinding.yaml,role.yaml,rolebinding.yaml
----

@@ -35,6 +35,7 @@ EFS volume at `<file-system-id>.efs.<aws-region>.amazonaws.com`.
. After the file has been configured, create it in your cluster
by running the following command:
+
[source,terminal]
----
$ oc create -f configmap.yaml -n <namespace>
----

@@ -10,7 +10,7 @@ as an NFS share.

.Prerequisites

-* Create A ConfigMap that defines the EFS environment variables.
+* Create a ConfigMap that defines the EFS environment variables.
* Create a service account that contains the necessary cluster
and role permissions.
* Create a StorageClass for provisioning volumes.
@@ -21,7 +21,7 @@ SSH traffic from all sources.

.Procedure

-. Define the EFS provisioner by creating a `provisioner.yaml` with the
+. Define the EFS provisioner by creating a `provisioner.yaml` file with the
following contents:
+
[source,yaml]
@@ -78,6 +78,7 @@ directory that does not exist results in an error.
. After the file has been configured, create it in your cluster
by running the following command:
+
[source,terminal]
----
$ oc create -f provisioner.yaml
----

@@ -71,6 +71,7 @@ spec:

. After the file has been configured, create it in your cluster by running the following command:
+
[source,terminal]
----
$ oc create -f pvc.yaml
----

@@ -38,6 +38,7 @@ created volumes. The default value is `true`.
. After the file has been configured, create it in your cluster
by running the following command:
+
[source,terminal]
----
$ oc create -f storageclass.yaml
----

@@ -4,12 +4,13 @@

= Export settings

-In order to enable arbitrary container users to read and write the volume,
+To enable arbitrary container users to read and write the volume,
each exported volume on the NFS server should conform to the following
conditions:

* Every export must be exported using the following format:
+
[source,terminal]
----
/<example_fs> *(rw,root_squash)
----
@@ -18,6 +19,7 @@ conditions:
** For NFSv4, configure the default port `2049` (*nfs*).
+
.NFSv4
[source,terminal]
----
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
----
@@ -26,13 +28,22 @@ conditions:
`2049` (*nfs*), `20048` (*mountd*), and `111` (*portmapper*).
+
.NFSv3
[source,terminal]
----
# iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
----
+
[source,terminal]
----
# iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
----
+
[source,terminal]
----
# iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
----

* The NFS export and directory must be set up so that they are accessible
by the target Pods. Either set the export to be owned by the container's
primary UID, or supply the Pod group access using `supplementalGroups`,
-as shown in group IDs above.
+as shown in the group IDs above.

@@ -48,8 +48,14 @@ Each NFS volume must be mountable by all schedulable nodes in the cluster.

. Verify that the PV was created:
+
[source,terminal]
----
$ oc get pv
----
+
.Example output
[source,terminal]
----
NAME     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
pv0001   <none>   5Gi        RWO           Available                    31s
----

@@ -20,6 +20,7 @@ the `virt_use_nfs` SELinux boolean.
* Enable the `virt_use_nfs` boolean using the following command.
The `-P` option makes this boolean persistent across reboots.
+
[source,terminal]
----
# setsebool -P virt_use_nfs 1
----

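To confirm the change, the boolean can be read back with `getsebool`; a quick check, where the output shown is the expected state rather than captured from a live system:

[source,terminal]
----
# getsebool virt_use_nfs
virt_use_nfs --> on
----
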
@@ -23,17 +23,28 @@ owner of the NFS mount, which is the desired behavior.
As an example, if the target NFS directory appears on the NFS server as:

[[nfs-export]]
[source,terminal]
----
$ ls -lZ /opt/nfs -d
-drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs
----

.Example output
[source,terminal]
----
drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs
----
[source,terminal]
----
$ id nfsnobody
----
.Example output
[source,terminal]
----
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)
----

Then the container must match SELinux labels, and either run with a UID of
-`65534`, the `nfsnobody` owner, or with `5555` in its supplemental groups
-in order to access the directory.
+`65534`, the `nfsnobody` owner, or with `5555` in its supplemental groups to access the directory.

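In Pod terms, the supplemental-group route would look like the following sketch; the group ID `5555` comes from the listing above, while the Pod and image names are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client # hypothetical name
spec:
  securityContext:
    supplementalGroups: [5555] # the group that owns the NFS export
  containers:
  - name: app
    image: <image>
----
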
[NOTE]
====

@@ -228,6 +228,7 @@ cluster.

You can view the name of the PVC bound to the PV by running:

[source,terminal]
----
$ oc get pv <pv-claim>
----