// Module included in the following assemblies:
//
// * storage/understanding-persistent-storage.adoc
[id="persistent-volumes_{context}"]
= Persistent volumes

Each PV contains a `spec` and `status`, which are the specification and
status of the volume, for example:

.`PersistentVolume` object definition example
[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001 <1>
spec:
  capacity:
    storage: 5Gi <2>
  accessModes:
    - ReadWriteOnce <3>
  persistentVolumeReclaimPolicy: Retain <4>
  ...
status:
  ...
----
<1> Name of the persistent volume.
<2> The amount of storage available to the volume.
<3> The access mode, defining the read-write and mount permissions.
<4> The reclaim policy, indicating how the resource should be handled
once it is released.

[id="types-of-persistent-volumes_{context}"]
== Types of PVs

{product-title} supports the following persistent volume plug-ins:

// - GlusterFS
// - Ceph RBD
// - OpenStack Cinder
- AWS Elastic Block Store (EBS)
ifdef::openshift-enterprise,openshift-webscale,openshift-origin,openshift-aro[]
- Azure Disk
- Azure File
endif::openshift-enterprise,openshift-webscale,openshift-origin,openshift-aro[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- Cinder
- Fibre Channel
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
- GCE Persistent Disk
ifdef::openshift-enterprise,openshift-webscale,openshift-origin,openshift-aro[]
- HostPath
- iSCSI
- Local volume
- NFS
- OpenStack Manila
- {rh-storage-first}
endif::openshift-enterprise,openshift-webscale,openshift-origin,openshift-aro[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- VMware vSphere
// - Local
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
[id="pv-capacity_{context}"]
== Capacity
Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the `capacity` attribute of the PV.
Currently, storage capacity is the only resource that can be set or
requested. Future attributes may include IOPS, throughput, and so on.
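
For illustration, a claim requests its capacity through the standard
`resources.requests.storage` field of a `PersistentVolumeClaim`. The
following is a minimal sketch; the claim name `myclaim` is hypothetical:

.`PersistentVolumeClaim` capacity request sketch
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce # a claim must also state the access modes it requests
  resources:
    requests:
      storage: 5Gi # storage is currently the only resource that can be requested
----
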
[id="pv-access-modes_{context}"]
== Access modes
A persistent volume can be mounted on a host in any way supported by the
resource provider. Providers have different capabilities and each PV's
access modes are set to the specific modes supported by that particular
volume. For example, NFS can support multiple read-write clients, but a
specific NFS PV might be exported on the server as read-only. Each PV gets
its own set of access modes describing that specific PV's capabilities.

Claims are matched to volumes with similar access modes. The only two
matching criteria are access modes and size. A claim's access modes
represent a request. Therefore, you might be granted more, but never less.
For example, if a claim requests RWO, but the only volume available is an
NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports
RWO.

Direct matches are always attempted first. The volume's modes must match or
contain more modes than you requested. The size must be greater than or
equal to what is expected. If two types of volumes, such as NFS and iSCSI,
have the same set of access modes, either of them can match a claim with
those modes. There is no ordering between types of volumes and no way to
choose one type over another.

All volumes with the same modes are grouped, and then sorted by size,
smallest to largest. The binder gets the group with matching modes and
iterates over each, in size order, until one size matches.

The following table lists the access modes:

.Access modes
[cols="1,1,3",options="header"]
|===
|Access Mode |CLI abbreviation |Description
|ReadWriteOnce
|`RWO`
|The volume can be mounted as read-write by a single node.
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
|ReadOnlyMany
|`ROX`
|The volume can be mounted as read-only by many nodes.
|ReadWriteMany
|`RWX`
|The volume can be mounted as read-write by many nodes.
endif::[]
|===

[IMPORTANT]
====
Volume access modes are descriptors of volume capabilities. They
are not enforced constraints. The storage provider is responsible for
runtime errors resulting from invalid use of the resource.

For example, NFS offers the `ReadWriteOnce` access mode. If you want to use
the volume's ROX capability, you must mark the claims as `read-only`.
Errors in the provider show up at runtime as mount errors.

iSCSI and Fibre Channel volumes do not currently have any fencing
mechanisms. You must ensure that the volumes are only used by one node at a
time. In certain situations, such as draining a node, the volumes can be
used simultaneously by two nodes. Before draining the node, first ensure
that the pods that use these volumes are deleted.
====
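
As a sketch of marking a claim `read-only`, a pod can set `readOnly: true`
both on the claim reference and on the volume mount. All names in this
example are hypothetical:

.Read-only claim usage sketch
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: app-pod # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true # mount the volume read-only in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myclaim # hypothetical claim name
        readOnly: true # request read-only use of the volume's ROX capability
----
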
.Supported access modes for PVs
[cols=",^v,^v,^v", width="100%",options="header"]
|===
|Volume plug-in |ReadWriteOnce ^[1]^ |ReadOnlyMany |ReadWriteMany
|AWS EBS ^[2]^ | ✅ | - | -
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
|Azure File | ✅ | ✅ | ✅
|Azure Disk | ✅ | - | -
//|Ceph RBD | ✅ | ✅ | -
//|CephFS | ✅ | ✅ | ✅
|Cinder | ✅ | - | -
|Fibre Channel | ✅ | ✅ | -
|GCE Persistent Disk | ✅ | - | -
//|GlusterFS | ✅ | ✅ | ✅
|HostPath | ✅ | - | -
|iSCSI | ✅ | ✅ | -
|Local volume | ✅ | - | -
|NFS | ✅ | ✅ | ✅
|OpenStack Manila | - | - | ✅
|{rh-storage-first} | ✅ | - | ✅
|VMware vSphere | ✅ | - | -
endif::[]
|===

[.small]
--
1. ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on the shut-down or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached.
2. Use a recreate deployment strategy for pods that rely on AWS EBS, as shown in the sketch after this block.
// GCE Persistent Disks, or Openstack Cinder PVs.
--
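
As a sketch of the recreate strategy from the second footnote, a
`Deployment` can set `strategy.type` to `Recreate` so that the old pod
releases its RWO volume before the new pod starts. All names here are
hypothetical:

.Recreate deployment strategy sketch
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ebs-app # hypothetical deployment name
spec:
  replicas: 1
  strategy:
    type: Recreate # stop the old pod before starting the new one
  selector:
    matchLabels:
      app: ebs-app
  template:
    metadata:
      labels:
        app: ebs-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ebs-claim # hypothetical claim backed by an EBS volume
----
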
ifdef::openshift-online[]
[id="pv-restrictions_{context}"]
== Restrictions
The following restrictions apply when using PVs with {product-title}:
endif::[]
ifdef::openshift-online[]
* PVs are provisioned with EBS volumes (AWS).
* Only RWO access mode is applicable, as EBS volumes and GCE Persistent
Disks cannot be mounted to multiple nodes.
* Docker volumes are disabled.
** VOLUME directive without a mapped external volume fails to be
instantiated
.
* *emptyDir* is restricted to 512 Mi per project (group) per node.
** A single pod for a project on a particular node can use up to 512 Mi
of *emptyDir* storage.
** Multiple pods for a project on a particular node share the 512 Mi of
*emptyDir* storage.
* *emptyDir* has the same lifecycle as the pod:
** *emptyDir* volumes survive container crashes/restarts.
** *emptyDir* volumes are deleted when the pod is deleted.
endif::[]
[id="pv-phase_{context}"]
== Phase
Volumes can be found in one of the following phases:
.Volume phases
[cols="1,2",options="header"]
|===
|Phase
|Description
|Available
|A free resource not yet bound to a claim.
|Bound
|The volume is bound to a claim.
|Released
|The claim was deleted, but the resource is not yet reclaimed by the
cluster.
|Failed
|The volume has failed its automatic reclamation.
|===

You can view the name of the PVC that is bound to the PV by running:

[source,terminal]
----
$ oc get pv <pv-name>
----
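
The output resembles the following; the values shown are illustrative:

.Example output
[source,terminal]
----
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv0001   5Gi        RWO            Retain           Bound    default/myclaim                           2d
----
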
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[id="pv-mount-options_{context}"]
=== Mount options

You can specify mount options while mounting a PV by using the
`mountOptions` attribute.

For example:

.Mount options example
[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  mountOptions: <1>
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: claim1
    namespace: default
----
<1> Specified mount options are used while mounting the PV to the disk.

The following PV types support mount options:

// - GlusterFS
// - Ceph RBD
- AWS Elastic Block Store (EBS)
- Azure Disk
- Azure File
- Cinder
- GCE Persistent Disk
- iSCSI
- Local volume
- NFS
- {rh-storage-first} (Ceph RBD only)
- VMware vSphere

[NOTE]
====
Fibre Channel and HostPath PVs do not support mount options.
====
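
To confirm which options are set on a PV, you can read the field back with
a JSONPath query; `pv0001` refers to the example above:

[source,terminal]
----
$ oc get pv pv0001 -o jsonpath='{.spec.mountOptions}'
----
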
endif::openshift-enterprise,openshift-webscale,openshift-origin[]