
Terminology style updates for storage book

Andrea Hoffer committed 2020-11-30 15:20:25 -05:00
parent f51e7e4b37
commit f7a24b3969
82 changed files with 298 additions and 311 deletions

View File

@@ -4,10 +4,10 @@
// * post_installation_configuration/storage-configuration.adoc
[id="storage-class-annotations_{context}"]
= StorageClass annotations
= Storage class annotations
To set a StorageClass as the cluster-wide default, add
the following annotation to your StorageClass's metadata:
To set a storage class as the cluster-wide default, add
the following annotation to your storage class metadata:
[source,yaml]
----
@@ -26,9 +26,9 @@ metadata:
...
----
This enables any Persistent Volume Claim (PVC) that does not specify a
specific StorageClass to automatically be provisioned through the
default StorageClass.
This enables any persistent volume claim (PVC) that does not specify a
particular storage class to be automatically provisioned through the
default storage class.
[NOTE]
====
@@ -36,12 +36,12 @@ The beta annotation `storageclass.beta.kubernetes.io/is-default-class` is
still working; however, it will be removed in a future release.
====
To set a StorageClass description, add the following annotation
to your StorageClass's metadata:
To set a storage class description, add the following annotation
to your storage class metadata:
[source,yaml]
----
kubernetes.io/description: My StorageClass Description
kubernetes.io/description: My Storage Class Description
----
For example:
@@ -52,6 +52,6 @@ apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
kubernetes.io/description: My StorageClass Description
kubernetes.io/description: My Storage Class Description
...
----
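
For reference, a storage class that sets both annotations might look like the following sketch; the `gp2` name and AWS EBS provisioner are illustrative:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/description: My Storage Class Description
provisioner: kubernetes.io/aws-ebs
----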

View File

@@ -43,8 +43,8 @@ not through a provisioner plug-in.
|Azure File
|`kubernetes.io/azure-file`
|The `persistent-volume-binder` ServiceAccount requires permissions to create
and get Secrets to store the Azure storage account and keys.
|The `persistent-volume-binder` service account requires permissions to create
and get secrets to store the Azure storage account and keys.
|GCE Persistent Disk (gcePD)
|`kubernetes.io/gce-pd`

View File

@@ -6,7 +6,7 @@
[id="azure-file-considerations_{context}"]
= Considerations when using Azure File
The following file system features are not supported by the default Azure File StorageClass:
The following file system features are not supported by the default Azure File storage class:
* Symlinks
* Hard links
@@ -14,10 +14,10 @@ The following file system features are not supported by the default Azure File S
* Sparse files
* Named pipes
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The `uid` mount option can be specified in the StorageClass to define
Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The `uid` mount option can be specified in the `StorageClass` object to define
a specific user identifier to use for the mounted directory.
The following StorageClass demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory.
The following `StorageClass` object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory.
[source,yaml]
----
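# A sketch of the storage class described above; the name and the
# uid/gid values are illustrative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
mountOptions:
  - uid=1500  # user identifier for the mounted directory
  - gid=1500  # group identifier for the mounted directory
  - mfsymlinks  # enables symlinks on the mounted directory
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus
  skuName: Standard_LRS
----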

View File

@@ -7,13 +7,13 @@
[id="azure-file-definition_{context}"]
= Azure File object definition
The Azure File StorageClass uses secrets to store the Azure storage account name
The Azure File storage class uses secrets to store the Azure storage account name
and the storage account key that are required to create an Azure Files share. These
permissions are created as part of the following procedure.
.Procedure
. Define a ClusterRole that allows access to create and view secrets:
. Define a `ClusterRole` object that allows access to create and view secrets:
+
[source,yaml]
----
@@ -27,9 +27,9 @@ rules:
resources: ['secrets']
verbs: ['get','create']
----
<1> The name of the ClusterRole to view and create secrets.
<1> The name of the cluster role to view and create secrets.
. Add the ClusterRole to the ServiceAccount:
. Add the cluster role to the service account:
+
[source,terminal]
----
@@ -42,7 +42,7 @@ $ oc adm policy add-cluster-role-to-user <persistent-volume-binder-role>
system:serviceaccount:kube-system:persistent-volume-binder
----
. Create the Azure File StorageClass:
. Create the Azure File `StorageClass` object:
+
[source,yaml]
----
@@ -58,10 +58,10 @@ parameters:
reclaimPolicy: Delete
volumeBindingMode: Immediate
----
<1> Name of the StorageClass. The PersistentVolumeClaim uses this StorageClass for provisioning the associated PersistentVolumes.
<1> Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes.
<2> Location of the Azure storage account, such as `eastus`. Default is empty, meaning that a new Azure storage account will be created in the {product-title} cluster's location.
<3> SKU tier of the Azure storage account, such as `Standard_LRS`. Default is empty, meaning that a new Azure storage account will be created with the `Standard_LRS` SKU.
<4> Name of the Azure storage account. If a storage account is provided, then
`skuName` and `location` are ignored. If no storage account is provided, then
the StorageClass searches for any storage account that is associated with the
the storage class searches for any storage account that is associated with the
resource group for any accounts that match the defined `skuName` and `location`.
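
Assembled from the callouts above, the full `StorageClass` object might look like the following sketch; all parameter values are illustrative:

[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file <1>
provisioner: kubernetes.io/azure-file
parameters:
  location: eastus <2>
  skuName: Standard_LRS <3>
  storageAccount: my-storage-account <4>
reclaimPolicy: Delete
volumeBindingMode: Immediate
----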

View File

@@ -6,14 +6,14 @@
[id="change-default-storage-class_{context}"]
= Changing the default StorageClass
= Changing the default storage class
If you are using AWS, use the following process to change the default
StorageClass. This process assumes you have two StorageClasses
storage class. This process assumes you have two storage classes
defined, `gp2` and `standard`, and you want to change the default
StorageClass from `gp2` to `standard`.
storage class from `gp2` to `standard`.
. List the StorageClass:
. List the storage class:
+
[source,terminal]
----
@@ -27,18 +27,18 @@ NAME TYPE
gp2 (default) kubernetes.io/aws-ebs <1>
standard kubernetes.io/aws-ebs
----
<1> `(default)` denotes the default StorageClass.
<1> `(default)` denotes the default storage class.
. Change the value of the annotation
`storageclass.kubernetes.io/is-default-class` to `false` for the default
StorageClass:
storage class:
+
[source,terminal]
----
$ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
----
. Make another StorageClass the default by adding or modifying the
. Make another storage class the default by adding or modifying the
annotation as `storageclass.kubernetes.io/is-default-class=true`.
+
[source,terminal]
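----
# A sketch that mirrors the earlier patch command, setting the annotation to "true".
$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
----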

View File

@@ -4,19 +4,19 @@
// * post_installation_configuration/storage-configuration.adoc
[id="defining-storage-classes_{context}"]
= Defining a StorageClass
= Defining a storage class
`StorageClass` objects are currently a globally scoped object and must be
created by `cluster-admin` or `storage-admin` users.
[IMPORTANT]
====
The ClusterStorageOperator may install a default StorageClass depending
on the platform in use. This StorageClass is owned and controlled by the
The Cluster Storage Operator might install a default storage class depending
on the platform in use. This storage class is owned and controlled by the
operator. It cannot be deleted or modified beyond defining annotations
and labels. If different behavior is desired, you must define a custom
StorageClass.
storage class.
====
The following sections describe the basic object definition for a
StorageClass and specific examples for each of the supported plug-in types.
The following sections describe the basic definition for a
`StorageClass` object and specific examples for each of the supported plug-in types.

View File

@@ -7,11 +7,11 @@
= Basic `StorageClass` object definition
The following resource shows the parameters and default values that you
use to configure a StorageClass. This example uses the AWS
use to configure a storage class. This example uses the AWS
ElasticBlockStore (EBS) object definition.
.Sample StorageClass definition
.Sample `StorageClass` definition
[source,yaml]
----
kind: StorageClass <1>
@@ -28,8 +28,8 @@ parameters: <6>
----
<1> (required) The API object type.
<2> (required) The current apiVersion.
<3> (required) The name of the StorageClass.
<4> (optional) Annotations for the StorageClass
<3> (required) The name of the storage class.
<4> (optional) Annotations for the storage class.
<5> (required) The type of provisioner associated with this storage class.
<6> (optional) The parameters required for the specific provisioner; these
will change from plug-in to plug-in.
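
Assembled from these callouts, the sample might look like the following sketch; the `gp2` name and parameter values are illustrative:

[source,yaml]
----
kind: StorageClass <1>
apiVersion: storage.k8s.io/v1 <2>
metadata:
  name: gp2 <3>
  annotations: <4>
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: kubernetes.io/aws-ebs <5>
parameters: <6>
  type: gp2
----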

View File

@@ -7,7 +7,7 @@
Traditionally, volumes that are backed by Container Storage Interface (CSI) drivers can only be used with a `PersistentVolume` and `PersistentVolumeClaim` object combination.
This feature allows you to specify CSI volumes directly in the `Pod` specification, rather than in a PersistentVolume. Inline volumes are ephemeral and do not persist across pod restarts.
This feature allows you to specify CSI volumes directly in the `Pod` specification, rather than in a `PersistentVolume` object. Inline volumes are ephemeral and do not persist across pod restarts.
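
For example, a `Pod` specification with an inline CSI volume might look like the following sketch; the driver name and volume attributes are illustrative and depend on the installed CSI driver:

[source,yaml]
----
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: ["sleep", "100000"]
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-inline-vol
  volumes:
    - name: my-csi-inline-vol
      csi: # The CSI volume is declared directly in the pod; no PVC is involved.
        driver: inline.storage.kubernetes.io
        volumeAttributes:
          foo: bar
----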
== Support limitations

View File

@@ -37,7 +37,7 @@ spec:
[IMPORTANT]
====
Do not change the `fstype` parameter value after the volume is formatted and
provisioned. Changing this value can result in data loss and Pod failure.
provisioned. Changing this value can result in data loss and pod failure.
====
. Create the object definition file you saved in the previous step.

View File

@@ -54,9 +54,9 @@ spec:
securityContext:
fsGroup: 7777 <7>
----
<1> The number of copies of the Pod to run.
<2> The label selector of the Pod to run.
<3> A template for the Pod that the controller creates.
<1> The number of copies of the pod to run.
<2> The label selector of the pod to run.
<3> A template for the pod that the controller creates.
<4> The labels on the pod. They must include labels from the label selector.
<5> The maximum name length after expanding any parameters is 63 characters.
<6> Specifies the service account you created.

View File

@@ -7,7 +7,7 @@
CSI drivers are typically shipped as container images. These containers
are not aware of {product-title} where they run. To use CSI-compatible
storage backend in {product-title}, the cluster administrator must deploy
storage back end in {product-title}, the cluster administrator must deploy
several components that serve as a bridge between {product-title} and the
storage driver.
@@ -16,6 +16,6 @@ running in pods in the {product-title} cluster.
image::csi-arch.png["Architecture of CSI components"]
It is possible to run multiple CSI drivers for different storage backends.
Each driver needs its own external controllers' deployment and DaemonSet
It is possible to run multiple CSI drivers for different storage back ends.
Each driver needs its own deployment of external controllers and a daemon set
with the driver and CSI registrar.

View File

@@ -3,7 +3,7 @@
// * storage/container_storage_interface/persistent_storage-csi.adoc
[id="csi-driver-daemonset_{context}"]
= CSI Driver DaemonSet
= CSI driver daemon set
The CSI driver daemon set runs a pod on every node that allows
{product-title} to mount storage provided by the CSI driver to the node
@@ -17,6 +17,6 @@ UNIX Domain Socket available on the node.
* A CSI driver.
The CSI driver deployed on the node should have as few credentials to the
storage backend as possible. {product-title} will only use the node plug-in
storage back end as possible. {product-title} will only use the node plug-in
set of CSI calls such as `NodePublish`/`NodeUnpublish` and
`NodeStage`/`NodeUnstage`, if these calls are implemented.

View File

@@ -3,14 +3,14 @@
// * storage/container_storage_interface/persistent-storage-csi.adoc
[id="csi-dynamic-provisioning_{context}"]
= Dynamic Provisioning
= Dynamic provisioning
Dynamic provisioning of persistent storage depends on the capabilities of
the CSI driver and underlying storage backend. The provider of the CSI
driver should document how to create a StorageClass in {product-title} and
the CSI driver and underlying storage back end. The provider of the CSI
driver should document how to create a storage class in {product-title} and
the parameters available for configuration.
The created StorageClass can be configured to enable dynamic provisioning.
The created storage class can be configured to enable dynamic provisioning.
.Procedure
@@ -30,5 +30,5 @@ provisioner: <provisioner-name> <2>
parameters:
EOF
----
<1> The name of the StorageClass that will be created.
<1> The name of the storage class that will be created.
<2> The name of the CSI driver that has been installed.
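
For reference, the complete command from this procedure might look like the following sketch; the placeholders stand for your storage class name and installed driver:

[source,terminal]
----
$ oc create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage-class> <1>
provisioner: <provisioner-name> <2>
parameters:
EOF
----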

View File

@@ -43,7 +43,7 @@ Use RWX if you want the persistent volume (PV) that fulfills this PVC to be moun
. Define the size of the storage claim.
. Click *Create* to create the PersistentVolumeClaim and generate a PersistentVolume.
. Click *Create* to create the persistent volume claim and generate a persistent volume.
.Procedure (CLI)

View File

@@ -11,7 +11,7 @@ changes to the template.
.Prerequisites
* The CSI driver has been deployed.
* A StorageClass has been created for dynamic provisioning.
* A storage class has been created for dynamic provisioning.
.Procedure

View File

@@ -12,7 +12,7 @@ The CSI snapshot controller and sidecar provide volume snapshotting through the
The external controller is deployed by the CSI Snapshot Controller Operator.
== External controller
The CSI snapshot controller binds VolumeSnapshot and VolumeSnapshotContent objects. The controller manages dynamic provisioning by creating and deleting VolumeSnapshotContent objects.
The CSI snapshot controller binds `VolumeSnapshot` and `VolumeSnapshotContent` objects. The controller manages dynamic provisioning by creating and deleting `VolumeSnapshotContent` objects.
== External sidecar
Your CSI driver vendor provides the `csi-external-snapshotter` sidecar. This is a separate helper container that is deployed with the CSI driver. The sidecar manages snapshots by triggering CreateSnapshot and DeleteSnapshot operations. Follow the installation instructions provided by your vendor.
Your CSI driver vendor provides the `csi-external-snapshotter` sidecar. This is a separate helper container that is deployed with the CSI driver. The sidecar manages snapshots by triggering `CreateSnapshot` and `DeleteSnapshot` operations. Follow the installation instructions provided by your vendor.

View File

@@ -5,25 +5,25 @@
[id="persistent-storage-csi-snapshots-create_{context}"]
= Creating a volume snapshot
When you create a VolumeSnapshot object, {product-title} creates a volume snapshot.
When you create a `VolumeSnapshot` object, {product-title} creates a volume snapshot.
.Prerequisites
* Logged in to a running {product-title} cluster.
* A PVC created using a CSI driver that supports VolumeSnapshot objects.
* A storage class to provision the storage backend.
* A PVC created using a CSI driver that supports `VolumeSnapshot` objects.
* A storage class to provision the storage back end.
* No pods are using the persistent volume claim (PVC) that you want to take a snapshot of.
+
[NOTE]
====
Do not create a volume snapshot of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Be sure to first tear down a running Pod to ensure consistent snapshots.
Do not create a volume snapshot of a PVC if a pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Be sure to first tear down a running pod to ensure consistent snapshots.
====
.Procedure
To dynamically create a volume snapshot:
. Create a file with the VolumeSnapshotClass object described by the following YAML:
. Create a file with the `VolumeSnapshotClass` object described by the following YAML:
+
.volumesnapshotclass.yaml
@@ -36,7 +36,7 @@ metadata:
driver: hostpath.csi.k8s.io
deletionPolicy: Delete
----
<1> Allows you to specify different attributes belonging to a VolumeSnapshot.
<1> Allows you to specify different attributes belonging to a volume snapshot.
+
. Create the object you saved in the previous step by entering the following command:
@@ -46,7 +46,7 @@ deletionPolicy: Delete
$ oc create -f volumesnapshotclass.yaml
----
. Create a VolumeSnapshot object:
. Create a `VolumeSnapshot` object:
+
.volumesnapshot-dynamic.yaml
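[source,yaml]
----
# A sketch of the dynamic snapshot request; the names are illustrative.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: mysnap
spec:
  volumeSnapshotClassName: csi-hostpath-snap
  source:
    persistentVolumeClaimName: myclaim
----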
@@ -77,7 +77,7 @@ $ oc create -f volumesnapshot-dynamic.yaml
To manually provision a snapshot:
. Provide a value for the volumeSnapshotContentName parameter as the source for the snapshot, in addition to defining volume snapshot class as shown above.
. Provide a value for the `volumeSnapshotContentName` parameter as the source for the snapshot, in addition to defining volume snapshot class as shown above.
+
.volumesnapshot-manual.yaml
[source,yaml]
@@ -90,7 +90,7 @@ spec:
source:
volumeSnapshotContentName: mycontent <1>
----
<1> The volumeSnapshotContentName parameter is required for pre-provisioned snapshots.
<1> The `volumeSnapshotContentName` parameter is required for pre-provisioned snapshots.
. Create the object you saved in the previous step by entering the following command:
+
@@ -132,7 +132,7 @@ status:
<2> The time when the snapshot was created. The snapshot contains the volume content that was available at this indicated time.
<3> If the value is set to `true`, the snapshot can be used to restore as a new PVC.
+
If the value is set to `false`, the snapshot was created. However, the storage backend needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes.
If the value is set to `false`, the snapshot was created. However, the storage back end needs to perform additional tasks to make the snapshot usable so that it can be restored as a new volume. For example, Amazon Elastic Block Store data might be moved to a different, less expensive location, which can take several minutes.
. To verify that the volume snapshot was created, enter the following command:
+
@@ -141,6 +141,6 @@ If the value is set to `false`, the snapshot was created. However, the storage b
$ oc get volumesnapshotcontent
----
+
The pointer to the actual content is displayed. If the `boundVolumeSnapshotContentName` field is populated, a VolumeSnapshotContent object exists and the snapshot was created.
The pointer to the actual content is displayed. If the `boundVolumeSnapshotContentName` field is populated, a `VolumeSnapshotContent` object exists and the snapshot was created.
. To verify that the snapshot is ready, confirm that the VolumeSnapshot has `readyToUse: true`.
. To verify that the snapshot is ready, confirm that the `VolumeSnapshot` object has `readyToUse: true`.

View File

@@ -11,7 +11,7 @@ You can configure how {product-title} deletes volume snapshots.
To enable deletion of a volume snapshot in a cluster:
. Specify the deletion policy that you require in the VolumeSnapshotClass object, as shown in the following example:
. Specify the deletion policy that you require in the `VolumeSnapshotClass` object, as shown in the following example:
+
.volumesnapshot.yaml
@@ -24,6 +24,6 @@ metadata:
driver: hostpath.csi.k8s.io
deletionPolicy: Delete <1>
----
<1> If the `Delete` value is set, the underlying snapshot will be deleted, along with the VolumeSnapshotContent object. If the `Retain` value is set, both the underlying snapshot and VolumeSnapshotContent remain.
<1> If the `Delete` value is set, the underlying snapshot will be deleted, along with the `VolumeSnapshotContent` object. If the `Retain` value is set, both the underlying snapshot and `VolumeSnapshotContent` object remain.
+
If the `Retain` value is set, and the VolumeSnapshot object is deleted without deleting the corresponding VolumeSnapshotContent, then the content will remain. The snapshot itself is also retained in the storage backend.
If the `Retain` value is set, and the `VolumeSnapshot` object is deleted without deleting the corresponding `VolumeSnapshotContent` object, then the content will remain. The snapshot itself is also retained in the storage back end.

View File

@@ -9,31 +9,31 @@ The CSI Snapshot Controller Operator runs in the `openshift-cluster-storage-oper
The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the `csi-snapshot-controller` namespace.
== VolumeSnapshot CRDs
== Volume snapshot CRDs
During {product-title} installation, the CSI Snapshot Controller Operator creates the following snapshot Custom Resource Definitions (CRDs) in the `snapshot.storage.k8s.io/` API group:
During {product-title} installation, the CSI Snapshot Controller Operator creates the following snapshot custom resource definitions (CRDs) in the `snapshot.storage.k8s.io/` API group:
VolumeSnapshotContent::
`VolumeSnapshotContent`::
A snapshot taken of a volume in the cluster that has been provisioned by a cluster administrator.
+
Similar to the PersistentVolume CRD, the VolumeSnapshotContent CRD is a cluster resource that points to a real snapshot in the storage backend.
Similar to the `PersistentVolume` CRD, the `VolumeSnapshotContent` CRD is a cluster resource that points to a real snapshot in the storage back end.
+
For manually pre-provisioned snapshots, a cluster administrator creates a number of VolumeSnapshotContent objects. These carry the details of the real volume snapshot in the storage system.
For manually pre-provisioned snapshots, a cluster administrator creates a number of `VolumeSnapshotContent` objects. These carry the details of the real volume snapshot in the storage system.
+
The VolumeSnapshotContent CRD is not namespaced and is for use by a cluster administrator.
The `VolumeSnapshotContent` CRD is not namespaced and is for use by a cluster administrator.
VolumeSnapshot::
`VolumeSnapshot`::
Similar to PersistentVolumeClaim, the VolumeSnapshot CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a VolumeSnapshot object with an appropriate VolumeSnapshotContent object. The binding is a one-to-one mapping.
Similar to the `PersistentVolumeClaim` CRD, the `VolumeSnapshot` CRD defines a developer request for a snapshot. The CSI Snapshot Controller Operator runs the CSI snapshot controller, which handles the binding of a `VolumeSnapshot` object with an appropriate `VolumeSnapshotContent` object. The binding is a one-to-one mapping.
+
The VolumeSnapshot CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot.
The `VolumeSnapshot` CRD is namespaced. A developer uses the CRD as a distinct request for a snapshot.
VolumeSnapshotClass::
`VolumeSnapshotClass`::
Allows a cluster administrator to specify different attributes belonging to a VolumeSnapshot object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same StorageClass of a PersistentVolumeClaim.
Allows a cluster administrator to specify different attributes belonging to a `VolumeSnapshot` object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim.
+
The VolumeSnapshotClass CRD defines the parameters for the `csi-external-snapshotter` sidecar to use when creating a snapshot. This allows the storage backend to know what kind of snapshot to dynamically create if multiple options are supported.
The `VolumeSnapshotClass` CRD defines the parameters for the `csi-external-snapshotter` sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported.
+
Dynamically provisioned snapshots use the VolumeSnapshotClass to specify storage-provider-specific parameters to use when creating a snapshot.
Dynamically provisioned snapshots use the `VolumeSnapshotClass` CRD to specify storage-provider-specific parameters to use when creating a snapshot.
+
The VolumeSnapshotContentClass CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage backend.
The `VolumeSnapshotClass` CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end.

View File

@@ -5,7 +5,7 @@
[id="persistent-storage-csi-snapshots-overview_{context}"]
= Overview of CSI volume snapshots
A _snapshot_ represents the state of the storage volume in a cluster at a particular point in time. VolumeSnapshots can be used to provision a new volume.
A _snapshot_ represents the state of the storage volume in a cluster at a particular point in time. Volume snapshots can be used to provision a new volume.
{product-title} supports CSI volume snapshots by default. However, a specific CSI driver is required.
@@ -23,10 +23,10 @@ With CSI volume snapshots, an app developer can:
* Rapidly roll back to a previous development version.
* Use storage more efficiently by not having to make a full copy each time.
Be aware of the following when using VolumeSnapshot:
Be aware of the following when using volume snapshots:
* Support is only available for CSI drivers. In-tree and FlexVolumes are not supported.
* {product-title} does not ship with any CSI drivers. It is recommended to use the CSI drivers provided by
link:https://kubernetes-csi.github.io/docs/drivers.html[community or storage vendors]. Follow the installation instructions provided by the CSI driver.
* CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for VolumeSnapshots will likely use the `csi-external-snapshotter` sidecar. See documentation provided by the CSI driver for details.
* CSI drivers may or may not have implemented the volume snapshot functionality. CSI drivers that have provided support for volume snapshots will likely use the `csi-external-snapshotter` sidecar. See documentation provided by the CSI driver for details.
* {product-title} {product-version} supports version 1.1.0 of the link:https://github.com/container-storage-interface/spec[CSI specification].

View File

@@ -10,9 +10,9 @@ There are two ways to provision snapshots: dynamically and manually.
[id="snapshots-dynamic-provisioning_{context}"]
== Dynamic provisioning
Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a PersistentVolumeClaim. Parameters are specified using VolumeSnapshotClass.
Instead of using a preexisting snapshot, you can request that a snapshot be taken dynamically from a persistent volume claim. Parameters are specified using a `VolumeSnapshotClass` CRD.
[id="snapshots-manual-provisioning_{context}"]
== Manual provisioning
As a cluster administrator, you can manually pre-provision a number of VolumeSnapshotContent objects. These carry the real volume snapshot details available to cluster users.
As a cluster administrator, you can manually pre-provision a number of `VolumeSnapshotContent` objects. These carry the real volume snapshot details available to cluster users.

View File

@@ -5,18 +5,18 @@
[id="persistent-storage-csi-snapshots-restore_{context}"]
= Restoring a volume snapshot
After your VolumeSnapshot object is bound, you can use that object to provision a new volume that is pre-populated with data from the snapshot.
After your `VolumeSnapshot` object is bound, you can use that object to provision a new volume that is pre-populated with data from the snapshot.
The volume snapshot content object is used to restore the existing volume to a previous state.
.Prerequisites
* Logged in to a running {product-title} cluster.
* A persistent volume claim (PVC) created using a Container Storage Interface (CSI) driver that supports VolumeSnapshots.
* A storage class to provision the storage backend.
* A persistent volume claim (PVC) created using a Container Storage Interface (CSI) driver that supports volume snapshots.
* A storage class to provision the storage back end.
.Procedure
. Specify a VolumeSnapshot data source on a PVC as shown in the following:
. Specify a `VolumeSnapshot` data source on a PVC as shown in the following:
+
.pvc-restore.yaml
[source,yaml]
@@ -37,7 +37,7 @@ spec:
requests:
storage: 1Gi
----
<1> Name of the VolumeSnapshot object representing the snapshot to use as source.
<1> Name of the `VolumeSnapshot` object representing the snapshot to use as source.
<2> Must be set to the `VolumeSnapshot` value.
<3> Must be set to the `snapshot.storage.k8s.io` value.
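
Assembled from the callouts, the restore request might look like the following sketch; the claim, class, and snapshot names are illustrative:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: mysnap <1>
    kind: VolumeSnapshot <2>
    apiGroup: snapshot.storage.k8s.io <3>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
----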

View File

@@ -5,8 +5,8 @@
[id="enforcing-disk-quota_{context}"]
= Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints.
Each LUN is mapped to a single PersistentVolume, and unique
names must be used for PersistentVolumes.
Each LUN is mapped to a single persistent volume, and unique
names must be used for persistent volumes.
Enforcing quotas in this way allows the end user to request persistent storage
by a specific amount, such as 10Gi, and be matched with a corresponding volume

View File

@@ -4,7 +4,7 @@
[id="provisioning-fibre_{context}"]
= Provisioning
To provision Fibre Channel volumes using the PersistentVolume API
To provision Fibre Channel volumes using the `PersistentVolume` API
the following must be available:
* The `targetWWNs` (array of Fibre Channel target's World Wide
@@ -12,13 +12,13 @@ Names).
* A valid LUN number.
* The filesystem type.
A PersistentVolume and a LUN have a one-to-one mapping between them.
A persistent volume and a LUN have a one-to-one mapping between them.
.Prerequisites
* Fibre Channel LUNs must exist in the underlying infrastructure.
.PersistentVolume Object Definition
.`PersistentVolume` object definition
[source,yaml]
----
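# A sketch; the WWNs, LUN, capacity, and file system type are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  fc:
    targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5']
    lun: 2
    fsType: ext4
----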

View File

@@ -4,9 +4,9 @@
[id="fibre-volume-security_{context}"]
= Fibre Channel volume security
Users request storage with a PersistentVolumeClaim. This claim only lives in
Users request storage with a persistent volume claim. This claim only lives in
the user's namespace, and can only be referenced by a pod within that same
namespace. Any attempt to access a PersistentVolume across a namespace causes
namespace. Any attempt to access a persistent volume across a namespace causes
the pod to fail.
Each Fibre Channel LUN must be accessible by all nodes in the cluster.

View File

@@ -5,7 +5,7 @@
[id="flexvolume-drivers_{context}"]
= About FlexVolume drivers
A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. {product-title} calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a `PersistentVolume` with `flexVolume` as the source.
A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. {product-title} calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a `PersistentVolume` object with `flexVolume` as the source.
[IMPORTANT]
====

View File

@@ -58,8 +58,8 @@ To install the FlexVolume driver:
. Ensure that the executable file exists on all nodes in the cluster.
. Place the executable file at the volume plug-in path:
*_/etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>_*.
`/etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver>`.
For example, to install the FlexVolume driver for the storage `foo`, place the
executable file at:
*_/etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo_*.
`/etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo`.

View File

@@ -7,6 +7,6 @@
{product-title} supports hostPath mounting for development and testing on a single-node cluster.
In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of StorageClasses to set up dynamic provisioning.
In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning.
A hostPath volume must be provisioned statically.

View File

@@ -3,16 +3,16 @@
// * storage/persistent_storage/persistent-storage-hostpath.adoc
[id="persistent-storage-hostpath-pod_{context}"]
= Mounting the hostPath share in a privileged Pod
= Mounting the hostPath share in a privileged pod
After the PersistentVolumeClaim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod.
After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.
.Prerequisites
* A PersistentVolumeClaim exists that is mapped to the underlying hostPath share.
* A persistent volume claim exists that is mapped to the underlying hostPath share.
.Procedure
* Create a privileged Pod that mounts the existing PersistentVolumeClaim:
* Create a privileged pod that mounts the existing persistent volume claim:
+
[source,yaml]
----
@@ -36,6 +36,6 @@ spec:
claimName: task-pvc-volume <4>
----
<1> The name of the pod.
<2> The Pod must run as privileged to access the node's storage.
<2> The pod must run as privileged to access the node's storage.
<3> The path to mount the hostPath share inside the privileged pod.
<4> The name of the PersistentVolumeClaim that has been previously created.
<4> The name of the `PersistentVolumeClaim` object that has been previously created.
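
Assembled from the callouts, the pod might look like the following sketch; the pod name and image are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pod-name <1>
spec:
  containers:
    - name: ubi
      image: registry.access.redhat.com/ubi8/ubi
      securityContext:
        privileged: true <2>
      volumeMounts:
        - mountPath: /data <3>
          name: hostpath-privileged
  volumes:
    - name: hostpath-privileged
      persistentVolumeClaim:
        claimName: task-pvc-volume <4>
----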

View File

@@ -5,7 +5,7 @@
[id="hostpath-static-provisioning_{context}"]
= Statically provisioning hostPath volumes
A Pod that uses a hostPath volume must be referenced by manual (static) provisioning.
A pod that uses a hostPath volume must be referenced by manual (static) provisioning.
.Procedure
@@ -29,8 +29,8 @@ A Pod that uses a hostPath volume must be referenced by manual (static) provisio
hostPath:
path: "/mnt/data" <4>
----
<1> The name of the volume. This name is how it is identified by PersistentVolumeClaims or pods.
<2> Used to bind PersistentVolumeClaim requests to this PersistentVolume.
<1> The name of the volume. This name is how it is identified by persistent volume claims or pods.
<2> Used to bind persistent volume claim requests to this persistent volume.
<3> The volume can be mounted as `read-write` by a single node.
<4> The configuration file specifies that the volume is at `/mnt/data` on the cluster's node.
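
Assembled from the callouts, the persistent volume might look like the following sketch; the name, storage class, and capacity are illustrative:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume <1>
  labels:
    type: local
spec:
  storageClassName: manual <2>
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce <3>
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data" <4>
----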

View File

@@ -3,10 +3,10 @@
// * storage/persistent_storage/persistent-storage-local.adoc
[id="local-volume-cr_{context}"]
= Provision the local volumes
= Provisioning the local volumes
Local volumes cannot be created by dynamic provisioning. Instead,
PersistentVolumes must be created by the Local Storage Operator. This
persistent volumes must be created by the Local Storage Operator. This
provisioner will look for any devices, both file system and block volumes,
at the paths specified in the defined resource.
@@ -23,7 +23,7 @@ and paths to the local volumes.
+
[NOTE]
====
Do not use different StorageClass names for the same device. Doing so will create multiple persistent volumes (PV)s.
Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).
====
+
@@ -142,7 +142,7 @@ replicaset.apps/local-storage-operator-54564d9988 1 1 1
Note the desired and current number of daemon set processes. If the desired
count is `0`, it indicates that the label selectors were invalid.
. Verify that the PersistentVolumes were created:
. Verify that the persistent volumes were created:
+
[source,terminal]
----
@@ -160,5 +160,5 @@ local-pv-3fa1c73 100Gi RWO Delete Available
[IMPORTANT]
====
Editing the LocalVolume object does not change the `fsType` or `volumeMode` of existing PersistentVolumes because doing so might result in a destructive operation.
Editing the `LocalVolume` object does not change the `fsType` or `volumeMode` of existing persistent volumes because doing so might result in a destructive operation.
====

View File

@@ -3,18 +3,18 @@
// * storage/persistent_storage/persistent-storage-local.adoc
[id="create-local-pvc_{context}"]
= Create the local volume PersistentVolumeClaim
= Creating the local volume persistent volume claim
Local volumes must be statically created as a PersistentVolumeClaim (PVC)
Local volumes must be statically created as a persistent volume claim (PVC)
to be accessed by the pod.
.Prerequisite
* PersistentVolumes have been created using the local volume provisioner.
* Persistent volumes have been created using the local volume provisioner.
.Procedure
. Create the PVC using the corresponding StorageClass:
. Create the PVC using the corresponding storage class:
+
[source,yaml]
----
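# A sketch; the claim name, size, and storage class are illustrative.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pvc-name <1>
spec:
  volumeMode: Filesystem <2>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi <3>
  storageClassName: local-sc <4>
----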
@@ -34,7 +34,7 @@ spec:
<1> Name of the PVC.
<2> The type of the PVC. Defaults to `Filesystem`.
<3> The amount of storage available to the PVC.
<4> Name of the StorageClass required by the claim.
<4> Name of the storage class required by the claim.
. Create the PVC in the {product-title} cluster, specifying the file
you just created:

View File

@@ -5,7 +5,7 @@
[id="local-removing-device_{context}"]
= Removing a local volume or local volume set
Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different StorageClass, then additional steps are needed.
Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.
[NOTE]
====
@@ -14,16 +14,16 @@ The following procedure outlines an example for removing a local volume. The sam
.Prerequisite
* The PersistentVolume must be in a `Released` or `Available` state.
* The persistent volume must be in a `Released` or `Available` state.
+
[WARNING]
====
Deleting a PersistentVolume that is still in use can result in data loss or corruption.
Deleting a persistent volume that is still in use can result in data loss or corruption.
====
.Procedure
. Edit the previously created LocalVolume to remove any unwanted disks.
. Edit the previously created local volume to remove any unwanted disks.
.. Edit the cluster resource:
+
@@ -34,7 +34,7 @@ $ oc edit localvolume <name> -n openshift-local-storage
.. Navigate to the lines under `devicePaths`, and delete any representing unwanted disks.
. Delete any PersistentVolumes created.
. Delete any persistent volumes created.
+
[source,terminal]
----
@@ -68,7 +68,7 @@ $ chroot /host
----
$ cd /mnt/openshift-local-storage/<sc-name> <1>
----
<1> The name of the StorageClass used to create the local volumes.
<1> The name of the storage class used to create the local volumes.
.. Delete the symlink belonging to the removed device.
+

View File

@@ -7,7 +7,7 @@
Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the `Pod` or `DaemonSet` definition. This allows the created resources to run on these tainted nodes.
You apply tolerations to the Local Storage Operator pod through the LocalVolume resource
You apply tolerations to the Local Storage Operator pod through the `LocalVolume` resource
and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.
[IMPORTANT]
@@ -48,7 +48,7 @@ To configure local volumes for scheduling on tainted nodes:
----
<1> Specify the key that you added to the node.
<2> Specify the `Equal` operator to require the `key`/`value` parameters to match. If operator is 'Exists`, the system checks that the key exists and ignores the value. If operator is `Equal`, then the key and value must match.
<2> Specify the `Equal` operator to require the `key`/`value` parameters to match. If operator is `Exists`, the system checks that the key exists and ignores the value. If operator is `Equal`, then the key and value must match.
<3> Specify the value `local` of the tainted node.
<4> The volume mode, either `Filesystem` or `Block`, defining the type of the local volumes.
<5> The path containing a list of local storage devices to choose from.
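
Assembled from the callouts, the resource might look like the following sketch; the taint key and value, storage class, and device path are illustrative:

[source,yaml]
----
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  tolerations:
    - key: localstorage <1>
      operator: Equal <2>
      value: local <3>
  storageClassDevices:
    - storageClassName: local-sc
      volumeMode: Filesystem <4>
      devicePaths: <5>
        - /dev/xvdg
----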

View File

@@ -28,11 +28,11 @@ spec:
requests:
storage: 1Gi <3>
----
<1> A unique name that represents the PersistentVolumeClaim.
<2> The PersistentVolumeClaim's access mode. With `ReadWriteOnce`, the volume can be mounted with read and write permissions by a single node.
<3> The size of the PersistentVolumeClaim.
<1> A unique name that represents the persistent volume claim.
<2> The access mode of the persistent volume claim. With `ReadWriteOnce`, the volume can be mounted with read and write permissions by a single node.
<3> The size of the persistent volume claim.
. Create the PersistentVolumeClaim from the file:
. Create the `PersistentVolumeClaim` object from the file:
+
[source,terminal]
----

View File

@@ -5,7 +5,7 @@
[id="vsphere-dynamic-provisioning_{context}"]
= Dynamically provisioning VMware vSphere volumes using the UI
{product-title} installs a default StorageClass, named `thin`, that uses the `thin` disk format for provisioning volumes.
{product-title} installs a default storage class, named `thin`, that uses the `thin` disk format for provisioning volumes.
.Prerequisites
@@ -19,7 +19,7 @@
. Define the required options on the resulting page.
.. Select the `thin` StorageClass.
.. Select the `thin` storage class.
.. Enter a unique name for the storage claim.
@@ -27,4 +27,4 @@
.. Define the size of the storage claim.
. Click *Create* to create the PersistentVolumeClaim and generate a PersistentVolume.
. Click *Create* to create the persistent volume claim and generate a persistent volume.

View File

@@ -5,6 +5,6 @@
[id="vsphere-formatting-volumes_{context}"]
= Formatting VMware vSphere volumes
Before {product-title} mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the `fsType` parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
Before {product-title} mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the `fsType` parameter value in the `PersistentVolume` (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system.
Because {product-title} formats them before the first use, you can use unformatted vSphere volumes as PVs.

View File

@@ -29,7 +29,7 @@ $ vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk
$ shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk
----
. Create a PersistentVolume that references the VMDKs. Create a file, `pv1.yaml`, with the `PersistentVolume` object definition:
. Create a persistent volume that references the VMDKs. Create a file, `pv1.yaml`, with the `PersistentVolume` object definition:
+
[source,yaml]
----
@@ -47,7 +47,7 @@ spec:
volumePath: "[datastore1] volumes/myDisk" <4>
fsType: ext4 <5>
----
<1> The name of the volume. This name is how it is identified by PersistentVolumeClaims or pods.
<1> The name of the volume. This name is how it is identified by persistent volume claims or pods.
<2> The amount of storage allocated to this volume.
<3> The volume type used, with `vsphereVolume` for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
<4> The existing VMDK volume to use. If you used `vmkfstools`, you must enclose the datastore name in square brackets, `[]`, in the volume definition, as shown previously.
@@ -55,17 +55,17 @@ spec:
+
[IMPORTANT]
====
Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and Pod failure.
Changing the value of the `fsType` parameter after the volume is formatted and provisioned can result in data loss and pod failure.
====
. Create the PersistentVolume from the file:
. Create the `PersistentVolume` object from the file:
+
[source,terminal]
----
$ oc create -f pv1.yaml
----
. Create a PersistentVolumeClaim that maps to the PersistentVolume you created in the previous step. Create a file, `pvc1.yaml`, with the `PersistentVolumeClaim` object definition:
. Create a persistent volume claim that maps to the persistent volume you created in the previous step. Create a file, `pvc1.yaml`, with the `PersistentVolumeClaim` object definition:
+
[source,yaml]
----
@@ -81,12 +81,12 @@ spec:
storage: "1Gi" <3>
volumeName: pv1 <4>
----
<1> A unique name that represents the PersistentVolumeClaim.
<2> The PersistentVolumeClaims access mode. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node.
<3> The size of the PersistentVolumeClaim.
<4> The name of the existing PersistentVolume.
<1> A unique name that represents the persistent volume claim.
<2> The access mode of the persistent volume claim. With `ReadWriteOnce`, the volume can be mounted with read and write permissions by a single node.
<3> The size of the persistent volume claim.
<4> The name of the existing persistent volume.
. Create the PersistentVolumeClaim from the file:
. Create the `PersistentVolumeClaim` object from the file:
+
[source,terminal]
----
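$ oc create -f pvc1.yaml
----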

View File

@@ -7,14 +7,10 @@
[id="storage-create-azure-storage-class_{context}"]
= Creating the Azure storage class
StorageClasses are used to differentiate and delineate storage levels and
Storage classes are used to differentiate and delineate storage levels and
usages. By defining a storage class, users can obtain dynamically provisioned
persistent volumes.
.Additional References
* https://kubernetes.io/docs/concepts/storage/storage-classes/#new-azure-disk-storage-class-starting-from-v1-7-2[Azure Disk Storage Class]
.Procedure
. In the {product-title} console, click *Storage* -> *Storage Classes*.
@@ -41,3 +37,7 @@ and `managed`.
.. Enter additional parameters for the storage class as desired.
. Click *Create* to create the storage class.
.Additional resources
* https://kubernetes.io/docs/concepts/storage/storage-classes/#new-azure-disk-storage-class-starting-from-v1-7-2[Azure Disk Storage Class]

View File

@@ -10,9 +10,9 @@
[id="storage-create-{StorageClass}-storage-class_{context}"]
= Creating the {StorageClass} Storage Class
= Creating the {StorageClass} storage class
StorageClasses are used to differentiate and delineate storage levels and
Storage classes are used to differentiate and delineate storage levels and
usages. By defining a storage class, users can obtain dynamically provisioned
persistent volumes.

View File

@@ -5,14 +5,14 @@
[id="add-volume-expansion_{context}"]
= Enabling volume expansion support
Before you can expand persistent volumes, the StorageClass must
Before you can expand persistent volumes, the `StorageClass` object must
have the `allowVolumeExpansion` field set to `true`.
.Procedure
* Edit the StorageClass and add the `allowVolumeExpansion` attribute.
* Edit the `StorageClass` object and add the `allowVolumeExpansion` attribute.
The following example demonstrates adding this line at the bottom
of the StorageClass's configuration.
of the storage class configuration.
+
[source,yaml]
----
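# A sketch; the class name and provisioner are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true <1>
----
<1> Setting this attribute to `true` allows PVCs to be expanded.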

View File

@@ -3,7 +3,7 @@
// * storage/expanding-persistent-volume.adoc
[id="expanding-pvc-filesystem_{context}"]
= Expanding Persistent Volume Claims (PVCs) with a file system
= Expanding persistent volume claims (PVCs) with a file system
Expanding PVCs based on volume types that need file system resizing,
such as GCE PD, EBS, and Cinder, is a two-step process.
@@ -15,7 +15,7 @@ with the volume.
.Prerequisites
* The controlling StorageClass must have `allowVolumeExpansion` set
* The controlling `StorageClass` object must have `allowVolumeExpansion` set
to `true`.
.Procedure
@@ -49,7 +49,7 @@ $ oc describe pvc <pvc_name>
----
. When the cloud provider object has finished resizing, the
persistent volume object reflects the newly requested size in
`PersistentVolume` object reflects the newly requested size in
`PersistentVolume.Spec.Capacity`. At this point, you can create or
recreate a new pod from the PVC to finish the file system resizing.
Once the pod is running, the newly requested size is available and the

View File

@@ -5,9 +5,9 @@
[id="expanding-flexvolume_{context}"]
= Expanding FlexVolume with a supported driver
When using FlexVolume to connect to your backend storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in {product-title}.
When using FlexVolume to connect to your back-end storage system, you can expand persistent storage volumes after they have already been created. This is done by manually updating the persistent volume claim (PVC) in {product-title}.
FlexVolume allows expansion if the driver is set with `RequiresFSResize` to `true`. The FlexVolume can be expanded on Pod restart.
FlexVolume allows expansion if the driver is set with `RequiresFSResize` to `true`. The FlexVolume can be expanded on pod restart.
Similar to other volume types, FlexVolume volumes can also be expanded when in use by a pod.
@@ -16,7 +16,7 @@ Similar to other volume types, FlexVolume volumes can also be expanded when in u
* The underlying volume driver supports resize.
* The driver is set with the `RequiresFSResize` capability to `true`.
* Dynamic provisioning is used.
* The controlling StorageClass has `allowVolumeExpansion` set to `true`.
* The controlling `StorageClass` object has `allowVolumeExpansion` set to `true`.
.Procedure

View File

@@ -3,24 +3,24 @@
// * storage/expanding-persistent-volumes.adoc
[id="expanding-recovering-from-failure_{context}"]
= Recovering from Failure when Expanding Volumes
= Recovering from failure when expanding volumes
If expanding underlying storage fails, the {product-title} administrator
can manually recover the Persistent Volume Claim (PVC) state and cancel
the resize requests. Otherwise, the resize requests are continuously
If expanding underlying storage fails, the {product-title} administrator
can manually recover the persistent volume claim (PVC) state and cancel
the resize requests. Otherwise, the resize requests are continuously
retried by the controller without administrator intervention.
.Procedure
. Mark the persistent volume (PV) that is bound to the PVC with the
`Retain` reclaim policy. This can be done by editing the PV and changing
. Mark the persistent volume (PV) that is bound to the PVC with the
`Retain` reclaim policy. This can be done by editing the PV and changing
`persistentVolumeReclaimPolicy` to `Retain`.
. Delete the PVC. This will be recreated later.
. To ensure that the newly created PVC can bind to the PV marked `Retain`,
manually edit the PV and delete the `claimRef` entry from the PV specs.
This marks the PV as `Available`.
. Re-create the PVC in a smaller size, or a size that can be allocated by
the underlying storage provider.
. Set the `volumeName` field of the PVC to the name of the PV. This binds
. To ensure that the newly created PVC can bind to the PV marked `Retain`,
manually edit the PV and delete the `claimRef` entry from the PV specs.
This marks the PV as `Available`.
. Re-create the PVC in a smaller size, or a size that can be allocated by
the underlying storage provider.
. Set the `volumeName` field of the PVC to the name of the PV. This binds
the PVC to the provisioned PV only.
. Restore the reclaim policy on the PV.

View File

@@ -3,17 +3,17 @@
// * storage/persistent_storage/persistent-storage-azure-file.adoc
[id="create-azure-file-pod_{context}"]
= Mount the Azure File share in a Pod
= Mount the Azure File share in a pod
After the PersistentVolumeClaim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod.
After the persistent volume claim has been created, it can be used by an application. The following example demonstrates mounting this share inside of a pod.
.Prerequisites
* A PersistentVolumeClaim exists that is mapped to the underlying Azure File share.
* A persistent volume claim exists that is mapped to the underlying Azure File share.
.Procedure
* Create a Pod that mounts the existing PersistentVolumeClaim:
* Create a pod that mounts the existing persistent volume claim:
+
[source,yaml]
----
@@ -34,4 +34,4 @@ spec:
----
<1> The name of the pod.
<2> The path to mount the Azure File share inside the pod.
<3> The name of the PersistentVolumeClaim that has been previously created.
<3> The name of the `PersistentVolumeClaim` object that has been previously created.
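
Assembled from the callouts, the pod might look like the following sketch; the pod name, image, and claim name are illustrative:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pod-name <1>
spec:
  containers:
    - name: ubi
      image: registry.access.redhat.com/ubi8/ubi
      volumeMounts:
        - mountPath: "/data" <2>
          name: azure-file-share
  volumes:
    - name: azure-file-share
      persistentVolumeClaim:
        claimName: claim1 <3>
----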

View File

@@ -3,9 +3,9 @@
// * storage/persistent_storage/persistent-storage-azure-file.adoc
[id="create-azure-file-secret_{context}"]
= Create the Azure File share PersistentVolumeClaim
= Create the Azure File share persistent volume claim
To create the PersistentVolumeClaim, you must first define a Secret that contains the Azure account and key. This Secret is used in the PersistentVolume definition, and will be referenced by the PersistentVolumeClaim for use in applications.
To create the persistent volume claim, you must first define a `Secret` object that contains the Azure account and key. This secret is used in the `PersistentVolume` definition, and will be referenced by the persistent volume claim for use in applications.
.Prerequisites
@@ -15,7 +15,7 @@ key, are available.
.Procedure
. Create a Secret that contains the Azure File credentials:
. Create a `Secret` object that contains the Azure File credentials:
+
[source,terminal]
----
@@ -25,7 +25,7 @@ $ oc create secret generic <secret-name> --from-literal=azurestorageaccountname=
<1> The Azure File storage account name.
<2> The Azure File storage account key.
. Create a PersistentVolume that references the Secret you created:
. Create a `PersistentVolume` object that references the `Secret` object you created:
+
[source,yaml]
----
@@ -44,12 +44,12 @@ spec:
shareName: share-1 <4>
readOnly: false
----
<1> The name of the PersistentVolume.
<2> The size of this PersistentVolume.
<3> The name of the Secret that contains the Azure File share credentials.
<1> The name of the persistent volume.
<2> The size of this persistent volume.
<3> The name of the secret that contains the Azure File share credentials.
<4> The name of the Azure File share.
. Create a PersistentVolumeClaim that maps to the PersistentVolume you created:
. Create a `PersistentVolumeClaim` object that maps to the persistent volume you created:
+
[source,yaml]
----
@@ -66,9 +66,9 @@ spec:
storageClassName: azure-file-sc <3>
volumeName: "pv0001" <4>
----
<1> The name of the PersistentVolumeClaim.
<2> The size of this PersistentVolumeClaim.
<3> The name of the StorageClass that is used to provision the PersistentVolume.
Specify the StorageClass used in the PersistentVolume definition.
<4> The name of the existing PersistentVolume that references the
<1> The name of the persistent volume claim.
<2> The size of this persistent volume claim.
<3> The name of the storage class that is used to provision the persistent volume.
Specify the storage class used in the `PersistentVolume` definition.
<4> The name of the existing `PersistentVolume` object that references the
Azure File share.
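
Assembled from the callouts, the claim might look like the following sketch; the claim name and size are illustrative:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1 <1>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi <2>
  storageClassName: azure-file-sc <3>
  volumeName: "pv0001" <4>
----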

View File

@@ -77,7 +77,7 @@ physical device where the raw block is mapped to the system.
<3> The volume source must be of type `persistentVolumeClaim` and must
match the name of the PVC as expected.
.Accepted values for `VolumeMode`
.Accepted values for `volumeMode`
[cols="1,2",options="header"]
|===
@@ -95,9 +95,9 @@ match the name of the PVC as expected.
[cols="1,2,3",options="header"]
|===
|PV VolumeMode
|PVC VolumeMode
|Binding Result
|PV `volumeMode`
|PVC `volumeMode`
|Binding result
|Filesystem
|Filesystem

View File

@@ -2,7 +2,7 @@
//
// * storage/persistent_storage-aws.adoc
= Creating the Persistent Volume Claim
= Creating the persistent volume claim
.Prerequisites

View File

@@ -3,14 +3,14 @@
// storage/persistent_storage/persistent-storage-efs.adoc
[id="efs-creating-configmap_{context}"]
= Store the EFS variables in a ConfigMap
= Store the EFS variables in a config map
It is recommended to use a ConfigMap to contain all the environment
It is recommended to use a config map to contain all the environment
variables that are required for the EFS provisioner.
.Procedure
. Define an {product-title} ConfigMap that contains the environment
. Define an {product-title} `ConfigMap` object that contains the environment
variables by creating a `configmap.yaml` file that contains the following contents:
+
[source,yaml]
@@ -27,7 +27,7 @@ data:
----
<1> Defines the Amazon Web Services (AWS) EFS file system ID.
<2> The AWS region of the EFS file system, such as `us-east-1`.
<3> The name of the provisioner for the associated StorageClass.
<3> The name of the provisioner for the associated storage class.
<4> An optional argument that specifies the new DNS name where the EFS volume
is located. If no DNS name is provided, the provisioner will search for the
EFS volume at `<file-system-id>.efs.<aws-region>.amazonaws.com`.
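Under those callouts, a `configmap.yaml` sketch with placeholder values for the file system ID, region, and provisioner name:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-0123456d # assumed EFS file system ID
  aws.region: us-east-1 # assumed AWS region
  provisioner.name: openshift.org/aws-efs # assumed provisioner name
  dns.name: "" # optional custom DNS name; empty uses the default
----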

View File

@@ -5,15 +5,15 @@
[id="efs-provisioner_{context}"]
= Create the EFS provisioner
The EFS provisioner is an {product-title} Pod that mounts the EFS volume
The EFS provisioner is an {product-title} pod that mounts the EFS volume
as an NFS share.
.Prerequisites
* Create a ConfigMap that defines the EFS environment variables.
* Create a config map that defines the EFS environment variables.
* Create a service account that contains the necessary cluster
and role permissions.
* Create a StorageClass for provisioning volumes.
* Create a storage class for provisioning volumes.
* Configure the Amazon Web Services (AWS) security groups to allow incoming
NFS traffic on all {product-title} nodes.
* Configure the AWS EFS volume security groups to allow incoming
@@ -67,7 +67,7 @@ spec:
path: / <2>
----
<1> Contains the DNS name of the EFS volume. This field must be updated
for the Pod to discover the EFS volume.
for the pod to discover the EFS volume.
<2> The mount path of the EFS volume. Each persistent volume is created
as a separate subdirectory on the EFS volume. If this EFS volume is used
for other projects outside of {product-title}, then it is recommended to

View File

@@ -3,9 +3,9 @@
// storage/persistent_storage/persistent-storage-efs.adoc
[id="efs-pvc_{context}"]
= Create the EFS PersistentVolumeClaim
= Create the EFS persistent volume claim
EFS PersistentVolumeClaims are created to allow pods
EFS persistent volume claims are created to allow pods
to mount the underlying EFS storage.
.Prerequisites
@@ -32,7 +32,7 @@ created storage claim.
+
[NOTE]
====
Although you must enter a size, every Pod that access the EFS volume has
Although you must enter a size, every pod that accesses the EFS volume has
unlimited storage. Define a value, such as `1Mi`, that will remind you that
the storage size is unlimited.
====
@@ -42,7 +42,7 @@ persistent volume.
.Procedure (CLI)
. Alternately, you can define EFS PersistentVolumeClaims by creating a file, `pvc.yaml`, with the following contents:
. Alternatively, you can define EFS persistent volume claims by creating a file, `pvc.yaml`, with the following contents:
+
[source,yaml]
----
@@ -67,7 +67,7 @@ spec:
<1> A unique name for the PVC.
<2> The access mode to determine the read and write access for the created PVC.
<3> Defines the size of the PVC.
<4> Name of the StorageClass for the EFS provisioner.
<4> Name of the storage class for the EFS provisioner.
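A complete `pvc.yaml` consistent with these callouts might look like the following sketch; the claim name `efs-claim` and storage class name `aws-efs` are assumptions:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim # assumed unique name for the PVC
spec:
  accessModes:
    - ReadWriteMany # EFS shares support many readers and writers
  resources:
    requests:
      storage: 1Mi # reminder value; EFS capacity is effectively unlimited
  storageClassName: aws-efs # assumed storage class for the EFS provisioner
----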
. After the file has been configured, create it in your cluster by running the following command:
+

View File

@@ -3,15 +3,15 @@
// storage/persistent_storage/persistent-storage-efs.adoc
[id="efs-storage-class_{context}"]
= Create the EFS StorageClass
= Create the EFS storage class
Before PersistentVolumeClaims can be created, a StorageClass
Before persistent volume claims can be created, a storage class
must exist in the {product-title} cluster. The following instructions
create the StorageClass for the EFS provisioner.
create the storage class for the EFS provisioner.
.Procedure
. Define an {product-title} ConfigMap that contains the environment
. Define an {product-title} storage class by creating a `storageclass.yaml` file
with the following contents:
+
[source,yaml]

View File

@@ -3,14 +3,13 @@
// * storage/persistent_storage-iscsi.adoc
[id="iscsi-custom-iqn_{context}"]
= iSCSI Custom Initiator IQN
= iSCSI custom initiator IQN
Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI
targets are restricted to certain IQNs, but the nodes that the iSCSI PVs
are attached to are not guaranteed to have these IQNs.
To specify a custom initiator IQN, use the `initiatorName` field.
====
[source,yaml]
----
apiVersion: v1
@@ -32,4 +31,3 @@ spec:
readOnly: false
----
<1> Specify the name of the initiator.
====

View File

@@ -3,7 +3,7 @@
// * storage/persistent_storage-iscsi.adoc
[id="enforcing-disk-quotas-iscsi_{context}"]
= Enforcing Disk Quotas
= Enforcing disk quotas
Use LUN partitions to enforce disk quotas and size constraints. Each LUN
is one persistent volume. Kubernetes enforces unique names for persistent
volumes.

View File

@@ -3,7 +3,7 @@
// * storage/persistent_storage-iscsi.adoc
[id="iscsi-multipath_{context}"]
= iSCSI Multipathing
= iSCSI multipathing
For iSCSI-based storage, you can configure multiple paths by using the
same IQN for more than one target portal IP address. Multipathing ensures
access to the persistent volume when one or more of the components in a
@@ -12,7 +12,6 @@ path fail.
To specify multiple paths in the pod specification, use the `portals` field.
For example:
====
[source,yaml]
----
apiVersion: v1
@@ -33,4 +32,3 @@ spec:
readOnly: false
----
<1> Add additional target portals using the `portals` field.
====

View File

@@ -9,9 +9,7 @@ mounting it as a volume in {product-title}. All that is required for
iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN),
a valid LUN number, the filesystem type, and the `PersistentVolume` API.
.Persistent Volume Object Definition
====
.`PersistentVolume` object definition
[source,yaml]
----
apiVersion: v1
@@ -29,4 +27,3 @@ spec:
lun: 0
fsType: 'ext4'
----
====

View File

@@ -3,8 +3,8 @@
// * storage/persistent_storage-iscsi.adoc
[id="volume-security-iscsi_{context}"]
= iSCSI Volume Security
Users request storage with a `PersistentVolumeClaim`. This claim only
= iSCSI volume security
Users request storage with a `PersistentVolumeClaim` object. This claim only
lives in the user's namespace and can only be referenced by a pod within
that same namespace. Any attempt to access a persistent volume claim across a
namespace causes the pod to fail.
@@ -15,7 +15,6 @@ Each iSCSI LUN must be accessible by all nodes in the cluster.
Optionally, {product-title} can use CHAP to authenticate itself to iSCSI targets:
====
[source,yaml]
----
apiVersion: v1
@@ -40,6 +39,5 @@ spec:
----
<1> Enable CHAP authentication of iSCSI discovery.
<2> Enable CHAP authentication of iSCSI session.
<3> Specify name of Secrets object with user name + password. This Secrets
<3> Specify the name of the `Secret` object that contains the user name and password. This `Secret`
object must be available in all namespaces that can use the referenced volume.
====

View File

@@ -73,7 +73,7 @@ If a user deletes a PVC that is in active use by a pod, the PVC is not removed i
endif::openshift-origin,openshift-enterprise,openshift-webscale[]
[id="releasing_{context}"]
== Release a PersistentVolume
== Release a persistent volume
When you are finished with a volume, you can delete the PVC object from
the API, which allows reclamation of the resource. The volume is
@@ -82,9 +82,9 @@ for another claim. The previous claimant's data remains on the volume and
must be handled according to policy.
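For example, to release the volume bound to a claim named `claim1` (an assumed name):

[source,terminal]
----
$ oc delete pvc claim1
----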
[id="reclaiming_{context}"]
== Reclaim policy for PersistentVolumes
== Reclaim policy for persistent volumes
The reclaim policy of a PersistentVolume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be
The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be
`Retain`, `Recycle`, or `Delete`.
* `Retain` reclaim policy allows manual reclamation of the resource for

View File

@@ -4,8 +4,8 @@
= Additional configuration and troubleshooting
Depending on what version of NFS is being used and how it is configured,
there may be additional configuration steps needed for proper export and
security mapping. The following are some that may apply:
[cols="1,2"]
@@ -18,8 +18,8 @@ a|- Could be attributed to the ID mapping settings, found in `/etc/idmapd.conf`
|Disabling ID mapping on NFSv4
a|- On both the NFS client and server, run:
+
[source,terminal]
----
# echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping
----
|===

View File

@@ -45,5 +45,5 @@ conditions:
* The NFS export and directory must be set up so that they are accessible
by the target pods. Either set the export to be owned by the container's
primary UID, or supply the Pod group access using `supplementalGroups`,
primary UID, or grant the pod group access by using `supplementalGroups`,
as shown in the group IDs above.

View File

@@ -9,9 +9,8 @@
The recommended way to handle NFS access, assuming it is not an option to
change permissions on the NFS export, is to use supplemental groups.
Supplemental groups in {product-title} are used for shared storage, of
which NFS is an example. In contrast block storage, such as
iSCSI, use the `fsGroup` SCC strategy and the `fsGroup` value in the
Pod's `securityContext`.
which NFS is an example. In contrast, block storage such as
iSCSI uses the `fsGroup` SCC strategy and the `fsGroup` value in the `securityContext` of the pod.
[NOTE]
====
@@ -20,7 +19,7 @@ To gain access to persistent storage, it is generally preferable to use suppleme
Because the group ID on the example target NFS directory
is `5555`, the pod can define that group ID by using `supplementalGroups`
under the Pod's `securityContext` definition. For example:
under the `securityContext` definition of the pod. For example:
[source,yaml]
----
@@ -31,17 +30,17 @@ spec:
securityContext: <1>
supplementalGroups: [5555] <2>
----
<1> `securityContext` must be defined at the Pod level, not under a
<1> `securityContext` must be defined at the pod level, not under a
specific container.
<2> An array of GIDs defined for the pod. In this case, there is
one element in the array. Additional GIDs would be comma-separated.
Assuming there are no custom SCCs that might satisfy the Pod's
requirements, the Pod likely matches the `restricted` SCC. This SCC has
Assuming there are no custom SCCs that might satisfy the pod
requirements, the pod likely matches the `restricted` SCC. This SCC has
the `supplementalGroups` strategy set to `RunAsAny`, meaning that any
supplied group ID is accepted without range checking.
As a result, the above Pod passes admissions and is launched. However,
As a result, the above pod passes admission and is launched. However,
if group ID range checking is desired, a custom SCC is the preferred
solution. A custom SCC can be created such that minimum
and maximum group IDs are defined, group ID range checking is enforced,

View File

@@ -5,7 +5,7 @@
[id="nfs-user-id_{context}"]
= User IDs
User IDs can be defined in the container image or in the Pod definition.
User IDs can be defined in the container image or in the `Pod` definition.
[NOTE]
====
@@ -15,7 +15,7 @@ persistent storage versus using user IDs.
In the example target NFS directory shown above, the container
needs its UID set to `65534`, ignoring group IDs for the moment, so the
following can be added to the Pod definition:
following can be added to the `Pod` definition:
[source,yaml]
----
@@ -26,14 +26,12 @@ spec:
securityContext:
runAsUser: 65534 <2>
----
<1> Pods contain a `securityContext` specific to each container and
a Pod's `securityContext` which applies to all containers defined in
<1> Pods contain a `securityContext` definition specific to each container and
a pod-level `securityContext`, which applies to all containers defined in
the pod.
<2> `65534` is the `nfsnobody` user.
Assuming the `default` project and the `restricted` SCC, the Pod's requested
user ID of `65534` is not allowed, and therefore the Pod fails. The
Pod fails for the following reasons:
Assuming that the project is `default` and the SCC is `restricted`, the user ID of `65534` as requested by the pod is not allowed. Therefore, the pod fails for the following reasons:
* It requests `65534` as its user ID.
* All SCCs available to the Pod are examined to see which SCC allows a

View File

@@ -10,7 +10,7 @@ SELinux considerations. The user is expected to understand the basics of
POSIX permissions, process UIDs, supplemental groups, and SELinux.
Developers request NFS storage by referencing either a PVC by name or the
NFS volume plug-in directly in the `volumes` section of their Pod
NFS volume plug-in directly in the `volumes` section of their `Pod`
definition.
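As a sketch of the second approach, the following `Pod` definition references the NFS volume plug-in directly; the server name and export path are assumptions:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod # assumed pod name
spec:
  containers:
    - name: app
      image: registry.redhat.io/ubi8/ubi # assumed image
      volumeMounts:
        - mountPath: /data
          name: nfs-vol
  volumes:
    - name: nfs-vol
      nfs: # NFS volume plug-in referenced directly
        server: nfs.example.com # assumed NFS server
        path: /exports/share-1 # assumed export path
----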
The `/etc/exports` file on the NFS server contains the accessible NFS

View File

@@ -8,7 +8,7 @@
Each PV contains a `spec` and `status`, which are the specification and
status of the volume, for example:
.PV object definition example
.`PersistentVolume` object definition example
[source,yaml]
----
apiVersion: v1
@@ -34,7 +34,7 @@ once it is released.
[id="types-of-persistent-volumes_{context}"]
== Types of PVs
{product-title} supports the following `PersistentVolume` plug-ins:
{product-title} supports the following persistent volume plug-ins:
// - GlusterFS
// - Ceph RBD
@@ -65,8 +65,7 @@ endif::openshift-enterprise,openshift-webscale,openshift-origin[]
[id="pv-capacity_{context}"]
== Capacity
Generally, a PV has a specific storage capacity. This is set by using the
PV's `capacity` attribute.
Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the `capacity` attribute of the PV.
Currently, storage capacity is the only resource that can be set or
requested. Future attributes may include IOPS, throughput, and so on.
@@ -74,7 +73,7 @@ requested. Future attributes may include IOPS, throughput, and so on.
[id="pv-access-modes_{context}"]
== Access modes
A `PersistentVolume` can be mounted on a host in any way supported by the
A persistent volume can be mounted on a host in any way supported by the
resource provider. Providers have different capabilities and each PV's
access modes are set to the specific modes supported by that particular
volume. For example, NFS can support multiple read-write clients, but a
@@ -120,11 +119,11 @@ endif::[]
[IMPORTANT]
====
A volume's `AccessModes` are descriptors of the volume's capabilities. They
Volume access modes are descriptors of volume capabilities. They
are not enforced constraints. The storage provider is responsible for
runtime errors resulting from invalid use of the resource.
For example, NFS offers *ReadWriteOnce* access mode. You must
For example, NFS offers `ReadWriteOnce` access mode. You must
mark the claims as `read-only` if you want to use the volume's
ROX capability. Errors in the provider show up at runtime as mount errors.
@@ -138,7 +137,7 @@ the pods that use these volumes are deleted.
.Supported access modes for PVs
[cols=",^v,^v,^v", width="100%",options="header"]
|===
|Volume Plug-in |ReadWriteOnce ^[1]^ |ReadOnlyMany |ReadWriteMany
|Volume plug-in |ReadWriteOnce ^[1]^ |ReadOnlyMany |ReadWriteMany
|AWS EBS ^[2]^ | ✅ | - | -
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
|Azure File | ✅ | ✅ | ✅

View File

@@ -5,10 +5,10 @@
[id="persistent-volume-claims_{context}"]
= Persistent volume claims
Each persistent volume claim (PVC) contains a `spec` and `status`, which
is the specification and status of the claim, for example:
Each `PersistentVolumeClaim` object contains a `spec` and `status`, which
are the specification and status of the persistent volume claim (PVC), for example:
.PVC object definition example
.`PersistentVolumeClaim` object definition example
[source,yaml]
----
kind: PersistentVolumeClaim
@@ -44,11 +44,11 @@ in the PVC.
[IMPORTANT]
====
The ClusterStorageOperator may install a default StorageClass depending
on the platform in use. This StorageClass is owned and controlled by the
The Cluster Storage Operator might install a default storage class depending
on the platform in use. This storage class is owned and controlled by the
operator. It cannot be deleted or modified beyond defining annotations
and labels. If different behavior is desired, you must define a custom
StorageClass.
storage class.
====
The cluster administrator can also set a default storage class for all PVCs.
@@ -58,7 +58,7 @@ to a PV without a storage class.
[NOTE]
====
If more than one StorageClass is marked as default, a PVC can only be created if the `storageClassName` is explicitly specified. Therefore, only one StorageClass should be set as the default.
If more than one storage class is marked as default, a PVC can only be created if the `storageClassName` is explicitly specified. Therefore, only one storage class should be set as the default.
====
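When multiple defaults exist, a claim must name its storage class explicitly, as in the following sketch; the class name `gp2` is an assumption:

[source,yaml]
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp2 # explicit storage class name (assumed)
----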
[id="pvc-access-modes_{context}"]

View File

@@ -3,9 +3,9 @@
// * storage/understanding-persistent-storage.adoc
[id="reclaim-manual_{context}"]
= Reclaiming a PersistentVolume manually
= Reclaiming a persistent volume manually
When a PersistentVolumeClaim (PVC) is deleted, the PersistentVolume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the previous claimant's data remains on the volume.
When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the previous claimant remains on the volume.
.Procedure
To manually reclaim the PV as a cluster administrator:

View File

@@ -3,11 +3,11 @@
// * storage/understanding-persistent-storage.adoc
[id="reclaim-policy_{context}"]
= Changing the reclaim policy of a PersistentVolume
= Changing the reclaim policy of a persistent volume
To change the reclaim policy of a PersistentVolume:
To change the reclaim policy of a persistent volume:
. List the PersistentVolumes in your cluster:
. List the persistent volumes in your cluster:
+
[source,terminal]
----
@@ -23,7 +23,7 @@ NAME CAPACITY ACCESSMODES RECLAIMPOLIC
pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s
----
. Choose one of your PersistentVolumes and change its reclaim policy:
. Choose one of your persistent volumes and change its reclaim policy:
+
[source,terminal]
----
@@ -31,7 +31,7 @@ $ oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retai
----
+
. Verify that your chosen PersistentVolume has the right policy:
. Verify that your chosen persistent volume has the right policy:
+
[source,terminal]
----

View File

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
toc::[]
Container Storage Interface (CSI) inline ephemeral volumes allow you to define a Pod spec that creates inline ephemeral volumes when a Pod is deployed and delete them when a Pod is destroyed.
Container Storage Interface (CSI) inline ephemeral volumes allow you to define a `Pod` spec that creates inline ephemeral volumes when a pod is deployed and deletes them when the pod is destroyed.
This feature is only available with supported Container Storage Interface (CSI) drivers.
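For reference, a minimal sketch of such a `Pod` spec; the driver name and volume attributes are placeholders for whatever your CSI driver supports:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: registry.redhat.io/ubi8/ubi # assumed image
      command: ["sleep", "1000000"]
      volumeMounts:
        - mountPath: /data
          name: my-csi-inline-vol
  volumes:
    - name: my-csi-inline-vol
      csi: # inline CSI ephemeral volume, removed when the pod is destroyed
        driver: inline.storage.kubernetes.io # placeholder CSI driver name
        volumeAttributes:
          size: 1Gi # driver-specific attribute (assumed)
----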

View File

@@ -16,7 +16,7 @@ Familiarity with xref:../../storage/understanding-persistent-storage.adoc#unders
To create CSI-provisioned PVs that mount to AWS EBS storage assets, {product-title} installs the AWS EBS CSI Driver Operator and the AWS EBS CSI driver by default in the `openshift-cluster-csi-drivers` namespace.
* The _AWS EBS CSI Driver Operator_ provides a StorageClass by default that you can use to create PVCs. You also have the option to create the AWS EBS StorageClass as described in xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent Storage Using AWS Elastic Block Store].
* The _AWS EBS CSI Driver Operator_ provides a storage class by default that you can use to create PVCs. You also have the option to create the AWS EBS storage class as described in xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent storage using AWS Elastic Block Store].
* The _AWS EBS CSI driver_ enables you to create and mount AWS EBS PVs.
@@ -32,8 +32,8 @@ include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
{product-title} defaults to using an in-tree, or non-CSI, driver to provision AWS EBS storage. This in-tree driver will be removed in a subsequent update of {product-title}. Volumes provisioned using the existing in-tree driver are planned for migration to the CSI driver at that time.
====
For information about dynamically provisioning AWS EBS persistent volumes in {product-title}, see xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent Storage Using AWS Elastic Block Store].
For information about dynamically provisioning AWS EBS persistent volumes in {product-title}, see xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent storage using AWS Elastic Block Store].
.Additional resources
* xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent Storage Using AWS Elastic Block Store]
* xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-aws[Persistent storage using AWS Elastic Block Store]
* xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[Configuring CSI volumes]

View File

@@ -13,7 +13,7 @@ Familiarity with xref:../../storage/understanding-persistent-storage.adoc#unders
To create CSI-provisioned PVs that mount to Red Hat Virtualization (oVirt) storage assets, {product-title} installs the oVirt CSI Driver Operator and the oVirt CSI driver by default in the `openshift-cluster-csi-drivers` namespace.
* The _oVirt CSI Driver Operator_ provides a default StorageClass that you can use to create PVCs.
* The _oVirt CSI Driver Operator_ provides a default storage class that you can use to create PVCs.
* The _oVirt CSI driver_ enables you to create and mount oVirt PVs.

View File

@@ -5,9 +5,9 @@ include::modules/common-attributes.adoc[]
toc::[]
This document describes how to use VolumeSnapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in {product-title}. Familiarity with xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[persistent volumes] is suggested.
This document describes how to use volume snapshots with supported Container Storage Interface (CSI) drivers to help protect against data loss in {product-title}. Familiarity with xref:../../storage/understanding-persistent-storage.adoc#persistent-volumes_understanding-persistent-storage[persistent volumes] is suggested.
:FeatureName: CSI VolumeSnapshot
:FeatureName: CSI volume snapshot
include::modules/technology-preview.adoc[leveloffset=+0]

View File

@@ -1,5 +1,5 @@
[id="persistent-storage-aws"]
= Persistent Storage Using AWS Elastic Block Store
= Persistent storage using AWS Elastic Block Store
include::modules/common-attributes.adoc[]
:context: persistent-storage-aws
@@ -24,7 +24,7 @@ High-availability of storage in the infrastructure is left to the underlying
storage provider.
====
== Additional References
== Additional resources
* See xref:../../storage/container_storage_interface/persistent-storage-csi-ebs.adoc#persistent-storage-csi-ebs[AWS Elastic Block Store CSI Driver Operator] for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins.

View File

@@ -13,9 +13,9 @@ cluster with persistent storage and gives users a way to request those
resources without having any knowledge of the underlying infrastructure.
Azure File volumes can be provisioned dynamically.
PersistentVolumes are not bound to a single project or namespace; they can be
Persistent volumes are not bound to a single project or namespace; they can be
shared across the {product-title} cluster.
PersistentVolumeClaims are specific to a project or namespace and can be
Persistent volume claims are specific to a project or namespace and can be
requested by users for use in applications.
[IMPORTANT]
@@ -24,7 +24,7 @@ High availability of storage in the infrastructure is left to the underlying
storage provider.
====
.Additional references
.Additional resources
* link:https://azure.microsoft.com/en-us/services/storage/files/[Azure Files]

View File

@@ -22,7 +22,7 @@ High availability of storage in the infrastructure is left to the underlying
storage provider.
====
.Additional references
.Additional resources
* link:https://azure.microsoft.com/en-us/services/storage/disks[Microsoft Azure Disk]

View File

@@ -16,9 +16,9 @@ The Kubernetes persistent volume framework allows administrators to provision a
cluster with persistent storage and gives users a way to request those
resources without having any knowledge of the underlying infrastructure.
AWS Elastic File System volumes can be provisioned dynamically.
PersistentVolumes are not bound to a single project or namespace; they can be
Persistent volumes are not bound to a single project or namespace; they can be
shared across the {product-title} cluster.
PersistentVolumeClaims are specific to a project or namespace and can be
Persistent volume claims are specific to a project or namespace and can be
requested by users.
== Prerequisites
@@ -26,7 +26,7 @@ requested by users.
from the EFS volume's security group.
* Configure the AWS EFS volume to allow incoming SSH traffic from any host.
.Additional references
.Additional resources
* link:https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html[Amazon
EFS]

View File

@@ -12,9 +12,9 @@ Some familiarity with Kubernetes and Fibre Channel is assumed.
The Kubernetes persistent volume framework allows administrators to provision a
cluster with persistent storage and gives users a way to request those
resources without having any knowledge of the underlying infrastructure.
PersistentVolumes are not bound to a single project or namespace; they can be
Persistent volumes are not bound to a single project or namespace; they can be
shared across the {product-title} cluster.
PersistentVolumeClaims are specific to a project or namespace and can be
Persistent volume claims are specific to a project or namespace and can be
requested by users.
[IMPORTANT]
@@ -23,7 +23,7 @@ High availability of storage in the infrastructure is left to the underlying
storage provider.
====
.Additional references
.Additional resources
* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-fibrechanel[Fibre Channel]

View File

@@ -11,7 +11,7 @@ To use storage from a back-end that does not have a built-in plug-in, you can ex
Pods interact with FlexVolume drivers through the `flexvolume` in-tree plug-in.
.Additional References
.Additional resources
* xref:../../storage/expanding-persistent-volumes.adoc#expanding-persistent-volumes[Expanding persistent volumes]

View File

@@ -28,7 +28,7 @@ High availability of storage in the infrastructure is left to the underlying
storage provider.
====
.Additional references
.Additional resources
* link:https://cloud.google.com/compute/docs/disks/[GCE Persistent Disk]

View File

@@ -5,11 +5,11 @@ include::modules/common-attributes.adoc[]
toc::[]
A hostPath volume in an {product-title} cluster mounts a file or directory from the host nodes filesystem into your Pod. Most Pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it.
A hostPath volume in an {product-title} cluster mounts a file or directory from the host node's filesystem into your pod. Most pods do not need a hostPath volume, but it offers a quick option for testing should an application require it.
[IMPORTANT]
====
The cluster administrator must configure Pods to run as privileged. This grants access to Pods in the same node.
The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node.
====
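For example, a privileged pod that mounts a host directory might look like the following sketch; the host path and image are assumptions:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod # assumed pod name
spec:
  containers:
    - name: test
      image: registry.redhat.io/ubi8/ubi # assumed image
      securityContext:
        privileged: true # hostPath access requires a privileged pod
      volumeMounts:
        - mountPath: /host-data
          name: hostpath-volume
  volumes:
    - name: hostpath-volume
      hostPath:
        path: /var/local/data # assumed directory on the host node
        type: Directory
----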
include::modules/persistent-storage-hostpath-about.adoc[leveloffset=+1]

View File

@@ -1,5 +1,5 @@
[id="persistent-storage-using-iscsi"]
= Persistent Storage Using iSCSI
= Persistent storage using iSCSI
include::modules/common-attributes.adoc[]
:context: persistent-storage-iscsi

View File

@@ -9,7 +9,7 @@ toc::[]
using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs)
provide a convenient method for sharing a volume across a project. While the
NFS-specific information contained in a PV definition could also be defined
directly in a Pod definition, doing so does not create the volume as a
directly in a `Pod` definition, doing so does not create the volume as a
distinct cluster resource, making the volume more susceptible to conflicts.
.Additional resources

View File

@@ -13,12 +13,12 @@ The Kubernetes persistent volume framework allows administrators to provision a
cluster with persistent storage and gives users a way to request those
resources without having any knowledge of the underlying infrastructure.
PersistentVolumes are not bound to a single project or namespace; they can be
Persistent volumes are not bound to a single project or namespace; they can be
shared across the {product-title} cluster.
PersistentVolumeClaims are specific to a project or namespace and can be
Persistent volume claims are specific to a project or namespace and can be
requested by users.
.Additional references
.Additional resources
* link:https://www.vmware.com/au/products/vsphere.html[VMware vSphere]
@@ -29,7 +29,7 @@ Dynamically provisioning VMware vSphere volumes is the recommended method.
== Prerequisites
* An {product-title} cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See xref:../../installing/installing_vsphere/installing-vsphere.adoc[Installing a cluster on vSphere] for information about vSphere version support.
You can use either of the following procedures to dynamically provision these volumes using the default StorageClass.
You can use either of the following procedures to dynamically provision these volumes using the default storage class.
include::modules/persistent-storage-vsphere-dynamic-provisioning.adoc[leveloffset=+2]