OSDOCS-5187: Revise storage section to MicroShift-specific
committed by openshift-cherrypick-robot
parent 7e6a8e0591
commit f81b54f72f
@@ -117,25 +117,22 @@ Topics:
File: index
- Name: Understanding ephemeral storage for MicroShift
File: understanding-ephemeral-storage-microshift
- Name: Understanding persistent storage for MicroShift
File: understanding-persistent-storage-microshift
- Name: Configuring persistent storage for MicroShift
Dir: persistent_storage_microshift
Topics:
- Name: MicroShift storage plugin overview
File: microshift-storage-plugin-overview
- Name: Using container storage interface (CSI) for MicroShift
Dir: container_storage_interface_microshift
Distros: microshift
Topics:
- Name: Configuring CSI volumes for MicroShift
File: microshift-persistent-storage-csi
- Name: Generic ephemeral volumes for MicroShift
File: generic-ephemeral-volumes-microshift
- Name: Understanding persistent storage for MicroShift
File: understanding-persistent-storage-microshift
- Name: Expanding persistent volumes for MicroShift
File: expanding-persistent-volumes-microshift
- Name: Dynamic provisioning for MicroShift
File: dynamic-provisioning-microshift
- Name: Dynamic storage using the LVMS plugin
File: microshift-storage-plugin-overview
#- Name: Using container storage interface (CSI) for MicroShift
# Dir: container_storage_interface_microshift
# Distros: microshift
# Topics:
# - Name: Configuring CSI volumes for MicroShift
# File: microshift-persistent-storage-csi
#- Name: Dynamic provisioning for MicroShift
# File: dynamic-provisioning-microshift
---
Name: Running applications
Dir: microshift_running_apps

@@ -29,6 +29,6 @@ include::modules/microshift-mDNS.adoc[leveloffset=+1]
[id="additional-resources_microshift-applying-networking-settings"]
.Additional resources

. xref:../microshift_troubleshooting/microshift-troubleshooting.adoc#microshift-version[Troubleshooting]
. xref:../microshift_troubleshooting/microshift-troubleshooting.adoc#microshift-troubleshooting-nodeport[Troubleshooting the NodePort service].
. xref:../microshift_troubleshooting/microshift-troubleshooting.adoc#microshift-nodeport-unreachable-workaround[NodePort unreachable workaround].
* xref:../microshift_troubleshooting/microshift-troubleshooting.adoc#microshift-version[Troubleshooting]
* xref:../microshift_troubleshooting/microshift-troubleshooting.adoc#microshift-troubleshooting-nodeport[Troubleshooting the NodePort service]
* xref:../microshift_troubleshooting/microshift-troubleshooting.adoc#microshift-nodeport-unreachable-workaround[NodePort unreachable workaround]

@@ -16,8 +16,10 @@ as persistent storage.
{product-title} {product-version} supports version 1.5.0 of the link:https://github.com/container-storage-interface/spec[CSI specification].
====

include::modules/persistent-storage-csi-architecture.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-external-controllers.adoc[leveloffset=+2]
include::modules/persistent-storage-csi-driver-daemonset.adoc[leveloffset=+2]
include::modules/persistent-storage-csi-dynamic-provisioning.adoc[leveloffset=+1]
include::modules/persistent-storage-csi-mysql-example.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/storage/using-container-storage-interface-csi#persistent-storage-csi[{ocp} CSI Overview]
@@ -1,17 +1,15 @@
:_content-type: ASSEMBLY
[id="expanding-persistent-volumes-microshift"]
= Expanding persistent volumes for {product-title}
include::_attributes/common-attributes.adoc[]
include::_attributes/attributes-microshift.adoc[]
:context: expanding-persistent-volumes-microshift

toc::[]

//include::modules/storage-expanding-add-volume-expansion.adoc[leveloffset=+1]
Learn how to expand persistent volumes in {product-title}.

include::modules/storage-expanding-csi-volumes.adoc[leveloffset=+1]

//include::modules/storage-expanding-flexvolume.adoc[leveloffset=+1]

include::modules/storage-expanding-local-volumes.adoc[leveloffset=+1]

include::modules/storage-expanding-filesystem-pvc.adoc[leveloffset=+1]

@@ -16,20 +16,19 @@ toc::[]
[id="microshift-ephemeral-storage"]
=== Ephemeral storage

Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, refer to xref:../microshift_storage/understanding-ephemeral-storage-microshift.adoc#understanding-ephemeral-storage-microshift[Understanding ephemeral storage].
Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For details about ephemeral storage, read xref:../microshift_storage/understanding-ephemeral-storage-microshift.adoc#understanding-ephemeral-storage-microshift[Understanding ephemeral storage].

[id="microshift-persistent-storage"]
=== Persistent storage

Stateful applications deployed in containers require persistent storage. {product-title} uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For more information about persistent storage overview, configuration, and lifecycle, refer to xref:../microshift_storage/understanding-persistent-storage-microshift.adoc#understanding-persistent-storage-microshift[Understanding persistent storage].

[id="microshift-container-storage-interface"]
== Container Storage Interface (CSI)

CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, refer to xref:../microshift_storage/container_storage_interface_microshift/microshift-persistent-storage-csi.adoc#persistent-storage-csi-microshift[Using Container Storage Interface (CSI) for MicroShift].
Stateful applications deployed in containers require persistent storage. {product-title} uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For persistent storage details, read xref:../microshift_storage/understanding-persistent-storage-microshift.adoc#understanding-persistent-storage-microshift[Understanding persistent storage].

[id="microshift-dynamic-provisioning-overview"]
== Dynamic Provisioning
=== Dynamic storage provisioning

Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, refer to xref:../microshift_storage/dynamic-provisioning-microshift.adoc#dynamic-provisioning-microshift[Dynamic provisioning].
Using dynamic provisioning allows you to create storage volumes on-demand, eliminating the need for pre-provisioned storage. For more information about how dynamic provisioning works in {product-title}, read xref:../microshift_storage/microshift-storage-plugin-overview.adoc#microshift-storage-plugin-overview[Dynamic provisioning].

//[id="microshift-container-storage-interface"]
//== Container Storage Interface (CSI)

//CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, read ../microshift_storage/container_storage_interface_microshift/microshift-persistent-storage-csi.adoc#persistent-storage-csi-microshift[Using Container Storage Interface (CSI) for MicroShift].
@@ -1,7 +1,7 @@
:_content-type: ASSEMBLY
[id="microshift-storage-plugin-overview"]
= {product-title} storage plugin overview
include::_attributes/common-attributes.adoc[]
= Dynamic storage using the LVMS plugin
include::_attributes/attributes-microshift.adoc[]
:context: microshift-storage-plugin-overview

toc::[]
@@ -10,12 +10,12 @@ toc::[]

LVMS provisions new logical volume management (LVM) logical volumes (LVs) for container workloads with appropriately configured persistent volume claims (PVC). Each PVC references a storage class that represents an LVM Volume Group (VG) on the host node. LVs are only provisioned for scheduled pods.
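For illustration, a PVC of this kind might look like the following minimal sketch. The storage class name `topolvm-provisioner` and the claim name are assumptions; verify the actual class name on your node with `oc get storageclass`.

[source,yaml]
----
# Sketch only: a PVC that requests an LVMS-backed logical volume.
# The storageClassName is an assumption; check your node's storage classes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-lv-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: topolvm-provisioner
----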

include::modules/microshift-lvms-system-requirements.adoc[leveloffset=+1]

include::modules/microshift-lvms-deployment.adoc[leveloffset=+1]

include::modules/microshift-lvms-setting-path.adoc[leveloffset=+2]

include::modules/microshift-lvms-configuring.adoc[leveloffset=+1]

include::modules/microshift-setting-lvms-path.adoc[leveloffset=+2]

include::modules/microshift-lvms-system-requirements.adoc[leveloffset=+1]

include::modules/microshift-using-lvms.adoc[leveloffset=+1]
@@ -1 +0,0 @@
../_attributes
@@ -1 +0,0 @@
../../images
@@ -1 +0,0 @@
../modules
@@ -1 +0,0 @@
../snippets/
@@ -6,6 +6,8 @@ include::_attributes/attributes-microshift.adoc[]

toc::[]

Ephemeral storage is unstructured and temporary. It is often used with immutable applications. This guide discusses how ephemeral storage works for {product-title}.

include::modules/storage-ephemeral-storage-overview.adoc[leveloffset=+1]

include::modules/storage-ephemeral-storage-types.adoc[leveloffset=+1]

@@ -6,8 +6,15 @@ include::_attributes/attributes-microshift.adoc[]

toc::[]

Managing storage is a distinct problem from managing compute resources. {product-title} uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.

include::modules/storage-persistent-storage-overview.adoc[leveloffset=+1]

[id="additional-resources_understanding-persistent-storage-microshift"]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/storage/understanding-persistent-storage#pv-access-modes_understanding-persistent-storage[Access modes for persistent storage]

include::modules/storage-persistent-storage-lifecycle.adoc[leveloffset=+1]

include::modules/storage-persistent-storage-reclaim-manual.adoc[leveloffset=+2]

@@ -6,7 +6,7 @@
[id="microshift-config-OVN-K_{context}"]
= OVN-Kubernetes configuration options

An OVN-Kubernetes config file can be written to `/etc/microshift/ovn.yaml`. {product-title} will use default OVN-Kubernetes configuration values if an OVN-Kubernetes config file is not customized.
An OVN-Kubernetes config file can be written to `/etc/microshift/ovn.yaml`. {product-title} uses default OVN-Kubernetes configuration values if an OVN-Kubernetes config file is not customized.

.Default `ovn.yaml` config values:
[source,yaml]
@@ -17,10 +17,10 @@ ovsInit:
externalGatewayInterface: "" <2>
mtu: 1400
----
<1> Default value is an empty string, which means "not-specified." The CNI network plugin auto-detects to interface with the default route.
<2> Default value is an empty string, which means disabled.
<1> The default value is an empty string that means "not-specified." The CNI network plugin auto-detects the interface with the default route.
<2> The default value is an empty string that means "disabled."
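For instance, a customization that overrides only some of these fields might look like the following sketch. The interface name and MTU value are assumptions, and fields you omit keep their default values.

[source,yaml]
----
# Sketch: a customized /etc/microshift/ovn.yaml.
# Only fields shown in the default example above are used; values are assumptions.
ovsInit:
  externalGatewayInterface: eth1
mtu: 1300
----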

To customize your configuration, use the following table to find valid values that you can use in your `ovn.yaml` config file:
To customize your configuration, use the following table that lists the valid values you can use in your `ovn.yaml` config file:

.Supported optional OVN-Kubernetes configurations for {product-title}.

@@ -1,14 +1,14 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc
// * microshift_storage/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="lvms-configuring_{context}"]
= Configuring the LVMS

{product-title} supports passing through a user's LVMS configuration and allows users to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. The LVMS configuration file can be edited at any time. You must restart {product-title} to deploy configuration changes.
{product-title} supports passing through your LVM configuration and allows you to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. You can edit the LVMS configuration file at any time. You must restart {product-title} to deploy configuration changes after editing the file.

The following `config.yaml` file shows a basic LVMS configuration:
The following `lvmd.yaml` example file shows a basic LVMS configuration:

.LVMS YAML configuration example
[source,yaml]
@@ -36,15 +36,15 @@ device-classes: <2>
<2> `map[string]DeviceClass`. The `device-class` settings.
<3> String. The name of the `device-class`.
<4> String. The group where the `device-class` creates the logical volumes.
<5> unit64. Storage capacity in GiB to be left unallocated in the volume group. Defaults to `10`.
<5> Uint64. Storage capacity in GiB to be left unallocated in the volume group. Defaults to `10`.
<6> Boolean. Indicates that the `device-class` is used by default. Defaults to `false`.
<7> unit. The number of stripes in the logical volume.
<7> Uint. The number of stripes in the logical volume.
<8> String. The amount of data that is written to one device before moving to the next device.
<9> String. Extra arguments to pass to `lvcreate`, for example, `["--type=raid1"]`.
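For reference, a minimal `device-classes` entry consistent with these callouts might look like the following sketch. The device-class name is an assumption, and `rhel` is the default volume group name described in the system requirements.

[source,yaml]
----
# Sketch: one device-class in lvmd.yaml matching the callouts above.
# The name is a placeholder; spare-gb and default are optional settings.
device-classes:
  - name: default        # <3>
    volume-group: rhel   # <4>
    spare-gb: 10         # <5>
    default: true        # <6>
----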

[WARNING]
====
There is a race condition that prevents LVMS from accurately tracking the allocated space and preserving the `spare-gb` for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.
A race condition prevents LVMS from accurately tracking the allocated space and preserving the `spare-gb` for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.
====

Striping can be configured by using the dedicated options (`stripe` and `stripe-size`) and `lvcreate-options`. Either option can be used, but they cannot be used together. Using `stripe` and `stripe-size` with `lvcreate-options` leads to duplicate arguments to `lvcreate`. You should never set `lvcreate-options: ["--stripes=n"]` and `stripe: n` at the same time. You can, however, use both when `lvcreate-options` is not used for striping. For example:

@@ -1,11 +1,12 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc
// * microshift_storage/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="lvms-deployment_{context}"]
= LVMS Deployment

LVMS is automatically deployed onto the cluster in the `openshift-storage` namespace after {product-title} boots.
//Q: is this correct, or should it be `microshift-namespace`?

LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the volume group's remaining free storage. For more information about `StorageCapacity` tracking, refer to link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].
LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about `StorageCapacity` tracking, read link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].

17 modules/microshift-lvms-setting-path.adoc Normal file
@@ -0,0 +1,17 @@

// Module included in the following assemblies:
//
// * microshift_storage/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="microshift-lvms-setting-path_{context}"]
= Setting the LVMS path

The `lvmd.yaml` file for the LVMS should be written to the same directory as the {product-title} `config.yaml` file. When the `global administrator` user runs {product-title}, the `/etc/microshift/lvmd.yaml` file is accessed.

//.LVMS paths
//[options="header",cols="1,3"]
//|===
//|{product-title} user | Configuration directory
//|Global administrator | `/etc/microshift/lvmd.yaml`
//|===
//can leave table here for future expansion
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc
// * microshift_storage/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="lvms-system-requirements_{context}"]
@@ -11,12 +11,14 @@ Using LVMS in {product-title} requires the following system specifications.
[id="lvms-volume-group-name_{context}"]
== Volume Group Name

The default integration of LVMS assumes a volume group named `rhel`. Prior to launching, the `lvmd.yaml` configuration file must specify an existing volume group on the node with sufficient capacity for workload storage. If the volume group does not exist, the node controller will fail to start and enter a `CrashLoopBackoff` state.
The default integration of LVMS uses a volume group named `rhel`. You can change the default name of the volume group in the configuration file. For details, read the "Configuring the LVMS" section of this document.

Prior to launching, the `lvmd.yaml` configuration file must specify an existing volume group on the node with sufficient capacity for workload storage. If the volume group does not exist, the node controller fails to start and enters a `CrashLoopBackoff` state.

[id="lvms-volume-size-increments_{context}"]
== Volume size increments

The LVMS provisions storage in increments of 1 GB. Storage requests are rounded up to the nearest gigabyte (GB). When a volume group's capacity is less than 1 GB, the `PersistentVolumeClaim` registers a `ProvisioningFailed` event, for example:
The LVMS provisions storage in increments of 1 gigabyte (GB). Storage requests are rounded up to the nearest GB. When the capacity of a volume group is less than 1 GB, the `PersistentVolumeClaim` registers a `ProvisioningFailed` event, for example:

[source,terminal]
----

@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="setting-lvms-path"]
= Setting the LVMS path

The `config.yaml` file for the LVMS should be written to the same directory as the {product-title} `config.yaml` file. If a {product-title} `config.yaml` file does not exist, {product-title} will create an LVMS YAML and automatically populate the configuration fields with the default settings. The following paths are checked for the `config.yaml` file, depending on which user runs {product-title}:

.LVMS paths
[options="header",cols="1,3"]
|===
|{product-title} user | Configuration directory
|Global administrator | `/etc/microshift/lvmd.yaml`
|===
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc
// * microshift_storage/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="using-lvms_{context}"]

@@ -8,30 +8,21 @@
[id=storage-ephemeral-storage-overview_{context}]
= Overview

In addition to persistent storage, pods and containers can require
ephemeral or transient local storage for their operation. The lifetime
of this ephemeral storage does not extend beyond the life of the
individual pod, and this ephemeral storage cannot be shared across
pods.
In addition to persistent storage, pods and containers can require ephemeral or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods.

Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to
the lack of local storage accounting and isolation include the following:
Pods use ephemeral local storage for scratch space, caching, and logs. Issues related to the lack of local storage accounting and isolation include the following:

* Pods do not know how much local storage is available to them.
* Pods cannot detect how much local storage is available to them.
* Pods cannot request guaranteed local storage.
* Local storage is a best effort resource.
* Pods can be evicted due to other pods filling the local storage,
after which new pods are not admitted until sufficient storage
has been reclaimed.
* Local storage is a best-effort resource.
* Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage is reclaimed.

Unlike persistent volumes, ephemeral storage is unstructured and
the space is shared between all pods running on a
node, in addition to other uses by the system, the container runtime,
and {product-title}. The ephemeral storage framework allows pods to
specify their transient local storage needs. It also allows {product-title} to
schedule pods where appropriate, and to protect the node against excessive
use of local storage.
ifndef::microshift[]
Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on a node, in addition to other uses by the system, the container runtime, and {product-title}. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows {product-title} to schedule pods where appropriate, and to protect the node against excessive use of local storage.
endif::microshift[]

While the ephemeral storage framework allows administrators and
developers to better manage this local storage, it does not provide
any promises related to I/O throughput and latency.
ifdef::microshift[]
Unlike persistent volumes, ephemeral storage is unstructured and the space is shared between all pods running on the node, other uses by the system, and {product-title}. The ephemeral storage framework allows pods to specify their transient local storage needs. It also allows {product-title} to protect the node against excessive use of local storage.
endif::microshift[]

While the ephemeral storage framework allows administrators and developers to better manage local storage, I/O throughput and latency are not directly affected.
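As an illustration of how a pod declares its transient local storage needs, the following sketch uses the standard Kubernetes `ephemeral-storage` resource; the pod and image names are placeholders.

[source,yaml]
----
# Sketch: a pod requesting a guaranteed amount of ephemeral local storage.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      resources:
        requests:
          ephemeral-storage: 1Gi
        limits:
          ephemeral-storage: 2Gi
----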
@@ -1,8 +1,7 @@
// Module included in the following assemblies:
//
// * storage/generic-ephemeral-vols.adoc
//* microshift_storage/generic-ephemeral-volumes-microshift.adoc

// * microshift_storage/generic-ephemeral-volumes-microshift.adoc

:_content-type: CONCEPT
[id="generic-ephemeral-vols-pvc-naming_{context}"]
@@ -10,9 +9,9 @@

Automatically created persistent volume claims (PVCs) are named by a combination of the pod name and the volume name, with a hyphen (-) in the middle. This naming convention also introduces a potential conflict between different pods, and between pods and manually created PVCs.

For example: a pod "pod-a" with volume "scratch" and another pod with name "pod" and volume: "a-scratch" both end up with the same PVC name: "pod-a-scratch".
For example, `pod-a` with volume `scratch` and `pod` with volume `a-scratch` both end up with the same PVC name, `pod-a-scratch`.
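To make the naming rule concrete, here is a sketch of a pod that declares a generic ephemeral volume; the automatically created PVC is named `pod-a-scratch`. The image name and sizes are placeholders.

[source,yaml]
----
# Sketch: the generic ephemeral volume "scratch" on pod "pod-a"
# produces an automatically created PVC named "pod-a-scratch".
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi
----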

Such conflicts are detected, and a PVC is only used for an ephemeral volume if it was created for the pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified, but this does not resolve the conflict because without the right PVC, the pod cannot start.
Such conflicts are detected, and a PVC is only used for an ephemeral volume if it was created for the pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified, but this does not resolve the conflict. Without the right PVC, a pod cannot start.

[IMPORTANT]
====

@@ -3,11 +3,10 @@
// * storage/generic-ephemeral-vols.adoc
//* microshift_storage/generic-ephemeral-volumes-microshift.adoc


:_content-type: CONCEPT
[id="generic-ephemeral-security_{context}"]
= Security

Enabling the generic ephemeral volume feature allows users to create persistent volume claims (PVCs) indirectly if they can create pods, even if they do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit their security model, they should use an admission webhook that rejects objects like pods that have a generic ephemeral volume.
You can enable the generic ephemeral volume feature to allow users who can create pods to also create persistent volume claims (PVCs) indirectly. This feature works even if these users do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit your security model, use an admission webhook that rejects objects such as pods that have a generic ephemeral volume.

The normal namespace quota for PVCs still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies.

@@ -22,7 +22,7 @@ CSI volume expansion does not support the following:

* Dynamic provisioning is used.

* The controlling `StorageClass` object has `allowVolumeExpansion` set to `true`. For more information, see "Enabling volume expansion support."
* The controlling `StorageClass` object has `allowVolumeExpansion` set to `true`. For more information, see "Enabling volume expansion support."

.Procedure

@@ -3,28 +3,27 @@
// * storage/expanding-persistent-volume.adoc
//* microshift_storage/expanding-persistent-volumes-microshift.adoc


:_content-type: PROCEDURE
[id="expanding-pvc-filesystem_{context}"]
= Expanding persistent volume claims (PVCs) with a file system

Expanding PVCs based on volume types that need file system resizing,
such as GCE PD, EBS, and Cinder, is a two-step process.
This process involves expanding volume objects in the cloud provider, and
then expanding the file system on the actual node.
ifndef::microshift[]
Expanding PVCs based on volume types that need file system resizing, such as GCE, EBS, and Cinder, is a two-step process. First, expand the volume objects in the cloud provider. Second, expand the file system on the node.
endif::microshift[]

Expanding the file system on the node only happens when a new pod is started
with the volume.
ifdef::microshift[]
Expanding PVCs based on volume types that need file system resizing, such as GCE Persistent Disk volumes (gcePD), AWS Elastic Block Store (EBS), and Cinder, is a two-step process. First, expand the volume objects in the cloud provider. Second, expand the file system on the node.
endif::microshift[]

Expanding the file system on the node only happens when a new pod is started with the volume.

.Prerequisites

* The controlling `StorageClass` object must have `allowVolumeExpansion` set
to `true`.
* The controlling `StorageClass` object must have `allowVolumeExpansion` set to `true`.

.Procedure

. Edit the PVC and request a new size by editing `spec.resources.requests`.
For example, the following expands the `ebs` PVC to 8 Gi.
. Edit the PVC and request a new size by editing `spec.resources.requests`. For example, the following expands the `ebs` PVC to 8 Gi:
+
[source,yaml]
----
@@ -40,20 +39,14 @@ spec:
requests:
storage: 8Gi <1>
----
<1> Updating `spec.resources.requests` to a larger amount will expand
the PVC.
[.small]
<1> Updating `spec.resources.requests` to a larger amount expands the PVC.

. After the cloud provider object has finished resizing, the PVC is set to
`FileSystemResizePending`. Check the condition by entering the following command:
. After the cloud provider object has finished resizing, the PVC is set to `FileSystemResizePending`. Check the condition by entering the following command:
+
[source,terminal]
----
$ oc describe pvc <pvc_name>
----

. When the cloud provider object has finished resizing, the
`PersistentVolume` object reflects the newly requested size in
`PersistentVolume.Spec.Capacity`. At this point, you can create or
recreate a new pod from the PVC to finish the file system resizing.
Once the pod is running, the newly requested size is available and the
`FileSystemResizePending` condition is removed from the PVC.
. When the cloud provider object has finished resizing, the `PersistentVolume` object reflects the newly requested size in `PersistentVolume.Spec.Capacity`. At this point, you can create or recreate a new pod from the PVC to finish the file system resizing. Once the pod is running, the newly requested size is available and the `FileSystemResizePending` condition is removed from the PVC.

@@ -12,7 +12,7 @@ You can manually expand persistent volumes (PVs) and persistent volume claims (P

.Procedure

. Expand the underlying devices, and ensure that appropriate capacity is available on theses devices.
. Expand the underlying devices. Ensure that appropriate capacity is available on these devices.

. Update the corresponding PV objects to match the new device sizes by editing the `.spec.capacity` field of the PV.


@@ -3,27 +3,17 @@
// * storage/expanding-persistent-volumes.adoc
//* microshift_storage/expanding-persistent-volumes-microshift.adoc


:_content-type: PROCEDURE
[id="expanding-recovering-from-failure_{context}"]
= Recovering from failure when expanding volumes

If expanding underlying storage fails, the {product-title} administrator
can manually recover the persistent volume claim (PVC) state and cancel
the resize requests. Otherwise, the resize requests are continuously
retried by the controller without administrator intervention.
If expanding underlying storage fails, the {product-title} administrator can manually recover the persistent volume claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller.

.Procedure

. Mark the persistent volume (PV) that is bound to the PVC with the
`Retain` reclaim policy. This can be done by editing the PV and changing
`persistentVolumeReclaimPolicy` to `Retain`.
. Delete the PVC. This will be recreated later.
. To ensure that the newly created PVC can bind to the PV marked `Retain`,
manually edit the PV and delete the `claimRef` entry from the PV specs.
This marks the PV as `Available`.
. Re-create the PVC in a smaller size, or a size that can be allocated by
the underlying storage provider.
. Set the `volumeName` field of the PVC to the name of the PV. This binds
the PVC to the provisioned PV only.
. Mark the persistent volume (PV) that is bound to the PVC with the `Retain` reclaim policy. This can be done by editing the PV and changing `persistentVolumeReclaimPolicy` to `Retain`.
. Delete the PVC.
. Manually edit the PV and delete the `claimRef` entry from the PV specs to ensure that the newly created PVC can bind to the PV marked `Retain`. This marks the PV as `Available`.
. Re-create the PVC in a smaller size, or a size that can be allocated by the underlying storage provider.
. Set the `volumeName` field of the PVC to the name of the PV. This binds the PVC to the provisioned PV only.
. Restore the reclaim policy on the PV.

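As a sketch of the re-created claim from the last steps, the following PVC pins itself to a specific PV through `volumeName`; the claim name, PV name, and size are placeholders.

[source,yaml]
----
# Sketch: a recreated PVC bound explicitly to one PV via volumeName.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: recovered-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pv0001  # the PV whose claimRef entry was deleted
----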
@@ -1,45 +1,41 @@
// Module included in the following assemblies:
//
// storage/understanding-persistent-storage.adoc[leveloffset=+1]
//* microshift_storage/understanding-persistent-storage-microshift.adoc

// storage/understanding-persistent-storage.adoc
// microshift_storage/understanding-persistent-storage-microshift.adoc

:_content-type: CONCEPT
[id=persistent-storage-overview_{context}]
= Persistent storage overview

Managing storage is a distinct problem from managing compute resources.
{product-title} uses the Kubernetes persistent volume (PV) framework to
allow cluster administrators to provision persistent storage for a cluster.
Developers can use persistent volume claims (PVCs) to request PV resources
without having specific knowledge of the underlying storage infrastructure.
ifndef::microshift[]
Managing storage is a distinct problem from managing compute resources. {product-title} uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.

PVCs are specific to a project, and are created and used by developers as
a means to use a PV. PV resources on their own are not scoped to any
single project; they can be shared across the entire {product-title}
cluster and claimed from any project. After a PV is bound to a PVC,
that PV can not then be bound to additional PVCs. This has the effect of
scoping a bound PV to a single namespace, that of the binding project.
PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire {product-title} cluster and claimed from any project. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project.
endif::microshift[]

PVs are defined by a `PersistentVolume` API object, which represents a
piece of existing storage in the cluster that was either statically provisioned
by the cluster administrator or dynamically provisioned using a `StorageClass` object. It is a resource in the cluster just like a
node is a cluster resource.
ifdef::microshift[]
PVCs are specific to a namespace, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single namespace; they can be shared across the entire {product-title} cluster and claimed from any namespace. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace.
endif::microshift[]

PVs are volume plugins like `Volumes` but
have a lifecycle that is independent of any individual pod that uses the
PV. PV objects capture the details of the implementation of the storage,
be that NFS, iSCSI, or a cloud-provider-specific storage system.
PVs are defined by a `PersistentVolume` API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a `StorageClass` object. It is a resource in the cluster just like a node is a cluster resource.

ifndef::microshift[]
PVs are volume plugins like `Volumes` but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
endif::microshift[]

ifdef::microshift[]
PVs are volume plugins like `Volumes` but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that LVM, the host filesystem such as hostpath, or raw block devices.
endif::microshift[]

[IMPORTANT]
====
High availability of storage in the infrastructure is left to the underlying
storage provider.
High availability of storage in the infrastructure is left to the underlying storage provider.
====

PVCs are defined by a `PersistentVolumeClaim` API object, which represents a
request for storage by a developer. It is similar to a pod in that pods
consume node resources and PVCs consume PV resources. For example, pods
can request specific levels of resources, such as CPU and memory, while
PVCs can request specific storage capacity and access modes. For example,
they can be mounted once read-write or many times read-only.
ifndef::microshift[]
PVCs are defined by a `PersistentVolumeClaim` API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only.
endif::microshift[]

ifdef::microshift[]
Like `PersistentVolumes`, `PersistentVolumeClaims` (PVCs) are API objects, which represent a request for storage by a developer. A PVC is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. Access modes supported by {OCP} are also definable in {product-title}. However, because {product-title} does not support multi-node deployments, only ReadWriteOnce (RWO) is pertinent.
endif::microshift[]
@@ -6,8 +6,7 @@
[id="persistent-volumes_{context}"]
= Persistent volumes

Each PV contains a `spec` and `status`, which is the specification and
status of the volume, for example:
Each PV contains a `spec` and `status`, which is the specification and status of the volume, for example:

.`PersistentVolume` object definition example
[source,yaml]
@@ -29,9 +28,9 @@ status:
<1> Name of the persistent volume.
<2> The amount of storage available to the volume.
<3> The access mode, defining the read-write and mount permissions.
<4> The reclaim policy, indicating how the resource should be handled
once it is released.
<4> The reclaim policy, indicating how the resource should be handled once it is released.

ifndef::microshift[]
[id="types-of-persistent-volumes_{context}"]
== Types of PVs

@@ -63,42 +62,26 @@ ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
- VMware vSphere
// - Local
endif::openshift-enterprise,openshift-webscale,openshift-origin[]
endif::microshift[]

[id="pv-capacity_{context}"]
== Capacity

Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the `capacity` attribute of the PV.

Currently, storage capacity is the only resource that can be set or
requested. Future attributes may include IOPS, throughput, and so on.
Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on.

ifndef::microshift[]
[id="pv-access-modes_{context}"]
== Access modes

A persistent volume can be mounted on a host in any way supported by the
resource provider. Providers have different capabilities and each PV's
access modes are set to the specific modes supported by that particular
volume. For example, NFS can support multiple read-write clients, but a
specific NFS PV might be exported on the server as read-only. Each PV gets
its own set of access modes describing that specific PV's capabilities.
A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.

Claims are matched to volumes with similar access modes. The only two
matching criteria are access modes and size. A claim's access modes
represent a request. Therefore, you might be granted more, but never less.
For example, if a claim requests RWO, but the only volume available is an
NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports
RWO.
Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim's access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO.

Direct matches are always attempted first. The volume's modes must match or
contain more modes than you requested. The size must be greater than or
equal to what is expected. If two types of volumes, such as NFS and iSCSI,
have the same set of access modes, either of them can match a claim with
those modes. There is no ordering between types of volumes and no way to
choose one type over another.
Direct matches are always attempted first. The volume's modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another.

All volumes with the same modes are grouped, and then sorted by size,
smallest to largest. The binder gets the group with matching modes and
iterates over each, in size order, until one size matches.
All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches.

The following table lists the access modes:

@@ -118,36 +101,26 @@ ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
|The volume can be mounted as read-write by many nodes.
endif::[]
|===
endif::microshift[]

ifndef::microshift[]
[IMPORTANT]
====
Volume access modes are descriptors of volume capabilities. They
are not enforced constraints. The storage provider is responsible for
runtime errors resulting from invalid use of the resource.
Volume access modes are descriptors of volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.

For example, NFS offers `ReadWriteOnce` access mode. You must
mark the claims as `read-only` if you want to use the volume's
ROX capability. Errors in the provider show up at runtime as mount errors.
For example, NFS offers `ReadWriteOnce` access mode. You must mark the claims as `read-only` if you want to use the volume's ROX capability. Errors in the provider show up at runtime as mount errors.

iSCSI and Fibre Channel volumes do not currently have any fencing
mechanisms. You must ensure the volumes are only used by one node at a
time. In certain situations, such as draining a node, the volumes can be
used simultaneously by two nodes. Before draining the node, first ensure
the pods that use these volumes are deleted.
iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, first ensure the pods that use these volumes are deleted.
====
endif::microshift[]

ifndef::microshift[]
.Supported access modes for PVs
[cols=",^v,^v,^v", width="100%",options="header"]
|===
|Volume plugin |ReadWriteOnce ^[1]^ |ReadOnlyMany |ReadWriteMany
ifdef::microshift[]
|Local volume| ✅ | - | -
endif::[]
ifndef::microshift[]
|AWS EBS ^[2]^ | ✅ | - | -
endif::microshift[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin,microshift[]
ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
|Azure File | ✅ | ✅ | ✅
|Azure Disk | ✅ | - | -
//|Ceph RBD | ✅ | ✅ | -
@@ -163,7 +136,7 @@ ifdef::openshift-enterprise,openshift-webscale,openshift-origin,microshift[]
|NFS | ✅ | ✅ | ✅
|OpenStack Manila | - | - | ✅
|{rh-storage-first} | ✅ | - | ✅
|VMware vSphere | ✅ | - | ✅ ^[3]^
|VMware vSphere | ✅ | - | ✅ ^[3]^
endif::[]

|===
@@ -177,6 +150,11 @@ endif::[]
--
endif::microshift[]

ifdef::microshift[]
== Supported access modes
LVMS is the only CSI plugin {product-title} supports. The hostPath and LVs built into {OCP} also support RWO.
endif::microshift[]

ifdef::openshift-online[]
[id="pv-restrictions_{context}"]
== Restrictions
@@ -186,8 +164,7 @@ endif::[]

ifdef::openshift-online[]
* PVs are provisioned with EBS volumes (AWS).
* Only RWO access mode is applicable, as EBS volumes and GCE Persistent
Disks cannot be mounted to multiple nodes.
* Only RWO access mode is applicable, as EBS volumes and GCE Persistent Disks cannot be mounted to multiple nodes.
* Docker volumes are disabled.
** VOLUME directive without a mapped external volume fails to be
instantiated
@@ -240,8 +217,7 @@ ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
[id="pv-mount-options_{context}"]
=== Mount options

You can specify mount options while mounting a PV by using the attribute
`mountOptions`.
You can specify mount options while mounting a PV by using the attribute `mountOptions`.

For example:

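A minimal sketch of a PV that sets this attribute, assuming an NFS-backed volume; the PV name, server, path, and option values are placeholders.

[source,yaml]
----
# Sketch: a PV specifying mount options; all values are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: /exports/data
    server: nfs.example.com
----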