From a003846f82b14211fa5cdd1d5e2ffcb6a455a1d0 Mon Sep 17 00:00:00 2001
From: Lisa Pettyjohn
Date: Tue, 17 Dec 2024 10:20:47 -0500
Subject: [PATCH] OSDOCS-12890 and OSDOCS-12891: GCP PD support for C3 and N4
 instance types

---
 ...sistent-storage-csi-drivers-supported.adoc |   9 +-
 ...storage-csi-gcp-hyperdisk-limitations.adoc |  25 ++
 ...-gcp-hyperdisk-storage-pools-overview.adoc |   9 +
 ...gcp-hyperdisk-storage-pools-procedure.adoc | 249 ++++++++++++++++++
 .../persistent-storage-csi-gcp-pd.adoc        |  22 +-
 5 files changed, 311 insertions(+), 3 deletions(-)
 create mode 100644 modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc
 create mode 100644 modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc
 create mode 100644 modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc

diff --git a/modules/persistent-storage-csi-drivers-supported.adoc b/modules/persistent-storage-csi-drivers-supported.adoc
index 5cc5c20ad9..f055d83835 100644
--- a/modules/persistent-storage-csi-drivers-supported.adoc
+++ b/modules/persistent-storage-csi-drivers-supported.adoc
@@ -41,7 +41,7 @@ endif::openshift-rosa,openshift-aro[]
 |AWS EBS | ✅ | | ✅|
 |AWS EFS | | | |
 ifndef::openshift-rosa[]
-|Google Compute Platform (GCP) persistent disk (PD)| ✅| ✅ | ✅|
+|Google Cloud Platform (GCP) persistent disk (PD)| ✅| ✅^[5]^ | ✅|
 |GCP Filestore | ✅ | | ✅|
 endif::openshift-rosa[]
 ifndef::openshift-dedicated,openshift-rosa[]
@@ -85,6 +85,11 @@ ifndef::openshift-dedicated,openshift-rosa[]
 * Azure File cloning and snapshot are Technology Preview features:
 
 :FeatureName: Azure File CSI cloning and snapshot
-include::snippets/technology-preview.adoc[leveloffset=+1]
+include::snippets/technology-preview.adoc[leveloffset=+2]
+
+5.
+
+* Cloning is not supported on hyperdisk-balanced disks with storage pools.
+
 --
 endif::openshift-dedicated,openshift-rosa[]
\ No newline at end of file
diff --git a/modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc b/modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc
new file mode 100644
index 0000000000..269d9bcd3a
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc
@@ -0,0 +1,25 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-gcp-hyperdisk-limitations_{context}"]
+= C3 and N4 instance type limitations
+The GCP PD CSI driver support for the C3 instance type for bare metal and the N4 machine series has the following limitations:
+
+* Cloning volumes is not supported when using storage pools.
+
+* For cloning or resizing, the original volume size of hyperdisk-balanced disks must be 6Gi or greater.
+
+* The default storage class is `standard-csi`.
++
+[IMPORTANT]
+====
+You must manually create a storage class.
+
+For information about creating the storage class, see Step 2 in the _Setting up hyperdisk-balanced disks_ section.
+====
+
+* Clusters with mixed virtual machines (VMs) that use different storage types, for example, N2 and N4, are not supported, because hyperdisk-balanced disks are not usable on most legacy VMs. Similarly, regular persistent disks are not usable on N4 and C3 VMs.
+
+* A GCP cluster with `c3-standard-2`, `c3-standard-4`, `n4-standard-2`, and `n4-standard-4` nodes can erroneously exceed the maximum attachable disk number, which should be 16 (link:https://issues.redhat.com/browse/OCPBUGS-39258[OCPBUGS-39258]).
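+
+To check the attachable volume limit that the GCP PD CSI driver reports for a given node, you can inspect the corresponding `CSINode` object. The following command is a sketch; the node name is a placeholder:
+
+[source, terminal]
+----
+$ oc get csinode <node_name> \
+  -o jsonpath='{.spec.drivers[?(@.name=="pd.csi.storage.gke.io")].allocatable.count}'
+----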
diff --git a/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc
new file mode 100644
index 0000000000..e8f632449b
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc
@@ -0,0 +1,9 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-gcp-hyperdisk-storage-pools-overview_{context}"]
+= Storage pools for hyperdisk-balanced disks overview
+
+Hyperdisk storage pools can be used with Compute Engine for large-scale storage. A hyperdisk storage pool is a purchased collection of capacity, throughput, and IOPS, which you can then provision for your applications as needed. You can use hyperdisk storage pools to create and manage disks in pools and use the disks across multiple workloads. By managing disks in aggregate, you can save costs while achieving the expected capacity and performance. Because you use only the storage that you need, hyperdisk storage pools reduce the complexity of forecasting capacity and simplify management: you go from managing hundreds of disks to managing a single storage pool.
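+
+For example, you might create a hyperdisk-balanced storage pool with a `gcloud` command similar to the following sketch. The pool name, zone, and provisioned values are illustrative placeholders; verify the available flags against the current Google Cloud CLI documentation:
+
+[source, terminal]
+----
+$ gcloud compute storage-pools create pool-us-east4-c \
+    --zone=us-east4-c \
+    --storage-pool-type=hyperdisk-balanced \
+    --provisioned-capacity=10tb \
+    --provisioned-iops=10000 \
+    --provisioned-throughput=1024
+----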
diff --git a/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc
new file mode 100644
index 0000000000..9d9e983a01
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc
@@ -0,0 +1,249 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure_{context}"]
+= Setting up hyperdisk-balanced disks
+
+.Prerequisites
+* Access to the cluster with administrative privileges
+
+.Procedure
+To set up hyperdisk-balanced disks:
+
+ifndef::openshift-dedicated[]
+. Create a GCP cluster with attached disks provisioned with hyperdisk-balanced disks.
+endif::openshift-dedicated[]
+
+ifndef::openshift-dedicated[]
+. Create a storage class specifying the hyperdisk-balanced disks during installation:
+endif::openshift-dedicated[]
+
+ifndef::openshift-dedicated[]
+.. Follow the procedure in the _Installing a cluster on GCP with customizations_ section.
++
+For your `install-config.yaml` file, use the following example file:
++
+.Example install-config YAML file
+[source, yaml]
+----
+apiVersion: v1
+metadata:
+  name: ci-op-9976b7t2-8aa6b
+
+sshKey: |
+  XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+baseDomain: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+platform:
+  gcp:
+    projectID: XXXXXXXXXXXXXXXXXXXXXX
+    region: us-central1
+controlPlane:
+  architecture: amd64
+  name: master
+  platform:
+    gcp:
+      type: n4-standard-4 <1>
+      osDisk:
+        diskType: hyperdisk-balanced <2>
+        diskSizeGB: 200
+  replicas: 3
+compute:
+- architecture: amd64
+  name: worker
+  replicas: 3
+  platform:
+    gcp:
+      type: n4-standard-4 <1>
+      osDisk:
+        diskType: hyperdisk-balanced <2>
+----
+<1> Specifies the node type as `n4-standard-4`.
+<2> Specifies that the root disk of the node is backed by the hyperdisk-balanced disk type. All nodes in the cluster must use the same disk type, either `hyperdisk-balanced` or `pd-*`.
++
+[NOTE]
+====
+All nodes in the cluster must support hyperdisk-balanced volumes. Clusters with mixed node types are not supported, for example, N2 machines mixed with N4 machines that use hyperdisk-balanced disks.
+====
+endif::openshift-dedicated[]
+
+ifndef::openshift-dedicated[]
+.. After step 3 in the _Incorporating the Cloud Credential Operator utility manifests_ section, copy the following manifests into the `manifests` directory created by the installation program:
++
+* `cluster_csi_driver.yaml` - opts out of the default storage class creation
+* `storageclass.yaml` - creates a hyperdisk-specific storage class
++
+.Example cluster CSI driver YAML file
+[source, yaml]
+----
+apiVersion: operator.openshift.io/v1
+kind: "ClusterCSIDriver"
+metadata:
+  name: "pd.csi.storage.gke.io"
+spec:
+  logLevel: Normal
+  managementState: Managed
+  operatorLogLevel: Normal
+  storageClassState: Unmanaged <1>
+----
+<1> Disables creation of the default {product-title} storage classes.
++
+.Example storage class YAML file
+[source, yaml]
+----
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: hyperdisk-sc <1>
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: pd.csi.storage.gke.io <2>
+volumeBindingMode: WaitForFirstConsumer
+allowVolumeExpansion: true
+reclaimPolicy: Delete
+parameters:
+  type: hyperdisk-balanced <3>
+  replication-type: none
+  provisioned-throughput-on-create: "140Mi" <4>
+  provisioned-iops-on-create: "3000" <5>
+  storage-pools: projects/my-project/zones/us-east4-c/storagePools/pool-us-east4-c <6>
+allowedTopologies: <7>
+- matchLabelExpressions:
+  - key: topology.kubernetes.io/zone
+    values:
+    - us-east4-c
+...
+----
+<1> Specifies the name for your storage class. In this example, it is `hyperdisk-sc`.
+<2> Specifies the GCP CSI provisioner, `pd.csi.storage.gke.io`.
+<3> Specifies using hyperdisk-balanced disks.
+<4> Specifies the throughput value in MiBps using the "Mi" qualifier. For example, if your required throughput is 250 MiBps, specify "250Mi". If you do not specify a value, the default throughput for the disk type is used.
+<5> Specifies the IOPS value without any qualifiers. For example, if you require 7,000 IOPS, specify "7000". If you do not specify a value, the default IOPS for the disk type is used.
+<6> If using storage pools, specifies a list of the storage pools that you want to use in the format `projects/PROJECT_ID/zones/ZONE/storagePools/STORAGE_POOL_NAME`.
+<7> If using storage pools, set `allowedTopologies` to restrict the topology of provisioned volumes to the zone where the storage pool exists. In this example, `us-east4-c`.
+endif::openshift-dedicated[]
+
+. Create a persistent volume claim (PVC) that uses the hyperdisk-specific storage class by using the following example YAML file:
++
+.Example PVC YAML file
+[source, yaml]
+----
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-pvc
+spec:
+  storageClassName: hyperdisk-sc <1>
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 2048Gi <2>
+----
+<1> The PVC references the hyperdisk-specific storage class. In this example, `hyperdisk-sc`.
+<2> The target storage capacity of the hyperdisk-balanced volume. In this example, `2048Gi`.
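++
+If you saved the example to a file, for example `pvc.yaml` (the file name is illustrative), you can create the claim by running the following command:
++
+[source, terminal]
+----
+$ oc create -f pvc.yaml
+----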
+
+. Create a deployment that uses the PVC that you just created. Using a deployment helps ensure that your application has access to the persistent storage even if the pod restarts or is rescheduled:
+
+.. Ensure that a node pool with the specified machine series is up and running before creating the deployment. Otherwise, the pod fails to schedule.
+
+.. Use the following example YAML file to create the deployment:
++
+.Example deployment YAML file
+[source, yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: postgres
+spec:
+  selector:
+    matchLabels:
+      app: postgres
+  template:
+    metadata:
+      labels:
+        app: postgres
+    spec:
+      nodeSelector:
+        cloud.google.com/machine-family: n4 <1>
+      containers:
+      - name: postgres
+        image: postgres:14-alpine
+        args: [ "sleep", "3600" ]
+        volumeMounts:
+        - name: sdk-volume
+          mountPath: /usr/share/data/
+      volumes:
+      - name: sdk-volume
+        persistentVolumeClaim:
+          claimName: my-pvc <2>
+----
+<1> Specifies the machine family. In this example, it is `n4`.
+<2> Specifies the name of the PVC created in the preceding step. In this example, it is `my-pvc`.
+
+.. Confirm that the deployment was successfully created by running the following command:
++
+[source, terminal]
+----
+$ oc get deployment
+----
++
+.Example output
+[source, terminal]
+----
+NAME       READY   UP-TO-DATE   AVAILABLE   AGE
+postgres   0/1     1            0           42s
+----
++
+It might take a few minutes for hyperdisk instances to complete provisioning and display a `READY` status.
+
+.. Confirm that the PVC `my-pvc` has been successfully bound to a persistent volume (PV) by running the following command:
++
+[source, terminal]
+----
+$ oc get pvc my-pvc
+----
++
+.Example output
++
+[source, terminal]
+----
+NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
+my-pvc   Bound    pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6   2Ti        RWO            hyperdisk-sc                           2m24s
+----
+
+.. Confirm the expected configuration of your hyperdisk-balanced disk by running the following command:
++
+[source, terminal]
+----
+$ gcloud compute disks list
+----
++
+.Example output
++
+[source, terminal]
+----
+NAME                                      LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE                STATUS
+instance-20240914-173145-boot             us-central1-a  zone            150      pd-standard         READY
+instance-20240914-173145-data-workspace   us-central1-a  zone            100      pd-balanced         READY
+c4a-rhel-vm                               us-central1-a  zone            50       hyperdisk-balanced  READY <1>
+----
+<1> Hyperdisk-balanced disk.
+
+.. If using storage pools, check that the volume is provisioned as specified in your storage class and PVC by running the following command:
++
+[source, terminal]
+----
+$ gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c
+----
++
+.Example output
++
+[source, terminal]
+----
+NAME                                      STATUS  PROVISIONED_IOPS  PROVISIONED_THROUGHPUT  SIZE_GB
+pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6  READY   3000              140                     2048
+----
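+
+.. Optional: Confirm that the volume is mounted inside the pod by running the following command. This is a quick sanity check; the deployment name and mount path come from the earlier examples in this procedure:
++
+[source, terminal]
+----
+$ oc exec deployment/postgres -- df -h /usr/share/data/
+----
+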
diff --git a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
index 60670a5edd..0de2013920 100644
--- a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
@@ -18,7 +18,9 @@ To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage
 
 * *GCP PD CSI Driver Operator*: By default, the Operator provides a storage class that you can use to create PVCs. You can disable this default storage class if desired (see xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]). You also have the option to create the GCP PD storage class as described in xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk].
 
-* *GCP PD driver*: The driver enables you to create and mount GCP PD PVs.
+* *GCP PD driver*: The driver enables you to create and mount GCP PD PVs.
++
+The GCP PD CSI driver supports the C3 instance type for bare metal and the N4 machine series. The C3 instance type and N4 machine series support hyperdisk-balanced disks.
 
 ifndef::openshift-dedicated[]
 [NOTE]
@@ -27,6 +29,23 @@ ifndef::openshift-dedicated[]
 ====
 endif::openshift-dedicated[]
 
+== C3 instance type for bare metal and N4 machine series
+
+include::modules/persistent-storage-csi-gcp-hyperdisk-limitations.adoc[leveloffset=+2]
+
+include::modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-overview.adoc[leveloffset=+2]
+
+To set up storage pools, see xref:../../storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc#persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure_persistent-storage-csi-gcp-pd[Setting up hyperdisk-balanced disks].
+
+include::modules/persistent-storage-csi-gcp-hyperdisk-storage-pools-procedure.adoc[leveloffset=+2]
+
+ifndef::openshift-dedicated[]
+[id="resources-for-gcp-c3-n4-instances"]
+[role="_additional-resources"]
+=== Additional resources
+* xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installing-gcp-customizations[Installing a cluster on GCP with customizations]
+endif::openshift-dedicated[]
+
 include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]
 
 include::modules/persistent-storage-csi-gcp-pd-storage-class-ref.adoc[leveloffset=+1]
@@ -39,6 +58,7 @@ include::modules/persistent-storage-byok.adoc[leveloffset=+1]
 For information about installing with user-managed encryption for GCP PD, see xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installation-configuration-parameters_installing-gcp-customizations[Installation configuration parameters].
 endif::openshift-rosa,openshift-dedicated[]
 
+[id="resources-for-gcp"]
 [role="_additional-resources"]
 == Additional resources
 * xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk]