
Merge pull request #81178 from lahinson/osdocs-11001-add-virt-links

[OSDOCS-11001]: Adding more virt docs for HCP
Laura Hinson committed 2024-09-03 13:54:55 -04:00 (committed by GitHub)
9 changed files with 330 additions and 0 deletions

View File

@@ -42,6 +42,8 @@ include::modules/hcp-virt-create-hc-console.adoc[leveloffset=+2]
* To create credentials that you can reuse when you create a hosted cluster with the console, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#creating-a-credential-for-an-on-premises-environment[Creating a credential for an on-premises environment].
* To access the hosted cluster, see xref:../../hosted_control_planes/hcp-manage/hcp-manage-virt.adoc#hcp-virt-access_hcp-manage-virt[Accessing the hosted cluster].
include::modules/hcp-virt-ingress-dns.adoc[leveloffset=+1]
[id="hcp-virt-ingress-dns-custom"]

View File

@@ -6,3 +6,47 @@ include::_attributes/common-attributes.adoc[]
toc::[]
After you deploy a hosted cluster on {VirtProductName}, you can manage the cluster by completing the following procedures.
include::modules/hcp-virt-access.adoc[leveloffset=+1]
[id="hcp-virt-storage"]
== Configuring storage for {hcp} on {VirtProductName}
If you do not provide any advanced storage configuration, the default storage class is used for the KubeVirt virtual machine (VM) images, the KubeVirt Container Storage Interface (CSI) mapping, and the etcd volumes.
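For example, you can check which storage class the infrastructure cluster treats as the default. The default storage class carries the `storageclass.kubernetes.io/is-default-class` annotation and is flagged as `(default)` in the command output:
[source,terminal]
----
$ oc get storageclass
----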
The following table lists the capabilities that the infrastructure must provide to support persistent storage in a hosted cluster:
.Persistent storage modes in a hosted cluster
|===
| Infrastructure CSI provider | Hosted cluster CSI provider | Hosted cluster capabilities | Notes
| Any RWX `Block` CSI provider
| `kubevirt-csi`
| Basic: RWO `Block` and `File`, RWX `Block` and `Snapshot`
| Recommended
| Any RWX `Block` CSI provider
| {rh-storage-first} external mode
| {rh-storage-first} feature set
|
| Any RWX `Block` CSI provider
| {rh-storage-first} internal mode
| {rh-storage-first} feature set
| Do not use
|===
include::modules/hcp-virt-map-storage.adoc[leveloffset=+2]
include::modules/hcp-virt-csi-snapshot.adoc[leveloffset=+2]
include::modules/hcp-virt-multiple-snapshots.adoc[leveloffset=+2]
include::modules/hcp-virt-root-volume.adoc[leveloffset=+2]
include::modules/hcp-virt-image-caching.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../virt/virtual_machines/creating_vms_custom/virt-creating-vms-by-cloning-pvcs.adoc#smart-cloning_virt-creating-vms-by-cloning-pvcs[Cloning a data volume using smart-cloning]
include::modules/hcp-virt-etcd-storage.adoc[leveloffset=+2]

View File

@@ -0,0 +1,45 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-access_{context}"]
= Accessing the hosted cluster
You can access the hosted cluster by either getting the `kubeconfig` file and `kubeadmin` credential directly from resources, or by using the `hcp` command line interface to generate a `kubeconfig` file.
.Prerequisites
To access the hosted cluster by getting the `kubeconfig` file and credentials directly from resources, you must be familiar with the access secrets for hosted control plane clusters. The secrets are stored in the hosted cluster (hosting) namespace. The _hosted cluster (hosting)_ namespace contains hosted cluster resources, and the _hosted control plane_ namespace is where the hosted control plane runs.
The secret name formats are as follows:
** `kubeconfig` secret: `<hosted_cluster_namespace>-<name>-admin-kubeconfig`, for example, `clusters-hypershift-demo-admin-kubeconfig`
** `kubeadmin` password secret: `<hosted_cluster_namespace>-<name>-kubeadmin-password`, for example, `clusters-hypershift-demo-kubeadmin-password`
The `kubeconfig` secret contains a Base64-encoded `kubeconfig` field, which you can decode and save into a file to use with the following command:
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----
The `kubeadmin` password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster.
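As a sketch, assuming that the hosted cluster namespace is `clusters` and the hosted cluster name is `hypershift-demo` to match the example secret names, and assuming that the `kubeadmin` password secret stores its value under the `password` data key, you can decode both secrets as follows:
[source,terminal]
----
# decode the kubeconfig field of the kubeconfig secret
$ oc get secret clusters-hypershift-demo-admin-kubeconfig -n clusters \
-o jsonpath='{.data.kubeconfig}' | base64 -d > hypershift-demo.kubeconfig

# decode the kubeadmin password (assumes the "password" data key)
$ oc get secret clusters-hypershift-demo-kubeadmin-password -n clusters \
-o jsonpath='{.data.password}' | base64 -d
----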
.Procedure
* To access the hosted cluster by using the `hcp` CLI to generate the `kubeconfig` file, take the following steps:
. Generate the `kubeconfig` file by entering the following command:
+
[source,terminal]
----
$ hcp create kubeconfig --namespace <hosted_cluster_namespace> --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
----
. After you save the `kubeconfig` file, you can access the hosted cluster by entering the following example command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----

View File

@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-csi-snapshot_{context}"]
= Mapping a single KubeVirt CSI volume snapshot class
You can expose your infrastructure volume snapshot class to the hosted cluster by using KubeVirt CSI.
.Procedure
* To map your volume snapshot class to the hosted cluster, use the `--infra-volumesnapshot-class-mapping` argument when creating a hosted cluster. Run the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ <1>
--node-pool-replicas <worker_node_count> \ <2>
--pull-secret <path_to_pull_secret> \ <3>
--memory <memory> \ <4>
--cores <cpu> \ <5>
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> \ <6>
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class> <7>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the worker count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify a value for memory, for example, `8Gi`.
<5> Specify a value for CPU, for example, `2`.
<6> Replace `<infrastructure_storage_class>` with the storage class present in the infrastructure cluster. Replace `<hosted_storage_class>` with the storage class present in the hosted cluster.
<7> Replace `<infrastructure_volume_snapshot_class>` with the volume snapshot class present in the infrastructure cluster. Replace `<hosted_volume_snapshot_class>` with the volume snapshot class present in the hosted cluster.
+
[NOTE]
====
If you do not use the `--infra-storage-class-mapping` and `--infra-volumesnapshot-class-mapping` arguments, the hosted cluster is created with the default storage class and the default volume snapshot class. Therefore, you must set a default storage class and a default volume snapshot class in the infrastructure cluster.
====
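As a sketch of verifying the mapping, assuming that you generated the `kubeconfig` file for the hosted cluster as described in _Accessing the hosted cluster_, you can list the volume snapshot classes that are visible in the hosted cluster:
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get volumesnapshotclass
----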

View File

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-etcd-storage_{context}"]
= Configuring etcd storage
At cluster creation time, you can configure the storage class that is used to host etcd data by using the `--etcd-storage-class` argument.
.Procedure
* To configure a storage class for etcd, run the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ <1>
--node-pool-replicas <worker_node_count> \ <2>
--pull-secret <path_to_pull_secret> \ <3>
--memory <memory> \ <4>
--cores <cpu> \ <5>
--etcd-storage-class=<etcd_storage_class_name> <6>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the worker count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify a value for memory, for example, `8Gi`.
<5> Specify a value for CPU, for example, `2`.
<6> Specify the etcd storage class name, for example, `lvm-storageclass`. If you do not provide an `--etcd-storage-class` argument, the default storage class is used.
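For example, the following invocation combines the example values from the preceding callouts into a single command:
[source,terminal]
----
$ hcp create cluster kubevirt \
--name example \
--node-pool-replicas 2 \
--pull-secret /user/name/pullsecret \
--memory 8Gi \
--cores 2 \
--etcd-storage-class=lvm-storageclass
----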

View File

@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-image-caching_{context}"]
= Enabling KubeVirt VM image caching
You can use KubeVirt VM image caching to optimize both cluster startup time and storage usage. KubeVirt VM image caching supports the use of a storage class that is capable of smart cloning and the `ReadWriteMany` access mode. For more information about smart cloning, see _Cloning a data volume using smart-cloning_.
Image caching works as follows:
. The VM image is imported to a persistent volume claim (PVC) that is associated with the hosted cluster.
. A unique clone of that PVC is created for every KubeVirt VM that is added as a worker node to the cluster.
Image caching reduces VM startup time by requiring only a single image import. It can further reduce overall cluster storage usage when the storage class supports copy-on-write cloning.
.Procedure
* To enable image caching, during cluster creation, use the `--root-volume-cache-strategy=PVC` argument by running the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ <1>
--node-pool-replicas <worker_node_count> \ <2>
--pull-secret <path_to_pull_secret> \ <3>
--memory <memory> \ <4>
--cores <cpu> \ <5>
--root-volume-cache-strategy=PVC <6>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the worker count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify a value for memory, for example, `8Gi`.
<5> Specify a value for CPU, for example, `2`.
<6> Specify a strategy for image caching, for example, `PVC`.
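As a sketch of verifying the cache after the cluster is running, and assuming that the hosted control plane namespace follows the `<hosted_cluster_namespace>-<hosted_cluster_name>` naming convention, you can list the PVCs that back the cached image and its per-VM clones:
[source,terminal]
----
# assumes the hosted control plane namespace is named <hosted_cluster_namespace>-<hosted_cluster_name>
$ oc get pvc -n <hosted_cluster_namespace>-<hosted_cluster_name>
----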

View File

@@ -0,0 +1,61 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-map-storage_{context}"]
= Mapping KubeVirt CSI storage classes
KubeVirt CSI supports mapping an infrastructure storage class that is capable of `ReadWriteMany` (RWX) access. You can map the infrastructure storage class to a hosted storage class during cluster creation.
.Procedure
* To map the infrastructure storage class to the hosted storage class, use the `--infra-storage-class-mapping` argument by running the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ <1>
--node-pool-replicas <worker_node_count> \ <2>
--pull-secret <path_to_pull_secret> \ <3>
--memory <memory> \ <4>
--cores <cpu> \ <5>
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> <6>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the worker count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify a value for memory, for example, `8Gi`.
<5> Specify a value for CPU, for example, `2`.
<6> Replace `<infrastructure_storage_class>` with the infrastructure storage class name and `<hosted_storage_class>` with the hosted cluster storage class name. You can use the `--infra-storage-class-mapping` argument multiple times within the `hcp create cluster` command.
After you create the hosted cluster, the infrastructure storage class is visible within the hosted cluster. When you create a persistent volume claim (PVC) within the hosted cluster that uses one of those storage classes, KubeVirt CSI provisions that volume by using the infrastructure storage class mapping that you configured during cluster creation.
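As a sketch of confirming the mapping, assuming that you generated the `kubeconfig` file for the hosted cluster as described in _Accessing the hosted cluster_, you can list the storage classes that the hosted cluster exposes:
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get storageclass
----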
[NOTE]
====
KubeVirt CSI supports mapping only an infrastructure storage class that is capable of RWX access.
====
The following table shows how volume and access mode capabilities map to KubeVirt CSI storage classes:
.Mapping KubeVirt CSI storage classes to access and volume modes
|===
| Infrastructure CSI capability | Hosted cluster CSI capability | VM live migration support | Notes
| RWX: `Block` or `Filesystem`
| `ReadWriteOnce` (RWO) `Block` or `Filesystem`; RWX `Block` only
| Supported
| Use `Block` mode because `Filesystem` volume mode results in degraded hosted `Block` mode performance. RWX `Block` volume mode is supported only when the hosted cluster is {ocp-short} 4.16 or later.
| RWO `Block` storage
| RWO `Block` storage or `Filesystem`
| Not supported
| Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs.
| RWO `Filesystem`
| RWO `Block` or `Filesystem`
| Not supported
| Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. Use of the infrastructure `Filesystem` volume mode results in degraded hosted `Block` mode performance.
|===

View File

@@ -0,0 +1,36 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-multiple-snapshots_{context}"]
= Mapping multiple KubeVirt CSI volume snapshot classes
You can map multiple volume snapshot classes to the hosted cluster by assigning them to a specific group. The infrastructure storage class and the volume snapshot class are compatible with each other only if they belong to the same group.
.Procedure
* To map multiple volume snapshot classes to the hosted cluster, use the `group` option when creating a hosted cluster. Run the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ <1>
--node-pool-replicas <worker_node_count> \ <2>
--pull-secret <path_to_pull_secret> \ <3>
--memory <memory> \ <4>
--cores <cpu> \ <5>
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ <6>
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
--infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name> \ <7>
--infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the worker count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify a value for memory, for example, `8Gi`.
<5> Specify a value for CPU, for example, `2`.
<6> Replace `<infrastructure_storage_class>` with the storage class present in the infrastructure cluster. Replace `<hosted_storage_class>` with the storage class present in the hosted cluster. Replace `<group_name>` with the group name. For example, `infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup` and `infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap`.
<7> Replace `<infrastructure_volume_snapshot_class>` with the volume snapshot class present in the infrastructure cluster. Replace `<hosted_volume_snapshot_class>` with the volume snapshot class present in the hosted cluster. For example, `infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup` and `infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap`.
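For example, the following invocation uses the group names from the preceding callouts:
[source,terminal]
----
$ hcp create cluster kubevirt \
--name example \
--node-pool-replicas 2 \
--pull-secret /user/name/pullsecret \
--memory 8Gi \
--cores 2 \
--infra-storage-class-mapping=infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup \
--infra-storage-class-mapping=infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap \
--infra-volumesnapshot-class-mapping=infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup \
--infra-volumesnapshot-class-mapping=infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap
----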

View File

@@ -0,0 +1,35 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-manage/hcp-manage-virt.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-virt-root-volume_{context}"]
= Configuring KubeVirt VM root volume
At cluster creation time, you can configure the storage class that is used to host the KubeVirt VM root volumes by using the `--root-volume-storage-class` argument.
.Procedure
* To set a custom storage class and volume size for KubeVirt VMs, run the following command:
+
[source,terminal]
----
$ hcp create cluster kubevirt \
--name <hosted_cluster_name> \ <1>
--node-pool-replicas <worker_node_count> \ <2>
--pull-secret <path_to_pull_secret> \ <3>
--memory <memory> \ <4>
--cores <cpu> \ <5>
--root-volume-storage-class <root_volume_storage_class> \ <6>
--root-volume-size <volume_size> <7>
----
+
<1> Specify the name of your hosted cluster, for instance, `example`.
<2> Specify the worker count, for example, `2`.
<3> Specify the path to your pull secret, for example, `/user/name/pullsecret`.
<4> Specify a value for memory, for example, `8Gi`.
<5> Specify a value for CPU, for example, `2`.
<6> Specify a name of the storage class to host the KubeVirt VM root volumes, for example, `ocs-storagecluster-ceph-rbd`.
<7> Specify the volume size, for example, `64`.
+
As a result, the hosted cluster is created with the KubeVirt VM root volumes hosted on PVCs.
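For example, the following invocation combines the example values from the preceding callouts into a single command:
[source,terminal]
----
$ hcp create cluster kubevirt \
--name example \
--node-pool-replicas 2 \
--pull-secret /user/name/pullsecret \
--memory 8Gi \
--cores 2 \
--root-volume-storage-class ocs-storagecluster-ceph-rbd \
--root-volume-size 64
----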