:_content-type: ASSEMBLY
[id="preparing-cluster-for-virt"]
= Preparing your cluster for {VirtProductName}
include::_attributes/common-attributes.adoc[]
:context: preparing-cluster-for-virt
:toclevels: 3

toc::[]

Review this section before you install {VirtProductName} to ensure that your cluster meets the requirements.

[IMPORTANT]
====
Installation method considerations::
You can use any installation method, including user-provisioned infrastructure, installer-provisioned infrastructure, or the Assisted Installer, to deploy {product-title}. However, the installation method and the cluster topology might affect {VirtProductName} functionality, such as snapshots or xref:../../virt/install/preparing-cluster-for-virt.adoc#live-migration_preparing-cluster-for-virt[live migration].

{rh-storage-first}::
If you deploy {VirtProductName} with {rh-storage-first}, you must create a dedicated storage class for Windows virtual machine disks. See link:https://access.redhat.com/articles/6978371[Optimizing ODF PersistentVolumes for Windows VMs] for details.

IPv6::
You cannot run {VirtProductName} on a single-stack IPv6 cluster.
====

.FIPS mode
If you install your cluster in xref:../../installing/installing-fips.adoc#installing-fips-mode_installing-fips[FIPS mode], no additional setup is required for {VirtProductName}.

// Section is in assembly so that we can use xrefs
[id="virt-hardware-os-requirements_preparing-cluster-for-virt"]
== Hardware and operating system requirements

Review the following hardware and operating system requirements for {VirtProductName}.
[id="supported-platforms_preparing-cluster-for-virt"]
=== Supported platforms
* On-premise bare metal servers. See xref:../../installing/installing_bare_metal/preparing-to-install-on-bare-metal.adoc#virt-planning-bare-metal-cluster-for-ocp-virt_preparing-to-install-on-bare-metal[Planning a bare metal cluster for {VirtProductName}].
* Amazon Web Services bare metal instances. See link:https://access.redhat.com/articles/6409731[Deploy {VirtProductName} on AWS metal instance types].
+
--
ifdef::openshift-enterprise[]
:FeatureName: Installing OpenShift Virtualization on AWS bare metal instances
include::snippets/technology-preview.adoc[]
:!FeatureName:
endif::[]
--
* IBM Cloud Bare Metal Servers. See link:https://access.redhat.com/articles/6738731[Deploy {VirtProductName} on IBM Cloud Bare Metal nodes].
+
--
ifdef::openshift-enterprise[]
:FeatureName: Installing OpenShift Virtualization on IBM Cloud Bare Metal Servers
include::snippets/technology-preview.adoc[]
:!FeatureName:
endif::[]
--

*Bare metal instances or servers offered by other cloud providers are not supported.*
[id="cpu-requirements_preparing-cluster-for-virt"]
=== CPU requirements
* Supported by {op-system-base-full} 9.
+
See link:https://catalog.redhat.com[Red Hat Ecosystem Catalog] for supported CPUs.
+
[NOTE]
====
If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capabilities and by configuring node affinity rules for your virtual machines, as shown in the sketch after this list.

See xref:../../nodes/scheduling/nodes-scheduler-node-affinity.adoc#nodes-scheduler-node-affinity-configuring-required_nodes-scheduler-node-affinity[Configuring a required node affinity rule] for details.
====
* Support for Intel 64 or AMD64 CPU extensions.
* Intel VT or AMD-V hardware virtualization extensions enabled.
* NX (no execute) flag enabled.
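
As referenced in the note above, a required node affinity rule can keep virtual machines on nodes with compatible CPUs. The following is a minimal sketch; the `example.com/cpu-family` node label is a hypothetical label that you apply to your nodes yourself, not one that {VirtProductName} sets for you:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # Hypothetical label; substitute a label that you apply to
              # nodes with the CPU capabilities that this VM requires.
              - key: example.com/cpu-family
                operator: In
                values:
                - cascadelake
      domain:
        memory:
          guest: 2Gi
        devices: {}
----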
[id="os-requirements_preparing-cluster-for-virt"]
=== Operating system requirements
* {op-system-first} installed on worker nodes.
+
See xref:../../architecture/architecture-rhcos.adoc#rhcos-about_architecture-rhcos[About RHCOS] for details.
+
[NOTE]
====
{op-system-base} worker nodes are not supported.
====
[id="storage-requirements_preparing-cluster-for-virt"]
=== Storage requirements
* Supported by {product-title}. See xref:../../scalability_and_performance/optimization/optimizing-storage.adoc#_optimizing-storage[Optimizing storage].
* {rh-storage-first}: You must create a dedicated storage class for Windows virtual machine disks. See link:https://access.redhat.com/articles/6978371[Optimizing ODF PersistentVolumes for Windows VMs] for details.

[NOTE]
====
You must specify a default storage class for the cluster. See xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]. If the default storage class provisioner supports the `ReadWriteMany` (RWX) xref:../../storage/understanding-persistent-storage.adoc#pv-access-modes_understanding-persistent-storage[access mode], use the RWX mode for the associated persistent volumes for optimal performance.
If the storage provisioner supports snapshots, there must be a xref:../../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#volume-snapshot-crds[`VolumeSnapshotClass`] object associated with the default storage class.
====
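
For reference, the following sketch shows a storage class marked as the cluster default, paired with a `VolumeSnapshotClass` for the same CSI driver. The names and the `example.csi.driver` provisioner are placeholders; use the provisioner that your storage vendor documents:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc # placeholder name
  annotations:
    # Marks this storage class as the cluster default.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.csi.driver # placeholder CSI driver
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass # placeholder name
# Must match the CSI driver of the default storage class so that
# snapshots of virtual machine disks can be taken.
driver: example.csi.driver
deletionPolicy: Delete
----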

include::modules/virt-about-storage-volumes-for-vm-disks.adoc[leveloffset=+3]

[id="live-migration_preparing-cluster-for-virt"]
== Live migration requirements

* Shared storage with `ReadWriteMany` (RWX) access mode.
* Sufficient RAM and network bandwidth.
+
[NOTE]
====
You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

----
(Maximum number of nodes that can drain in parallel) × (Highest total VM memory request allocations across nodes)
----

For example, if at most two nodes can drain in parallel and the highest total VM memory request allocation on any node is 64 GiB, the cluster needs approximately 128 GiB of spare memory request capacity.

The default xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-limits_virt-configuring-live-migration[number of migrations that can run in parallel] in the cluster is 5. A configuration sketch for this limit follows this list.
====
* If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU.
* A xref:../../virt/vm_networking/virt-dedicated-network-live-migration.adoc#virt-dedicated-network-live-migration[dedicated Multus network] for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
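
The migration limits mentioned in the note above are tuned in the `HyperConverged` custom resource. The following is a minimal sketch, assuming the default `kubevirt-hyperconverged` resource in the `openshift-cnv` namespace; adjust the values to your own capacity planning:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  liveMigrationConfig:
    # Cluster-wide cap on concurrent live migrations; 5 is the default.
    parallelMigrationsPerCluster: 5
    # Cap on concurrent outbound migrations from any single node.
    parallelOutboundMigrationsPerNode: 2
----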

include::modules/virt-cluster-resource-requirements.adoc[leveloffset=+1]

include::modules/virt-sno-differences.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
* xref:../../storage/index.adoc#openshift-storage-common-terms_storage-overview[Glossary of common terms for {product-title} storage]
[id="object-maximums_preparing-cluster-for-virt"]
== Object maximums
You must consider the following tested object maximums when planning your cluster:
* xref:../../scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc#planning-your-environment-according-to-object-maximums[{product-title} object maximums].
* link:https://access.redhat.com/articles/6571671[{VirtProductName} object maximums].

// The HA section actually belongs to OpenShift, not Virt
[id="cluster-high-availability-options_preparing-cluster-for-virt"]
== Cluster high-availability options

You can configure one of the following high-availability (HA) options for your cluster:

* Automatic high availability for xref:../../installing/installing_bare_metal_ipi/ipi-install-overview.adoc#ipi-install-overview[installer-provisioned infrastructure] (IPI) is available by deploying xref:../../machine_management/deploying-machine-health-checks.adoc#machine-health-checks-about_deploying-machine-health-checks[machine health checks].
+
[NOTE]
====
In {product-title} clusters installed using installer-provisioned infrastructure and with a properly configured `MachineHealthCheck` resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/nodes/virt-node-maintenance.adoc#run-strategies[Run strategies] for more information about the potential outcomes and how run strategies affect those outcomes. A `MachineHealthCheck` sketch follows this list.
====
* Automatic high availability for both IPI and non-IPI is available by using the *Node Health Check Operator* on the {product-title} cluster to deploy the `NodeHealthCheck` controller. The controller identifies unhealthy nodes and uses the Self Node Remediation Operator to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the link:https://access.redhat.com/documentation/en-us/workload_availability_for_red_hat_openshift/23.2/html-single/remediation_fencing_and_maintenance/index#about-remediation-fencing-maintenance[Workload Availability for Red Hat OpenShift] documentation.
+
--
ifdef::openshift-enterprise[]
:FeatureName: Node Health Check Operator
include::snippets/technology-preview.adoc[leveloffset=+2]
:!FeatureName:
endif::[]
--
* High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run `oc delete node <lost_node>`.
+
[NOTE]
====
Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
====
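
For the machine health check option described in the first item of this list, the following is a minimal `MachineHealthCheck` sketch. The selector labels and timeouts are placeholders; match them to your own machine sets and your tolerance for node outages:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-health-check
  namespace: openshift-machine-api
spec:
  # Placeholder selector; match the labels on your own machine sets.
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: "Unknown"
    timeout: 300s
  # Stop remediation if too many machines are unhealthy at the same time.
  maxUnhealthy: 40%
----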