* On-premise bare metal servers. See xref:../../installing/installing_bare_metal/preparing-to-install-on-bare-metal.adoc#virt-planning-bare-metal-cluster-for-ocp-virt_preparing-to-install-on-bare-metal[Planning a bare metal cluster for {VirtProductName}].
//* {ibm-cloud-name} Bare Metal Servers. See link:https://access.redhat.com/articles/6738731[Deploy {VirtProductName} on {ibm-cloud-name} Bare Metal nodes].
//+
//--
//ifdef::openshift-enterprise[]
//:FeatureName: Installing OpenShift Virtualization on {ibm-cloud-name} Bare Metal Servers
* {ibm-z-name} or {ibm-linuxone-name} (s390x architecture) systems where an {product-title} cluster is installed in logical partitions (LPARs). See xref:../../installing/installing_ibm_z/preparing-to-install-on-ibm-z.adoc#preparing-to-install-on-ibm-z_preparing-to-install-on-ibm-z[Preparing to install on {ibm-z-title} and {ibm-linuxone-title}].
// the following section is in the assembly for xref reasons
Cloud platforms::
{VirtProductName} is also compatible with a variety of public cloud platforms. Each cloud platform has specific storage provider options available. The following table outlines which platforms are fully supported (GA) and which are currently offered as Technology Preview features.

[cols="2,1,2,3",options="header"]
|===
|Platform |Support status |Storage options |Links

|{aws-first}
|GA
|Elastic Block Store (EBS), {odf-first}, Portworx, FSx (NetApp)
a|* xref:../../installing/installing_aws/ipi/installing-aws-customizations.adoc#installing-aws-customizations[Installing a cluster on AWS with customizations]

|{product-rosa} (ROSA)
|GA
|EBS, Portworx, FSx (Q3), {odf-short}
a|* link:https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/index[{VirtProductName}] in the {product-rosa} documentation
* link:https://docs.aws.amazon.com/rosa/latest/userguide/what-is-rosa.html[What is {product-rosa}?] in the {aws-short} documentation

|Oracle Cloud Infrastructure (OCI)
|
|
a|* link:https://access.redhat.com/articles/7118050[{VirtProductName} and Oracle Cloud Infrastructure known issues and limitations] in the Red{nbsp}Hat Knowledgebase
* link:https://github.com/oracle-quickstart/oci-openshift/blob/main/docs/openshift-virtualization.md[Installing {VirtProductName} on OCI] in the `oracle-quickstart/oci-openshift` GitHub repository

|Microsoft Azure Red Hat OpenShift
|Technology Preview
|
a|* link:https://learn.microsoft.com/en-us/azure/openshift/howto-create-openshift-virtualization[{VirtProductName} for Azure Red Hat OpenShift (preview)] in the Microsoft documentation

|{gcp-first}
|Technology Preview
|{gcp-short} native storage
a|* link:https://access.redhat.com/articles/7120382[{VirtProductName} and {gcp-full} known storage issues and limitations] in the Red{nbsp}Hat Knowledgebase
|===
For platform-specific networking information, see the xref:../../virt/vm_networking/virt-networking-overview.adoc#virt-networking[networking overview].
* xref:../../virt/vm_networking/virt-connecting-vm-to-ovn-secondary-network.adoc#virt-connecting-vm-to-ovn-secondary-network[Connecting a virtual machine to an OVN-Kubernetes secondary network]
* xref:../../virt/vm_networking/virt-exposing-vm-with-service.adoc#virt-exposing-vm-with-service[Exposing a virtual machine by using a service]
Using {VirtProductName} on an {product-title} cluster installed on an ARM64 system is generally available (GA).
Before using {VirtProductName} on an ARM64-based system, consider the following limitations:
Operating system::
* Only Linux-based guest operating systems are supported.
* All virtualization limitations for {op-system-base} also apply to {VirtProductName}. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_feature-support-and-limitations-in-rhel-9-virtualization_configuring-and-managing-virtualization#how-virtualization-on-arm-64-differs-from-amd64-and-intel64_feature-support-and-limitations-in-rhel-9-virtualization[How virtualization on ARM64 differs from AMD64 and Intel 64] in the {op-system-base} documentation.
Live migration::
* Live migration is *not supported* on ARM64-based {product-title} clusters.
* Hotplug is not supported on ARM64-based clusters because it depends on live migration.
VM creation::
* {op-system-base} 10 supports instance types and preferences, but not templates.
* {op-system-base} 9 supports templates, instance types, and preferences. For an example of creating a VM from an instance type, see the command after this list.
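
The following command is a minimal sketch of creating a VM from an instance type and preference by using the `virtctl` command-line tool and piping the generated manifest to `oc`. The `u1.medium` instance type and `rhel.9` preference are assumed to be available in your cluster, and boot volumes are omitted for brevity; substitute values that exist in your environment.

[source,terminal]
----
$ virtctl create vm --name example-vm --instancetype u1.medium --preference rhel.9 | oc create -f -
----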
You can use {VirtProductName} in an {product-title} cluster that is installed in logical partitions (LPARs) on an {ibm-z-name} or {ibm-linuxone-name} (`s390x` architecture) system.
Some features are not currently available on `s390x` architecture, while others require workarounds or procedural changes. These lists are subject to change.
The following features are available for use on `s390x` architecture but function differently or require procedural changes:
* When xref:../../virt/managing_vms/virt-delete-vms.adoc#virt-delete-vm-web_virt-delete-vms[deleting a virtual machine by using the web console], the *grace period* option is ignored.
* When xref:../../virt/managing_vms/advanced_vm_management/virt-configuring-default-cpu-model.adoc#virt-configuring-default-cpu-model_virt-configuring-default-cpu-model[configuring the default CPU model], the `spec.defaultCPUModel` value is `"gen15b"` for an {ibm-z-title} cluster (see the configuration sketch after this list).
* When xref:../../virt/monitoring/virt-exposing-downward-metrics.adoc#virt-configuring-downward-metrics_virt-exposing-downward-metrics[configuring a downward metrics device], if you use a VM preference, the `spec.preference.name` value must be set to `rhel.9.s390x` or another available preference with the format `*.s390x`.
* When xref:../../virt/creating_vm/virt-creating-vms-from-instance-types.adoc#virt-creating-vms-from-instance-types[creating virtual machines from instance types], you cannot set `spec.domain.memory.maxGuest` because memory hotplug is not supported on {ibm-z-name}.
* Prometheus queries for VM guests might return inconsistent results compared to queries on `x86` systems.
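
For example, the following excerpt is a minimal sketch of setting the default CPU model for an {ibm-z-name} cluster by editing the `HyperConverged` custom resource, assuming the standard `kubevirt-hyperconverged` resource in the `openshift-cnv` namespace:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  defaultCPUModel: "gen15b"
----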
Before you install {VirtProductName} on any platform, note the following caveats and considerations.
Installation method considerations::
You can use any installation method, including user-provisioned, installer-provisioned, or Assisted Installer, to deploy {product-title}. However, the installation method and the cluster topology might affect {VirtProductName} functionality, such as snapshots or xref:../../virt/install/preparing-cluster-for-virt.adoc#live-migration_preparing-cluster-for-virt[live migration].
{rh-storage-first}::
If you deploy {VirtProductName} with {rh-storage-first}, you must create a dedicated storage class for Windows virtual machine disks. See link:https://access.redhat.com/articles/6978371[Optimizing ODF PersistentVolumes for Windows VMs] for details.
IPv6::
{VirtProductName} support for single-stack IPv6 clusters is limited to the OVN-Kubernetes localnet and Linux bridge Container Network Interface (CNI) plugins.
+
--
:FeatureName: Deploying {VirtProductName} on a single-stack IPv6 cluster
FIPS mode:: If you install your cluster in xref:../../installing/overview/installing-fips.adoc#installing-fips-mode_installing-fips[FIPS mode], no additional setup is required for {VirtProductName}.
See link:https://catalog.redhat.com[Red Hat Ecosystem Catalog] for supported CPUs.
+
[NOTE]
====
If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines.
See xref:../../nodes/scheduling/nodes-scheduler-node-affinity.adoc#nodes-scheduler-node-affinity-configuring-required_nodes-scheduler-node-affinity[Configuring a required node affinity rule] for details.
====
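
For example, the following excerpt is a minimal sketch of a node affinity rule that pins a virtual machine to worker nodes that share a CPU generation. The `cpu-tier: cascadelake` label is a hypothetical label that a cluster administrator applies to the matching nodes; substitute labels that exist in your cluster.

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cpu-tier # hypothetical node label applied by the administrator
                operator: In
                values:
                - cascadelake
      # remaining VM specification omitted for brevity
----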
* Supports AMD64, Intel 64-bit (x86-64-v2), {ibm-z-name} (`s390x`), or ARM64-based (`arm64` or `aarch64`) architectures and their respective CPU extensions.
* Intel VT-x, AMD-V, or ARM virtualization extensions are enabled, or `s390x` virtualization support is enabled.
* NX (no execute) flag is enabled.
* If you use `s390x` architecture, the xref:../../virt/managing_vms/advanced_vm_management/virt-configuring-default-cpu-model.adoc#virt-configuring-default-cpu-model[default CPU model] is set to `gen15b`.
* Supported by {product-title}. See xref:../../scalability_and_performance/optimization/optimizing-storage.adoc#_optimizing-storage[Optimizing storage].
* You must create a default {VirtProductName} or {product-title} storage class. A default storage class addresses the unique storage needs of VM workloads and provides optimized performance, reliability, and user experience. If both a {VirtProductName} and an {product-title} default storage class exist, the {VirtProductName} class takes precedence when VM disks are created. An example of marking a storage class as the {VirtProductName} default follows this list.
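
The following command is a minimal sketch of marking an existing storage class as the default for {VirtProductName} by using the `storageclass.kubevirt.io/is-default-virt-class` annotation. Replace `<storage_class_name>` with a storage class that exists in your cluster, and verify that this annotation is supported in your {VirtProductName} version.

[source,terminal]
----
$ oc annotate storageclass <storage_class_name> storageclass.kubevirt.io/is-default-virt-class=true
----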
You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:

----
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
----

The default xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration[number of migrations that can run in parallel] in the cluster is 5.
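
For example, in a hypothetical cluster where at most two nodes can drain in parallel and the highest total of VM memory requests on any single node is 64 GiB, you would plan for approximately 2 x 64 GiB = 128 GiB of spare memory request capacity.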
A xref:../../virt/vm_networking/virt-dedicated-network-live-migration.adoc#virt-dedicated-network-live-migration[dedicated Multus network] for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
* Automatic high availability for xref:../../installing/installing_bare_metal/ipi/ipi-install-overview.adoc#ipi-install-overview[installer-provisioned infrastructure] (IPI) is available by deploying xref:../../machine_management/deploying-machine-health-checks.adoc#machine-health-checks-about_deploying-machine-health-checks[machine health checks].
In {product-title} clusters installed using installer-provisioned infrastructure and with a properly configured `MachineHealthCheck` resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/nodes/virt-node-maintenance.adoc#run-strategies[Run strategies] for more detailed information about the potential outcomes and how run strategies affect those outcomes.
* Automatic high availability for both IPI and non-IPI is available by using the *Node Health Check Operator* on the {product-title} cluster to deploy the `NodeHealthCheck` controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the link:https://access.redhat.com/documentation/en-us/workload_availability_for_red_hat_openshift[Workload Availability for Red Hat OpenShift] documentation.
+
Fence Agents Remediation uses supported fencing agents to reset failed nodes faster than the Self Node Remediation Operator. This improves overall virtual machine high availability. For more information, see the link:https://access.redhat.com/articles/7057929[OpenShift Virtualization - Fencing and VM High Availability Guide] knowledgebase article.
* High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run `oc delete node <lost_node>`.
+
[NOTE]
====
Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
====