
CNV-17405: installation requirements reorganization

Avital Pinnick
2022-06-21 10:59:01 +03:00
committed by openshift-cherrypick-robot
parent 094f0db2ad
commit 6d88e72b79
5 changed files with 86 additions and 78 deletions

View File

@@ -3070,18 +3070,12 @@ Topics:
- Name: Installing
Dir: install
Topics:
- Name: Preparing your OpenShift cluster for OpenShift Virtualization
- Name: Preparing your cluster for OpenShift Virtualization
File: preparing-cluster-for-virt
Distros: openshift-enterprise
- Name: Preparing your OKD cluster for OKD Virtualization
File: preparing-cluster-for-virt
Distros: openshift-origin
- Name: Planning your environment according to OpenShift Virtualization object maximums
File: virt-planning-environment-object-maximums
Distros: openshift-enterprise
- Name: Planning your environment according to OKD Virtualization object maximums
File: virt-planning-environment-object-maximums
Distros: openshift-origin
- Name: Specifying nodes for OpenShift Virtualization components
File: virt-specifying-nodes-for-virtualization-components
Distros: openshift-enterprise

View File

@@ -2,9 +2,9 @@
//
// * virt/install/preparing-cluster-for-virt.adoc
:_content-type: REFERENCE
[id="virt-cluster-resource-requirements_{context}"]
= Additional hardware requirements for {VirtProductName}
= Physical resource overhead requirements
{VirtProductName} is an add-on to {product-title} and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the {product-title} requirements. Oversubscribing the physical resources in a cluster can affect performance.
@@ -30,7 +30,6 @@ Memory overhead per worker node ≈ 360 MiB
Additionally, {VirtProductName} environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
.Virtual machine memory overhead
----
@@ -44,7 +43,6 @@ Memory overhead per virtual machine ≈ (1.002 * requested memory) + 146 MiB \
If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
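As a worked example of the formula above (the 8 GiB request is illustrative, not a value from this document), a virtual machine that requests 8 GiB (8192 MiB) of memory and has no SR-IOV or GPU devices consumes approximately:
----
Memory overhead per virtual machine ≈ (1.002 * 8192 MiB) + 146 MiB ≈ 8354 MiB
----
That is roughly 162 MiB more than the requested memory.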
[id="CPU-overhead_{context}"]
== CPU overhead
@@ -64,12 +62,10 @@ CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for {VirtProductName} management workloads in addition to the CPUs required for virtual machine workloads.
.Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
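As an illustrative sketch of the dedicated CPU case (the virtual machine name and core count are hypothetical), a `VirtualMachine` that sets `dedicatedCpuPlacement` counts each requested core 1:1 against the cluster CPU overhead:
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                    # hypothetical name
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 4                    # illustrative core count
          dedicatedCpuPlacement: true # dedicated CPUs count 1:1 toward cluster CPU overhead
----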
[id="storage-overhead_{context}"]
== Storage overhead
@@ -87,7 +83,6 @@ Aggregated storage overhead per node ≈ 10 GiB
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. {VirtProductName} does not currently allocate any additional ephemeral storage for the running container itself.
[id="example-scenario_{context}"]
== Example

View File

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * virt/install/preparing-cluster-for-virt.adoc
:_content-type: REFERENCE
[id="virt-platform-node-storage-requirements_{context}"]
= Platform, node, and storage requirements
You can install {VirtProductName} on a cluster that complies with the following platform, node, and storage requirements:
* Platforms:
** On-premise bare metal.
** Amazon Web Services bare metal instances. Bare metal instances offered by other cloud providers are not supported.
ifdef::openshift-enterprise[]
:FeatureName: Installing OpenShift Virtualization on an AWS bare metal instance
include::snippets/technology-preview.adoc[]
:!FeatureName:
endif::[]
* Worker nodes: Red Hat Enterprise Linux CoreOS (RHCOS). RHEL worker nodes are not supported.
* CPUs:
** Supported by RHEL 8
** Support for Intel 64 or AMD64 CPU extensions
** Intel VT or AMD-V hardware virtualization extensions enabled
** NX (no execute) flag enabled
* Storage supported by {product-title}.

View File

@@ -1,71 +1,84 @@
:_content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="preparing-cluster-for-virt"]
= Configuring your cluster for {VirtProductName}
:context: preparing-cluster-for-virt
= Preparing your cluster for {VirtProductName}
:context: virt-installation-requirements
toc::[]
Before you install {VirtProductName}, ensure that your {product-title} cluster meets the following requirements:
Review this section before you install {VirtProductName} to ensure that your cluster meets the requirements.
* Your cluster must be installed on
xref:../../installing/installing_bare_metal/preparing-to-install-on-bare-metal.adoc#installing-bare-metal[on-premise bare metal] infrastructure with {op-system-first} workers. You can use any installation method including user-provisioned, installer-provisioned, or assisted installer to deploy your cluster.
+
[NOTE]
[IMPORTANT]
====
{VirtProductName} only supports {op-system} worker nodes. RHEL 7 or RHEL 8 nodes are not supported.
You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy {product-title}. However, the installation method and the cluster topology might affect {VirtProductName} functionality, such as snapshots or live migration.
====
* You can install {VirtProductName} on Amazon Web Services (AWS) bare metal instances. Bare metal instances offered by other cloud providers are not supported.
+
--
ifdef::openshift-enterprise[]
:FeatureName: Installing OpenShift Virtualization on AWS bare metal instances
include::snippets/technology-preview.adoc[leveloffset=+2]
:!FeatureName:
endif::[]
--
.Single Node OpenShift behavior
* Shared storage is required to enable live migration.
You can install {VirtProductName} on a single node cluster, also known as xref:../../installing/installing_sno/install-sno-preparing-to-install-sno.adoc#install-sno-about-installing-on-a-single-node_install-sno-preparing[Single Node OpenShift] (SNO). SNO does not support high availability, which results in the following differences:
* You must manage your Compute nodes according to the number and size of the virtual machines that you want to host in the cluster.
* xref:../../nodes/pods/nodes-pods-priority.adoc#priority-preemption-other_nodes-pods-priority[Pod disruption budgets] are not supported.
* xref:../../virt/live_migration/virt-live-migration.adoc#virt-live-migration[Live migration] is not supported.
* Templates or virtual machines that use data volumes or storage profiles must not have the xref:../../virt/live_migration/virt-configuring-vmi-eviction-strategy.adoc#virt-configuring-vmi-eviction-strategy[`evictionStrategy`] set.
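+
For example, in the following illustrative sketch (the virtual machine name is hypothetical), the `evictionStrategy` field would have to be removed before the virtual machine can run on a single node cluster:
+
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm                  # hypothetical name
spec:
  template:
    spec:
      evictionStrategy: LiveMigrate # not supported on a single node cluster; remove this field
      domain:
        devices: {}
----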
* If you have limited internet connectivity, you can xref:../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[configure proxy support in Operator Lifecycle Manager] to access the Red Hat-provided OperatorHub. If you are using a restricted network with no internet connectivity, you must xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[configure Operator Lifecycle Manager for restricted networks].
.FIPS mode
* If your cluster uses worker nodes with different CPUs, live migration failures can occur because different CPUs have different capacities. To avoid such failures, use CPUs with appropriate capacity for each node and set node affinity on your virtual machines to ensure successful migration. See xref:../../nodes/scheduling/nodes-scheduler-node-affinity.adoc#nodes-scheduler-node-affinity-configuring-required_nodes-scheduler-node-affinity[Configuring a required node affinity rule] for more information.
If you install your cluster in xref:../../installing/installing-fips.adoc#installing-fips-mode_installing-fips[FIPS mode], no additional setup is required for {VirtProductName}.
* All CPUs must be supported by Red Hat Enterprise Linux 8 and meet the following requirements:
include::modules/virt-platform-node-storage-requirements.adoc[leveloffset=+1]
** Intel 64 or AMD64 CPU extensions are supported
** Intel VT or AMD-V hardware virtualization extensions are enabled
** The no-execute (NX) flag is enabled
[role="_additional-resources"]
.Additional resources
* If FIPS mode is xref:../../installing/installing-fips.adoc#installing-fips[enabled for your cluster], no additional setup is needed for {VirtProductName}. Support for FIPS cryptography must be enabled before the operating system that your cluster uses boots for the first time.
* xref:../../architecture/architecture-rhcos.adoc#rhcos-about_architecture-rhcos[About RHCOS]
* link:https://catalog.redhat.com[Red Hat Ecosystem Catalog] for supported CPUs
* xref:../../storage/index.adoc#storage-overview[Supported storage]
{VirtProductName} works with {product-title} by default, but the following installation configurations are recommended:
include::modules/virt-cluster-resource-requirements.adoc[leveloffset=+1]
* Configure xref:../../monitoring/monitoring-overview.adoc#monitoring-overview[monitoring] in the cluster.
[id="object-maximums_{context}"]
== Object maximums
[NOTE]
====
To obtain an evaluation version of {product-title}, download a trial
from the {product-title} home page.
====
You must consider the following tested object maximums when planning your cluster:
[id="virt-maintain-high-availability_preparing-cluster-for-virt"]
== How to maintain high availability of virtual machines
* xref:../../scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc#planning-your-environment-according-to-object-maximums[{product-title} object maximums]
* link:https://access.redhat.com/articles/6571671[{VirtProductName} object maximums]
There are three options to maintain high availability (HA) of virtual machines:
[id="restricted-networks-environments_{context}"]
== Restricted network environments
If you install {VirtProductName} in a restricted environment with no internet connectivity, you must xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[configure Operator Lifecycle Manager for restricted networks].
If you have limited internet connectivity, you can xref:../../operators/admin/olm-configuring-proxy-support.adoc#olm-configuring-proxy-support[configure proxy support in Operator Lifecycle Manager] to access the Red Hat-provided OperatorHub.
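As an illustrative sketch of per-Operator proxy support (the proxy URLs, subscription name, and namespace are assumptions, not values from this commit), the proxy environment variables can be set in the `Subscription` configuration:
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub         # assumed subscription name for OpenShift Virtualization
  namespace: openshift-cnv      # assumed namespace
spec:
  config:
    env:                        # proxy settings injected into the Operator deployment
    - name: HTTP_PROXY
      value: http://proxy.example.com:3128
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128
    - name: NO_PROXY
      value: .cluster.local,.svc
----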
[id="live-migration_{context}"]
== Live migration
Live migration has the following requirements:
* Shared storage with `ReadWriteMany` (RWX) access mode
* Sufficient RAM and network bandwidth
* Appropriate CPUs with sufficient capacity on the worker nodes. If the CPUs have different capacities, live migration might be very slow or fail.
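As an illustrative sketch of the shared storage requirement (the claim name, size, and storage class are hypothetical), a disk used by a migratable virtual machine would be backed by a persistent volume claim with the `ReadWriteMany` access mode:
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vm-disk                  # hypothetical name
spec:
  accessModes:
  - ReadWriteMany                        # RWX shared storage required for live migration
  resources:
    requests:
      storage: 30Gi                      # illustrative size
  storageClassName: example-rwx-storage  # hypothetical storage class
----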
[id="snapshots-and-cloning_{context}"]
== Snapshots and cloning
See xref:../../virt/virtual_machines/virtual_disks/virt-features-for-storage.adoc#virt-features-for-storage[{VirtProductName} storage features] for snapshot and cloning requirements.
// The HA section actually belongs to OpenShift, not Virt
[id="cluster-high-availability-options_{context}"]
== Cluster high-availability options
You can configure one of the following high-availability (HA) options for your cluster:
* Automatic high availability for xref:../../installing/installing_bare_metal_ipi/ipi-install-overview.adoc#ipi-install-overview[installer-provisioned infrastructure] (IPI) is available by deploying xref:../../machine_management/deploying-machine-health-checks.adoc#machine-health-checks-about_deploying-machine-health-checks[machine health checks].
+
[NOTE]
====
In {VirtProductName} clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/virtual_machines/virt-create-vms.adoc#virt-about-runstrategies-vms_virt-create-vms[About RunStrategies for virtual machines] for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.
In {product-title} clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/virtual_machines/virt-create-vms.adoc#virt-about-runstrategies-vms_virt-create-vms[About RunStrategies for virtual machines] for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.
====
* Automatic high availability for both IPI and non-IPI is available by using the xref:../../nodes/nodes/eco-node-health-check-operator.adoc#node-health-check-operator[Node Health Check Operator] on any {product-title} cluster to deploy the `NodeHealthCheck` controller. The controller identifies unhealthy nodes and uses the Self Node Remediation Operator to remediate the unhealthy nodes.
* Automatic high availability for both IPI and non-IPI is available by using the xref:../../nodes/nodes/eco-node-health-check-operator.adoc#node-health-check-operator[Node Health Check Operator] on the {product-title} cluster to deploy the `NodeHealthCheck` controller. The controller identifies unhealthy nodes and uses the Self Node Remediation Operator to remediate the unhealthy nodes.
+
--
ifdef::openshift-enterprise[]
@@ -81,11 +94,3 @@ endif::[]
====
Without an external monitoring system or a qualified human to monitor node health, virtual machines lose high availability.
====
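As an illustrative sketch of the machine health check option above (the selector label, timeouts, and `maxUnhealthy` value are assumptions, not values from this commit), a `MachineHealthCheck` for worker machines might look like this:
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-worker-health-check      # hypothetical name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker  # assumed worker machine label
  unhealthyConditions:                   # node conditions that mark a machine unhealthy
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
  maxUnhealthy: "40%"                    # stop remediation if too many machines are unhealthy
  nodeStartupTimeout: 10m
----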
include::modules/virt-single-node-cluster.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/operator_sdk/osdk-ha-sno.adoc#osdk-ha-sno[High-availability or single node cluster detection and support]
include::modules/virt-cluster-resource-requirements.adoc[leveloffset=+1]

View File

@@ -1,17 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-planning-environment-object-maximums"]
include::_attributes/common-attributes.adoc[]
= Planning your environment according to {VirtProductName} object maximums
:context: virt-planning-environment-object-maximums
toc::[]
Plan your cluster for {VirtProductName} by using tested object maximums as guidelines.
include::modules/virt-about-cluster-limits.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* xref:../../scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc#planning-your-environment-according-to-object-maximums[Planning your environment according to object maximums]