mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

[enterprise-4.14] CNV-24272: Live migration improvements

Signed-off-by: Avital Pinnick <apinnick@redhat.com>
Avital Pinnick
2023-09-11 13:21:12 +03:00
parent 576b983671
commit 5c927e214c
40 changed files with 534 additions and 520 deletions


@@ -2720,7 +2720,7 @@ Topics:
Dir: application_backup_and_restore
Topics:
- Name: Introduction to OpenShift API for Data Protection
File: oadp-intro
File: oadp-intro
- Name: OADP release notes
File: oadp-release-notes
- Name: OADP features and plugins
@@ -3749,7 +3749,6 @@ Topics:
File: virt-high-availability-for-vms
- Name: Control plane tuning
File: virt-vm-control-plane-tuning
#A BETTER NAME THAN 'STORAGE 4 U'
- Name: VM disks
Dir: virtual_disks
Topics:
@@ -3809,29 +3808,23 @@ Topics:
- Name: Live migration
Dir: live_migration
Topics:
- Name: Virtual machine live migration
File: virt-live-migration
- Name: Live migration limits and timeouts
File: virt-live-migration-limits
- Name: Migrating a virtual machine instance to another node
File: virt-migrate-vmi
- Name: Cancelling the live migration of a virtual machine instance
File: virt-cancel-vmi-migration
- Name: Configuring virtual machine eviction strategy
File: virt-configuring-vmi-eviction-strategy
- Name: Configuring live migration policies
File: virt-configuring-live-migration-policies
- Name: About live migration
File: virt-about-live-migration
- Name: Configuring live migration
File: virt-configuring-live-migration
- Name: Initiating and canceling live migration
File: virt-initiating-live-migration
# Node maintenance mode
- Name: Nodes
Dir: nodes
Topics:
- Name: About node maintenance
File: virt-about-node-maintenance
- Name: Node maintenance
File: virt-node-maintenance
- Name: Managing node labeling for obsolete CPU models
File: virt-managing-node-labeling-obsolete-cpu-models
- Name: Preventing node reconciliation
File: virt-preventing-node-reconciliation
- Name: Deleting a failed node to trigger virtual machine failover
- Name: Deleting a failed node to trigger VM failover
File: virt-triggering-vm-failover-resolving-failed-node
- Name: Monitoring
Dir: monitoring


@@ -1,20 +0,0 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-live-migration.adoc
:_content-type: CONCEPT
[id="virt-about-live-migration_{context}"]
= About live migration
Live migration is the process of moving a running virtual machine instance (VMI) to another node in the cluster without interrupting the virtual workload or access. If a VMI uses the `LiveMigrate` eviction strategy, it automatically migrates when the node that the VMI runs on is placed into maintenance mode. You can also manually start live migration by selecting a VMI to migrate.
You can use live migration if the following conditions are met:
* Shared storage with `ReadWriteMany` (RWX) access mode.
* Sufficient RAM and network bandwidth.
* If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU.
By default, live migration traffic is encrypted using Transport Layer Security (TLS).
You can configure a dedicated Multus network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.


@@ -1,31 +0,0 @@
// Module included in the following assemblies:
// virt/nodes/virt-about-node-maintenance.adoc
:_content-type: CONCEPT
[id="virt-about-node-maintenance_{context}"]
= About node maintenance mode
Nodes can be placed into maintenance mode by using the `oc adm` utility or `NodeMaintenance` custom resources (CRs).
[NOTE]
====
The `node-maintenance-operator` (NMO) is no longer shipped with {VirtProductName}. It is deployed as a standalone Operator from the *OperatorHub* in the {product-title} web console or by using the OpenShift CLI (`oc`).
For more information on remediation, fencing, and maintaining nodes, see the link:https://access.redhat.com/documentation/en-us/workload_availability_for_red_hat_openshift/23.2/html-single/remediation_fencing_and_maintenance/index#about-remediation-fencing-maintenance[Workload Availability for Red Hat OpenShift] documentation.
====
Placing a node into maintenance marks the node as unschedulable and drains all the virtual machines and pods from it. Virtual machine instances that have a `LiveMigrate` eviction strategy are live migrated to another node without loss of service. This eviction strategy is configured by default in virtual machines created from common templates but must be configured manually for custom virtual machines.
Virtual machine instances without an eviction strategy are shut down. Virtual machines with a `runStrategy` of `Always` or `RerunOnFailure` are recreated on another node. Virtual machines with a `runStrategy` of `Manual` are not automatically restarted.
[IMPORTANT]
====
Virtual machines must have a persistent volume claim (PVC) with a shared `ReadWriteMany` (RWX) access mode to be live migrated.
====
The Node Maintenance Operator watches for new or deleted `NodeMaintenance` CRs. When a new `NodeMaintenance` CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a `NodeMaintenance` CR is deleted, the node that is referenced in the CR is made available for new workloads.
[NOTE]
====
Using a `NodeMaintenance` CR for node maintenance tasks achieves the same results as the `oc adm cordon` and `oc adm drain` commands using standard {product-title} custom resource processing.
====


@@ -1,79 +0,0 @@
// Module included in the following assemblies:
//
// * virt/nodes/virt-about-node-maintenance.adoc
:_content-type: CONCEPT
[id="virt-about-runstrategies-vms_{context}"]
= About run strategies for virtual machines
Run strategies for virtual machines (VMs) determine how virtual machine instances (VMIs) behave under certain conditions.
You configure a run strategy by assigning a value to the `runStrategy` key in the `VirtualMachine` manifest as in the following example:
.Example run strategy
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
runStrategy: Always
template:
# ...
----
[IMPORTANT]
====
The `runStrategy` and the `running` keys are mutually exclusive. Only one of them can be used.
====
The `runStrategy` key gives you more flexibility because it has four values, unlike the `running` key, which has a Boolean value.
.`runStrategy` key values
`Always`::
The VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason. This is the same behavior as `running: true`.
`RerunOnFailure`::
The VMI is re-created if the previous instance fails. The instance is not re-created if the virtual machine stops successfully, such as when it is shut down.
`Manual`::
You control the VMI state manually with the `start`, `stop`, and `restart` virtctl client commands.
`Halted`::
No VMI is present when a virtual machine is created. This is the same behavior as `running: false`.
Different combinations of the `start`, `stop` and `restart` virtctl commands affect the run strategy.
The following table describes a VM's transition from different states. The first column shows the VM's initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run.
.Run strategy before and after `virtctl` commands
[options="header"]
|===
|Initial run strategy |Start |Stop |Restart
|Always
|-
|Halted
|Always
|RerunOnFailure
|-
|Halted
|RerunOnFailure
|Manual
|Manual
|Manual
|Manual
|Halted
|Always
|-
|-
|===
[NOTE]
====
If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with `runStrategy: Always` or `runStrategy: RerunOnFailure` are rescheduled on a new node.
====


@@ -1,12 +1,12 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-cancel-vmi-migration.adoc
// * virt/live_migration/virt-initiating-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-cancelling-vm-migration-cli_{context}"]
= Cancelling live migration of a virtual machine instance in the CLI
[id="virt-canceling-vm-migration-cli_{context}"]
= Canceling live migration by using the command line
Cancel the live migration of a virtual machine instance by deleting the
Cancel the live migration of a virtual machine by deleting the
`VirtualMachineInstanceMigration` object associated with the migration.
.Procedure


@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-initiating-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-canceling-vm-migration-web_{context}"]
= Canceling live migration by using the web console
You can cancel the live migration of a virtual machine (VM) by using the {product-title} web console.
.Procedure
. Navigate to *Virtualization* -> *VirtualMachines* in the web console.
. Select *Cancel Migration* on the Options menu {kebab} beside a VM.


@@ -1,16 +0,0 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-cancel-vmi-migration.adoc
:_content-type: PROCEDURE
[id="virt-cancelling-vm-migration-web_{context}"]
= Cancelling live migration of a virtual machine instance in the web console
You can cancel the live migration of a virtual machine instance in the web console.
.Procedure
. In the {product-title} console, click *Virtualization* -> *VirtualMachines* from the side menu.
. Click the Options menu {kebab} beside a virtual machine and select *Cancel Migration*.


@@ -1,45 +1,36 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-configuring-live-migration-policies.adoc
// * virt/live_migration/virt-configuring-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-configuring-a-live-migration-policy_{context}"]
= Configuring a live migration policy from the command line
= Creating a live migration policy by using the command line
Use the `MigrationPolicy` custom resource definition (CRD) to define migration policies for one or more groups of selected virtual machine instances (VMIs).
You can create a live migration policy by using the command line. A live migration policy is applied to selected virtual machines (VMs) by using any combination of labels:
You can specify groups of VMIs by using any combination of the following:
* VM labels such as `size`, `os`, or `gpu`
* Project labels such as `priority`, `bandwidth`, or `hpc-workload`
* Virtual machine instance labels such as `size`, `os`, `gpu`, and other VMI labels.
* Namespace labels such as `priority`, `bandwidth`, `hpc-workload`, and other namespace labels.
For the policy to apply to a specific group of VMIs, all labels on the group of VMIs must match the labels in the policy.
For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy.
[NOTE]
====
If multiple live migration policies apply to a VMI, the policy with the highest number of matching labels takes precedence. If multiple policies meet this criterion, the policies are sorted by lexicographic order of the matching label keys, and the first one in that order takes precedence.
If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence.
If multiple policies meet this criterion, the policies are sorted in alphabetical order of the matching label keys, and the first one in that order takes precedence.
====
.Procedure
. Create a `MigrationPolicy` CRD for your specified group of VMIs. The following example YAML configures a group with the labels `hpc-workloads:true`, `xyz-workloads-type: ""`, `workload-type: db`, and `operating-system: ""`:
. Create a `MigrationPolicy` object as in the following example:
+
[source,yaml]
----
apiVersion: migrations.kubevirt.io/v1beta1
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
name: my-awesome-policy
name: <migration_policy>
spec:
# Migration Configuration
allowAutoConverge: true
bandwidthPerMigration: 217Ki
completionTimeoutPerGiB: 23
allowPostCopy: false
# Matching to VMIs
selectors:
namespaceSelector: <1>
hpc-workloads: "True"
@@ -48,5 +39,12 @@ spec:
workload-type: "db"
operating-system: ""
----
<1> Use `namespaceSelector` to define a group of VMIs by using namespace labels.
<2> Use `virtualMachineInstanceSelector` to define a group of VMIs by using VMI labels.
<1> Specify project labels.
<2> Specify VM labels.
. Create the migration policy by running the following command:
+
[source,terminal]
----
$ oc create -f <migration_policy>.yaml
----
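For the policy to select a VM, every label in the policy's VM label selector must be present on the VM. The following minimal sketch, which uses a hypothetical VM name, shows where the matching labels from the preceding example are set in a `VirtualMachine` manifest:
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name> # hypothetical name for illustration
spec:
  template:
    metadata:
      labels:
        workload-type: "db" # must match the VM labels defined in the policy
        operating-system: ""
    spec:
      # ...
----
Labels that you set under `spec.template.metadata.labels` are applied to the virtual machine instance, which is what the policy selector matches.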


@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * virt/nodes/virt-node-maintenance.adoc
:_content-type: PROCEDURE
[id="virt-configuring-cluster-eviction-strategy-cli_{context}"]
= Configuring a cluster eviction strategy by using the command line
You can configure an eviction strategy for a cluster by using the command line.
.Procedure
. Edit the `hyperconverged` resource by running the following command:
+
[source,terminal,subs="attributes+"]
----
$ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace}
----
. Set the cluster eviction strategy as shown in the following example:
+
.Example cluster eviction strategy
[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
name: kubevirt-hyperconverged
spec:
evictionStrategy: LiveMigrate
# ...
----


@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-live-migration-limits.adoc
// * virt/live_migration/virt-configuring-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-configuring-live-migration-limits_{context}"]
@@ -12,7 +12,7 @@ Configure live migration limits and timeouts for the cluster by updating the `Hy
.Procedure
* Edit the `HyperConverged` CR and add the necessary live migration parameters.
* Edit the `HyperConverged` CR and add the necessary live migration parameters:
+
[source,terminal,subs="attributes+"]
----
@@ -28,15 +28,19 @@ metadata:
name: kubevirt-hyperconverged
namespace: {CNVNamespace}
spec:
liveMigrationConfig: <1>
bandwidthPerMigration: 64Mi
completionTimeoutPerGiB: 800
parallelMigrationsPerCluster: 5
parallelOutboundMigrationsPerNode: 2
progressTimeout: 150
liveMigrationConfig:
bandwidthPerMigration: 64Mi <1>
completionTimeoutPerGiB: 800 <2>
parallelMigrationsPerCluster: 5 <3>
parallelOutboundMigrationsPerNode: 2 <4>
progressTimeout: 150 <5>
----
<1> In this example, the `spec.liveMigrationConfig` array contains the default values for each field.
+
<1> Bandwidth limit of each migration, in MiB/s or GiB/s. Default: `0`, which is unlimited.
<2> The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6GiB memory times out if it has not completed migration in 4800 seconds. If the `Migration Method` is `BlockMigration`, the size of the migrating disks is included in the calculation.
<3> Number of migrations running in parallel in the cluster. Default: `5`.
<4> Maximum number of outbound migrations per node. Default: `2`.
<5> The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: `150`.
[NOTE]
====
You can restore the default value for any `spec.liveMigrationConfig` field by deleting that key/value pair and saving the file. For example, delete `progressTimeout: <value>` to restore the default `progressTimeout: 150`.


@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * virt/nodes/virt-node-maintenance.adoc
:_content-type: PROCEDURE
[id="virt-configuring-runstrategy-vm_{context}"]
= Configuring a VM run strategy by using the command line
You can configure a run strategy for a virtual machine (VM) by using the command line.
[IMPORTANT]
====
The `spec.runStrategy` and `spec.running` keys are mutually exclusive. A VM configuration that contains values for both keys is invalid.
====
.Procedure
* Edit the `VirtualMachine` resource by running the following command:
+
[source,terminal]
----
$ oc edit vm <vm_name> -n <namespace>
----
+
.Example run strategy
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
runStrategy: Always
# ...
----


@@ -0,0 +1,47 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-configuring-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-configuring-vm-eviction-strategy-cli_{context}"]
= Configuring a VM eviction strategy by using the command line
You can configure an eviction strategy for a virtual machine (VM) by using the command line.
[IMPORTANT]
====
The default eviction strategy is `LiveMigrate`. A non-migratable VM with a `LiveMigrate` eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a `Pending` or `Scheduling` state unless you shut down the VM manually.
You must set the eviction strategy of non-migratable VMs to `LiveMigrateIfPossible`, which does not block an upgrade, or to `None`, for VMs that should not be migrated.
====
.Procedure
. Edit the `VirtualMachine` resource by running the following command:
+
[source,terminal]
----
$ oc edit vm <vm_name> -n <namespace>
----
+
.Example eviction strategy
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: <vm_name>
spec:
template:
spec:
evictionStrategy: LiveMigrateIfPossible <1>
# ...
----
<1> Specify the eviction strategy. The default value is `LiveMigrate`.
. Restart the VM to apply the changes:
+
[source,terminal]
----
$ virtctl restart <vm_name> -n <namespace>
----


@@ -1,45 +0,0 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-configuring-vmi-eviction-strategy.adoc
:_content-type: PROCEDURE
[id="virt-configuring-vm-live-migration-cli_{context}"]
= Configuring custom virtual machines with the LiveMigration eviction strategy
You only need to configure the `LiveMigration` eviction strategy on custom
virtual machines. Common templates have this eviction strategy
configured by default.
.Procedure
. Add the `evictionStrategy: LiveMigrate` option to the `spec.template.spec` section in the
virtual machine configuration file. This example uses `oc edit` to update
the relevant snippet of the `VirtualMachine` configuration file:
+
[source,terminal]
----
$ oc edit vm <custom-vm> -n <my-namespace>
----
+
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: custom-vm
spec:
template:
spec:
evictionStrategy: LiveMigrate
# ...
----
. Restart the virtual machine for the update to take effect:
+
[source,terminal]
----
$ virtctl restart <custom-vm> -n <my-namespace>
----


@@ -1,33 +1,63 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-migrate-vmi.adoc
// * virt/live_migration/virt-initiating-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-initiating-vm-migration-cli_{context}"]
= Initiating live migration of a virtual machine instance in the CLI
= Initiating live migration by using the command line
Initiate a live migration of a running virtual machine instance by creating a `VirtualMachineInstanceMigration` object in the cluster and referencing the name of the virtual machine instance.
You can initiate the live migration of a running virtual machine (VM) by using the command line to create a `VirtualMachineInstanceMigration` object for the VM.
.Procedure
. Create a `VirtualMachineInstanceMigration` configuration file for the virtual machine instance to migrate. For example, `vmi-migrate.yaml`:
. Create a `VirtualMachineInstanceMigration` manifest for the VM that you want to migrate:
+
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
name: migration-job
name: <migration_name>
spec:
vmiName: vmi-fedora
vmiName: <vm_name>
----
. Create the object in the cluster by running the following command:
. Create the object by running the following command:
+
[source,terminal]
----
$ oc create -f vmi-migrate.yaml
$ oc create -f <migration_name>.yaml
----
+
The `VirtualMachineInstanceMigration` object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted.
The `VirtualMachineInstanceMigration` object triggers a live migration of the virtual machine instance.
This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted.
.Verification
* Obtain the VM status by running the following command:
+
[source,terminal]
----
$ oc describe vmi <vm_name>
----
+
.Example output
[source,yaml]
----
# ...
Status:
Conditions:
Last Probe Time: <nil>
Last Transition Time: <nil>
Status: True
Type: LiveMigratable
Migration Method: LiveMigration
Migration State:
Completed: true
End Timestamp: 2018-12-24T06:19:42Z
Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1
Source Node: node2.example.com
Start Timestamp: 2018-12-24T06:19:35Z
Target Node: node1.example.com
Target Node Address: 10.9.0.18:43891
Target Node Domain Detected: true
----


@@ -1,25 +1,25 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-migrate-vmi.adoc
// * virt/live_migration/virt-initiating-live-migration.adoc
:_content-type: PROCEDURE
[id="virt-initiating-vm-migration-web_{context}"]
= Initiating live migration of a virtual machine instance in the web console
= Initiating live migration by using the web console
Migrate a running virtual machine instance to a different node in the cluster.
You can live migrate a running virtual machine (VM) to a different node in the cluster by using the {product-title} web console.
[NOTE]
====
The *Migrate* action is visible to all users but only admin users can initiate a virtual machine migration.
The *Migrate* action is visible to all users but only cluster administrators can initiate a live migration.
====
.Prerequisites
* The VM must be migratable.
* If the VM is configured with a host model CPU, the cluster must have an available node that supports the CPU model.
.Procedure
. In the {product-title} console, click *Virtualization* -> *VirtualMachines* from the side menu.
. You can initiate the migration from this page, which makes it easier to perform actions on multiple virtual machines on the same page, or from the *VirtualMachine details* page where you can view comprehensive details of the selected virtual machine:
* Click the Options menu {kebab} next to the virtual machine and select *Migrate*.
* Click the virtual machine name to open the *VirtualMachine details* page and click *Actions* -> *Migrate*.
. Click *Migrate* to migrate the virtual machine to another node.
. Navigate to *Virtualization* -> *VirtualMachines* in the web console.
. Select *Migrate* from the Options menu {kebab} beside a VM.
. Click *Migrate*.


@@ -1,42 +0,0 @@
// Module included in the following assemblies:
//
// * virt/live_migration/virt-live-migration-limits.adoc
[id="virt-live-migration-limits-ref_{context}"]
= Cluster-wide live migration limits and timeouts
[caption=]
.Migration parameters
[cols="2,3,1"]
|===
|Parameter |Description |Default
|`parallelMigrationsPerCluster`
|Number of migrations running in parallel in the cluster.
|5
|`parallelOutboundMigrationsPerNode`
|Maximum number of outbound migrations per node.
|2
|`bandwidthPerMigration`
|Bandwidth limit of each migration, in MiB/s.
|0 ^[1]^
|`completionTimeoutPerGiB`
|The migration is canceled if it has not completed in this time, in seconds
per GiB of memory. For example, a virtual machine instance with 6GiB memory times out if it has
not completed migration in 4800 seconds. If the `Migration Method` is
`BlockMigration`, the size of the migrating disks is included in the calculation.
|800
|`progressTimeout`
|The migration is canceled if memory copy fails to make progress in this
time, in seconds.
|150
|===
[.small]
--
1. The default value of `0` is unlimited.
--


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// virt/nodes/virt-about-node-maintenance.adoc
// virt/nodes/virt-node-maintenance.adoc
[id="virt-maintaining-bare-metal-nodes_{context}"]
= Maintaining bare metal nodes


@@ -1,40 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-monitor-vmi-migration.adoc
:_content-type: PROCEDURE
[id="virt-monitoring-vm-migration-cli_{context}"]
= Monitoring live migration of a virtual machine instance in the CLI
The status of the virtual machine migration is stored in the `Status` component of the `VirtualMachineInstance` configuration.
.Procedure
* Use the `oc describe` command on the migrating virtual machine instance:
+
[source,terminal]
----
$ oc describe vmi vmi-fedora
----
+
.Example output
[source,yaml]
----
# ...
Status:
Conditions:
Last Probe Time: <nil>
Last Transition Time: <nil>
Status: True
Type: LiveMigratable
Migration Method: LiveMigration
Migration State:
Completed: true
End Timestamp: 2018-12-24T06:19:42Z
Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1
Source Node: node2.example.com
Start Timestamp: 2018-12-24T06:19:35Z
Target Node: node1.example.com
Target Node Address: 10.9.0.18:43891
Target Node Domain Detected: true
----


@@ -0,0 +1,57 @@
// Module included in the following assemblies:
//
// * virt/nodes/virt-node-maintenance.adoc
:_content-type: REFERENCE
[id="virt-runstrategies-vms_{context}"]
= Run strategies
The `spec.runStrategy` key has four possible values:
`Always`::
The virtual machine instance (VMI) is always present when a virtual machine (VM) is created. If the original VMI stops for any reason, a new VMI is created on another node. This is the same behavior as `running: true`.
`RerunOnFailure`::
The VMI is re-created on another node if the previous instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down.
`Manual`::
You control the VMI state manually with the `start`, `stop`, and `restart` virtctl client commands. The VM is not automatically restarted.
`Halted`::
No VMI is present when a VM is created. This is the same behavior as `running: false`.
Different combinations of the `virtctl start`, `stop`, and `restart` commands affect the run strategy.
The following table describes a VM's transition between states. The first column shows the VM's initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run.
.Run strategy before and after `virtctl` commands
[options="header"]
|===
|Initial run strategy |Start |Stop |Restart
|Always
|-
|Halted
|Always
|RerunOnFailure
|-
|Halted
|RerunOnFailure
|Manual
|Manual
|Manual
|Manual
|Halted
|Always
|-
|-
|===
[NOTE]
====
If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with `runStrategy: Always` or `runStrategy: RerunOnFailure` are rescheduled on a new node.
====


@@ -91,6 +91,9 @@ You use `virtctl` virtual machine (VM) management commands to manage and migrate
|`virtctl migrate <vm_name>`
|Migrate a VM.
|`virtctl migrate-cancel <vm_name>`
|Cancel a VM migration.
|`virtctl restart <vm_name>`
|Restart a VM.


@@ -42,6 +42,6 @@ include::modules/virt-sno-differences.adoc[leveloffset=+1]
* xref:../../installing/installing_sno/install-sno-preparing-to-install-sno.adoc#install-sno-about-installing-on-a-single-node_install-sno-preparing[About {sno}]
* link:https://cloud.redhat.com/blog/using-the-openshift-assisted-installer-service-to-deploy-an-openshift-cluster-on-metal-and-vsphere[Assisted installer]
* xref:../../nodes/pods/nodes-pods-priority.adoc#priority-preemption-other_nodes-pods-priority[Pod disruption budgets]
* xref:../../virt/live_migration/virt-live-migration.adoc#virt-live-migration[Live migration]
* xref:../../virt/live_migration/virt-configuring-vmi-eviction-strategy.adoc#virt-configuring-vmi-eviction-strategy[Eviction strategy]
* xref:../../virt/live_migration/virt-about-live-migration.adoc#virt-about-live-migration[About live migration]
* xref:../../virt/nodes/virt-node-maintenance.adoc#eviction-strategies[Eviction strategies]
* link:https://access.redhat.com/articles/6994974[Tuning & Scaling Guide]


@@ -73,6 +73,6 @@ Manage VMs:
VMs are connected to the pod network by default. You must configure a secondary network, such as Linux bridge or SR-IOV, and then add the network to the VM configuration.
====
* xref:../../virt/live_migration/virt-live-migration.adoc#virt-live-migration[Live migrate VMs].
* xref:../../virt/live_migration/virt-about-live-migration.adoc#virt-about-live-migration[Live migrate VMs].
* xref:../../virt/backup_restore/virt-backup-restore-overview.adoc#virt-backup-restore-overview[Back up and restore VMs].
* link:https://access.redhat.com/articles/6994974[Tune and scale your cluster].


@@ -101,8 +101,7 @@ include::modules/virt-about-storage-volumes-for-vm-disks.adoc[leveloffset=+3]
* Shared storage with `ReadWriteMany` (RWX) access mode.
* Sufficient RAM and network bandwidth.
* If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU.
+
[NOTE]
====
You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
@@ -111,9 +110,12 @@ You must ensure that there is enough memory request capacity in the cluster to s
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
----
The default xref:../../virt/live_migration/virt-live-migration-limits.adoc#virt-live-migration-limits[number of migrations that can run in parallel] in the cluster is 5.
The default xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-limits_virt-configuring-live-migration[number of migrations that can run in parallel] in the cluster is 5.
====
* If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU.
* A xref:../../virt/vm_networking/virt-dedicated-network-live-migration.adoc#virt-dedicated-network-live-migration[dedicated Multus network] for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
include::modules/virt-cluster-resource-requirements.adoc[leveloffset=+1]
include::modules/virt-sno-differences.adoc[leveloffset=+1]
@@ -141,7 +143,7 @@ You can configure one of the following high-availability (HA) options for your c
+
[NOTE]
====
In {product-title} clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/nodes/virt-about-node-maintenance.adoc#virt-about-runstrategies-vms_virt-about-node-maintenance[About RunStrategies for virtual machines] for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.
In {product-title} clusters installed using installer-provisioned infrastructure and with a properly configured `MachineHealthCheck` resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See xref:../../virt/nodes/virt-node-maintenance.adoc#run-strategies[Run strategies] for more detailed information about the potential outcomes and how run strategies affect those outcomes.
====
* Automatic high availability for both IPI and non-IPI is available by using the *Node Health Check Operator* on the {product-title} cluster to deploy the `NodeHealthCheck` controller. The controller identifies unhealthy nodes and uses the Self Node Remediation Operator to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the link:https://access.redhat.com/documentation/en-us/workload_availability_for_red_hat_openshift/23.2/html-single/remediation_fencing_and_maintenance/index#about-remediation-fencing-maintenance[Workload Availability for Red Hat OpenShift] documentation.


@@ -0,0 +1,54 @@
:_content-type: ASSEMBLY
[id="virt-about-live-migration"]
= About live migration
include::_attributes/common-attributes.adoc[]
:context: virt-about-live-migration
toc::[]
Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. By default, live migration traffic is encrypted using Transport Layer Security (TLS).
[id="live-migration-requirements_virt-about-live-migration"]
== Live migration requirements
Live migration has the following requirements:
* The cluster must have shared storage with `ReadWriteMany` (RWX) access mode.
* The cluster must have sufficient RAM and network bandwidth.
+
[NOTE]
====
You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation:
----
Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)
----
For example, if a maximum of two nodes can drain in parallel and the highest total VM memory request allocation on any node is 64 GiB, you need approximately 2 × 64 GiB = 128 GiB of spare memory request capacity.
The default number of migrations that can run in parallel in the cluster is 5.
====
* If a VM uses a host model CPU, the nodes must support the CPU.
* xref:../../virt/vm_networking/virt-dedicated-network-live-migration.adoc#virt-dedicated-network-live-migration[Configuring a dedicated Multus network] for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration.
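The shared storage requirement means that the disks of a migratable VM must be backed by persistent volume claims (PVCs) with RWX access. The following minimal sketch, which assumes a storage class that supports RWX volumes and uses hypothetical names, shows such a claim:
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <vm_disk_pvc> # hypothetical name for illustration
spec:
  accessModes:
  - ReadWriteMany # required so that the source and target nodes can both access the disk
  resources:
    requests:
      storage: 30Gi
  storageClassName: <rwx_storage_class> # hypothetical storage class that supports RWX volumes
----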
[id="common-live-migration-tasks_virt-about-live-migration"]
== Common live migration tasks
You can perform the following live migration tasks:
* Configure live migration settings:
** xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-limits_virt-configuring-live-migration[Limits and timeouts]
** xref:../../virt/getting_started/virt-web-console-overview.adoc#overview-settings-cluster_virt-web-console-overview[Maximum number of migrations per node or cluster]
** xref:../../virt/getting_started/virt-web-console-overview.adoc#overview-settings-cluster_virt-web-console-overview[Select a dedicated live migration network from existing networks]
* xref:../../virt/live_migration/virt-initiating-live-migration.adoc#virt-initiating-live-migration[Initiate and cancel live migration]
* xref:../../virt/getting_started/virt-web-console-overview.adoc#overview-migrations_virt-web-console-overview[Monitor the progress of all live migrations]
* xref:../../virt/getting_started/virt-web-console-overview.adoc#virtualmachine-details-metrics_virt-web-console-overview[View VM migration metrics]
[id="additional-resources_virt-about-live-migration"]
== Additional resources
* xref:../../virt/monitoring/virt-prometheus-queries.adoc#virt-live-migration-metrics_virt-prometheus-queries[Prometheus queries for live migration]
* link:https://access.redhat.com/articles/6994974#vm-migration-tuning[VM migration tuning]
* xref:../../virt/nodes/virt-node-maintenance.adoc#run-strategies[VM run strategies]
* xref:../../virt/nodes/virt-node-maintenance.adoc#eviction-strategies[VM and cluster eviction strategies]


@@ -1,16 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-cancel-vmi-migration"]
= Cancelling the live migration of a virtual machine instance
include::_attributes/common-attributes.adoc[]
:context: virt-cancel-vmi-migration
toc::[]
Cancel the live migration so that the virtual machine instance remains
on the original node.
You can cancel a live migration from either the web console or the CLI.
include::modules/virt-cancelling-vm-migration-web.adoc[leveloffset=+1]
include::modules/virt-cancelling-vm-migration-cli.adoc[leveloffset=+1]


@@ -1,16 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-configuring-live-migration-policies"]
= Configuring live migration policies
include::_attributes/common-attributes.adoc[]
:context: virt-live-migration
toc::[]
You can define different migration configurations for specified groups of virtual machine instances (VMIs) by using a live migration policy.
:FeatureName: Live migration policy
include::snippets/technology-preview.adoc[]
To configure a live migration policy by using the web console, see the xref:../../virt/getting_started/virt-web-console-overview.adoc#migrationpolicies-page_virt-web-console-overview[MigrationPolicies page documentation].
include::modules/virt-configuring-a-live-migration-policy.adoc[leveloffset=+1]


@@ -0,0 +1,38 @@
:_content-type: ASSEMBLY
[id="virt-configuring-live-migration"]
= Configuring live migration
include::_attributes/common-attributes.adoc[]
:context: virt-configuring-live-migration
toc::[]
You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster.
You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs).
[id="live-migration-settings"]
== Live migration settings
You can configure the following live migration settings:
* xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-limits_virt-configuring-live-migration[Limits and timeouts]
* xref:../../virt/getting_started/virt-web-console-overview.adoc#overview-settings-cluster_virt-web-console-overview[Maximum number of migrations per node or cluster]
include::modules/virt-configuring-live-migration-limits.adoc[leveloffset=+2]
[id="live-migration-policies"]
== Live migration policies
You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels.
[TIP]
====
You can create live migration policies by using the xref:../../virt/getting_started/virt-web-console-overview.adoc#migrationpolicies-page_virt-web-console-overview[web console].
====
include::modules/virt-configuring-a-live-migration-policy.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources_virt-configuring-live-migration"]
== Additional resources
* xref:../../virt/vm_networking/virt-dedicated-network-live-migration.adoc#virt-dedicated-network-live-migration[Configuring a dedicated Multus network for live migration]


@@ -1,14 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-configuring-vmi-eviction-strategy"]
= Configuring virtual machine eviction strategy
include::_attributes/common-attributes.adoc[]
:context: virt-configuring-vmi-eviction-strategy
toc::[]
The `LiveMigrate` eviction strategy ensures that a virtual machine instance is
not interrupted if the node is placed into maintenance or drained.
Virtual machine instances with this eviction strategy will be live migrated to
another node.
include::modules/virt-configuring-vm-live-migration-cli.adoc[leveloffset=+1]


@@ -0,0 +1,36 @@
:_content-type: ASSEMBLY
[id="virt-initiating-live-migration"]
= Initiating and canceling live migration
include::_attributes/common-attributes.adoc[]
:context: virt-initiating-live-migration
toc::[]
You can initiate the live migration of a virtual machine (VM) to another node by using the xref:../../virt/live_migration/virt-initiating-live-migration.adoc#virt-initiating-vm-migration-web_virt-initiating-live-migration[{product-title} web console] or the xref:../../virt/live_migration/virt-initiating-live-migration.adoc#virt-initiating-vm-migration-cli_virt-initiating-live-migration[command line].
You can cancel a live migration by using the xref:../../virt/live_migration/virt-initiating-live-migration.adoc#virt-canceling-vm-migration-web_virt-initiating-live-migration[web console] or the xref:../../virt/live_migration/virt-initiating-live-migration.adoc#virt-canceling-vm-migration-cli_virt-initiating-live-migration[command line]. The VM remains on its original node.
[TIP]
====
You can also initiate and cancel live migration by using the `virtctl migrate <vm_name>` and `virtctl migrate-cancel <vm_name>` commands.
====
[id="initating-live-migration_initiating-canceling"]
== Initiating live migration
include::modules/virt-initiating-vm-migration-web.adoc[leveloffset=+2]
include::modules/virt-initiating-vm-migration-cli.adoc[leveloffset=+2]
[id="canceling-live-migration_initiating-canceling"]
== Canceling live migration
include::modules/virt-canceling-vm-migration-web.adoc[leveloffset=+2]
include::modules/virt-canceling-vm-migration-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="additional-resources_virt-initiating-live-migration"]
== Additional resources
* xref:../../virt/getting_started/virt-web-console-overview.adoc#overview-migrations_virt-web-console-overview[Monitoring the progress of all live migrations by using the web console]
* xref:../../virt/getting_started/virt-web-console-overview.adoc#virtualmachine-details-metrics_virt-web-console-overview[Viewing VM migration metrics by using the web console]


@@ -1,14 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-live-migration-limits"]
= Live migration limits and timeouts
include::_attributes/common-attributes.adoc[]
:context: virt-live-migration-limits
toc::[]
Apply live migration limits and timeouts so that migration processes do
not overwhelm the cluster. Configure these settings by editing the
`HyperConverged` custom resource (CR).
include::modules/virt-configuring-live-migration-limits.adoc[leveloffset=+1]
include::modules/virt-live-migration-limits-reftable.adoc[leveloffset=+1]


@@ -1,19 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-live-migration"]
= Virtual machine live migration
include::_attributes/common-attributes.adoc[]
:context: virt-live-migration
toc::[]
include::modules/virt-about-live-migration.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_virt-live-migration"]
== Additional resources
* xref:../../virt/vm_networking/virt-dedicated-network-live-migration.adoc#virt-dedicated-network-live-migration[Configuring a dedicated network for live migration]
* xref:../../virt/live_migration/virt-migrate-vmi.adoc#virt-migrate-vmi[Migrating a virtual machine instance to another node]
* xref:../../virt/monitoring/virt-prometheus-queries.adoc#virt-live-migration-metrics_virt-prometheus-queries[Live migration metrics]
* xref:../../virt/live_migration/virt-live-migration-limits.adoc#virt-live-migration-limits[Live migration limiting]
* xref:../../virt/storage/virt-configuring-storage-profile.adoc#virt-configuring-storage-profile[Customizing the storage profile]
* link:https://access.redhat.com/articles/6994974#vm-migration-tuning[VM migration tuning]


@@ -1,33 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-migrate-vmi"]
= Migrating a virtual machine instance to another node
include::_attributes/common-attributes.adoc[]
:context: virt-migrate-vmi
toc::[]
Manually initiate a live migration of a virtual machine instance to another node using either the web console or the CLI.
[NOTE]
====
If a virtual machine uses a host model CPU, you can perform live migration of that virtual machine only between nodes that support its host CPU model.
====
include::modules/virt-initiating-vm-migration-web.adoc[leveloffset=+1]
[id="monitoring-live-migration-by-using-the-web-console_{context}"]
=== Monitoring live migration by using the web console
You can monitor the progress of all live migrations on the xref:../../virt/getting_started/virt-web-console-overview.adoc#overview-migrations_virt-web-console-overview[*Overview* -> *Migrations* tab] in the web console.
You can view the migration metrics of a virtual machine on the xref:../../virt/getting_started/virt-web-console-overview.adoc#virtualmachine-details-metrics_virt-web-console-overview[*VirtualMachine details* -> *Metrics* tab] in the web console.
include::modules/virt-initiating-vm-migration-cli.adoc[leveloffset=+1]
include::modules/virt-monitoring-vm-migration-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="virt-migrate-vmi_additional-resources"]
== Additional resources
* xref:../../virt/live_migration/virt-cancel-vmi-migration.adoc#virt-cancel-vmi-migration[Cancelling the live migration of a virtual machine instance]


@@ -1,18 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-migrating-vm-on-secondary-network"]
= Migrating a virtual machine over a dedicated additional network
include::_attributes/common-attributes.adoc[]
:context: virt-migrating-vm-on-secondary-network
toc::[]
You can configure a dedicated xref:../../virt/vm_networking/virt-connecting-vm-to-linux-bridge.adoc#virt-connecting-vm-to-linux-bridge[Multus network] for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration.
include::modules/virt-configuring-secondary-network-vm-live-migration.adoc[leveloffset=+1]
include::modules/virt-selecting-migration-network-ui.adoc[leveloffset=+1]
[id="additional-resources_virt-migrating-vm-on-secondary-network"]
== Additional resources
* xref:../../virt/live_migration/virt-live-migration-limits.adoc#virt-live-migration-limits[Live migration limits and timeouts]


@@ -1,19 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-about-node-maintenance"]
= About node maintenance
include::_attributes/common-attributes.adoc[]
:context: virt-about-node-maintenance
toc::[]
include::modules/virt-about-node-maintenance.adoc[leveloffset=+1]
include::modules/virt-about-runstrategies-vms.adoc[leveloffset=+1]
include::modules/virt-maintaining-bare-metal-nodes.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_virt-about-node-maintenance"]
== Additional resources
* xref:../../virt/live_migration/virt-live-migration.adoc#virt-live-migration[Virtual machine live migration]
* xref:../../virt/live_migration/virt-configuring-vmi-eviction-strategy.adoc#virt-configuring-vmi-eviction-strategy[Configuring virtual machine eviction strategy]


@@ -9,5 +9,7 @@ toc::[]
You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node.
include::modules/virt-about-node-labeling-obsolete-cpu-models.adoc[leveloffset=+1]
include::modules/virt-about-node-labeling-cpu-features.adoc[leveloffset=+1]
include::modules/virt-configuring-obsolete-cpu-models.adoc[leveloffset=+1]


@@ -0,0 +1,100 @@
:_content-type: ASSEMBLY
[id="virt-node-maintenance"]
= Node maintenance
include::_attributes/common-attributes.adoc[]
:context: virt-node-maintenance
toc::[]
Nodes can be placed into maintenance mode by using the `oc adm` utility or `NodeMaintenance` custom resources (CRs).
[NOTE]
====
The `node-maintenance-operator` (NMO) is no longer shipped with {VirtProductName}. It is deployed as a standalone Operator from the *OperatorHub* in the {product-title} web console or by using the OpenShift CLI (`oc`).
For more information on remediation, fencing, and maintaining nodes, see the link:https://access.redhat.com/documentation/en-us/workload_availability_for_red_hat_openshift/23.2/html-single/remediation_fencing_and_maintenance/index#about-remediation-fencing-maintenance[Workload Availability for Red Hat OpenShift] documentation.
====
[IMPORTANT]
====
Virtual machines (VMs) must have a persistent volume claim (PVC) with a shared `ReadWriteMany` (RWX) access mode to be live migrated.
====
The Node Maintenance Operator watches for new or deleted `NodeMaintenance` CRs. When a new `NodeMaintenance` CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a `NodeMaintenance` CR is deleted, the node that is referenced in the CR is made available for new workloads.
[NOTE]
====
Using a `NodeMaintenance` CR for node maintenance tasks achieves the same results as the `oc adm cordon` and `oc adm drain` commands using standard {product-title} custom resource processing.
====
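For reference, a `NodeMaintenance` CR identifies the node to drain and, optionally, a reason. The following minimal sketch assumes the `nodemaintenance.medik8s.io/v1beta1` API provided by the standalone Node Maintenance Operator and uses hypothetical values:
[source,yaml]
----
apiVersion: nodemaintenance.medik8s.io/v1beta1 # assumes the standalone Node Maintenance Operator API
kind: NodeMaintenance
metadata:
  name: <node_maintenance_cr> # hypothetical name for illustration
spec:
  nodeName: <node_name> # node to cordon and drain
  reason: "Replacing faulty hardware" # optional, hypothetical reason
----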
[id="eviction-strategies"]
== Eviction strategies
Placing a node into maintenance marks the node as unschedulable and drains all the VMs and pods from it.
You can configure eviction strategies for virtual machines (VMs) or for the cluster.
VM eviction strategy::
The VM `LiveMigrate` eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node.
+
You can configure eviction strategies for virtual machines (VMs) by using the xref:../../virt/getting_started/virt-web-console-overview.adoc#virtualmachine-details-scheduling_virt-web-console-overview[web console] or the xref:../../virt/nodes/virt-node-maintenance.adoc#virt-configuring-vm-eviction-strategy-cli_virt-node-maintenance[command line].
+
[IMPORTANT]
====
The default eviction strategy is `LiveMigrate`. A non-migratable VM with a `LiveMigrate` eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a `Pending` or `Scheduling` state unless you shut down the VM manually.
You must set the eviction strategy of non-migratable VMs to `LiveMigrateIfPossible`, which does not block an upgrade, or to `None`, for VMs that should not be migrated.
====
Cluster eviction strategy::
You can configure an eviction strategy for the cluster to prioritize workload continuity or infrastructure upgrade.
:FeatureName: Configuring a cluster eviction strategy
include::snippets/technology-preview.adoc[]
.Cluster eviction strategies
[cols="1,2,1,1",options="header"]
|====
|Eviction strategy |Description |Interrupts workflow |Blocks upgrades
|`LiveMigrate` ^1^ |Prioritizes workload continuity over upgrades. |No |Yes ^2^
|`LiveMigrateIfPossible` |Prioritizes upgrades over workload continuity to ensure that the environment is updated. |Yes |No
|`None` ^3^ |Shuts down VMs with no eviction strategy. |Yes |No
|====
[.small]
--
1. Default eviction strategy for multi-node clusters.
2. If a VM blocks an upgrade, you must shut down the VM manually.
3. Default eviction strategy for {sno}.
--
include::modules/virt-configuring-vm-eviction-strategy-cli.adoc[leveloffset=+2]
include::modules/virt-configuring-cluster-eviction-strategy-cli.adoc[leveloffset=+2]
[id="run-strategies"]
== Run strategies
A virtual machine (VM) configured with `spec.running: true` is immediately restarted. The `spec.runStrategy` key provides greater flexibility for determining how a VM behaves under certain conditions.
[IMPORTANT]
====
The `spec.runStrategy` and `spec.running` keys are mutually exclusive. Only one of them can be used.
A VM configuration with both keys is invalid.
====
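For illustration only, the following sketch shows an invalid configuration because it sets both keys. Remove one of the two keys:
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: <vm_name>
spec:
  running: true # invalid combination: remove this key or the runStrategy key
  runStrategy: Always
  template:
    # ...
----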
include::modules/virt-runstrategies-vms.adoc[leveloffset=+2]
include::modules/virt-configuring-runstrategy-vm.adoc[leveloffset=+2]
include::modules/virt-maintaining-bare-metal-nodes.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_virt-node-maintenance"]
== Additional resources
* xref:../../virt/live_migration/virt-about-live-migration.adoc#virt-about-live-migration[About live migration]


@@ -13,5 +13,4 @@ include::modules/virt-using-skip-node.adoc[leveloffset=+1]
[id="additional-resources_{context}"]
[role="_additional-resources"]
== Additional resources
* xref:../../virt/nodes/virt-managing-node-labeling-obsolete-cpu-models.adoc#virt-managing-node-labeling-obsolete-cpu-models[Managing node labeling for obsolete CPU models]


@@ -13,7 +13,7 @@ If a node fails and xref:../../machine_management/deploying-machine-health-check
If you installed your cluster by using xref:../../installing/installing_bare_metal_ipi/ipi-install-overview.adoc#ipi-install-overview[installer-provisioned infrastructure] and you properly configured machine health checks, the following events occur:
* Failed nodes are automatically recycled.
* Virtual machines with xref:../../virt/nodes/virt-about-node-maintenance.adoc#virt-about-runstrategies-vms_virt-about-node-maintenance[`runStrategy`] set to `Always` or `RerunOnFailure` are automatically scheduled on healthy nodes.
* Virtual machines with xref:../../virt/nodes/virt-node-maintenance.adoc#run-strategies[`runStrategy`] set to `Always` or `RerunOnFailure` are automatically scheduled on healthy nodes.
====
[id="prerequisites_{context}"]


@@ -47,17 +47,10 @@ Configure workload updates to ensure that VMIs update automatically.
[id="additional-resources_upgrading-virt"]
[role="_additional-resources"]
== Additional resources
* xref:../../updating/updating_a_cluster/eus-eus-update.adoc#eus-eus-update[Performing an EUS-to-EUS update]
* xref:../../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[What are Operators?]
* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager concepts and resources]
* xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-csv_olm-understanding-olm[Cluster service versions (CSVs)]
* xref:../../virt/live_migration/virt-live-migration.adoc#virt-live-migration[Virtual machine live migration]
* xref:../../virt/live_migration/virt-configuring-vmi-eviction-strategy.adoc#virt-configuring-vmi-eviction-strategy[Configuring virtual machine eviction strategy]
* xref:../../virt/live_migration/virt-live-migration-limits.adoc#virt-configuring-live-migration-limits_virt-live-migration-limits[Configuring live migration limits and timeouts]
* xref:../../virt/live_migration/virt-about-live-migration.adoc#virt-about-live-migration[About live migration]
* xref:../../virt/nodes/virt-node-maintenance.adoc#eviction-strategies[Configuring eviction strategies]
* xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-limits_virt-configuring-live-migration[Configuring live migration limits and timeouts]


@@ -15,5 +15,4 @@ include::modules/virt-selecting-migration-network-ui.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_virt-migrating-vm-on-secondary-network"]
== Additional resources
* xref:../../virt/live_migration/virt-live-migration-limits.adoc#virt-live-migration-limits[Live migration limits and timeouts]
* xref:../../virt/live_migration/virt-configuring-live-migration.adoc#virt-configuring-live-migration-limits_virt-configuring-live-migration[Configuring live migration limits and timeouts]