
CNV-40007: Added AAQ Operator docs

Shikha Jhala
2024-10-09 14:34:07 -04:00
committed by openshift-cherrypick-robot
parent 28ac014076
commit 8170bd4c43
5 changed files with 174 additions and 0 deletions


@@ -4417,6 +4417,8 @@ Topics:
#Advanced virtual machine configuration
- Name: Working with resource quotas for virtual machines
File: virt-working-with-resource-quotas-for-vms
- Name: Configuring the Application-Aware Quota Operator
File: virt-understanding-aaq-operator
- Name: Specifying nodes for virtual machines
File: virt-specifying-nodes-for-vms
- Name: Activating kernel samepage merging (KSM)


@@ -0,0 +1,79 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-understanding-aaq-operator.adoc
:_mod-docs-content-type: CONCEPT
[id="virt-about-aaq-operator_{context}"]
= About the AAQ Operator
The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native `ResourceQuota` object in the {product-title} platform.
In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native `ResourceQuota` object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for {VirtProductName} workloads.
{VirtProductName} requires significant compute resource allocation to handle virtual machine (VM) live migrations and manage VM infrastructure overhead. When upgrading {VirtProductName}, you must migrate VMs to upgrade the `virt-launcher` pod. However, migrating a VM in the presence of a resource quota can cause the migration, and subsequently the upgrade, to fail.
With AAQ, you can allocate resources for VMs without interfering with cluster-level activities such as upgrades and node maintenance. The AAQ Operator also supports non-compute resources, which eliminates the need to manage the native resource quota and AAQ API objects separately.
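For comparison, a minimal native `ResourceQuota` object that caps aggregate compute consumption in a single namespace might look like the following sketch. The object name, namespace, and values are illustrative only.
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-native-quota        # illustrative name
  namespace: example-namespace      # illustrative namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
----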
[id="aaq-controller-and-crds_{context}"]
== AAQ Operator controller and custom resources
The AAQ Operator introduces two new API objects defined as custom resource definitions (CRDs) for managing alternative quota implementations across multiple namespaces:
* `ApplicationAwareResourceQuota`: Sets aggregate quota restrictions enforced per namespace. The `ApplicationAwareResourceQuota` API is compatible with the native `ResourceQuota` object and shares the same specification and status definitions.
+
.Example manifest
[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareResourceQuota
metadata:
  name: example-resource-quota
spec:
  hard:
    requests.memory: 1Gi
    limits.memory: 1Gi
    requests.cpu/vmi: "1" # <1>
    requests.memory/vmi: 1Gi # <2>
# ...
----
<1> The maximum amount of CPU that is allowed for VM workloads in the default namespace.
<2> The maximum amount of RAM that is allowed for VM workloads in the default namespace.
* `ApplicationAwareClusterResourceQuota`: Mirrors the `ApplicationAwareResourceQuota` object at a cluster scope. It is compatible with the native `ClusterResourceQuota` API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing the `spec.selector.labels` or `spec.selector.annotations` fields.
+
.Example manifest
[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareClusterResourceQuota # <1>
metadata:
  name: example-resource-quota
spec:
  quota:
    hard:
      requests.memory: 1Gi
      limits.memory: 1Gi
      requests.cpu/vmi: "1"
      requests.memory/vmi: 1Gi
  selector:
    annotations: null
    labels:
      matchLabels:
        kubernetes.io/metadata.name: default
# ...
----
<1> You can create an `ApplicationAwareClusterResourceQuota` object only if the `spec.allowApplicationAwareClusterResourceQuota` field in the `HyperConverged` custom resource (CR) is set to `true`.
+
[NOTE]
====
If both `spec.selector.labels` and `spec.selector.annotations` fields are set, only namespaces that match both are selected.
====
The AAQ controller uses a scheduling gate mechanism to evaluate whether enough quota is available to run a workload. If there is, the scheduling gate is removed from the pod, the pod is considered ready for scheduling, and the quota usage status is updated to indicate how much of the quota is used.
If the CPU and memory requests and limits for the workload would exceed the enforced quota, the pod remains in the `SchedulingGated` status until enough quota becomes available. The AAQ controller creates an event of type `Warning` with details on why the quota was exceeded. You can view the event details by using the `oc get events` command.
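For example, the following commands show one way to check whether a pod is being held by a scheduling gate and to review the related warning events. The pod and namespace names are placeholders.
[source,terminal]
----
$ oc get pod <pod_name> -n <namespace> -o jsonpath='{.spec.schedulingGates}{"\n"}{.status.phase}{"\n"}'

$ oc get events -n <namespace> --field-selector type=Warning
----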
[IMPORTANT]
====
Pods that have the `spec.nodeName` field set to a specific node cannot use namespaces that match the `spec.namespaceSelector` labels defined in the `HyperConverged` CR.
====


@@ -0,0 +1,45 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-understanding-aaq-operator.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-configuring-aaq-operator_{context}"]
= Configuring the AAQ Operator by using the CLI
You can configure the AAQ Operator by specifying the fields of the `spec.applicationAwareConfig` object in the `HyperConverged` custom resource (CR).
.Prerequisites
* You have access to the cluster as a user with `cluster-admin` privileges.
* You have installed the OpenShift CLI (`oc`).
.Procedure
* Update the `HyperConverged` CR by running the following command:
+
[source,terminal]
----
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{
  "spec": {
    "applicationAwareConfig": {
      "vmiCalcConfigName": "DedicatedVirtualResources",
      "namespaceSelector": {
        "matchLabels": {
          "app": "my-app"
        }
      },
      "allowApplicationAwareClusterResourceQuota": true
    }
  }
}'
----
+
where:
`vmiCalcConfigName`:: Specifies how resource counting is managed for pods that run virtual machine (VM) workloads. Possible values are:
+
--
* `VmiPodUsage`: Counts compute resources for pods associated with VMs in the same way as native resource quotas and excludes migration-related resources.
* `VirtualResources`: Counts compute resources based on the VM specifications, using the VM RAM size for memory and virtual CPUs for processing.
* `DedicatedVirtualResources` (default): Similar to `VirtualResources`, but separates resource tracking for pods associated with VMs by adding a `/vmi` suffix to CPU and memory resource names. For example, `requests.cpu/vmi` and `requests.memory/vmi`.
--
`namespaceSelector`:: Determines the namespaces for which an AAQ scheduling gate is added to pods when they are created. If a namespace selector is not defined, the AAQ Operator targets namespaces with the `application-aware-quota/enable-gating` label by default.
`allowApplicationAwareClusterResourceQuota`:: If set to `true`, you can create and manage the `ApplicationAwareClusterResourceQuota` object. Setting this attribute to `true` can increase scheduling time.
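.Verification
* Optionally, confirm that the configuration was applied by reading back the `spec.applicationAwareConfig` stanza of the `HyperConverged` CR. The following command is one possible check:
+
[source,terminal]
----
$ oc get hco kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.applicationAwareConfig}{"\n"}'
----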


@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-understanding-aaq-operator.adoc
:_mod-docs-content-type: PROCEDURE
[id="virt-enabling-aaq-operator_{context}"]
= Enabling the AAQ Operator
To deploy the AAQ Operator, set the `enableApplicationAwareQuota` feature gate to `true` in the `HyperConverged` custom resource (CR).
.Prerequisites
* You have access to the cluster as a user with `cluster-admin` privileges.
* You have installed the OpenShift CLI (`oc`).
.Procedure
* Set the `enableApplicationAwareQuota` feature gate to `true` in the `HyperConverged` CR by running the following command:
+
[source,terminal]
----
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv \
  --type json -p '[{"op": "add", "path": "/spec/featureGates/enableApplicationAwareQuota", "value": true}]'
----
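.Verification
* Optionally, verify that the feature gate is set by reading it back from the `HyperConverged` CR, for example:
+
[source,terminal]
----
$ oc get hco kubevirt-hyperconverged -n openshift-cnv \
  -o jsonpath='{.spec.featureGates.enableApplicationAwareQuota}{"\n"}'
----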


@@ -0,0 +1,26 @@
:_mod-docs-content-type: ASSEMBLY
[id="virt-understanding-aaq-operator"]
= Configuring the Application-Aware Quota (AAQ) Operator
include::_attributes/common-attributes.adoc[]
:context: virt-understanding-aaq-operator
toc::[]
You can use the Application-Aware Quota (AAQ) Operator to customize and manage resource quotas for individual components in an {product-title} cluster.
include::modules/virt-about-aaq-operator.adoc[leveloffset=+1]
include::modules/virt-enabling-aaq-operator.adoc[leveloffset=+1]
include::modules/virt-configuring-aaq-operator.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* xref:../../../applications/quotas/quotas-setting-per-project.adoc#quotas-setting-per-project[Resource quotas per project]
* xref:../../../applications/quotas/quotas-setting-across-multiple-projects.adoc#quotas-setting-across-multiple-projects[Resource quotas across multiple projects]
* xref:../../../rest_api/schedule_and_quota_apis/resourcequota-v1.adoc#resourcequota-v1[`ResourceQuota` API reference]
* xref:../../../rest_api/schedule_and_quota_apis/clusterresourcequota-quota-openshift-io-v1.adoc#clusterresourcequota-quota-openshift-io-v1[`ClusterResourceQuota` API reference]
* xref:../../../rest_api/workloads_apis/pod-v1.adoc#spec-schedulinggates[Pod scheduling gates specification]
* xref:../../../nodes/clusters/nodes-containers-events.adoc#nodes-containers-events[Viewing system event information in an {product-title} cluster]