mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

OSDOCS#16228: Migrating Kueue docs to within OCP docs

This commit is contained in:
Andrea Hoffer
2025-09-16 16:01:30 -04:00
committed by openshift-cherrypick-robot
parent 99ea70865d
commit 340e2bc363
46 changed files with 1320 additions and 8 deletions


@@ -79,7 +79,10 @@ endif::[]
:descheduler-operator: Kube Descheduler Operator
:cli-manager: CLI Manager Operator
:lws-operator: Leader Worker Set Operator
:kueue-prod-name: Red{nbsp}Hat build of Kueue
//Kueue
:kueue-name: Red{nbsp}Hat build of Kueue
:kueue-op: Red{nbsp}Hat build of Kueue Operator
:ms: Red{nbsp}Hat build of MicroShift (MicroShift)
// Backup and restore
:launch: image:app-launcher.png[title="Application Launcher"]
:mtc-first: Migration Toolkit for Containers (MTC)


@@ -3431,6 +3431,34 @@ Distros: openshift-enterprise
Topics:
- Name: Overview of AI workloads on OpenShift Container Platform
File: index
- Name: Red Hat build of Kueue
Dir: kueue
Distros: openshift-enterprise
Topics:
- Name: Introduction to Red Hat build of Kueue
File: about-kueue
- Name: Release notes
File: release-notes
- Name: Installing Red Hat build of Kueue
File: install-kueue
- Name: Installing Red Hat build of Kueue in a disconnected environment
File: install-disconnected
- Name: Configuring role-based permissions
File: rbac-permissions
- Name: Configuring quotas
File: configuring-quotas
- Name: Managing jobs and workloads
File: managing-workloads
- Name: Using cohorts
File: using-cohorts
- Name: Configuring fair sharing
File: configuring-fairsharing
- Name: Gang scheduling
File: gangscheduling
- Name: Running jobs with quota limits
File: running-kueue-jobs
- Name: Getting support
File: getting-support
- Name: Leader Worker Set Operator
Dir: leader_worker_set
Distros: openshift-enterprise


@@ -15,6 +15,7 @@ include::modules/ai-operators.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../ai_workloads/kueue/about-kueue.adoc#about-kueue[Introduction to {kueue-name}]
* xref:../ai_workloads/leader_worker_set/index.adoc#lws-about[{lws-operator} overview]
// Exclude this for now until we can get it reviewed by the RHOAI team


@@ -0,0 +1 @@
../../_attributes/


@@ -0,0 +1,52 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="about-kueue"]
= Introduction to {kueue-name}
:context: about-kueue
toc::[]
{kueue-name} is a Kubernetes-native system that manages access to resources for jobs.
{kueue-name} determines when a job should wait, when it is admitted to start (by creating pods for it), and when it should be _preempted_ (meaning that active pods for that job are deleted).
[NOTE]
====
In the context of {kueue-name}, a job can be defined as a one-time or on-demand task that runs to completion.
====
{kueue-name} is based on the link:https://kueue.sigs.k8s.io/docs/[Kueue] open source project.
{kueue-name} is compatible with environments that use heterogeneous, elastic resources. This means that the environment has many different resource types, and those resources are capable of dynamic scaling.
{kueue-name} does not replace any existing components in a Kubernetes cluster, but instead integrates with the existing Kubernetes API server, scheduler, and cluster autoscaler components.
{kueue-name} supports all-or-nothing semantics. This means that either an entire job with all of its components is admitted to the cluster, or the entire job is rejected if it does not fit on the cluster.
// Personas
[id="about-kueue-personas"]
== Personas
Different personas exist in a {kueue-name} workflow.
Batch administrators:: Batch administrators manage the cluster infrastructure and establish quotas and queues.
Batch users:: Batch users run jobs on the cluster. Examples of batch users might be researchers, AI/ML engineers, or data scientists.
Serving users:: Serving users run jobs on the cluster, for example, to expose a trained AI/ML model for inference.
Platform developers:: Platform developers integrate {kueue-name} with other software. They might also contribute to the Kueue open source project.
[id="about-kueue-workflow"]
== Workflow overview
The {kueue-name} workflow can be described at a high level as follows:
. Batch administrators create and configure `ResourceFlavor`, `LocalQueue`, and `ClusterQueue` resources.
. User personas create jobs on the cluster.
. The Kubernetes API server validates and accepts job data.
. {kueue-name} admits jobs based on configured options, such as order or quota. It injects affinity into the job by using resource flavors, and creates a `Workload` object that corresponds to each job.
. The applicable controller for the job type creates pods.
. The Kubernetes scheduler assigns pods to a node in the cluster.
. The Kubernetes cluster autoscaler provisions more nodes as required.
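For illustration, the objects that batch administrators create in the first step of this workflow might look similar to the following sketch. All names shown here (`default-flavor`, `cluster-queue`, `team-queue`, `team-namespace`) are placeholder values, and the quota figures are arbitrary:

[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor # placeholder name
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue # placeholder name
spec:
  namespaceSelector: {} # all namespaces can use this queue
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 9
      - name: "memory"
        nominalQuota: 36Gi
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: team-namespace # placeholder namespace
  name: team-queue # placeholder name
spec:
  clusterQueue: cluster-queue # the cluster queue to submit workloads to
----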
////
TODO:Add docs explaining different job / workload types
These can be added as we add stories / docs for different use cases
////


@@ -0,0 +1,18 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="configuring-fairsharing"]
= Configuring fair sharing
:context: configuring-fairsharing
toc::[]
Fair sharing is a preemption strategy that is used to achieve an equal or weighted share of borrowable resources between the tenants of a cohort. Borrowable resources are the unused nominal quota of all the cluster queues in a cohort.
You can configure fair sharing by setting the `preemptionPolicy` value in the `Kueue` custom resource (CR) to `FairSharing`.
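A `Kueue` custom resource with fair sharing enabled might look similar to the following minimal sketch. The placement of the `preemption` stanza under `spec.config` is an assumption based on the other `spec.config` examples in this documentation; see "Creating a `Kueue` custom resource" for the authoritative spec:

[source,yaml]
----
apiVersion: kueue.openshift.io/v1
kind: Kueue
metadata:
  name: cluster
  namespace: openshift-kueue-operator
spec:
  config:
    preemption: # field placement is an assumption; verify against the Kueue CR spec
      preemptionPolicy: FairSharing
# ...
----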
include::modules/kueue-clusterqueue-share-value.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* xref:../../ai_workloads/kueue/install-kueue.adoc#create-kueue-cr_install-kueue[Creating a `Kueue` custom resource]


@@ -0,0 +1,38 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="configuring-quotas"]
= Configuring quotas
:context: configuring-quotas
toc::[]
As an administrator, you can use {kueue-name} to configure quotas to optimize resource allocation and system throughput for user workloads.
You can configure quotas for compute resources such as CPU, memory, pods, and GPU.
You can configure quotas in {kueue-name} by completing the following steps:
. Configure a cluster queue.
. Configure a resource flavor.
. Configure a local queue.
Users can then submit their workloads to the local queue.
include::modules/kueue-configuring-clusterqueues.adoc[leveloffset=+1]
[role="_next-steps"]
[id="clusterqueues-next-steps_{context}"]
.Next steps
The cluster queue is not ready for use until a xref:../../ai_workloads/kueue/configuring-quotas.adoc#configuring-resourceflavors_configuring-quotas[`ResourceFlavor` object] has also been configured.
include::modules/kueue-configuring-resourceflavors.adoc[leveloffset=+1]
include::modules/kueue-configuring-localqueues.adoc[leveloffset=+1]
include::modules/kueue-configuring-localqueue-defaults.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="clusterqueues-additional-resources_{context}"]
== Additional resources
* xref:../../ai_workloads/kueue/rbac-permissions.adoc#rbac-permissions[RBAC permissions]
* link:https://kueue.sigs.k8s.io/docs/concepts/cluster_queue/[Kubernetes documentation about cluster queues]


@@ -0,0 +1,27 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="gangscheduling"]
= Gang scheduling
:context: gangscheduling
toc::[]
Gang scheduling ensures that a group or _gang_ of related jobs only start when all required resources are available. {kueue-name} enables gang scheduling by suspending jobs until the {product-title} cluster can guarantee the capacity to start and execute all of the related jobs in the gang together. This is also known as _all-or-nothing_ scheduling.
Gang scheduling is important if you are working with expensive, limited resources, such as GPUs. Gang scheduling can prevent jobs from claiming but not using GPUs, which can improve GPU utilization and can reduce running costs. Gang scheduling can also help to prevent issues like resource segmentation and deadlocking.
include::modules/kueue-configuring-gangscheduling.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_{context}"]
== Additional resources
* xref:../../ai_workloads/kueue/install-kueue.adoc#create-kueue-cr_install-kueue[Creating a `Kueue` custom resource]
////
// use case - deep learning
One classic example is in deep learning workloads. Deep learning frameworks (Tensorflow, PyTorch etc) require all the workers to be running during the training process.
In this scenario, when you deploy training workloads, all the components should be scheduled and deployed to ensure the training works as expected.
Gang Scheduling is a critical feature for Deep Learning workloads to enable all-or-nothing scheduling capability, as most DL frameworks requires all workers to be running to start training process. Gang Scheduling avoids resource inefficiency and scheduling deadlock sometimes.
////


@@ -0,0 +1,27 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="getting-support"]
= Getting support
:context: getting-support
toc::[]
If you experience difficulty with a procedure described in this documentation, or with {kueue-name} in general, visit the link:https://access.redhat.com[Red{nbsp}Hat Customer Portal].
From the Customer Portal, you can:
* Search or browse through the Red{nbsp}Hat Knowledgebase of articles and solutions relating to Red{nbsp}Hat products.
* Submit a support case to Red{nbsp}Hat Support.
* Access other product documentation.
[id="getting-support-rh-kb"]
== About the Red Hat Knowledgebase
The link:https://access.redhat.com/knowledgebase[Red{nbsp}Hat Knowledgebase] provides rich content aimed at helping you make the most of Red{nbsp}Hat's products and technologies. The Red{nbsp}Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red{nbsp}Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps.
include::modules/kueue-gathering-cluster-data.adoc[leveloffset=+1]
[id="getting-support-additional-resources"]
[role="_additional-resources"]
== Additional resources
* xref:../../support/index.adoc#support-overview[Support overview]

ai_workloads/kueue/images Symbolic link

@@ -0,0 +1 @@
../../images/


@@ -0,0 +1,29 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="install-disconnected"]
= Installing {kueue-name} in a disconnected environment
:context: install-disconnected
toc::[]
Before you can install {kueue-name} on a disconnected {product-title} cluster, you must enable {olm-first} in disconnected environments by completing the following steps:
* Disable the default remote OperatorHub sources for OLM.
* Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry.
* Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources.
After enabling OLM in a disconnected environment, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released.
For full documentation on completing these steps, see the {product-title} documentation on xref:../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments].
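For example, the first step, disabling the default remote OperatorHub sources, can be completed with a single patch against the `OperatorHub` cluster object; this sketch assumes you are logged in with cluster-admin permissions:

[source,terminal]
----
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
----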
include::modules/kueue-compatible-environments.adoc[leveloffset=+1]
include::modules/kueue-install-kueue-operator.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../security/cert_manager_operator/cert-manager-operator-install.adoc#installing-the-cert-manager-operator-for-red-hat-openshift[Installing the {cert-manager-operator}]
include::modules/kueue-create-kueue-cr.adoc[leveloffset=+1]
include::modules/kueue-label-namespaces.adoc[leveloffset=+1]


@@ -0,0 +1,22 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="install-kueue"]
= Installing {kueue-name}
:context: install-kueue
toc::[]
You can install {kueue-name} by using the {kueue-op} in OperatorHub.
include::modules/kueue-compatible-environments.adoc[leveloffset=+1]
include::modules/kueue-install-kueue-operator.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../security/cert_manager_operator/cert-manager-operator-install.adoc#installing-the-cert-manager-operator-for-red-hat-openshift[Installing the {cert-manager-operator}]
include::modules/kueue-create-kueue-cr.adoc[leveloffset=+1]
include::modules/kueue-label-namespaces.adoc[leveloffset=+1]


@@ -0,0 +1,13 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="managing-workloads"]
= Managing jobs and workloads
:context: managing-workloads
toc::[]
{kueue-name} does not directly manipulate jobs that are created by users. Instead, Kueue manages `Workload` objects that represent the resource requirements of a job. {kueue-name} automatically creates a workload for each job, and syncs any decisions and statuses between the two objects.
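For example, after users submit jobs, you can inspect the `Workload` objects that {kueue-name} created for them by using the CLI, where `<namespace>` is a placeholder for your project:

[source,terminal]
----
$ oc get workloads -n <namespace>
----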
include::modules/kueue-label-namespaces.adoc[leveloffset=+1]
include::modules/kueue-configuring-labelpolicy.adoc[leveloffset=+1]

ai_workloads/kueue/modules Symbolic link

@@ -0,0 +1 @@
../../modules/


@@ -0,0 +1,26 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="rbac-permissions"]
= Configuring role-based permissions
:context: rbac-permissions
toc::[]
The following procedures provide information about how you can configure role-based access control (RBAC) for your {kueue-name} deployment. These RBAC permissions determine which types of users can create which types of {kueue-name} objects.
[id="authentication-clusterroles"]
== Cluster roles
The {kueue-name} Operator deploys the `kueue-batch-admin-role` and `kueue-batch-user-role` cluster roles by default.
kueue-batch-admin-role:: This cluster role includes the permissions to manage cluster queues, local queues, workloads, and resource flavors.
kueue-batch-user-role:: This cluster role includes the permissions to manage jobs and to view local queues and workloads.
include::modules/kueue-configure-rbac-batch-admins.adoc[leveloffset=+1]
include::modules/kueue-configure-rbac-batch-users.adoc[leveloffset=+1]
[role="_additional-resources"]
== Additional resources
* xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]
* xref:../../authentication/index.adoc#openshift-auth-common-terms_overview-of-authentication-authorization[Glossary of common terms for {product-title} authentication and authorization]


@@ -0,0 +1,15 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="release-notes"]
= Release notes
:context: release-notes
toc::[]
{kueue-name} is released as an Operator that is supported on {product-title}.
include::modules/kueue-compatible-environments.adoc[leveloffset=+1]
include::modules/kueue-release-notes-1.0.1.adoc[leveloffset=+1]
include::modules/kueue-release-notes-1.0.adoc[leveloffset=+1]


@@ -0,0 +1,13 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="running-kueue-jobs"]
= Running jobs with quota limits
:context: running-kueue-jobs
toc::[]
You can run Kubernetes jobs with {kueue-name} enabled to manage resource allocation within defined quota limits. This can help to ensure predictable resource availability, cluster stability, and optimized performance.
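For illustration, a job opts in to {kueue-name} management through the `kueue.x-k8s.io/queue-name` label, which names the target local queue. In this sketch, `user-queue`, `my-namespace`, and the container image are placeholder values:

[source,yaml]
----
apiVersion: batch/v1
kind: Job
metadata:
  generateName: sample-job-
  namespace: my-namespace # placeholder namespace
  labels:
    kueue.x-k8s.io/queue-name: user-queue # local queue to submit to
spec:
  suspend: true # Kueue unsuspends the job when it is admitted
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: registry.example.com/sample:latest # placeholder image
        resources:
          requests:
            cpu: "1"
----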
include::modules/kueue-identifying-local-queues.adoc[leveloffset=+1]
include::modules/kueue-defining-running-jobs.adoc[leveloffset=+1]

ai_workloads/kueue/snippets Symbolic link

@@ -0,0 +1 @@
../../snippets/


@@ -0,0 +1,31 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="troubleshooting"]
= Troubleshooting
:context: troubleshooting
toc::[]
// commented out - note for TS docs
// Troubleshooting installations
// Verifying node health
// Troubleshooting network issues
// Troubleshooting Operator issues
// Investigating pod issues
// Diagnosing CLI issues
////
Troubleshooting Jobs
Troubleshooting the status of a Job
Troubleshooting Queues
Troubleshooting the status of a LocalQueue or ClusterQueue
Troubleshooting Provisioning Request in Kueue
Troubleshooting the status of a Provisioning Request in Kueue
Troubleshooting Pods
Troubleshooting the status of a Pod or group of Pods
Troubleshooting delete ClusterQueue
////


@@ -0,0 +1,16 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="using-cohorts"]
= Using cohorts
:context: using-cohorts
toc::[]
You can use cohorts to group cluster queues and determine which cluster queues are able to share borrowable resources with each other.
Borrowable resources are defined as the unused nominal quota of all the cluster queues in a cohort.
Using cohorts can help to optimize resource utilization by preventing under-utilization and enabling fair sharing configurations.
Cohorts can also help to simplify resource management and allocation between teams, because you can group cluster queues for related workloads or for each team.
You can also use cohorts to set resource quotas at a group level to define the limits for resources that a group of cluster queues can consume.
include::modules/kueue-clusterqueue-configuring-cohorts-reference.adoc[leveloffset=+1]


@@ -10,18 +10,16 @@ You can use Operators to run artificial intelligence (AI) and machine learning (
{product-title} provides several Operators that can help you run AI workloads:
{kueue-name}::
You can use {kueue-name} to provide structured queues and prioritization so that workloads are handled fairly and efficiently. Without proper prioritization, important jobs might be delayed while less critical jobs occupy resources.
+
For more information, see "Introduction to {kueue-name}".
{lws-operator}::
You can use the {lws-operator} to enable large-scale AI inference workloads to run reliably across nodes with synchronization between leader and worker processes. Without proper coordination, large training runs might fail or stall.
+
For more information, see "{lws-operator} overview".
{kueue-prod-name}::
You can use {kueue-prod-name} to provide structured queues and prioritization so that workloads are handled fairly and efficiently. Without proper prioritization, important jobs might be delayed while less critical jobs occupy resources.
+
For more information, see link:https://docs.redhat.com/en/documentation/red_hat_build_of_kueue/latest/html/overview/about-kueue[Introduction to Red Hat build of Kueue] in the {kueue-prod-name} documentation.
// TODO: Anything else to list yet?
////
Keep for future use (JobSet and DRA) - From Gaurav (PM):
AI in OpenShift Focus Areas


@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/using-cohorts.adoc
:_mod-docs-content-type: REFERENCE
[id="clusterqueue-configuring-cohorts-reference_{context}"]
= Configuring cohorts within a cluster queue spec
You can add a cluster queue to a cohort by specifying the name of the cohort in the `.spec.cohort` field of the `ClusterQueue` object, as shown in the following example:
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
name: cluster-queue
spec:
# ...
cohort: example-cohort
# ...
----
All cluster queues that have a matching `spec.cohort` are part of the same cohort.
If the `spec.cohort` field is omitted, the cluster queue does not belong to any cohort and cannot access borrowable resources.


@@ -0,0 +1,39 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/configuring-fairsharing.adoc
:_mod-docs-content-type: REFERENCE
[id="clusterqueue-share-value_{context}"]
= Cluster queue weights
After you have enabled fair sharing, you must set share values for each cluster queue before fair sharing can take place. Share values are represented as the `weight` value in a `ClusterQueue` object.
Share values are important because they allow administrators to prioritize specific job types or teams. Critical applications or high-priority teams can be configured with a weighted value so that they receive a proportionally larger share of the available resources. Configuring weights ensures that unused resources are distributed according to defined organizational or project priorities rather than on a first-come, first-served basis.
The `weight` value, or share value, defines a comparative advantage for the cluster queue when competing for borrowable resources. Generally, {kueue-name} admits jobs with a lower share value first. Jobs with a higher share value are more likely to be preempted before those with lower share values.
.Example cluster queue with a fair sharing weight configured
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
name: cluster-queue
spec:
namespaceSelector: {}
resourceGroups:
- coveredResources: ["cpu"]
flavors:
- name: default-flavor
resources:
- name: cpu
nominalQuota: 9
cohort: example-cohort
fairSharing:
weight: 2
----
[id="clusterqueue-share-value-zero_{context}"]
== Zero weight
A `weight` value of `0` represents an infinite share value. This means that the cluster queue is always at a disadvantage compared to others, and its workloads are always the first to be preempted when fair sharing is enabled.
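For example, a best-effort cluster queue that should always yield borrowed capacity first could be configured as follows; the queue and cohort names are placeholder values:

[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: best-effort-queue # placeholder name
spec:
  # ...
  cohort: example-cohort # placeholder cohort name
  fairSharing:
    weight: 0 # infinite share value: always preempted first
----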


@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/install-kueue.adoc
// * ai_workloads/kueue/install-disconnected.adoc
// * ai_workloads/kueue/release-notes.adoc
:_mod-docs-content-type: REFERENCE
[id="compatible-environments_{context}"]
= Compatible environments
Before you install {kueue-name}, review this section to ensure that your cluster meets the requirements.
[id="compatible-environments-arch_{context}"]
== Supported architectures
{kueue-name} is supported on the following architectures:
* ARM64
* 64-bit x86
* ppc64le ({ibm-power-name})
* s390x ({ibm-z-name})
[id="compatible-environments-platforms_{context}"]
== Supported platforms
{kueue-name} is supported on the following platforms:
* {product-title}
* {hcp-capital} for {product-title}
[IMPORTANT]
====
Currently, {kueue-name} is not supported on {ms}.
====


@@ -0,0 +1,70 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/rbac-permissions.adoc
:_mod-docs-content-type: PROCEDURE
[id="configure-rbac-batch-admins_{context}"]
= Configuring permissions for batch administrators
You can configure permissions for batch administrators by binding the `kueue-batch-admin-role` cluster role to a user or group of users.
.Prerequisites
include::snippets/prereqs-snippet-yaml-admin.adoc[]
.Procedure
. Create a `ClusterRoleBinding` object as a YAML file:
+
.Example `ClusterRoleBinding` object
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kueue-admins <1>
subjects: <2>
- kind: User
name: admin@example.com
apiGroup: rbac.authorization.k8s.io
roleRef: <3>
kind: ClusterRole
name: kueue-batch-admin-role
apiGroup: rbac.authorization.k8s.io
----
<1> Provide a name for the `ClusterRoleBinding` object.
<2> Add details about which user or group of users you want to provide user permissions for.
<3> Add details about the `kueue-batch-admin-role` cluster role.
. Apply the `ClusterRoleBinding` object:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
.Verification
* You can verify that the `ClusterRoleBinding` object was applied correctly by running the following command and verifying that the output contains the correct information for the `kueue-batch-admin-role` cluster role:
+
[source,terminal]
----
$ oc describe clusterrolebinding.rbac
----
+
.Example output
[source,terminal]
----
...
Name: kueue-batch-admin-role
Labels: app.kubernetes.io/name=kueue
Annotations: <none>
Role:
Kind: ClusterRole
Name: kueue-batch-admin-role
Subjects:
Kind Name Namespace
---- ---- ---------
User admin@example.com admin-namespace
...
----


@@ -0,0 +1,73 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/rbac-permissions.adoc
:_mod-docs-content-type: PROCEDURE
[id="configure-rbac-batch-users_{context}"]
= Configuring permissions for users
You can configure permissions for {kueue-name} users by binding the `kueue-batch-user-role` cluster role to a user or group of users.
.Prerequisites
include::snippets/prereqs-snippet-yaml-admin.adoc[]
.Procedure
. Create a `RoleBinding` object as a YAML file:
+
.Example `RoleBinding` object
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kueue-users <1>
namespace: user-namespace <2>
subjects: <3>
- kind: Group
name: team-a@example.com
apiGroup: rbac.authorization.k8s.io
roleRef: <4>
kind: ClusterRole
name: kueue-batch-user-role
apiGroup: rbac.authorization.k8s.io
----
<1> Provide a name for the `RoleBinding` object.
<2> Add details about which namespace the `RoleBinding` object applies to.
<3> Add details about which user or group of users you want to provide user permissions for.
<4> Add details about the `kueue-batch-user-role` cluster role.
. Apply the `RoleBinding` object:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
.Verification
* You can verify that the `RoleBinding` object was applied correctly by running the following command and verifying that the output contains the correct information for the `kueue-batch-user-role` cluster role:
+
[source,terminal]
----
$ oc describe rolebinding.rbac
----
+
.Example output
[source,terminal]
----
...
Name: kueue-users
Labels: app.kubernetes.io/name=kueue
Annotations: <none>
Role:
Kind: ClusterRole
Name: kueue-batch-user-role
Subjects:
Kind Name Namespace
---- ---- ---------
Group team-a@example.com user-namespace
...
----


@@ -0,0 +1,63 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/configuring-quotas.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-clusterqueues_{context}"]
= Configuring a cluster queue
A cluster queue is a cluster-scoped resource, represented by a `ClusterQueue` object, that governs a pool of resources such as CPU, memory, and pods.
Cluster queues can be used to define usage limits, quotas for resource flavors, order of consumption, and fair sharing rules.
[NOTE]
====
The cluster queue is not ready for use until a `ResourceFlavor` object has also been configured.
====
.Prerequisites
include::snippets/prereqs-snippet-yaml.adoc[]
.Procedure
. Create a `ClusterQueue` object as a YAML file:
+
.Example of a basic `ClusterQueue` object using a single resource flavor
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
name: cluster-queue
spec:
namespaceSelector: {} # <1>
resourceGroups:
- coveredResources: ["cpu", "memory", "pods", "foo.com/gpu"] # <2>
flavors:
- name: "default-flavor" # <3>
resources: # <4>
- name: "cpu"
nominalQuota: 9
- name: "memory"
nominalQuota: 36Gi
- name: "pods"
nominalQuota: 5
- name: "foo.com/gpu"
nominalQuota: 100
----
<1> Defines which namespaces can use the resources governed by this cluster queue. An empty `namespaceSelector` as shown in the example means that all namespaces can use these resources.
<2> Defines the resource types governed by the cluster queue. This example `ClusterQueue` object governs CPU, memory, pod, and GPU resources.
<3> Defines the resource flavor that is applied to the resource types listed. In this example, the `default-flavor` resource flavor is applied to CPU, memory, pod, and GPU resources.
<4> Defines the resource requirements for admitting jobs. This example cluster queue only admits jobs if the following conditions are met:
+
* The sum of the CPU requests is less than or equal to 9.
* The sum of the memory requests is less than or equal to 36Gi.
* The total number of pods is less than or equal to 5.
* The sum of the GPU requests is less than or equal to 100.
. Apply the `ClusterQueue` object by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----


@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/gangscheduling.adoc
:_mod-docs-content-type: REFERENCE
[id="configuring-gangscheduling_{context}"]
= Configuring gang scheduling
As a cluster administrator, you can configure gang scheduling by modifying the `gangScheduling` spec in the `Kueue` custom resource (CR).
.Example `Kueue` CR with gang scheduling configured
[source,yaml]
----
apiVersion: kueue.openshift.io/v1
kind: Kueue
metadata:
name: cluster
labels:
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/name: kueue-operator
namespace: openshift-kueue-operator
spec:
config:
gangScheduling:
policy: ByWorkload # <1>
byWorkload:
admission: Parallel # <2>
# ...
----
<1> You can set the `policy` value to enable or disable gang scheduling. The possible values are `ByWorkload`, `None`, or empty (`""`).
+
`ByWorkload`:: When the `policy` value is set to `ByWorkload`, each job is processed and considered for admission as a single unit. If the job does not become ready within the specified time, the entire job is evicted and retried at a later time.
+
`None`:: When the `policy` value is set to `None`, gang scheduling is disabled.
+
Empty (`""`):: When the `policy` value is empty or set to `""`, the {kueue-name} Operator determines settings for gang scheduling. Currently, gang scheduling is disabled by default.
<2> If the `policy` value is set to `ByWorkload`, you must configure job admission settings. The possible values for the `admission` spec are `Parallel`, `Sequential`, or empty (`""`).
+
`Parallel`:: When the `admission` value is set to `Parallel`, pods from any job can be admitted at any time. This can cause a deadlock, where jobs are in contention for cluster capacity. When a deadlock occurs, the successful scheduling of pods from another job can prevent the scheduling of pods from the current job.
+
`Sequential`:: When the `admission` value is set to `Sequential`, only pods from the currently processing job are admitted. After all of the pods from the current job have been admitted and are ready, {kueue-name} processes the next job. Sequential processing can slow down admission when the cluster has sufficient capacity for multiple jobs, but provides a higher likelihood that all of the pods for a job are scheduled together successfully.
+
Empty (`""`):: When the `admission` value is empty or set to `""`, the {kueue-name} Operator determines job admission settings. Currently, the `admission` value is set to `Parallel` by default.


@@ -0,0 +1,45 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/managing-workloads.adoc
:_mod-docs-content-type: REFERENCE
[id="configuring-labelpolicy_{context}"]
= Configuring label policies for jobs
The `spec.config.workloadManagement.labelPolicy` spec in the `Kueue` custom resource (CR) is an optional field that controls how {kueue-name} decides whether to manage or ignore different jobs. The allowed values are `QueueName`, `None`, and empty (`""`).
If the `labelPolicy` setting is omitted or empty (`""`), the default policy is that {kueue-name} manages jobs that have a `kueue.x-k8s.io/queue-name` label, and ignores jobs that do not have the `kueue.x-k8s.io/queue-name` label. This is the same workflow as if the `labelPolicy` is set to `QueueName`.
If the `labelPolicy` setting is set to `None`, jobs are managed by {kueue-name} even if they do not have the `kueue.x-k8s.io/queue-name` label.
.Example `workloadManagement` spec configuration
[source,yaml]
----
apiVersion: kueue.openshift.io/v1
kind: Kueue
metadata:
labels:
app.kubernetes.io/name: kueue-operator
app.kubernetes.io/managed-by: kustomize
name: cluster
namespace: openshift-kueue-operator
spec:
config:
workloadManagement:
labelPolicy: QueueName
# ...
----
.Example user-created `Job` object containing the `kueue.x-k8s.io/queue-name` label
[source,yaml]
----
apiVersion: batch/v1
kind: Job
metadata:
generateName: sample-job-
namespace: my-namespace
labels:
kueue.x-k8s.io/queue-name: user-queue
spec:
# ...
----
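By contrast, when the `labelPolicy` setting is set to `None`, a job like the following sketch is still managed by {kueue-name} even though it omits the `kueue.x-k8s.io/queue-name` label. The namespace and name shown here are placeholders.

.Example user-created `Job` object without the `kueue.x-k8s.io/queue-name` label
[source,yaml]
----
apiVersion: batch/v1
kind: Job
metadata:
  generateName: sample-job-
  namespace: my-namespace
spec:
# ...
----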

View File

@@ -0,0 +1,50 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/configuring-quotas.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-localqueue-defaults_{context}"]
= Configuring a default local queue
As a cluster administrator, you can improve quota enforcement in your cluster by managing all jobs in selected namespaces without needing to explicitly label each job. You can do this by creating a default local queue.
A default local queue serves as the local queue for newly created jobs that do not have the `kueue.x-k8s.io/queue-name` label. After you create a default local queue, any new jobs created in the namespace without a `kueue.x-k8s.io/queue-name` label are automatically updated with the `kueue.x-k8s.io/queue-name: default` label.
[IMPORTANT]
====
Preexisting jobs in a namespace are not affected when you create a default local queue. If jobs already exist in the namespace before you create the default local queue, you must label those jobs explicitly to assign them to a queue.
====
.Prerequisites
include::snippets/prereqs-snippet-yaml-1.1.adoc[]
* You have created a `ClusterQueue` object.
.Procedure
. Create a `LocalQueue` object named `default` as a YAML file:
+
.Example of a default `LocalQueue` object
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
namespace: team-namespace
name: default
spec:
clusterQueue: cluster-queue
----
. Apply the `LocalQueue` object by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
.Verification
. Create a job in the same namespace as the default local queue.
. Observe that the job updates with the `kueue.x-k8s.io/queue-name: default` label.
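For example, a job created without any queue label in `team-namespace` is mutated on admission so that its metadata carries the default queue label, as in the following sketch. The job name and namespace are placeholders.

.Example `Job` object after admission in a namespace with a default local queue
[source,yaml]
----
apiVersion: batch/v1
kind: Job
metadata:
  generateName: sample-job-
  namespace: team-namespace
  labels:
    kueue.x-k8s.io/queue-name: default # added automatically after creation
spec:
# ...
----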

View File

@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/configuring-quotas.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-localqueues_{context}"]
= Configuring a local queue
A local queue is a namespaced object, represented by a `LocalQueue` object, that groups closely related workloads that belong to a single namespace.
As an administrator, you can configure a `LocalQueue` object to point to a cluster queue. This allocates resources from the cluster queue to workloads in the namespace specified in the `LocalQueue` object.
.Prerequisites
include::snippets/prereqs-snippet-yaml.adoc[]
* You have created a `ClusterQueue` object.
.Procedure
. Create a `LocalQueue` object as a YAML file:
+
.Example of a basic `LocalQueue` object
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
namespace: team-namespace
name: user-queue
spec:
clusterQueue: cluster-queue
----
. Apply the `LocalQueue` object by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
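Multiple local queues in different namespaces can point to the same cluster queue, so that several teams share a single pool of quota. As a sketch, assuming a second namespace named `other-team-namespace`:

.Example of a second `LocalQueue` object sharing the same cluster queue
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: other-team-namespace
  name: user-queue
spec:
  clusterQueue: cluster-queue
----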

View File

@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/configuring-quotas.adoc
:_mod-docs-content-type: PROCEDURE
[id="configuring-resourceflavors_{context}"]
= Configuring a resource flavor
After you have configured a `ClusterQueue` object, you can configure a `ResourceFlavor` object.
Resources in a cluster are typically not homogeneous. You can use a custom `ResourceFlavor` object to represent the different resource variations that are associated with cluster nodes through labels, taints, and tolerations. You can then associate workloads with specific node types to enable fine-grained resource management.
If the resources in your cluster are homogeneous, you can use an empty `ResourceFlavor` object instead of adding labels to custom resource flavors.
.Prerequisites
include::snippets/prereqs-snippet-yaml.adoc[]
.Procedure
. Create a `ResourceFlavor` object as a YAML file:
+
.Example of an empty `ResourceFlavor` object
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
name: default-flavor
----
+
.Example of a custom `ResourceFlavor` object
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
name: "x86"
spec:
nodeLabels:
cpu-arch: x86
----
. Apply the `ResourceFlavor` object by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
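Because a `ResourceFlavor` object can also carry taints and tolerations, a flavor for tainted nodes might look like the following sketch. The label and taint keys here are illustrative assumptions for GPU nodes; substitute the keys used in your cluster.

.Example of a `ResourceFlavor` object with node taints and tolerations
[source,yaml]
----
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: "gpu-flavor"
spec:
  nodeLabels:
    instance-type: gpu
  nodeTaints:
  - key: nvidia.com/gpu
    value: "true"
    effect: NoSchedule
  tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
----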

View File

@@ -0,0 +1,66 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/install-kueue.adoc
// * ai_workloads/kueue/install-disconnected.adoc
:_mod-docs-content-type: PROCEDURE
[id="create-kueue-cr_{context}"]
= Creating a Kueue custom resource
After you have installed the {kueue-op}, you must create a `Kueue` custom resource (CR) to configure your installation.
.Prerequisites
include::snippets/prereqs-snippet-console.adoc[]
.Procedure
. In the {product-title} web console, click *Operators* -> *Installed Operators*.
. In the *Provided APIs* table column, click *Kueue* to open the *Kueue* tab of the *Operator details* page.
. Click *Create Kueue* to open the *Create Kueue* YAML view.
. Enter the details for your `Kueue` CR.
+
.Example `Kueue` CR
[source,yaml]
----
apiVersion: kueue.openshift.io/v1
kind: Kueue
metadata:
labels:
app.kubernetes.io/name: kueue-operator
app.kubernetes.io/managed-by: kustomize
name: cluster # <1>
namespace: openshift-kueue-operator
spec:
managementState: Managed
config:
integrations:
frameworks: # <2>
- BatchJob
preemption:
preemptionPolicy: Classical # <3>
# ...
----
<1> The name of the `Kueue` CR must be `cluster`.
<2> If you want to configure {kueue-name} for use with other workload types, add those types here. For the default configuration, only the `BatchJob` type is recommended and supported.
<3> Optional: If you want to configure fair sharing for {kueue-name}, set the `preemptionPolicy` value to `FairSharing`. The default setting in the `Kueue` CR is `Classical` preemption.
// Once conceptual docs are added mention those docs here. "For more information about X, see..."
. Click *Create*.
.Verification
* After you create the `Kueue` CR, the web console brings you to the *Operator details* page, where you can see the CR in the list of *Kueues*.
* Optional: If you have the {oc-first} installed, you can run the following command and observe the output to confirm that your `Kueue` CR has been created successfully:
+
[source,terminal]
----
$ oc get kueue
----
+
.Example output
[source,terminal]
----
NAME AGE
cluster 4m
----

View File

@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/running-kueue-jobs.adoc
:_mod-docs-content-type: PROCEDURE
[id="defining-running-jobs_{context}"]
= Defining a job to run with {kueue-name}
When you define a job to run with {kueue-name}, ensure that the job meets the following criteria:
* Specify the local queue to submit the job to by using the `kueue.x-k8s.io/queue-name` label.
* Include the resource requests for each job pod.
{kueue-name} suspends the job, and then starts it when resources are available. {kueue-name} creates a corresponding workload, represented as a `Workload` object with a name that matches the job.
.Prerequisites
include::snippets/prereqs-snippet-yaml-user.adoc[]
* You have identified the name of the local queue that you want to submit jobs to.
.Procedure
. Create a `Job` object.
+
.Example job
[source,yaml]
----
apiVersion: batch/v1
kind: Job # <1>
metadata:
generateName: sample-job- # <2>
namespace: my-namespace
labels:
kueue.x-k8s.io/queue-name: user-queue # <3>
spec:
parallelism: 3
completions: 3
template:
spec:
containers:
- name: dummy-job
image: registry.k8s.io/e2e-test-images/agnhost:2.53
args: ["entrypoint-tester", "hello", "world"]
resources: # <4>
requests:
cpu: 1
memory: "200Mi"
restartPolicy: Never
----
<1> Defines the resource type as a `Job` object, which represents a batch computation task.
<2> Provides a prefix for generating a unique name for the job.
<3> Identifies the queue to send the job to.
<4> Defines the resource requests for each pod.
. Create the job by running the following command:
+
[source,terminal]
----
$ oc create -f <filename>.yaml
----
.Verification
* Verify the status of the job that you created by running the following command and observing the output:
+
[source,terminal]
----
$ oc get job <job-name>
----
+
.Example output
[source,terminal]
----
NAME STATUS COMPLETIONS DURATION AGE
sample-job-sk42x   Suspended   0/3                      2m12s
----
* Verify that a workload has been created in your namespace for the job by running the following command and observing the output:
+
[source,terminal]
----
$ oc -n <namespace> get workloads
----
+
.Example output
[source,terminal]
----
NAME QUEUE RESERVED IN ADMITTED FINISHED AGE
job-sample-job-sk42x-77c03 user-queue 3m8s
----

View File

@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/getting-support.adoc
:_mod-docs-content-type: PROCEDURE
[id="gathering-cluster-data_{context}"]
= Collecting data for Red Hat Support
You can use the `oc adm must-gather` CLI command to collect the information about your {kueue-name} instance that is most likely needed for debugging issues, including:
* {kueue-name} custom resources, such as workloads, cluster queues, local queues, resource flavors, admission checks, and their corresponding custom resource definitions (CRDs)
* Services
* Endpoints
* Webhook configurations
* Logs from the `openshift-kueue-operator` namespace and `kueue-controller-manager` pods
By default, collected data is written to a new directory that is created in your current working directory.
.Prerequisites
* The {kueue-name} Operator is installed on your cluster.
* You have installed the {oc-first}.
.Procedure
. Navigate to the directory where you want to store the `must-gather` data.
. Collect `must-gather` data by running the following command:
+
[source,terminal]
----
$ oc adm must-gather \
--image=registry.redhat.io/kueue/kueue-must-gather-rhel9:<version>
----
+
Where `<version>` is your current version of {kueue-name}.
. Create a compressed file from the `must-gather` directory that was just created in your working directory. Include the date and cluster ID in the file name so that the `must-gather` data can be uniquely identified. For more information about how to find the cluster ID, see link:https://access.redhat.com/solutions/5280291[How to find the cluster-id or name on OpenShift cluster].
. Attach the compressed file to your support case on the link:https://access.redhat.com/support/cases/#/case/list[*Customer Support* page] of the Red{nbsp}Hat Customer Portal.
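The compression step can be done with `tar`. In the following sketch, the directory name, date, and cluster ID are placeholders; substitute the values from your own `must-gather` run. The `mkdir` line only stands in for the directory that the previous step creates, so that the example is self-contained.

```shell
# Placeholder for the directory created by `oc adm must-gather`:
mkdir -p must-gather.local.1234567890

# Package the must-gather output with the date and cluster ID in the name:
tar --create --gzip \
  --file=must-gather-2025-09-16-abcd1234.tar.gz \
  must-gather.local.1234567890
```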

View File

@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/running-kueue-jobs.adoc
:_mod-docs-content-type: PROCEDURE
[id="identifying-local-queues_{context}"]
= Identifying available local queues
Before you can submit a job to a queue, you must find the name of the local queue.
.Prerequisites
include::snippets/prereqs-snippet-yaml-user.adoc[]
.Procedure
* Run the following command to list available local queues in your namespace:
+
[source,terminal]
----
$ oc -n <namespace> get localqueues
----
+
.Example output
[source,terminal]
----
NAME CLUSTERQUEUE PENDING WORKLOADS
user-queue cluster-queue 3
----

View File

@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/install-kueue.adoc
// * ai_workloads/kueue/install-disconnected.adoc
:_mod-docs-content-type: PROCEDURE
[id="install-kueue-operator_{context}"]
= Installing the {kueue-op}
You can install the {kueue-op} on a {product-title} cluster by using the OperatorHub in the web console.
.Prerequisites
* You have administrator permissions on a {product-title} cluster.
* You have access to the {product-title} web console.
* You have installed and configured the {cert-manager-operator} for your cluster.
.Procedure
. In the {product-title} web console, click *Operators* -> *OperatorHub*.
. Choose *{kueue-op}* from the list of available Operators, and click *Install*.
.Verification
* Go to *Operators* -> *Installed Operators* and confirm that the *{kueue-op}* is listed with *Status* as *Succeeded*.

View File

@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/install-kueue.adoc
// * ai_workloads/kueue/install-disconnected.adoc
:_mod-docs-content-type: PROCEDURE
[id="label-namespaces_{context}"]
= Labeling namespaces to allow {kueue-name} to manage jobs
The {kueue-name} Operator uses an opt-in webhook mechanism to ensure that policies are only enforced for the jobs and namespaces that it is expected to target.
You must label the namespaces where you want {kueue-name} to manage jobs with the `kueue.openshift.io/managed=true` label.
.Prerequisites
* You have cluster administrator permissions.
* The {kueue-name} Operator is installed on your cluster, and you have created a `Kueue` custom resource (CR).
* You have installed the {oc-first}.
.Procedure
* Add the `kueue.openshift.io/managed=true` label to a namespace by running the following command:
+
[source,terminal]
----
$ oc label namespace <namespace> kueue.openshift.io/managed=true
----
This label indicates to the {kueue-name} Operator that the namespace is managed by its webhook admission controllers. As a result, any {kueue-name} resources within that namespace are properly validated and mutated.
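You can confirm that the label was applied by inspecting the namespace. For example, assuming the {oc-first} is configured for your cluster:

[source,terminal]
----
$ oc get namespace <namespace> --show-labels
----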

View File

@@ -0,0 +1,20 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/release-notes.adoc
:_mod-docs-content-type: REFERENCE
[id="release-notes-1.0.1_{context}"]
= Release notes for {kueue-name} version 1.0.1
{kueue-name} version 1.0.1 is a patch release that is supported on {product-title} versions 4.18 and 4.19 on the 64-bit x86 architecture.
{kueue-name} version 1.0.1 uses link:https://kueue.sigs.k8s.io/docs/overview/[Kueue] version 0.11.
[id="release-notes-1.0.1-bug-fixes_{context}"]
== Bug fixes in {kueue-name} version 1.0.1
* Previously, leader election for {kueue-name} was not configured to tolerate disruption, which resulted in frequent crashing. With this release, the leader election values for {kueue-name} have been updated to match the durations recommended for {product-title}. (link:https://issues.redhat.com/browse/OCPBUGS-58496[OCPBUGS-58496])
* Previously, the `ReadyReplicas` count was not set in the reconciler, which meant that the {kueue-name} Operator status would report that there were no replicas ready. With this release, the `ReadyReplicas` count is based on the number of ready replicas for the deployment, which ensures that the Operator shows as ready in the {product-title} console when the `kueue-controller-manager` pods are ready. (link:https://issues.redhat.com/browse/OCPBUGS-59261[OCPBUGS-59261])
* Previously, when the `Kueue` custom resource (CR) was deleted from the `openshift-kueue-operator` namespace, the `kueue-manager-config` config map was not deleted automatically and could remain in the namespace. With this release, the `kueue-manager-config` config map, `kueue-webhook-server-cert` secret, and `metrics-server-cert` secret are deleted automatically when the `Kueue` CR is deleted. (link:https://issues.redhat.com/browse/OCPBUGS-57960[OCPBUGS-57960])

View File

@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/release-notes.adoc
:_mod-docs-content-type: REFERENCE
[id="release-notes-1.0_{context}"]
= Release notes for {kueue-name} version 1.0
{kueue-name} version 1.0 is a generally available release that is supported on {product-title} versions 4.18 and 4.19 on the 64-bit x86 architecture.
{kueue-name} version 1.0 uses link:https://kueue.sigs.k8s.io/docs/overview/[Kueue] version 0.11.
[id="release-notes-1.0-new-features_{context}"]
== New features in {kueue-name} version 1.0
The following features are supported in {kueue-name} version 1.0:
* Role-based access control (RBAC) enables you to control which types of users can create which types of {kueue-name} resources.
* Configuring resource quotas by creating cluster queues, resource flavors, and local queues enables you to control the amount of resources used by user-submitted jobs and workloads.
* Labeling namespaces and configuring label policies enable you to control which jobs and workloads are managed by {kueue-name}.
* Configuring cohorts, fair sharing, and gang scheduling settings enable you to share unused, borrowable resources between queues.
[id="release-notes-1.0-known-issues_{context}"]
== Known issues in {kueue-name} version 1.0
* {kueue-name} uses the `managedJobsNamespaceSelector` configuration field, so that administrators can configure which namespaces opt in to be managed by {kueue-name}. Because namespaces must be manually configured to opt in to being managed by {kueue-name}, resources in system or third-party namespaces are not impacted or managed by {kueue-name}.
+
The current behavior in {kueue-name} allows reconciliation of `Job` resources that have the `kueue.x-k8s.io/queue-name` label, even if these resources are in namespaces that are not configured to opt in to being managed by {kueue-name}. This is inconsistent with the behavior for other core integrations like pods, deployments, and stateful sets, which are only reconciled if they are in namespaces that have been configured to opt in to being managed by {kueue-name}. (link:https://issues.redhat.com/browse/OCPBUGS-58205[OCPBUGS-58205])
* If you try to use the {product-title} web console to create a `Kueue` custom resource (CR) by using the form view, the web console shows an error and the resource cannot be created. As a workaround, use the YAML view to create a `Kueue` CR instead. (link:https://issues.redhat.com/browse/OCPBUGS-58118[OCPBUGS-58118])

View File

@@ -0,0 +1,17 @@
// Module included in the following assemblies:
//
// * ai_workloads/kueue/running-kueue-jobs.adoc
:_mod-docs-content-type: PROCEDURE
[id="running-jobs_{context}"]
= Running jobs with {kueue-name}
When you run a job, {kueue-name} creates a corresponding workload for the job, which is represented by a `Workload` object.
.Prerequisites
include::snippets/prereqs-snippet-yaml-user.adoc[]
.Procedure
.Verification

View File

@@ -0,0 +1,15 @@
// Text snippet included in the following modules:
//
// * modules/kueue-create-kueue-cr.adoc
//
// Text snippet included in the following assemblies:
//
// *
:_mod-docs-content-type: SNIPPET
Ensure that you have completed the following prerequisites:
* The {kueue-name} Operator is installed on your cluster.
* You have cluster administrator permissions and the `kueue-batch-admin-role` role.
* You have access to the {product-title} web console.

View File

@@ -0,0 +1,13 @@
// Text snippet included in the following modules:
//
// * modules/kueue-configuring-localqueue-defaults.adoc
//
// Text snippet included in the following assemblies:
//
// *
:_mod-docs-content-type: SNIPPET
* You have installed {kueue-name} version 1.1 on your cluster.
* You have cluster administrator permissions or the `kueue-batch-admin-role` role.
* You have installed the {oc-first}.

View File

@@ -0,0 +1,14 @@
// Text snippet included in the following modules:
//
// * modules/kueue-configure-rbac-batch-admins.adoc
// * modules/kueue-configure-rbac-batch-users.adoc
//
// Text snippet included in the following assemblies:
//
// *
:_mod-docs-content-type: SNIPPET
* The {kueue-name} Operator is installed on your cluster.
* You have cluster administrator permissions.
* You have installed the {oc-first}.

View File

@@ -0,0 +1,14 @@
// Text snippet included in the following modules:
//
// * kueue-identifying-local-queues.adoc
// * kueue-defining-running-jobs.adoc
//
// Text snippet included in the following assemblies:
//
// *
:_mod-docs-content-type: SNIPPET
* A cluster administrator has installed and configured {kueue-name} on your {product-title} cluster.
* A cluster administrator has assigned you the `kueue-batch-user-role` cluster role.
* You have installed the {oc-first}.

View File

@@ -0,0 +1,15 @@
// Text snippet included in the following modules:
//
// * modules/kueue-configuring-clusterqueues.adoc
// * modules/kueue-configuring-localqueues.adoc
// * modules/kueue-configuring-resourceflavors.adoc
//
// Text snippet included in the following assemblies:
//
// *
:_mod-docs-content-type: SNIPPET
* The {kueue-name} Operator is installed on your cluster.
* You have cluster administrator permissions or the `kueue-batch-admin-role` role.
* You have installed the {oc-first}.