Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00
OSDOCS#15493: New AI workloads book and LWS docs
committed by openshift-cherrypick-robot
parent 9851ace328
commit 381aeff00c
_attributes/common-attributes.adoc
@@ -78,6 +78,8 @@ endif::[]
:secondary-scheduler-operator: Secondary Scheduler Operator
:descheduler-operator: Kube Descheduler Operator
:cli-manager: CLI Manager Operator
:lws-operator: Leader Worker Set Operator
:kueue-prod-name: Red{nbsp}Hat build of Kueue
// Backup and restore
:launch: image:app-launcher.png[title="Application Launcher"]
:mtc-first: Migration Toolkit for Containers (MTC)
_topic_maps/_topic_map.yaml
@@ -3413,6 +3413,25 @@ Topics:
  File: node-observability-operator
  Distros: openshift-origin,openshift-enterprise
---
Name: AI workloads
Dir: ai_workloads
Distros: openshift-enterprise
Topics:
- Name: Overview of AI workloads on OpenShift Container Platform
  File: index
- Name: Leader Worker Set Operator
  Dir: leader_worker_set
  Distros: openshift-enterprise
  Topics:
  - Name: Leader Worker Set Operator overview
    File: index
  - Name: Leader Worker Set Operator release notes
    File: lws-release-notes
  - Name: Managing distributed workloads with the Leader Worker Set Operator
    File: lws-managing
  - Name: Uninstalling the Leader Worker Set Operator
    File: lws-uninstalling
---
Name: Edge computing
Dir: edge_computing
Distros: openshift-origin,openshift-enterprise
ai_workloads/_attributes (Symbolic link, 1 line)
@@ -0,0 +1 @@
../_attributes/
ai_workloads/images (Symbolic link, 1 line)
@@ -0,0 +1 @@
../images/
ai_workloads/index.adoc (Normal file, 22 lines)
@@ -0,0 +1,22 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="ai-workloads-about"]
= Overview of AI workloads on {product-title}

:context: ai-workloads-about

toc::[]

{product-title} provides a secure, scalable foundation for running artificial intelligence (AI) workloads across training, inference, and data science workflows.

// Operators for running AI workloads
include::modules/ai-operators.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../ai_workloads/leader_worker_set/index.adoc#lws-about[{lws-operator} overview]

// Exclude this for now until we can get it reviewed by the RHOAI team
// {rhoai-full}
// include::modules/ai-rhoai.adoc[leveloffset=+1]
ai_workloads/leader_worker_set/_attributes (Symbolic link, 1 line)
@@ -0,0 +1 @@
../../_attributes/
ai_workloads/leader_worker_set/images (Symbolic link, 1 line)
@@ -0,0 +1 @@
../../images/
ai_workloads/leader_worker_set/index.adoc (Normal file, 24 lines)
@@ -0,0 +1,24 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="lws-about"]
= {lws-operator} overview

:context: lws-about

toc::[]

Using large language models (LLMs) for AI/ML inference often requires significant compute resources, and workloads typically must be sharded across multiple nodes. This can make deployments complex, creating challenges around scaling, recovery from failures, and efficient pod placement.

The {lws-operator} simplifies these multi-node deployments by treating a group of pods as a single, coordinated unit. It manages the lifecycle of each pod in the group, scales the entire group together, and performs updates and failure recovery at the group level to ensure consistency.

// About the {lws-operator}
include::modules/lws-about.adoc[leveloffset=+1]

// LeaderWorkerSet architecture
include::modules/lws-arch.adoc[leveloffset=+2]

[role="_additional-resources"]
[id="lws-about_additional-resources"]
== Additional resources

* link:https://lws.sigs.k8s.io/docs/overview/[LeaderWorkerSet documentation (Kubernetes)]
ai_workloads/leader_worker_set/lws-managing.adoc (Normal file, 22 lines)
@@ -0,0 +1,22 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="lws-managing"]
= Managing distributed workloads with the {lws-operator}

:context: lws-managing

toc::[]

You can use the {lws-operator} to manage distributed inference workloads and process large-scale inference requests efficiently.

// Installing the {lws-operator}
include::modules/lws-install-operator.adoc[leveloffset=+1]

// Deploying a leader worker set
include::modules/lws-config.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="lws-managing_additional-resources"]
== Additional resources

* link:https://lws.sigs.k8s.io/docs/reference/leaderworkerset.v1/[LeaderWorkerSet API (Kubernetes)]
ai_workloads/leader_worker_set/lws-release-notes.adoc (Normal file, 17 lines)
@@ -0,0 +1,17 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="lws-release-notes"]
= {lws-operator} release notes

:context: lws-release-notes

toc::[]

You can use the {lws-operator} to manage distributed inference workloads and process large-scale inference requests efficiently.

These release notes track the development of the {lws-operator}.

For more information, see xref:../../ai_workloads/leader_worker_set/index.adoc#lws-about_lws-about[About the {lws-operator}].

// Release notes for Leader Worker Set Operator 1.0.0
include::modules/lws-rn-1.0.0.adoc[leveloffset=+1]
ai_workloads/leader_worker_set/lws-uninstalling.adoc (Normal file, 16 lines)
@@ -0,0 +1,16 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="lws-uninstalling"]
= Uninstalling the {lws-operator}

:context: lws-uninstalling

toc::[]

You can remove the {lws-operator} from {product-title} by uninstalling the Operator and removing its related resources.

// Uninstalling the {lws-operator}
include::modules/lws-uninstall.adoc[leveloffset=+1]

// Removing {lws-operator} resources
include::modules/lws-remove-resources.adoc[leveloffset=+1]
ai_workloads/leader_worker_set/modules (Symbolic link, 1 line)
@@ -0,0 +1 @@
../../modules/
ai_workloads/leader_worker_set/snippets (Symbolic link, 1 line)
@@ -0,0 +1 @@
../../snippets/
ai_workloads/modules (Symbolic link, 1 line)
@@ -0,0 +1 @@
../modules/
ai_workloads/snippets (Symbolic link, 1 line)
@@ -0,0 +1 @@
../snippets/
images/587_OpenShift_lws_0925.png (Binary file, 61 KiB; not shown)
modules/ai-operators.adoc (Normal file, 40 lines)
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * ai_workloads/index.adoc

:_mod-docs-content-type: CONCEPT
[id="ai-operators_{context}"]
= Operators for running AI workloads

You can use Operators to run artificial intelligence (AI) and machine learning (ML) workloads on {product-title}. With Operators, you can build a customized environment that meets your specific AI/ML requirements while continuing to use {product-title} as the core platform for your applications.

{product-title} provides several Operators that can help you run AI workloads:

{lws-operator}::
You can use the {lws-operator} to run large-scale AI inference workloads reliably across nodes, with synchronization between leader and worker processes. Without proper coordination, large runs might fail or stall.
+
For more information, see "{lws-operator} overview".

{kueue-prod-name}::
You can use {kueue-prod-name} to provide structured queues and prioritization so that workloads are handled fairly and efficiently. Without proper prioritization, important jobs might be delayed while less critical jobs occupy resources.
+
For more information, see link:https://docs.redhat.com/en/documentation/red_hat_build_of_kueue/latest/html/overview/about-kueue[Introduction to Red Hat build of Kueue] in the {kueue-prod-name} documentation.

// TODO: Anything else to list yet?

////
Keep for future use (JobSet and DRA) - From Gaurav (PM):
AI in OpenShift – Focus Areas

What We’re Building
- Smarter Resource Allocation (DRA) – enhancing how accelerators and devices are requested, bound, and shared to maximize efficiency and utilization.
- Coordinated Distributed Jobs (LWS) – enabling large-scale AI training workloads to run reliably across many nodes with proper synchronization between lead and worker processes.
- Intelligent Queuing and Scheduling (Kueue) – providing structured queues and prioritization so workloads are handled fairly, respecting policies while improving throughput.
- Batch and Group Workload Management (Job Set) – allowing sets of jobs to be submitted, scheduled, and managed together, making it easier to run multi-step AI pipelines.

The Problems We’re Solving
- Resource waste and inefficiency (DRA) – current systems often over- or under-allocate accelerators, increasing cost.
- Complexity of distributed AI training (LWS) – without coordination, large training runs can fail or stall.
- Unfair or unpredictable scheduling (Kueue) – important jobs may be delayed while less critical ones consume resources.
- Lack of support for pipelines (Job Set) – multi-job workflows are hard to manage and monitor as a single unit.
////
modules/ai-rhoai.adoc (Normal file, 17 lines)
@@ -0,0 +1,17 @@
// Module included in the following assemblies:
//
// * ai_workloads/index.adoc

:_mod-docs-content-type: CONCEPT
[id="ai-rhoai_{context}"]
= {rhoai-full}

// TODO: This needs approval from RHOAI team before it can be included

If your organization requires an integrated environment to develop, train, serve, test, and monitor AI/ML models and applications, consider {rhoai-full}.

{rhoai-full} is a platform for data scientists and developers of artificial intelligence and machine learning (AI/ML) applications. {rhoai-full} builds on {product-title} and provides a preconfigured set of tools, accelerators, and other features to manage the full AI/ML lifecycle. This approach reduces the need to assemble and maintain individual Operators or components for AI workloads.

{rhoai-full} is available as an add-on cloud service to {product-rosa} or {product-dedicated}, or as a self-managed software product. It provides an AI platform with popular open source tooling for model serving and data science pipelines, integrated into a flexible UI.

For more information, see the link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai/[{rhoai-full} documentation].
modules/lws-about.adoc (Normal file, 20 lines)
@@ -0,0 +1,20 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/index.adoc

:_mod-docs-content-type: CONCEPT
[id="lws-about_{context}"]
= About the {lws-operator}

The {lws-operator} is based on the link:https://lws.sigs.k8s.io/[LeaderWorkerSet] open source project. `LeaderWorkerSet` is a custom Kubernetes API that can be used to deploy a group of pods as a unit. This is useful for artificial intelligence (AI) and machine learning (ML) inference workloads, where large language models (LLMs) are sharded across multiple nodes.

With the `LeaderWorkerSet` API, pods are grouped into units consisting of one leader and multiple workers, all managed together as a single entity. Each pod in a group has a unique pod identity. Pods within a group are created in parallel and share identical lifecycle stages. Rollouts, rolling updates, and pod failure restarts are performed as a group.

In the `LeaderWorkerSet` configuration, you define the size of the groups and the number of group replicas. If necessary, you can define separate templates for leader and worker pods, allowing for role-specific customization. You can also configure topology-aware placement, so that pods in the same group are co-located in the same topology.

[IMPORTANT]
====
Before you install the {lws-operator}, you must install the {cert-manager-operator} because it is required to configure services and manage metrics collection.
====

Monitoring for the {lws-operator} is provided by default with {product-title} through Prometheus.
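The grouping and topology options described in this module can be sketched as a minimal `LeaderWorkerSet` manifest. The following is an illustrative sketch only, not a tested configuration: the resource name and image are placeholders, and the `exclusive-topology` annotation shown for topology-aware placement follows the upstream LWS convention.

[source,yaml]
----
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: example-lws # placeholder name
  annotations:
    # Upstream LWS convention for co-locating each group in one topology domain
    leaderworkerset.sigs.k8s.io/exclusive-topology: topology.kubernetes.io/zone
spec:
  replicas: 2 # two leader-worker groups
  leaderWorkerTemplate:
    size: 4 # pods per group, including the leader
    workerTemplate:
      spec:
        containers:
        - name: worker
          image: example.com/inference-server:latest # placeholder image
----

Because no separate `leaderTemplate` is given here, the worker template is used for the leader pod as well.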
modules/lws-arch.adoc (Normal file, 16 lines)
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/index.adoc

:_mod-docs-content-type: CONCEPT
[id="lws-arch_{context}"]
= LeaderWorkerSet architecture

The following diagram shows how the `LeaderWorkerSet` API organizes groups of pods into a single unit, with one pod as the leader and the rest as the workers, to coordinate distributed workloads:

.Leader worker set architecture
image::587_OpenShift_lws_0925.png[Leader worker set architecture]

The `LeaderWorkerSet` API uses a leader stateful set to manage the deployment and lifecycle of the groups of pods. For each replica defined, a leader-worker group is created.

Each leader-worker group contains a leader pod and a worker stateful set. The worker stateful set is owned by the leader pod and manages the set of worker pods associated with that leader pod. The specified size defines the total number of pods in each leader-worker group, with the leader pod included in that number.
modules/lws-config.adoc (Normal file, 124 lines)
@@ -0,0 +1,124 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/lws-managing.adoc

:_mod-docs-content-type: PROCEDURE
[id="lws-config_{context}"]
= Deploying a leader worker set

You can use the {lws-operator} to deploy a leader worker set to assist with managing distributed workloads across nodes.

.Prerequisites

* You have installed the {lws-operator}.

.Procedure

. Create a new project by running the following command:
+
[source,terminal]
----
$ oc new-project my-namespace
----

. Create a file named `leader-worker-set.yaml` with the following content:
+
[source,yaml]
----
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  generation: 1
  name: my-lws <1>
  namespace: my-namespace <2>
spec:
  leaderWorkerTemplate:
    leaderTemplate: <3>
      metadata: {}
      spec:
        containers:
        - image: nginxinc/nginx-unprivileged:1.27
          name: leader
          resources: {}
    restartPolicy: RecreateGroupOnPodRestart <4>
    size: 3 <5>
    workerTemplate: <6>
      metadata: {}
      spec:
        containers:
        - image: nginxinc/nginx-unprivileged:1.27
          name: worker
          ports:
          - containerPort: 8080
            protocol: TCP
          resources: {}
  networkConfig:
    subdomainPolicy: Shared <7>
  replicas: 2 <8>
  rolloutStrategy:
    rollingUpdateConfiguration:
      maxSurge: 1 <9>
      maxUnavailable: 1
    type: RollingUpdate
  startupPolicy: LeaderCreated
----
<1> Specify the name of the leader worker set resource.
<2> Specify the namespace for the leader worker set to run in.
<3> Specify the pod template for the leader pods.
<4> Specify the restart policy for when pod failures occur. Allowed values are `RecreateGroupOnPodRestart`, to restart the whole group, or `None`, to not restart the group.
<5> Specify the number of pods to create for each group, including the leader pod. For example, a value of `3` creates 1 leader pod and 2 worker pods. The default value is `1`.
<6> Specify the pod template for the worker pods.
<7> Specify the policy to use when creating the headless service. Allowed values are `UniquePerReplica` or `Shared`. The default value is `Shared`.
<8> Specify the number of replicas, or leader-worker groups. The default value is `1`.
<9> Specify the maximum number of replicas that can be scheduled above the `replicas` value during rolling updates. The value can be specified as an integer or a percentage.
+
For more information about all of the available fields to configure, see the upstream link:https://lws.sigs.k8s.io/docs/reference/leaderworkerset.v1/[LeaderWorkerSet API] documentation.

. Apply the leader worker set configuration by running the following command:
+
[source,terminal]
----
$ oc apply -f leader-worker-set.yaml
----

.Verification

. Verify that the pods were created by running the following command:
+
[source,terminal]
----
$ oc get pods -n my-namespace
----
+
.Example output
[source,terminal]
----
NAME         READY   STATUS    RESTARTS   AGE
my-lws-0     1/1     Running   0          4s <1>
my-lws-0-1   1/1     Running   0          3s
my-lws-0-2   1/1     Running   0          3s
my-lws-1     1/1     Running   0          7s <2>
my-lws-1-1   1/1     Running   0          6s
my-lws-1-2   1/1     Running   0          6s
----
<1> The leader pod for the first group.
<2> The leader pod for the second group.

. Review the stateful sets by running the following command:
+
[source,terminal]
----
$ oc get statefulsets
----
+
.Example output
[source,terminal]
----
NAME       READY   AGE
my-lws     4/4     111s <1>
my-lws-0   2/2     57s <2>
my-lws-1   2/2     60s <3>
----
<1> The leader stateful set for all leader-worker groups.
<2> The worker stateful set for the first group.
<3> The worker stateful set for the second group.
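Worker pods reach their leader through the headless service created for each group. Per the upstream LWS documentation, an `LWS_LEADER_ADDRESS` environment variable pointing at the leader's address is injected into the pods. Assuming the example deployment above, the following sketch shows how you could inspect it from a worker pod (illustrative only; verify the variable name against the upstream docs for your LWS version):

[source,terminal]
----
$ oc exec -n my-namespace my-lws-0-1 -- printenv LWS_LEADER_ADDRESS
----

With the `Shared` subdomain policy, the address resolves through the shared headless service for the leader worker set, in the form `<leader-pod>.<lws-name>.<namespace>`.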
modules/lws-install-operator.adoc (Normal file, 40 lines)
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/lws-managing.adoc

:_mod-docs-content-type: PROCEDURE
[id="lws-install-operator_{context}"]
= Installing the {lws-operator}

You can use the web console to install the {lws-operator}.

.Prerequisites

* You have access to the cluster with `cluster-admin` privileges.
* You have access to the {product-title} web console.
* You have installed the {cert-manager-operator}.

.Procedure

. Log in to the {product-title} web console.

. Verify that the {cert-manager-operator} is installed.

. Install the {lws-operator}:
.. Navigate to *Operators* -> *OperatorHub*.
.. Enter *{lws-operator}* into the filter box.
.. Select the *{lws-operator}* and click *Install*.
.. On the *Install Operator* page:
... The *Update channel* is set to *stable-v1.0*, which installs the latest stable release of {lws-operator} 1.0.
... Under *Installation mode*, select *A specific namespace on the cluster*.
... Under *Installed Namespace*, select *Operator recommended Namespace: openshift-lws-operator*.
... Under *Update approval*, select one of the following update strategies:
+
* The *Automatic* strategy allows {olm-first} to automatically update the Operator when a new version is available.
* The *Manual* strategy requires a user with appropriate credentials to approve the Operator update.
... Click *Install*.

. Create the custom resource (CR) for the {lws-operator}:
.. Navigate to *Installed Operators* -> *{lws-operator}*.
.. Under *Provided APIs*, click *Create instance* in the *LeaderWorkerSetOperator* pane.
.. Click *Create*.
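The console steps above can also be expressed as Operator Lifecycle Manager (OLM) manifests applied from the CLI. The following is a sketch under stated assumptions: the package `name` and the `source` catalog are assumptions to verify against your cluster, for example with `oc get packagemanifests`. The channel and namespace values come from the procedure above.

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-lws-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lws-operator
  namespace: openshift-lws-operator
spec:
  targetNamespaces:
  - openshift-lws-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: leader-worker-set-operator # package name: assumption, verify in your catalog
  namespace: openshift-lws-operator
spec:
  channel: stable-v1.0
  name: leader-worker-set-operator # assumption, verify with `oc get packagemanifests`
  source: redhat-operators # assumption, verify the catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
----

Apply the manifests with `oc apply -f <file>`, then create the `LeaderWorkerSetOperator` custom resource as in the final step above.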
modules/lws-remove-resources.adoc (Normal file, 31 lines)
@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/lws-uninstalling.adoc

:_mod-docs-content-type: PROCEDURE
[id="lws-remove-resources_{context}"]
= Uninstalling {lws-operator} resources

Optionally, after uninstalling the {lws-operator}, you can remove its related resources from your cluster.

.Prerequisites

* You have access to the cluster with `cluster-admin` privileges.
* You have access to the {product-title} web console.
* You have uninstalled the {lws-operator}.

.Procedure

. Log in to the {product-title} web console.

. Remove the CRDs that were created when the {lws-operator} was installed:
.. Navigate to *Administration* -> *CustomResourceDefinitions*.
.. Enter `LeaderWorkerSetOperator` in the *Name* field to filter the CRDs.
.. Click the Options menu {kebab} next to the *LeaderWorkerSetOperator* CRD and select *Delete CustomResourceDefinition*.
.. In the confirmation dialog, click *Delete*.

. Delete the `openshift-lws-operator` namespace:
.. Navigate to *Administration* -> *Namespaces*.
.. Enter `openshift-lws-operator` into the filter box.
.. Click the Options menu {kebab} next to the *openshift-lws-operator* entry and select *Delete Namespace*.
.. In the confirmation dialog, enter `openshift-lws-operator` and click *Delete*.
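The console cleanup above can also be performed from the CLI. The following is a sketch: the exact CRD name is not stated in this procedure, so look it up first rather than guessing it.

[source,terminal]
----
$ oc get crds | grep -i leaderworkerset
$ oc delete crd <crd-name> <1>
$ oc delete namespace openshift-lws-operator
----
<1> Replace `<crd-name>` with the `LeaderWorkerSetOperator` CRD name returned by the previous command.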
modules/lws-rn-1.0.0.adoc (Normal file, 33 lines)
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/lws-release-notes.adoc

// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly.

:_mod-docs-content-type: REFERENCE
[id="lws-rn-1.0.0_{context}"]
= Release notes for {lws-operator} 1.0.0

Issued: 18 September 2025

The following advisories are available for {lws-operator} 1.0.0:

* link:https://access.redhat.com/errata/RHBA-2025:13974[RHBA-2025:13974]
* link:https://access.redhat.com/errata/RHBA-2025:13574[RHBA-2025:13574]

[id="lws-rn-1.0.0-new-features_{context}"]
== New features and enhancements

* This is the initial release of the {lws-operator}.

// No bugs to list since this is the initial release
// [id="lws-rn-1.0.0-bug-fixes_{context}"]
// == Bug fixes
//
// * TODO

// No known issues to list
// [id="lws-rn-1.0.0-known-issues_{context}"]
// == Known issues
//
// * TODO
modules/lws-uninstall.adoc (Normal file, 33 lines)
@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * ai_workloads/leader_worker_set/lws-uninstalling.adoc

:_mod-docs-content-type: PROCEDURE
[id="lws-uninstall_{context}"]
= Uninstalling the {lws-operator}

You can use the web console to uninstall the {lws-operator}.

.Prerequisites

* You have access to the cluster with `cluster-admin` privileges.
* You have access to the {product-title} web console.
* You have installed the {lws-operator}.

.Procedure

. Log in to the {product-title} web console.

. Navigate to *Operators* -> *Installed Operators*.

. Select `openshift-lws-operator` from the *Project* dropdown list.

. Delete the `LeaderWorkerSetOperator` instance:
.. Click *{lws-operator}* and select the *LeaderWorkerSetOperator* tab.
.. Click the Options menu {kebab} next to the *cluster* entry and select *Delete LeaderWorkerSetOperator*.
.. In the confirmation dialog, click *Delete*.

. Uninstall the {lws-operator}:
.. Navigate to *Operators* -> *Installed Operators*.
.. Click the Options menu {kebab} next to the *{lws-operator}* entry and click *Uninstall Operator*.
.. In the confirmation dialog, click *Uninstall*.