JIRA:OSDOCS-2396 Add hardware_enablement category, add special resource operator docs
commit 498221ce27 (parent 2a6995cf02), committed by openshift-cherrypick-robot
@@ -2006,12 +2006,6 @@ Topics:
- Name: Scaling the Cluster Monitoring Operator
  File: scaling-cluster-monitoring-operator
  Distros: openshift-origin,openshift-enterprise
- Name: The Node Feature Discovery Operator
  File: psap-node-feature-discovery-operator
  Distros: openshift-origin,openshift-enterprise
- Name: The Driver Toolkit
  File: psap-driver-toolkit
  Distros: openshift-origin,openshift-enterprise
- Name: Planning your environment according to object maximums
  File: planning-your-environment-according-to-object-maximums
  Distros: openshift-origin,openshift-enterprise
@@ -2042,6 +2036,19 @@ Topics:
  File: ztp-deploying-disconnected
  Distros: openshift-origin,openshift-enterprise
---
Name: Hardware enablement
Dir: hardware_enablement
Distros: openshift-origin,openshift-enterprise
Topics:
- Name: About hardware enablement
  File: about-hardware-enablement
- Name: Driver Toolkit
  File: psap-driver-toolkit
- Name: Special Resource Operator
  File: psap-special-resource-operator
- Name: Node Feature Discovery Operator
  File: psap-node-feature-discovery-operator
---
Name: Backup and restore
Dir: backup_and_restore
Distros: openshift-origin,openshift-enterprise
hardware_enablement/about-hardware-enablement.adoc (new file, 12 lines)
@@ -0,0 +1,12 @@
[id="about-hardware-enablement"]
= About hardware enablement
include::modules/common-attributes.adoc[]
:context: about-hardware-enablement

toc::[]

Many applications require specialized hardware or software that depends on kernel modules or drivers. You can use driver containers to load out-of-tree kernel modules on {op-system-first} nodes. To deploy out-of-tree drivers during cluster installation, use the `kmods-via-containers` framework. To load drivers or kernel modules on an existing {product-title} cluster, {product-title} offers several tools:

* The Driver Toolkit is a container image that is part of every {product-title} release. It contains the kernel packages and other common dependencies that are needed to build a driver or kernel module. The Driver Toolkit can be used as a base image for driver container image builds on {product-title}.
* The Special Resource Operator (SRO) orchestrates the building and management of driver containers to load kernel modules and drivers on an existing OpenShift or Kubernetes cluster.
* The Node Feature Discovery (NFD) Operator adds node labels for CPU capabilities, kernel version, PCIe device vendor IDs, and more.
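For example, after the NFD Operator deploys its operand, you can inspect the labels that NFD added to a node. The following command is a minimal sketch; `<node_name>` is a placeholder and the label values shown are illustrative:

[source,terminal]
----
$ oc describe node <node_name> | grep feature.node.kubernetes.io
----

.Example output
[source,terminal]
----
feature.node.kubernetes.io/cpu-cpuid.AVX2=true
feature.node.kubernetes.io/kernel-version.full=4.18.0-305.19.1.el8_4.x86_64
feature.node.kubernetes.io/pci-10de.present=true
----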
hardware_enablement/images (new symbolic link)
@@ -0,0 +1 @@
../images
hardware_enablement/modules (new symbolic link)
@@ -0,0 +1 @@
../modules
@@ -1,5 +1,5 @@
[id="driver-toolkit"]
= The Driver Toolkit
= Driver Toolkit
include::modules/common-attributes.adoc[]
:context: driver-toolkit
@@ -15,3 +15,8 @@ include::modules/psap-driver-toolkit.adoc[leveloffset=+1]
include::modules/psap-driver-toolkit-pulling.adoc[leveloffset=+1]

include::modules/psap-driver-toolkit-using.adoc[leveloffset=+1]

[id="additional-resources_driver-toolkkit-id"]
== Additional resources

* For more information about configuring registry storage for your cluster, see xref:../registry/configuring-registry-operator.adoc#registry-removed_configuring-registry-operator[Image Registry Operator in OpenShift Container Platform].
@@ -1,5 +1,5 @@
[id="node-feature-discovery-operator"]
= The Node Feature Discovery Operator
= Node Feature Discovery Operator
include::modules/common-attributes.adoc[]
:context: node-feature-discovery-operator
hardware_enablement/psap-special-resource-operator.adoc (new file, 36 lines)
@@ -0,0 +1,36 @@
[id="special-resource-operator"]
= Special Resource Operator
include::modules/common-attributes.adoc[]
:context: special-resource-operator

toc::[]

Learn about the Special Resource Operator (SRO) and how you can use it to build and manage driver containers for loading kernel modules and device drivers on nodes in an {product-title} cluster.


:FeatureName: The Special Resource Operator
include::modules/technology-preview.adoc[leveloffset=+0]

include::modules/psap-special-resource-operator.adoc[leveloffset=+1]

[id="installing-special-resource-operator"]
== Installing the Special Resource Operator

As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI or the web console.

include::modules/psap-special-resource-operator-installing-using-cli.adoc[leveloffset=+2]

include::modules/psap-special-resource-operator-installing-using-web-console.adoc[leveloffset=+2]

include::modules/psap-special-resource-operator-using.adoc[leveloffset=+1]

include::modules/psap-special-resource-operator-using-manifests.adoc[leveloffset=+2]

include::modules/psap-special-resource-operator-using-configmaps.adoc[leveloffset=+2]

[id="additional-resources_special-resource-operator"]
== Additional resources

* For information about restoring the Image Registry Operator state before using the Special Resource Operator, see
xref:../registry/configuring-registry-operator.adoc#registry-removed_configuring-registry-operator[Image registry removed during installation].
* For details about installing the NFD Operator, see xref:psap-node-feature-discovery-operator.adoc#installing-the-node-feature-discovery-operator_node-feature-discovery-operator[Node Feature Discovery (NFD) Operator].
@@ -1,8 +1,8 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/psap-driver-toolkit.adoc
// * hardware_enablement/psap-driver-toolkit.adoc

[id="pulling-the-driver-toolkit"]
[id="pulling-the-driver-toolkit_{context}"]
= Pulling the Driver Toolkit container image

The `driver-toolkit` image is available from the link:https://registry.redhat.io/[Container images section of the Red Hat Ecosystem Catalog] and in the {product-title} release payload. The image that corresponds to the most recent minor release of {product-title} is tagged with the version number in the catalog. You can find the image URL for a specific release by using the `oc adm` CLI command.
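For example, the following command is one way to look up the Driver Toolkit image URL for a given release. This is a sketch only; the release image pullspec shown is illustrative:

[source,terminal]
----
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64 --image-for=driver-toolkit
----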
@@ -18,8 +18,8 @@ The driver-toolkit image for the latest minor release will be tagged with the mi

.Prerequisites

* Obtain the image pull secret needed to perform an installation of {product-title}, from the link:https://console.redhat.com/openshift/install/pull-secret[Pull Secret] page on the {cloud-redhat-com} site.
* Install the OpenShift CLI (`oc`).
* You obtained the image pull secret needed to perform an installation of {product-title}, from the link:https://cloud.redhat.com/openshift/install/pull-secret[Pull Secret] page on the {cloud-redhat-com} site.
* You installed the OpenShift CLI (`oc`).

.Procedure
@@ -1,20 +1,26 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/psap-driver-toolkit.adoc
// * hardware_enablement/psap-driver-toolkit.adoc

[id="using-the-driver-toolkit"]
[id="using-the-driver-toolkit_{context}"]
= Using the Driver Toolkit

As an example, the Driver Toolkit can be used as the base image for building a very simple kernel module called simple-kmod.

[id="create-simple-kmod-image"]
[NOTE]
====
The Driver Toolkit contains the dependencies that are needed to sign a kernel module: `openssl`, `mokutil`, and `keyutils`. However, in this example, the simple-kmod kernel module is not signed and therefore cannot be loaded on systems with `Secure Boot` enabled.
====
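If you do need a module that loads with Secure Boot enabled, the module must be signed with a key that the node trusts. As a rough sketch only, a signing step run inside a Driver Toolkit based build might look like the following; the key file names and paths are illustrative assumptions, not part of this procedure:

[source,terminal]
----
# Sign the built module with a Machine Owner Key (MOK) pair that you generated
# and enrolled separately (illustrative paths).
$ /usr/src/kernels/${KVER}/scripts/sign-file sha256 /signing-keys/MOK.priv /signing-keys/MOK.der simple-kmod.ko
----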
[id="create-simple-kmod-image_{context}"]
== Build and run the simple-kmod driver container on a cluster

.Prerequisites

* An {product-title} cluster
* Install the OpenShift CLI (`oc`).
* Log in as a user with `cluster-admin` privileges.
* You have a running {product-title} cluster.
* You set the Image Registry Operator state to `Managed` for your cluster.
* You installed the OpenShift CLI (`oc`).
* You are logged into the OpenShift CLI as a user with `cluster-admin` privileges.

.Procedure
@@ -1,8 +1,8 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/psap-driver-toolkit.adoc
// * hardware_enablement/psap-driver-toolkit.adoc

[id="about-driver-toolkit"]
[id="about-driver-toolkit_{context}"]
= About the Driver Toolkit

[discrete]
@@ -1,11 +1,11 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/psap-node-feature-discovery-operator.adoc
// * hardware_enablement/psap-node-feature-discovery-operator.adoc

[id="installing-the-node-feature-discovery-operator_{context}"]
= Installing the Node Feature Discovery Operator

The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator using the {product-title} CLI or the web console.
The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the {product-title} CLI or the web console.

[id="install-operator-cli_{context}"]
== Installing the NFD Operator using the CLI
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/psap-node-feature-discovery-operator.adoc
// * hardware_enablement/psap-node-feature-discovery-operator.adoc

ifeval::["{context}" == "red-hat-operators"]
:operators:
modules/psap-special-resource-operator-installing-using-cli.adoc (new file, 124 lines)
@@ -0,0 +1,124 @@
// Module included in the following assemblies:
//
// * hardware_enablement/psap-special-resource-operator.adoc

[id="installing-the-special-resource-operator-using-cli_{context}"]
= Installing the Special Resource Operator by using the CLI

As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI.

.Prerequisites

* You have a running {product-title} cluster.
* You installed the OpenShift CLI (`oc`).
* You are logged into the OpenShift CLI as a user with `cluster-admin` privileges.
* You installed the Node Feature Discovery (NFD) Operator.

.Procedure

. Create a namespace for the Special Resource Operator:

.. Create the following `Namespace` custom resource (CR) that defines the `openshift-special-resource-operator` namespace, and then save the YAML in the `sro-namespace.yaml` file:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-special-resource-operator
----

.. Create the namespace by running the following command:
+
[source,terminal]
----
$ oc create -f sro-namespace.yaml
----

. Install SRO in the namespace you created in the previous step:

.. Create the following `OperatorGroup` CR and save the YAML in the `sro-operatorgroup.yaml` file:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  generateName: openshift-special-resource-operator-
  name: openshift-special-resource-operator
  namespace: openshift-special-resource-operator
spec:
  targetNamespaces:
  - openshift-special-resource-operator
----

.. Create the operator group by running the following command:
+
[source,terminal]
----
$ oc create -f sro-operatorgroup.yaml
----

.. Run the following `oc get` command to get the `channel` value required for the next step:
+
[source,terminal]
----
$ oc get packagemanifest special-resource-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'
----
+
.Example output
[source,terminal]
----
4.9
----

.. Create the following `Subscription` CR and save the YAML in the `sro-sub.yaml` file:
+
.Example subscription CR
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: special-resource-operator
  namespace: openshift-special-resource-operator
spec:
  channel: "4.9" <1>
  installPlanApproval: Automatic
  name: special-resource-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----
<1> Replace the channel value with the output from the previous command.

.. Create the subscription object by running the following command:
+
[source,terminal]
----
$ oc create -f sro-sub.yaml
----

.. Switch to the `openshift-special-resource-operator` project:
+
[source,terminal]
----
$ oc project openshift-special-resource-operator
----

.Verification

* To verify that the Operator deployment is successful, run:
+
[source,terminal]
----
$ oc get pods
----
+
.Example output
[source,terminal]
----
NAME                                                   READY   STATUS    RESTARTS   AGE
special-resource-controller-manager-7bfb544d45-xx62r   2/2     Running   0          2m28s
----
+
A successful deployment shows a `Running` status.
@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * hardware_enablement/psap-special-resource-operator.adoc

[id="installing-the-special-resource-operator-using-web-console_{context}"]
= Installing the Special Resource Operator by using the web console

As a cluster administrator, you can install the Special Resource Operator (SRO) by using the {product-title} web console.

.Prerequisites

* You installed the Node Feature Discovery (NFD) Operator.

.Procedure

. Log in to the {product-title} web console.
. Create the required namespace for the Special Resource Operator:
.. Navigate to *Administration* -> *Namespaces* and click *Create Namespace*.
.. Enter `openshift-special-resource-operator` in the *Name* field and click *Create*.

. Install the Special Resource Operator:
.. In the {product-title} web console, click *Operators* -> *OperatorHub*.

.. Choose *Special Resource Operator* from the list of available Operators, and then click *Install*.

.. On the *Install Operator* page, select *a specific namespace on the cluster*, select the namespace created in the previous section, and then click *Install*.

.Verification

To verify that the Special Resource Operator installed successfully:

. Navigate to the *Operators* -> *Installed Operators* page.
. Ensure that *Special Resource Operator* is listed in the *openshift-special-resource-operator* project with a *Status* of *InstallSucceeded*.
+
[NOTE]
====
During installation, an Operator might display a *Failed* status. If the installation later succeeds with an *InstallSucceeded* message, you can ignore the *Failed* message.
====
+
. If the Operator does not appear as installed, troubleshoot further:
+
.. Navigate to the *Operators* -> *Installed Operators* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors under *Status*.
.. Navigate to the *Workloads* -> *Pods* page and check the logs for pods in the `openshift-special-resource-operator` project.

+
[NOTE]
====
The Node Feature Discovery (NFD) Operator is a dependency of the Special Resource Operator (SRO). If the NFD Operator is not installed before you install SRO, the Operator Lifecycle Manager automatically installs the NFD Operator. However, the required Node Feature Discovery operand is not deployed automatically. The Node Feature Discovery Operator documentation provides details about how to deploy NFD by using the NFD Operator.
====
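As a rough sketch of that step, deploying the NFD operand typically means creating a `NodeFeatureDiscovery` custom resource after the NFD Operator is installed. The namespace shown and the empty `spec` are illustrative assumptions; see the Node Feature Discovery Operator documentation for the supported fields:

[source,yaml]
----
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec: {}  # operand settings omitted in this sketch; defaults are assumed
----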
modules/psap-special-resource-operator-using-configmaps.adoc (new file, 315 lines)
@@ -0,0 +1,315 @@
// Module included in the following assemblies:
//
// * hardware_enablement/psap-special-resource-operator.adoc

[id="deploy-simple-kmod-using-configmap-chart"]
= Building and running the simple-kmod SpecialResource by using a config map

In this example, the simple-kmod kernel module is used to show how SRO can manage a driver container that is defined in Helm chart templates stored in a config map.

.Prerequisites

* You have a running {product-title} cluster.
* You set the Image Registry Operator state to `Managed` for your cluster.
* You installed the OpenShift CLI (`oc`).
* You are logged into the OpenShift CLI as a user with `cluster-admin` privileges.
* You installed the Node Feature Discovery (NFD) Operator.
* You installed the Special Resource Operator.
* You installed the Helm CLI (`helm`).

.Procedure
. To create a simple-kmod `SpecialResource` object, define an image stream and build config to build the image, and a service account, role, role binding, and daemon set to run the container. The service account, role, and role binding are required to run the daemon set with the privileged security context so that the kernel module can be loaded.
.. Create a `templates` directory, and change into it:
+
[source,terminal]
----
$ mkdir -p chart/simple-kmod-0.0.1/templates
----
+
[source,terminal]
----
$ cd chart/simple-kmod-0.0.1/templates
----

.. Save this YAML template for the image stream and build config in the `templates` directory as `0000-buildconfig.yaml`:
+
[source,yaml]
----
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} <1>
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} <1>
spec: {}
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} <1>
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} <1>
  annotations:
    specialresource.openshift.io/wait: "true"
    specialresource.openshift.io/driver-container-vendor: simple-kmod
    specialresource.openshift.io/kernel-affine: "true"
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  runPolicy: "Serial"
  triggers:
    - type: "ConfigChange"
    - type: "ImageChange"
  source:
    git:
      ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}}
      uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}}
    type: Git
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile.SRO
      buildArgs:
        - name: "IMAGE"
          value: {{ .Values.driverToolkitImage }}
        {{- range $arg := .Values.buildArgs }}
        - name: {{ $arg.name }}
          value: {{ $arg.value }}
        {{- end }}
        - name: KVER
          value: {{ .Values.kernelFullVersion }}
  output:
    to:
      kind: ImageStreamTag
      name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} <1>
----
<1> Templates such as `{{.Values.specialresource.metadata.name}}` are filled in by SRO, based on fields in the `SpecialResource` CR and variables known to the Operator, such as `{{.Values.KernelFullVersion}}`.
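As an illustration of how that callout works, assuming the `SpecialResource` object is named `simple-kmod` and the Operator's `groupName.driverContainer` value is `driver-container` (an assumption for this sketch), the image stream metadata would render roughly as follows:

[source,yaml]
----
# Illustrative rendering only; actual values come from the SpecialResource CR
# and the Operator's built-in variables.
metadata:
  labels:
    app: simple-kmod-driver-container
  name: simple-kmod-driver-container
----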
.. Save the following YAML template for the RBAC resources and daemon set in the `templates` directory as `1000-driver-container.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
rules:
- apiGroups:
  - security.openshift.io
  resources:
  - securitycontextconstraints
  verbs:
  - use
  resourceNames:
  - privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
subjects:
- kind: ServiceAccount
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  namespace: {{.Values.specialresource.spec.namespace}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  annotations:
    specialresource.openshift.io/wait: "true"
    specialresource.openshift.io/state: "driver-container"
    specialresource.openshift.io/driver-container-vendor: simple-kmod
    specialresource.openshift.io/kernel-affine: "true"
spec:
  updateStrategy:
    type: OnDelete
  selector:
    matchLabels:
      app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
  template:
    metadata:
      # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
      # reserves resources for critical add-on pods so that they can be rescheduled after
      # a failure. This annotation works in tandem with the toleration below.
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
    spec:
      serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      containers:
      - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        imagePullPolicy: Always
        command: ["/sbin/init"]
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}"]
        securityContext:
          privileged: true
      nodeSelector:
        node-role.kubernetes.io/worker: ""
        feature.node.kubernetes.io/kernel-version.full: "{{.Values.KernelFullVersion}}"
----

.. Change into the `chart/simple-kmod-0.0.1` directory:
+
[source,terminal]
----
$ cd ..
----

.. Save the following YAML for the chart as `Chart.yaml` in the `chart/simple-kmod-0.0.1` directory:
+
[source,yaml]
----
apiVersion: v2
name: simple-kmod
description: Simple kmod will deploy a simple kmod driver-container
icon: https://avatars.githubusercontent.com/u/55542927
type: application
version: 0.0.1
appVersion: 1.0.0
----

. From the `chart` directory, create the chart using the `helm package` command:
+
[source,terminal]
----
$ helm package simple-kmod-0.0.1/
----
+
.Example output
[source,terminal]
----
Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz
----

. Create a configuration map to store the chart files:
.. Create a directory for the config map files:
+
[source,terminal]
----
$ mkdir cm
----
.. Copy the Helm chart into the `cm` directory:
+
[source,terminal]
----
$ cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz
----
.. Create an index file specifying the Helm repo that contains the Helm chart:
+
[source,terminal]
----
$ helm repo index cm --url=cm://simple-kmod/simple-kmod-chart
----
.. Create a namespace for the objects defined in the Helm chart:
+
[source,terminal]
----
$ oc create namespace simple-kmod
----
.. Create the config map object:
+
[source,terminal]
----
$ oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod
----

. Use the following `SpecialResource` manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. Save this YAML as `simple-kmod-configmap.yaml`:
+
[source,yaml]
----
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResource
metadata:
  name: simple-kmod
spec:
  #debug: true <1>
  namespace: simple-kmod
  chart:
    name: simple-kmod
    version: 0.0.1
    repository:
      name: example
      url: cm://simple-kmod/simple-kmod-chart <2>
  set:
    kind: Values
    apiVersion: sro.openshift.io/v1beta1
    kmodNames: ["simple-kmod", "simple-procfs-kmod"]
    buildArgs:
    - name: "KMODVER"
      value: "SRO"
  driverContainer:
    source:
      git:
        ref: "master"
        uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"
----
<1> Optional: Uncomment the `#debug: true` line to have the YAML files in the chart printed in full in the Operator logs and to verify that the logs are created and templated properly.
<2> The `spec.chart.repository.url` field tells SRO to look for the chart in a config map.
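If you enable `debug: true`, you can read the rendered YAML in the Special Resource Operator logs. The following command is a sketch; the pod name is a placeholder and the `manager` container name is an assumption:

[source,terminal]
----
$ oc logs -n openshift-special-resource-operator <special_resource_controller_manager_pod> -c manager
----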
. From a command line, create the `SpecialResource` file:
+
[source,terminal]
----
$ oc create -f simple-kmod-configmap.yaml
----
+
The `simple-kmod` resources are deployed in the `simple-kmod` namespace as specified in the object manifest. After a short time, the build pod for the `simple-kmod` driver container starts running. The build completes after a few minutes, and then the driver container pods start running.

. Use the `oc get pods` command to display the status of the build pods:
+
[source,terminal]
----
$ oc get pods -n simple-kmod
----
+
.Example output
[source,terminal]
----
NAME                                                  READY   STATUS      RESTARTS   AGE
simple-kmod-driver-build-12813789169ac0ee-1-build    0/1     Completed   0          7m12s
simple-kmod-driver-container-12813789169ac0ee-mjsnh   1/1     Running     0          8m2s
simple-kmod-driver-container-12813789169ac0ee-qtkff   1/1     Running     0          8m2s
----

. Use the `oc logs` command, along with the build pod name obtained from the `oc get pods` command above, to display the logs of the simple-kmod driver container image build:
+
[source,terminal]
----
$ oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod
----

. To verify that the simple-kmod kernel modules are loaded, execute the `lsmod` command in one of the driver container pods that was returned from the `oc get pods` command above:
+
[source,terminal]
----
$ oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple
----
+
.Example output
[source,terminal]
----
simple_procfs_kmod 16384  0
simple_kmod 16384  0
----

[NOTE]
====
If you want to remove the simple-kmod kernel module from the node, delete the simple-kmod `SpecialResource` API object by using the `oc delete` command. The kernel module is unloaded when the driver container pod is deleted.
====
modules/psap-special-resource-operator-using-manifests.adoc (new file, 101 lines)
@@ -0,0 +1,101 @@
// Module included in the following assemblies:
//
// * hardware_enablement/psap-special-resource-operator.adoc

[id="deploy-simple-kmod-using-local-chart_{context}"]
= Building and running the simple-kmod SpecialResource by using the templates from the SRO image

The SRO image contains a local repository of Helm charts, including the templates for deploying the simple-kmod kernel module. In this example, the simple-kmod kernel module is used to show how SRO can manage a driver container that is defined in the internal SRO repository.

.Prerequisites

* You have a running {product-title} cluster.
* You set the Image Registry Operator state to `Managed` for your cluster.
* You installed the OpenShift CLI (`oc`).
* You are logged into the OpenShift CLI as a user with `cluster-admin` privileges.
* You installed the Node Feature Discovery (NFD) Operator.
* You installed the Special Resource Operator.

.Procedure
. To deploy the simple-kmod by using the SRO image's local Helm repository, use the following `SpecialResource` manifest. Save this YAML as `simple-kmod-local.yaml`:
+
[source,yaml]
----
apiVersion: sro.openshift.io/v1beta1
kind: SpecialResource
metadata:
  name: simple-kmod
spec:
  namespace: simple-kmod
  chart:
    name: simple-kmod
    version: 0.0.1
    repository:
      name: example
      url: file:///charts/example
  set:
    kind: Values
    apiVersion: sro.openshift.io/v1beta1
    kmodNames: ["simple-kmod", "simple-procfs-kmod"]
    buildArgs:
    - name: "KMODVER"
      value: "SRO"
  driverContainer:
    source:
      git:
        ref: "master"
        uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"
----

. Create the `SpecialResource`:
+
[source,terminal]
----
$ oc create -f simple-kmod-local.yaml
----
+
The `simple-kmod` resources are deployed in the `simple-kmod` namespace as specified in the object manifest. After a short time, the build pod for the `simple-kmod` driver container starts running. The build completes after a few minutes, and then the driver container pods start running.

. Use the `oc get pods` command to display the status of the pods:
+
[source,terminal]
----
$ oc get pods -n simple-kmod
----
+
.Example output
[source,terminal]
----
NAME                                                  READY   STATUS      RESTARTS   AGE
simple-kmod-driver-build-12813789169ac0ee-1-build    0/1     Completed   0          7m12s
simple-kmod-driver-container-12813789169ac0ee-mjsnh   1/1     Running     0          8m2s
simple-kmod-driver-container-12813789169ac0ee-qtkff   1/1     Running     0          8m2s
----

. To display the logs of the simple-kmod driver container image build, use the `oc logs` command, along with the build pod name obtained above:
+
[source,terminal]
----
$ oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod
----

. To verify that the simple-kmod kernel modules are loaded, execute the `lsmod` command in one of the driver container pods that was returned from the `oc get pods` command above:
+
[source,terminal]
----
$ oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple
----
+
.Example output
[source,terminal]
----
simple_procfs_kmod 16384  0
simple_kmod 16384  0
----

[NOTE]
====
If you want to remove the simple-kmod kernel module from the node, delete the simple-kmod `SpecialResource` API object by using the `oc delete` command. The kernel module is unloaded when the driver container pod is deleted.
====
modules/psap-special-resource-operator-using.adoc (new file, 11 lines)
@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * hardware_enablement/psap-special-resource-operator.adoc

[id="using-the-special-resource-operator_{context}"]
= Using Special Resource Operator

The Special Resource Operator (SRO) is used to manage the build and deployment of a driver container. The objects required to build and deploy the container can be defined in a Helm chart.

The examples in this section use the simple-kmod kernel module to demonstrate how to use SRO to build and run a driver container.
In the first example, the SRO image contains a local repository of Helm charts, including the templates for deploying the simple-kmod kernel module. In this case, a `SpecialResource` manifest is used to deploy the driver container. In the second example, the simple-kmod `SpecialResource` object points to a `ConfigMap` object that is created to store the Helm charts.
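For reference, the config map example later in this section builds its Helm chart from a directory layout like the following (taken from the steps in that procedure):

[source,text]
----
chart/
└── simple-kmod-0.0.1/
    ├── Chart.yaml
    └── templates/
        ├── 0000-buildconfig.yaml
        └── 1000-driver-container.yaml
----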
modules/psap-special-resource-operator.adoc (new file, 10 lines)
@@ -0,0 +1,10 @@
// Module included in the following assemblies:
//
// * hardware_enablement/psap-special-resource-operator.adoc

[id="about-special-resource-operator_{context}"]
= About Special Resource Operator

Special Resource Operator (SRO) helps you manage the deployment of kernel modules and drivers on an existing {product-title} cluster. SRO can be used for a case as simple as building and loading a single kernel module, or as complex as deploying the driver, device plug-in, and monitoring stack for a hardware accelerator.

For loading kernel modules, SRO is designed around the use of driver containers. Driver containers are increasingly being used in cloud-native environments, especially when run on pure container operating systems, to deliver hardware drivers to the host. Driver containers extend the kernel stack beyond the out-of-the-box software and hardware features of a specific kernel. Driver containers work on various container-capable Linux distributions. With driver containers, the host operating system stays clean and there is no clash between different library versions or binaries on the host.
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/psap-node-feature-discovery-operator.adoc
// * hardware_enablement/psap-node-feature-discovery-operator.adoc

[id="using-the-node-feature-discovery-operator_{context}"]
= Using the Node Feature Discovery Operator