// Module included in the following assemblies:
//
// * installing/installing-special-config.adoc
:_mod-docs-content-type: PROCEDURE
[id="installation-special-config-kmod_{context}"]
= Adding kernel modules to nodes

For most common hardware, the Linux kernel includes the device driver
modules needed to use that hardware when the computer starts up. For
some hardware, however, modules are not available in Linux. Therefore, you must
find a way to provide those modules to each host computer. This
procedure describes how to do that for nodes in an {product-title} cluster.

When a kernel module is first deployed by following these instructions,
the module is made available for the current kernel. If a new kernel
is installed, the kmods-via-containers software rebuilds and deploys
the module so that a compatible version of that module is available with the
new kernel.

This feature keeps the module up to date on each node by:

* Adding a systemd service to each node that starts at boot time to detect
whether a new kernel has been installed.
* Rebuilding the module and installing it into the kernel if a new kernel
is detected.

For information about the software needed for this procedure, see the
link:https://github.com/kmods-via-containers/kmods-via-containers[kmods-via-containers] GitHub site.

A few important issues to keep in mind:

* This procedure is Technology Preview.
* Software tools and examples are not yet available in official RPM form
and can only be obtained for now from unofficial `github.com` sites noted in the procedure.
* Third-party kernel modules you might add through these procedures are not supported by Red Hat.
* In this procedure, the software needed to build your kernel modules is
deployed in a RHEL 8 container. Keep in mind that modules are rebuilt
automatically on each node when that node gets a new kernel. For that
reason, each node needs access to a `yum` repository that contains the
kernel and related packages needed to rebuild the module. That content
is best provided with a valid RHEL subscription.
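
For example, on a subscribed RHEL 8 system you can spot-check that kernel packages are available from the configured repositories. This is an optional check and is not part of the procedure itself:

[source,terminal]
----
# yum list kernel kernel-core kernel-devel
----
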
[id="building-testing-kernel-module-container_{context}"]
== Building and testing the kernel module container

Before deploying kernel modules to your {product-title} cluster,
you can test the process on a separate RHEL system.
Gather the kernel module's source code, the KVC framework, and the
kmods-via-containers software. Then build and test the module. To do
that on a RHEL 8 system, do the following:

.Procedure
. Register a RHEL 8 system:
+
[source,terminal]
----
# subscription-manager register
----
. Attach a subscription to the RHEL 8 system:
+
[source,terminal]
----
# subscription-manager attach --auto
----
. Install software that is required to build the software and container:
+
[source,terminal]
----
# yum install podman make git -y
----
. Clone the `kmods-via-containers` repository:
.. Create a folder for the repository:
+
[source,terminal]
----
$ mkdir kmods; cd kmods
----
.. Clone the repository:
+
[source,terminal]
----
$ git clone https://github.com/kmods-via-containers/kmods-via-containers
----
. Install a KVC framework instance on your RHEL 8 build host to test the module.
This adds a `kmods-via-containers@.service` systemd service and loads it:
.. Change to the `kmods-via-containers` directory:
+
[source,terminal]
----
$ cd kmods-via-containers/
----
.. Install the KVC framework instance:
+
[source,terminal]
----
$ sudo make install
----
.. Reload the systemd manager configuration:
+
[source,terminal]
----
$ sudo systemctl daemon-reload
----
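.. Optional: Confirm that the framework installed its systemd template unit. This extra verification step is not part of the upstream procedure:
+
[source,terminal]
----
$ systemctl cat kmods-via-containers@.service
----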
. Get the kernel module source code. The source code might be used to
build a third-party module that you do not
have control over, but that is supplied by others. You need content
similar to the `kvc-simple-kmod` example, which can
be cloned to your system as follows:
+
[source,terminal]
----
$ cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod
----
. Edit the configuration file, `simple-kmod.conf` in this example, and
change the name of the Dockerfile to `Dockerfile.rhel`:
.. Change to the `kvc-simple-kmod` directory:
+
[source,terminal]
----
$ cd kvc-simple-kmod
----
.. Edit `simple-kmod.conf` to set `KMOD_CONTAINER_BUILD_FILE` to `Dockerfile.rhel`, then display the file to verify the change:
+
[source,terminal]
----
$ cat simple-kmod.conf
----
+
.Example `simple-kmod.conf` file
[source,terminal]
----
KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git"
KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel
KMOD_SOFTWARE_VERSION=dd1a7d4
KMOD_NAMES="simple-kmod simple-procfs-kmod"
----
. Create an instance of `kmods-via-containers@.service` for your kernel module,
`simple-kmod` in this example:
+
[source,terminal]
----
$ sudo make install
----
. Build the kernel module container for the currently running kernel:
+
[source,terminal]
----
$ sudo kmods-via-containers build simple-kmod $(uname -r)
----
. Enable and start the systemd service:
+
[source,terminal]
----
$ sudo systemctl enable kmods-via-containers@simple-kmod.service --now
----
.. Review the service status:
+
[source,terminal]
----
$ sudo systemctl status kmods-via-containers@simple-kmod.service
----
+
.Example output
[source,terminal]
----
● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod
Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service;
enabled; vendor preset: disabled)
Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago...
----
. To confirm that the kernel modules are loaded, use the `lsmod` command to list the modules:
+
[source,terminal]
----
$ lsmod | grep simple_
----
+
.Example output
[source,terminal]
----
simple_procfs_kmod 16384 0
simple_kmod 16384 0
----
. Optional: Use other methods to check that the `simple-kmod` example is working:
** Look for a "Hello world" message in the kernel ring buffer with `dmesg`:
+
[source,terminal]
----
$ dmesg | grep 'Hello world'
----
+
.Example output
[source,terminal]
----
[ 6420.761332] Hello world from simple_kmod.
----
** Check the value of `simple-procfs-kmod` in `/proc`:
+
[source,terminal]
----
$ sudo cat /proc/simple-procfs-kmod
----
+
.Example output
[source,terminal]
----
simple-procfs-kmod number = 0
----
** Run the `spkut` command to get more information from the module:
+
[source,terminal]
----
$ sudo spkut 44
----
+
.Example output
[source,terminal]
----
KVC: wrapper simple-kmod for 4.21.0-147.3.1.el8_1.x86_64
Running userspace wrapper using the kernel module container...
+ podman run -i --rm --privileged
simple-kmod-dd1a7d4:4.21.0-147.3.1.el8_1.x86_64 spkut 44
simple-procfs-kmod number = 0
simple-procfs-kmod number = 44
----

Going forward, when the system boots, this service checks whether a new
kernel is running. If there is a new kernel, the service builds a new
version of the kernel module and then loads it. If the module is already
built, it just loads it.
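
If you want to see the module container that the service built for the current kernel, you can list the images with `podman`. This is an optional check; the image name follows the `<module>-<version>:<kernel>` pattern shown in the `spkut` output:

[source,terminal]
----
$ sudo podman images | grep simple-kmod
----
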
[id="provisioning-kernel-module-to-ocp_{context}"]
== Provisioning a kernel module to {product-title}

Depending on whether or not you must have the kernel module in place
when the {product-title} cluster first boots, you can set up the
kernel modules to be deployed in one of two ways:

* **Provision kernel modules at cluster install time (day-1)**:
You can create the content as a `MachineConfig` object and provide it to `openshift-install`
by including it with a set of manifest files.
* **Provision kernel modules via Machine Config Operator (day-2)**: If you can wait until the
cluster is up and running to add your kernel module, you can deploy the kernel
module software via the Machine Config Operator (MCO).

In either case, each node needs to be able to get the kernel packages and related
software packages at the time that a new kernel is detected. There are a few ways
you can set up each node to be able to obtain that content:

* Provide RHEL entitlements to each node.
* Get RHEL entitlements from an existing RHEL host, from the `/etc/pki/entitlement` directory,
and copy them to the same location as the other files you provide
when you build your Ignition config.
* Inside the Dockerfile, add pointers to a `yum` repository containing the kernel and other packages.
This must include new kernel packages as they are needed to match newly installed kernels.
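
For example, to stage entitlement certificates from a subscribed RHEL host next to the other files that you use to build your Ignition config, you might copy them as follows. This is a sketch; the destination is whichever directory you later pass to Butane with `--files-dir`:

[source,terminal]
----
$ cp /etc/pki/entitlement/*.pem <files_directory>/
----
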
[id="provision-kernel-modules-via-machineconfig_{context}"]
=== Provision kernel modules via a MachineConfig object

By packaging kernel module software with a `MachineConfig` object, you can
deliver that software to worker or control plane nodes at installation time
or via the Machine Config Operator.

.Procedure
. Register a RHEL 8 system:
+
[source,terminal]
----
# subscription-manager register
----
. Attach a subscription to the RHEL 8 system:
+
[source,terminal]
----
# subscription-manager attach --auto
----
. Install the software that is required to build the software and container:
+
[source,terminal]
----
# yum install podman make git -y
----
. Create a directory to host the kernel module and tooling:
+
[source,terminal]
----
$ mkdir kmods; cd kmods
----
. Get the `kmods-via-containers` software:
.. Clone the `kmods-via-containers` repository:
+
[source,terminal]
----
$ git clone https://github.com/kmods-via-containers/kmods-via-containers
----
.. Clone the `kvc-simple-kmod` repository:
+
[source,terminal]
----
$ git clone https://github.com/kmods-via-containers/kvc-simple-kmod
----
. Get your module software. In this example, `kvc-simple-kmod` is used.
. Create a fakeroot directory and populate it with files that you want to
deliver via Ignition, using the repositories cloned earlier:
.. Create the directory:
+
[source,terminal]
----
$ FAKEROOT=$(mktemp -d)
----
.. Change to the `kmods-via-containers` directory:
+
[source,terminal]
----
$ cd kmods-via-containers
----
.. Install the KVC framework instance:
+
[source,terminal]
----
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
----
.. Change to the `kvc-simple-kmod` directory:
+
[source,terminal]
----
$ cd ../kvc-simple-kmod
----
.. Create the instance for the `simple-kmod` module in the fakeroot directory:
+
[source,terminal]
----
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
----
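.. Optional: Verify that the fakeroot directory was populated by the previous `make install` commands. This extra verification step is not part of the procedure:
+
[source,terminal]
----
$ find ${FAKEROOT} -type f
----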
. Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command:
+
[source,terminal]
----
$ cd .. && rm -rf kmod-tree && cp -Lpr ${FAKEROOT} kmod-tree
----
. Create a Butane config file, `99-simple-kmod.bu`, that embeds the kernel module tree and enables the systemd service.
+
[NOTE]
====
See "Creating machine configs with Butane" for information about Butane.
====
+
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
metadata:
  name: 99-simple-kmod
  labels:
    machineconfiguration.openshift.io/role: worker <1>
storage:
  trees:
    - local: kmod-tree
systemd:
  units:
    - name: kmods-via-containers@simple-kmod.service
      enabled: true
----
+
<1> To deploy on control plane nodes, change `worker` to `master`. To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type.
. Use Butane to generate a machine config YAML file, `99-simple-kmod.yaml`, containing the files and configuration to be delivered:
+
[source,terminal]
----
$ butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml
----
. If the cluster is not up yet, generate manifest files and add this file to the
`openshift` directory. If the cluster is already running, apply the file as follows:
+
[source,terminal]
----
$ oc create -f 99-simple-kmod.yaml
----
+
Your nodes will start the `kmods-via-containers@simple-kmod.service`
service and the kernel modules will be loaded.
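+
For the day-1 case mentioned earlier in this step, generating the manifests and adding the file might look like the following sketch, where `<installation_directory>` is a placeholder for your installation assets directory:
+
[source,terminal]
----
$ openshift-install create manifests --dir <installation_directory>
$ cp 99-simple-kmod.yaml <installation_directory>/openshift/
----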
. To confirm that the kernel modules are loaded, you can log in to a node
(using `oc debug node/<openshift-node>`, then `chroot /host`).
To list the modules, use the `lsmod` command:
+
[source,terminal]
----
$ lsmod | grep simple_
----
+
.Example output
[source,terminal]
----
simple_procfs_kmod 16384 0
simple_kmod 16384 0
----
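+
You can also repeat the earlier checks from within the debug shell, for example by looking for the "Hello world" message in the kernel ring buffer. This is an optional check:
+
[source,terminal]
----
$ dmesg | grep 'Hello world'
----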