// Module included in the following assemblies:
//
// * nodes/nodes-nodes-working.adoc
// * post_installation_configuration/machine-configuration-tasks.adoc
[id="nodes-nodes-cgroups-2_{context}"]
= Enabling Linux control groups version 2 (cgroups v2)

You can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control groups version 2] (cgroups v2) on specific nodes in your cluster by using a machine config. The {product-title} process for enabling cgroups v2 disables all cgroups version 1 controllers and hierarchies.

[IMPORTANT]
====
The {product-title} cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time.
====

.Prerequisites
* You have a running {product-title} cluster that uses version 4.10 or later.
* You are logged in to the cluster as a user with administrative privileges.
* You know the `node-role.kubernetes.io` value for the nodes you want to configure. You can find it by describing a node (a label-based shortcut also appears in the tip after this list):
+
[source,terminal]
----
$ oc describe node <node_name>
----
+
.Example output
[source,terminal]
----
Name:    ci-ln-v05w5m2-72292-5s9ht-worker-a-r6fpg
Roles:   worker
Labels:  beta.kubernetes.io/arch=amd64
         beta.kubernetes.io/instance-type=n1-standard-4
         beta.kubernetes.io/os=linux
         failure-domain.beta.kubernetes.io/region=us-central1
         failure-domain.beta.kubernetes.io/zone=us-central1-a
         kubernetes.io/arch=amd64
         kubernetes.io/hostname=ci-ln-v05w5m2-72292-5s9ht-worker-a-r6fpg
         kubernetes.io/os=linux
         node-role.kubernetes.io/worker= <1>
#...
----
<1> This value is the node role you need.
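[TIP]
====
If you prefer a single command over reading through the `Labels` list, you can show only the nodes that carry a given role label. This example assumes the `worker` role; substitute the role you plan to configure:

[source,terminal]
----
$ oc get nodes -l node-role.kubernetes.io/worker=
----
====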
.Procedure
. Enable cgroups v2 on nodes:
* Create a `MachineConfig` object YAML file, such as `worker-enable-cgroups-v2.yaml`:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: "worker" <1>
  name: worker-enable-cgroups-v2
spec:
  kernelArguments:
  - systemd.unified_cgroup_hierarchy=1 <2>
  - cgroup_no_v1="all" <3>
----
<1> Specifies the `node-role.kubernetes.io` value for the nodes you want to configure, such as `master`, `worker`, or `infra`.
<2> Enables cgroups v2 in systemd.
<3> Disables cgroups v1.
* Create the new machine config:
+
[source,terminal]
----
$ oc create -f worker-enable-cgroups-v2.yaml
----
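+
The Machine Config Operator applies the new configuration to each matching node in turn, cordoning, draining, and rebooting it. If you need to back the change out later, deleting the machine config triggers another rollout that removes the kernel arguments; a minimal sketch, assuming the file name used above:
+
[source,terminal]
----
$ oc delete -f worker-enable-cgroups-v2.yaml
----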
. Check the machine configs to see that the new one was added:
+
[source,terminal]
----
$ oc get MachineConfig
----
+
.Example output
[source,terminal]
----
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
00-worker                                          52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-master-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-container-runtime                        52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
01-worker-kubelet                                  52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-master-ssh                                                                                 3.2.0             40m
99-worker-generated-registries                     52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
99-worker-ssh                                                                                 3.2.0             40m
rendered-master-23e785de7587df95a4b517e0647e5ab7   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
rendered-worker-5d596d9293ca3ea80c896a1191735bb1   52dd3ba6a9a527fc3ab42afac8d12b693534c8c9   3.2.0             33m
worker-enable-cgroups-v2                                                                      3.2.0             10s
----
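+
You can also follow the rollout at the pool level rather than per machine config. While the new configuration is being applied, the pool reports `UPDATING` as `True`; this example assumes you targeted the `worker` role:
+
[source,terminal]
----
$ oc get machineconfigpool worker
----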
. Check the nodes to see that scheduling on each affected node is disabled. This indicates that the change is being applied:
+
[source,terminal]
----
$ oc get nodes
----
+
.Example output
[source,terminal]
----
NAME                                       STATUS                     ROLES    AGE   VERSION
ci-ln-fm1qnwt-72292-99kt6-master-0         Ready                      master   58m   v1.23.0
ci-ln-fm1qnwt-72292-99kt6-master-1         Ready                      master   58m   v1.23.0
ci-ln-fm1qnwt-72292-99kt6-master-2         Ready                      master   58m   v1.23.0
ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4   Ready,SchedulingDisabled   worker   48m   v1.23.0
ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd   Ready                      worker   48m   v1.23.0
ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv   Ready                      worker   48m   v1.23.0
----
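+
Because the change adds kernel arguments, each affected node reboots as part of the rollout. To follow the nodes as they cycle back to the `Ready` state, one option is to watch the listing:
+
[source,terminal]
----
$ oc get nodes -w
----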
. After a node returns to the `Ready` state, you can verify that cgroups v2 is enabled by checking that the `/sys/fs/cgroup/cgroup.controllers` file is present on the node. This file is created by cgroups v2.
+
* Start a debug session for that node:
+
[source,terminal]
----
$ oc debug node/<node_name>
----
+
* Locate the `/sys/fs/cgroup/cgroup.controllers` file. If this file is present, cgroups v2 is enabled on that node.
+
.Example output
[source,terminal]
----
cgroup.controllers      cgroup.stat             cpuset.cpus.effective   io.stat           pids
cgroup.max.depth        cgroup.subtree_control  cpuset.mems.effective   kubepods.slice    system.slice
cgroup.max.descendants  cgroup.threads          init.scope              memory.pressure   user.slice
cgroup.procs            cpu.pressure            io.pressure             memory.stat
----
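+
As an alternative check, you can ask `stat` for the filesystem type mounted at `/sys/fs/cgroup`. On a node running cgroups v2 the reported type is `cgroup2fs`; a minimal sketch, assuming the debug session from the previous step:
+
[source,terminal]
----
# chroot /host
# stat -f -c %T /sys/fs/cgroup
----
+
.Example output
[source,terminal]
----
cgroup2fs
----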
.Additional resources
* For information about enabling cgroups v2 during installation, see the _Optional parameters_ table in the _Installation configuration parameters_ section of your installation process.