
OSDOCS13996: Remove cGroup v1 in OCP 4.19

Michael Burke
2025-04-16 12:29:36 -04:00
parent 1821157e6a
commit d10c72effd
17 changed files with 45 additions and 224 deletions

View File

@@ -657,8 +657,6 @@ Topics:
  File: installing-customizing
- Name: Configuring your firewall
  File: configuring-firewall
- Name: Enabling Linux control group version 1 (cgroup v1)
  File: enabling-cgroup-v1
  Distros: openshift-enterprise
- Name: Validation and troubleshooting
  Dir: validation_and_troubleshooting
@@ -2832,8 +2830,6 @@ Topics:
- Name: Configuring your cluster to place pods on overcommitted nodes
  File: nodes-cluster-overcommit
  Distros: openshift-enterprise
- Name: Configuring the Linux cgroup version on your nodes
  File: nodes-cluster-cgroups-2
- Name: Enabling features using FeatureGates
  File: nodes-cluster-enabling-features
  Distros: openshift-enterprise,openshift-origin

View File

@@ -103,3 +103,5 @@ endif::openshift-dedicated,openshift-rosa[]
[id="about-admission-plug-ins"]
== About admission plugins
You can use xref:../architecture/admission-plug-ins.adoc#admission-plug-ins[admission plugins] to regulate how {product-title} functions. After a resource request is authenticated and authorized, admission plugins intercept the resource request to the master API to validate resource requests and to ensure that scaling policies are adhered to. Admission plugins are used to enforce security policies, resource limitations, configuration requirements, and other settings.
include::modules/architecture-about-cgroup-v2.adoc[leveloffset=+1]

View File

@@ -1,37 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: nodes-cluster-cgroups-1
[id="enabling-cgroup-v1"]
= Enabling Linux control group version 1 (cgroup v1)
include::_attributes/common-attributes.adoc[]
toc::[]
As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} {product-version} will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 or later will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation. Enabling cgroup v1 in {product-title} disables all cgroup v2 controllers and hierarchies in your cluster.
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
include::snippets/cgroupv2-vs-cgroupv1.adoc[]
ifndef::openshift-origin[]
You can switch between cgroup v1 and cgroup v2, as needed, by editing the `node.config` object, as sketched in the following example. For more information, see "Configuring the Linux cgroup version on your nodes" in the "Additional resources" of this section.
endif::openshift-origin[]
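For illustration, a minimal sketch of that edit (assuming you open the object with `oc edit nodes.config/cluster`, as in the related procedure):

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v2" # set to "v1" to enable cgroup v1 instead
----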
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.
ifndef::openshift-origin[]
include::modules/nodes-clusters-cgroups-2-install.adoc[leveloffset=+1]
endif::openshift-origin[]
ifdef::openshift-origin[]
include::modules/nodes-clusters-cgroups-okd-configure.adoc[leveloffset=+1]
endif::openshift-origin[]
.Additional resources
* xref:../../installing/overview/index.adoc#ocp-installation-overview[OpenShift Container Platform installation overview]
* xref:../../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-clusters-cgroups-2_nodes-cluster-cgroups-2[Configuring the Linux cgroup version on your nodes]

View File

@@ -17,11 +17,6 @@ include::modules/installation-openstack-ovs-dpdk-requirements.adoc[leveloffset=+
You must configure {rh-openstack} before you install a cluster that uses SR-IOV on it.
When installing a cluster using SR-IOV, you must deploy clusters using cgroup v1. For more information, see xref:../../installing/install_config/enabling-cgroup-v1.adoc#enabling-cgroup-v1[Enabling Linux control group version 1 (cgroup v1)].
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
include::modules/installation-osp-configuring-sr-iov.adoc[leveloffset=+2]
[id="installing-openstack-nfv-preparing-tasks-ovs-dpdk"]

View File

@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * architecture/index.adoc
:_mod-docs-content-type: PROCEDURE
[id="architecture-about-cgroup-v2_{context}"]
= About Linux cgroup version 2
{product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster.
cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, features such as link:https://www.kernel.org/doc/html/latest/accounting/psi.html[Pressure Stall Information], and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2.
[NOTE]
====
* If you run third-party monitoring and security agents that depend on the cgroup file system, update the agents to a version that supports cgroup v2.
* If you have configured cgroup v2 and run cAdvisor as a stand-alone daemon set for monitoring pods and containers, update cAdvisor to v0.43.0 or later.
* If you deploy Java applications, use versions that fully support cgroup v2, such as the following packages:
** OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later
** Node.js 20.3.0 and later
** IBM Semeru Runtimes: jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later
** IBM SDK Java Technology Edition Version (IBM Java): 8.0.7.15 and later
====
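To confirm which cgroup version a node is actually running, one quick check (a sketch; replace `<node_name>` with a node from `oc get nodes`) is to inspect the file system type mounted at `/sys/fs/cgroup`:

[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup
----

Output of `cgroup2fs` indicates cgroup v2; `tmpfs` indicates cgroup v1.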

View File

@@ -14,9 +14,6 @@ The performance profile lets you control latency tuning aspects of nodes that be
You can use a performance profile to specify whether to update the kernel to kernel-rt, to allocate huge pages, and to partition the CPUs for performing housekeeping duties or running workloads.
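As a hypothetical illustration of those options, a minimal `PerformanceProfile` might look like the following sketch (the CPU ranges, hugepage counts, and node selector are placeholder values, not recommendations):

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile
spec:
  cpu:
    reserved: "0-1" # CPUs set aside for housekeeping duties
    isolated: "2-7" # CPUs dedicated to running workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 4
  realTimeKernel:
    enabled: true # update the kernel to kernel-rt
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
----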
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
[NOTE]
====
You can manually create the `PerformanceProfile` object or use the Performance Profile Creator (PPC) to generate a performance profile. See the additional resources below for more information on the PPC.

View File

@@ -1,28 +0,0 @@
// Module included in the following assemblies:
//
// * install/install_config/enabling-cgroup-v2
:_mod-docs-content-type: PROCEDURE
[id="nodes-clusters-cgroups-2-install_{context}"]
= Enabling Linux cgroup v1 during installation
You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests.
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
.Procedure
. Create or edit the `node.config` object to specify the `v1` cgroup:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"
----
. Proceed with the installation as usual.
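For context, a sketch of where this fits in the flow (the directory placeholder is illustrative): generate the installation manifests first, then save the `node.config` object as a YAML file in the manifests directory so that it is applied during installation:

[source,terminal]
----
$ openshift-install create manifests --dir <installation_directory>
----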

View File

@@ -18,29 +18,6 @@ Examples of kernel arguments you could set include:
* **nosmt**: Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider `nosmt` in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance.
ifndef::openshift-origin[]
* **systemd.unified_cgroup_hierarchy**: Enables link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2). cgroup v2 is the next version of the kernel link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/ch01[control group] and offers multiple improvements.
+
--
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
--
endif::openshift-origin[]
ifdef::openshift-origin[]
* **systemd.unified_cgroup_hierarchy**: Configures the version of Linux control group that is installed on your nodes: link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1.html[cgroup v1] or link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[cgroup v2]. cgroup v2 is the next version of the kernel link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/ch01[control group] and offers multiple improvements. However, it can have some unwanted effects on your nodes.
+
[NOTE]
====
cgroup v2 is enabled by default. To disable cgroup v2, use the `systemd.unified_cgroup_hierarchy=0` kernel argument, as shown in the following procedure.
====
+
--
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
--
endif::openshift-origin[]
* **enforcing=0**: Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging.
+
[WARNING]
@@ -88,7 +65,6 @@ rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12
rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.5.0 33m
----
ifndef::openshift-origin[]
. Create a `MachineConfig` object file that identifies the kernel argument (for example, `05-worker-kernelarg-selinuxpermissive.yaml`):
+
[source,yaml]
@@ -114,41 +90,6 @@ a kernel argument to configure SELinux permissive mode).
----
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
----
endif::openshift-origin[]
ifdef::openshift-origin[]
. Create a `MachineConfig` object file that identifies the kernel argument (for example, `05-worker-kernelarg-selinuxpermissive.yaml`):
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker <1>
  name: 05-worker-kernelarg-selinuxpermissive <2>
spec:
  config:
    ignition:
      version: 3.5.0
  kernelArguments:
    - enforcing=0 <3>
    - systemd.unified_cgroup_hierarchy=0 <4>
#...
----
+
<1> Applies the new kernel argument only to worker nodes.
<2> Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode).
<3> Identifies the exact kernel argument as `enforcing=0`.
<4> Configures cgroup v1 on the associated nodes. cgroup v2 is the default.
. Create the new machine config:
+
[source,terminal]
----
$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml
----
endif::openshift-origin[]
. Check the machine configs to see that the new one was added:
+

View File

@@ -20,6 +20,8 @@ Engineering considerations::
--
Use the following information to plan telco core workloads and cluster resources:
include::snippets/nodes-cgroup-vi-removed.adoc[]
* CNF applications should conform to the latest version of https://redhat-best-practices-for-k8s.github.io/guide/[Red Hat Best Practices for Kubernetes].
* Use a mix of best-effort and burstable QoS pods as required by your applications.
** Use guaranteed QoS pods with proper configuration of reserved or isolated CPUs in the `PerformanceProfile` CR that configures the node.

View File

@@ -24,6 +24,9 @@ Limits and requirements::
For more information, see "Creating a performance profile".
Engineering considerations::
include::snippets/nodes-cgroup-vi-removed.adoc[]
* The minimum reserved capacity (`systemReserved`) required can be found by following the guidance in the link:https://access.redhat.com/solutions/5843241[Which amount of CPU and memory are recommended to reserve for the system in OpenShift 4 nodes?] Knowledgebase article.
* The actual required reserved CPU capacity depends on the cluster configuration and workload attributes.
* The reserved CPU value must be rounded up to a full core (2 hyper-threads) alignment.
@@ -46,12 +49,3 @@ With no configuration, the default queue count is one RX/TX queue per online CPU
====
Some drivers do not deallocate the interrupts even after reducing the queue count.
====
* If workloads running on the cluster require cgroup v1, you can configure nodes to use cgroup v1 as part of the initial cluster deployment.
See "Enabling Linux control group version 1 (cgroup v1)" and link:https://www.redhat.com/en/blog/rhel-9-changes-context-red-hat-openshift-workloads[Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads].
+
[NOTE]
====
Support for cgroup v1 is planned for removal in {product-title} 4.19.
Clusters running cgroup v1 must transition to cgroup v2.
====

View File

@@ -16,16 +16,8 @@ Limits and requirements::
* Cluster capabilities are not available for installer-provisioned installation methods.
Engineering considerations::
* In clusters running {product-title} 4.16 and later, the cluster does not automatically revert to cgroup v1 when a `PerformanceProfile` is applied.
If workloads running on the cluster require cgroup v1, the cluster must be configured for cgroup v1.
For more information, see "Enabling Linux control group version 1 (cgroup v1)".
You should make this configuration as part of the initial cluster deployment.
+
[NOTE]
====
Support for cgroup v1 is planned for removal in {product-title} 4.19.
Clusters running cgroup v1 must transition to cgroup v2.
====
include::snippets/nodes-cgroup-vi-removed.adoc[]
The following table lists the required platform tuning configurations:

View File

@@ -12,11 +12,6 @@ While the Kubernetes design is useful for simple deployments, this Layer 3 topol
UDN improves the flexibility and segmentation capabilities of the default Layer 3 topology for a Kubernetes pod network by enabling custom Layer 2 and Layer 3 network segments, where all these segments are isolated by default. These segments act as either primary or secondary networks for container pods and virtual machines that use the default OVN-Kubernetes CNI plugin. UDNs enable a wide range of network architectures and topologies, enhancing network flexibility, security, and performance.
[NOTE]
====
Nodes that use `cgroupv1` Linux Control Groups (cgroup) must be reconfigured from `cgroupv1` to `cgroupv2` before creating a user-defined network. For more information, see xref:../../../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-cluster-cgroups-2[Configuring Linux cgroup].
====
A cluster administrator can use a UDN to create and define primary or secondary networks that span multiple namespaces at the cluster level by leveraging the `ClusterUserDefinedNetwork` custom resource (CR). Additionally, a cluster administrator or a cluster user can use a UDN to define secondary networks at the namespace level with the `UserDefinedNetwork` CR.
The following sections further emphasize the benefits and limitations of user-defined networks, the best practices when creating a `ClusterUserDefinedNetwork` or `UserDefinedNetwork` CR, how to create the CR, and additional configuration details that might be relevant to your deployment.
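As a hypothetical illustration, a minimal namespace-scoped `UserDefinedNetwork` CR might look like the following sketch (the name, namespace, and subnet are placeholder values):

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: udn-example
  namespace: example-namespace
spec:
  topology: Layer2
  layer2:
    role: Primary
    subnets:
    - "10.100.0.0/16"
----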
@@ -74,4 +69,4 @@ include::modules/opening-default-network-ports-udn.adoc[leveloffset=+1]
//[role="_additional-resources"]
//== Additional resources
// * xr3f../virtual docs that point to live migration of vms for 4.18's GA.

View File

@@ -1,45 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: nodes-cluster-cgroups-2
[id="nodes-cluster-cgroups-2"]
= Configuring the Linux cgroup version on your nodes
include::_attributes/common-attributes.adoc[]
toc::[]
ifndef::openshift-origin[]
As of {product-title} 4.14, {product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster. If you are using cgroup v1 on {product-title} 4.13 or earlier, migrating to {product-title} 4.14 or later will not automatically update your cgroup configuration to version 2. A fresh installation of {product-title} 4.14 or later will use cgroup v2 by default. However, you can enable link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/index.html[Linux control group version 1] (cgroup v1) upon installation.
endif::openshift-origin[]
ifdef::openshift-origin[]
{product-title} uses link:https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html[Linux control group version 2] (cgroup v2) in your cluster.
endif::openshift-origin[]
:FeatureName: cgroup v1
include::snippets/deprecated-feature.adoc[]
include::snippets/cgroupv2-vs-cgroupv1.adoc[]
You can change between cgroup v1 and cgroup v2, as needed. Enabling cgroup v1 in {product-title} disables all cgroup v2 controllers and hierarchies in your cluster.
[NOTE]
====
* If you run third-party monitoring and security agents that depend on the cgroup file system, update the agents to a version that supports cgroup v2.
* If you have configured cgroup v2 and run cAdvisor as a stand-alone daemon set for monitoring pods and containers, update cAdvisor to v0.43.0 or later.
* If you deploy Java applications, use versions that fully support cgroup v2, such as the following packages:
** OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later
** Node.js 20.3.0 and later
** IBM Semeru Runtimes: jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later
** IBM SDK Java Technology Edition Version (IBM Java): 8.0.7.15 and later
====
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.
include::modules/nodes-clusters-cgroups-2.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../installing/overview/index.adoc#ocp-installation-overview[OpenShift Container Platform installation overview]

View File

@@ -307,12 +307,6 @@ Applying autoscaling to an {product-title} cluster involves deploying a cluster
For more information, see xref:../machine_management/applying-autoscaling.adoc#applying-autoscaling[Applying autoscaling to an {product-title} cluster].
include::modules/nodes-clusters-cgroups-2.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-cluster-cgroups-2[Configuring the Linux cgroup version on your nodes]
[id="post-install-tp-tasks"]
== Enabling Technology Preview features using FeatureGates

View File

@@ -40,8 +40,6 @@ include::modules/telco-core-cpu-partitioning-and-performance-tuning.adoc[levelof
* xref:../edge_computing/ztp-reference-cluster-configuration-for-vdu.adoc#ztp-du-configuring-host-firmware-requirements_sno-configure-for-vdu[Configuring host firmware for low latency and high performance]
* xref:../installing/install_config/enabling-cgroup-v1.adoc#nodes-clusters-cgroups-2-install_nodes-cluster-cgroups-1[Enabling Linux cgroup v1 during installation]
include::modules/telco-core-service-mesh.adoc[leveloffset=+2]
[role="_additional-resources"]

View File

@@ -1,10 +0,0 @@
// Text snippet included in the following modules:
//
// * installing/install-config/enabling-cgroup-v1.adoc
// * nodes/clusters/nodes-cluster-cgroups-2.adoc
:_mod-docs-content-type: SNIPPET
// * Text included in modules/nodes-cluster-cgroups-2.adoc as text, not a snippet because snippets cannot be in an ifdef. Also update there if you edit this text.
cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as link:https://www.kernel.org/doc/html/latest/accounting/psi.html[Pressure Stall Information], and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2.
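As a small illustration of the Pressure Stall Information interface (a sketch; the output values are representative only), CPU pressure can be read directly from the cgroup v2 file system on a node:

[source,terminal]
----
$ cat /sys/fs/cgroup/cpu.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=0
----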

View File

@@ -0,0 +1,13 @@
// Text snippet included in the following modules:
//
// * modules/telco-ran-cluster-tuning.adoc
// * modules/telco-core-cpu-partitioning-and-performance-tuning.adoc
// * modules/telco-core-application-workloads.adoc
:_mod-docs-content-type: SNIPPET
[NOTE]
====
As of {product-title} 4.19, cgroup v1 is no longer supported and has been removed. All workloads must now be compatible with cgroup v2. For more information, see link:https://www.redhat.com/en/blog/rhel-9-changes-context-red-hat-openshift-workloads[Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads].
====