
new node tuning topic for hosted control planes

This commit is contained in:
Laura Hinson
2022-10-05 13:36:55 -04:00
parent 42b66922e3
commit 83ae845ea8
3 changed files with 299 additions and 0 deletions


@@ -0,0 +1,155 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
:_content-type: PROCEDURE
[id="advanced-node-tuning-hosted-cluster_{context}"]
= Advanced node tuning for hosted clusters by setting kernel boot parameters
:FeatureName: Hosted control planes
include::snippets/technology-preview.adoc[]
For more advanced tuning in hosted control planes, which requires setting kernel boot parameters, you can also use the Node Tuning Operator. The following example shows how you can create a node pool with huge pages reserved.
.Procedure
. Create a `ConfigMap` object that contains a `Tuned` object manifest for creating 50 huge pages that are 2 MB in size. Save this `ConfigMap` manifest in a file named `tuned-hugepages.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
name: tuned-hugepages
namespace: clusters
data:
tuning: |
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
name: hugepages
namespace: openshift-cluster-node-tuning-operator
spec:
profile:
- data: |
[main]
summary=Boot time configuration for hugepages
include=openshift-node
[bootloader]
cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50
name: openshift-node-hugepages
recommend:
- priority: 20
profile: openshift-node-hugepages
----
+
[NOTE]
====
The `.spec.recommend.match` field is intentionally left blank. In this case, this `Tuned` object is applied to all nodes in the node pool where this `ConfigMap` object is referenced. Group nodes with the same hardware configuration into the same node pool. Otherwise, TuneD operands can calculate conflicting kernel parameters for two or more nodes that share the same node pool.
====
. Create the `ConfigMap` object in the management cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-hugepages.yaml
----
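+
Optionally, verify that the config map exists in the management cluster before you reference it. This check is a sketch and assumes the `clusters` namespace from this example:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get configmap tuned-hugepages -n clusters
----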
. Create a `NodePool` manifest YAML file, customize the upgrade type of the `NodePool`, and reference the `ConfigMap` object that you created in the `spec.tuningConfig` section. To render the `NodePool` manifest and save it in a file named `hugepages-nodepool.yaml`, use the `hypershift` CLI:
+
[source,terminal]
----
NODEPOOL_NAME=hugepages-example
INSTANCE_TYPE=m5.2xlarge
NODEPOOL_REPLICAS=2
hypershift create nodepool aws \
--cluster-name $CLUSTER_NAME \
--name $NODEPOOL_NAME \
--node-count $NODEPOOL_REPLICAS \
--instance-type $INSTANCE_TYPE \
--render > hugepages-nodepool.yaml
----
. In the `hugepages-nodepool.yaml` file, set `.spec.management.upgradeType` to `InPlace`, and set `.spec.tuningConfig` to reference the `tuned-hugepages` `ConfigMap` object that you created.
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
name: hugepages-nodepool
namespace: clusters
...
spec:
management:
...
upgradeType: InPlace
...
tuningConfig:
- name: tuned-hugepages
----
+
[NOTE]
====
To avoid the unnecessary re-creation of nodes when you apply the new `MachineConfig` objects, set `.spec.management.upgradeType` to `InPlace`. If you use the `Replace` upgrade type, nodes are fully deleted and new nodes replace them when you apply the new kernel boot parameters that the TuneD operand calculated.
====
. Create the `NodePool` in the management cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f hugepages-nodepool.yaml
----
.Verification
After the nodes are available, the containerized TuneD daemon calculates the required kernel boot parameters based on the applied TuneD profile. When the nodes are ready and have rebooted once to apply the generated `MachineConfig` object, you can verify that the TuneD profile is applied and that the kernel boot parameters are set.
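Optionally, before you check the TuneD profiles, you can monitor the node pool status from the management cluster. The following command is a sketch and assumes that the node pool was created in the `clusters` namespace, as in this example:

[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get nodepools -n clusters
----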
. List the `Tuned` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Tuneds -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME AGE
default 123m
hugepages-8dfb1fed 1m23s
rendered 123m
----
. List the `Profile` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Profiles -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME TUNED APPLIED DEGRADED AGE
nodepool-1-worker-1 openshift-node True False 132m
nodepool-1-worker-2 openshift-node True False 131m
hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s
hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s
----
+
Both of the worker nodes in the new `NodePool` have the `openshift-node-hugepages` profile applied.
. To confirm that the tuning was applied correctly, start a debug shell on one of the nodes in the new node pool and check `/proc/cmdline`.
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/hugepages-nodepool-worker-1 -- chroot /host cat /proc/cmdline
----
+
.Example output
[source,terminal]
----
BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... hugepagesz=2M hugepages=50
----
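+
Optionally, to confirm that the huge pages were allocated, check `/proc/meminfo` on the same node. This command is a sketch; substitute a node name from your hugepages node pool:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/hugepages-nodepool-worker-1 -- chroot /host grep -i HugePages_Total /proc/meminfo
----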


@@ -0,0 +1,135 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
:_content-type: PROCEDURE
[id="node-tuning-hosted-cluster_{context}"]
= Configuring node tuning in a hosted cluster
:FeatureName: Hosted control planes
include::snippets/technology-preview.adoc[]
To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain `Tuned` objects and referencing those config maps in your node pools.
.Procedure
. Create a config map that contains a valid `Tuned` manifest, and reference the manifest in a node pool. In the following example, a `Tuned` manifest defines a profile that sets `vm.dirty_ratio` to 55 on the nodes of any node pool that references this config map. Save the following `ConfigMap` manifest in a file named `tuned-1.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
name: tuned-1
namespace: clusters
data:
tuning: |
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
name: tuned-1
namespace: openshift-cluster-node-tuning-operator
spec:
profile:
- data: |
[main]
summary=Custom OpenShift profile
include=openshift-node
[sysctl]
vm.dirty_ratio="55"
name: tuned-1-profile
recommend:
- priority: 20
profile: tuned-1-profile
----
+
[NOTE]
====
If you do not add any labels to an entry in the `spec.recommend` section of the `Tuned` spec, node-pool-based matching is assumed, so the highest priority profile in the `spec.recommend` section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned `.spec.recommend.match` section, node labels do not persist during an upgrade unless you set the `.spec.management.upgradeType` value of the node pool to `InPlace`. A sketch of a label-based `match` entry follows this note.
====
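+
For illustration only, a label-based `match` entry would look similar to the following sketch. The `tuned-1-node-label` name is an example label, and the example manifest in this procedure intentionally omits the `match` section so that the profile applies to every node in the node pool:
+
[source,yaml]
----
recommend:
- match:
  - label: tuned-1-node-label
  priority: 20
  profile: tuned-1-profile
----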
. Create the `ConfigMap` object in the management cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-1.yaml
----
. Reference the `ConfigMap` object in the `spec.tuningConfig` field of the node pool, either by editing an existing node pool or by creating a new one. For an existing node pool, you can also apply a patch; see the sketch after the note at the end of this step. In this example, assume that you have only one `NodePool`, named `nodepool-1`, which contains two nodes.
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
...
name: nodepool-1
namespace: clusters
...
spec:
...
tuningConfig:
- name: tuned-1
status:
...
----
+
[NOTE]
====
You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster.
====
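+
If you are updating an existing node pool rather than creating a new one, you can add the reference with a patch instead of editing the manifest. The following command is a sketch that assumes the node pool from this example; adjust the name and namespace for your environment:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" patch nodepool nodepool-1 -n clusters --type merge -p '{"spec":{"tuningConfig":[{"name":"tuned-1"}]}}'
----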
.Verification
Now that you have created the `ConfigMap` object that contains a `Tuned` manifest and referenced it in a `NodePool`, the Node Tuning Operator syncs the `Tuned` objects into the hosted cluster. You can verify which `Tuned` objects are defined and which TuneD profiles are applied to each node.
. List the `Tuned` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Tuneds -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME AGE
default 7m36s
rendered 7m36s
tuned-1 65s
----
. List the `Profile` objects in the hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get Profiles -n openshift-cluster-node-tuning-operator
----
+
.Example output
[source,terminal]
----
NAME TUNED APPLIED DEGRADED AGE
nodepool-1-worker-1 tuned-1-profile True False 7m43s
nodepool-1-worker-2 tuned-1-profile True False 7m14s
----
+
[NOTE]
====
If no custom profiles are created, the `openshift-node` profile is applied by default.
====
. To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio
----
+
.Example output
[source,terminal]
----
vm.dirty_ratio = 55
----


@@ -22,3 +22,12 @@ include::modules/custom-tuning-specification.adoc[leveloffset=+1]
include::modules/custom-tuning-example.adoc[leveloffset=+1]
include::modules/node-tuning-operator-supported-tuned-daemon-plug-ins.adoc[leveloffset=+1]
include::modules/node-tuning-hosted-cluster.adoc[leveloffset=+1]
include::modules/advanced-node-tuning-hosted-cluster.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For more information about hosted control planes, see link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/multicluster_engine/multicluster_engine_overview#hosted-control-planes-intro[Using hosted control plane clusters (Technology Preview)].