mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-06 06:46:26 +01:00

Refactoring the performance profile section for the content improvement initiative

This commit is contained in:
Ronan Hennessy
2024-08-12 11:18:28 +01:00
committed by openshift-cherrypick-robot
parent 09b0fb96fc
commit 419c0b8cc9
10 changed files with 346 additions and 211 deletions

View File

@@ -6,11 +6,25 @@
[id="cnf-about-the-profile-creator-tool_{context}"]
= About the Performance Profile Creator
The Performance Profile Creator (PPC) is a command-line tool, delivered with the Node Tuning Operator, used to create the performance profile.
The tool consumes `must-gather` data from the cluster and several user-supplied profile arguments. The PPC generates a performance profile that is appropriate for your hardware and topology.
The Performance Profile Creator (PPC) is a command-line tool, delivered with the Node Tuning Operator, that can help you to create a performance profile for your cluster.
The tool is run by one of the following methods:
* Invoking `podman`
* Calling a wrapper script
Initially, you can use the PPC tool to process the `must-gather` data to display key performance configurations for your cluster, including the following information:
* NUMA cell partitioning with the allocated CPU IDs
* Hyper-Threading node configuration
You can use this information to help you configure the performance profile.
.Running the PPC
Specify performance configuration arguments to the PPC tool to generate a proposed performance profile that is appropriate for your hardware, topology, and use-case.
You can run the PPC by using one of the following methods:
* Run the PPC by using Podman
* Run the PPC by using the wrapper script
[NOTE]
====
Using the wrapper script abstracts some of the more granular Podman tasks into an executable script. For example, the wrapper script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman. Both methods achieve the same result.
====

View File

@@ -0,0 +1,87 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc
:_mod-docs-content-type: PROCEDURE
[id="creating-mcp-for-ppc_{context}"]
= Creating a machine config pool to target nodes for performance tuning
For multi-node clusters, you can define a machine config pool (MCP) to identify the target nodes that you want to configure with a performance profile.
In {sno} clusters, you must use the `master` MCP because there is only one node in the cluster. You do not need to create a separate MCP for {sno} clusters.
.Prerequisites
* You have `cluster-admin` role access.
* You installed the OpenShift CLI (`oc`).
.Procedure
. Label the target nodes for configuration by running the following command:
+
[source,terminal]
----
$ oc label node <node_name> node-role.kubernetes.io/worker-cnf="" <1>
----
<1> Replace `<node_name>` with the name of your node. This example applies the `worker-cnf` label.
. Create a `MachineConfigPool` resource containing the target nodes:
.. Create a YAML file that defines the `MachineConfigPool` resource:
+
.Example `mcp-worker-cnf.yaml` file
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
name: worker-cnf <1>
labels:
machineconfiguration.openshift.io/role: worker-cnf <2>
spec:
machineConfigSelector:
matchExpressions:
- {
key: machineconfiguration.openshift.io/role,
operator: In,
values: [worker, worker-cnf],
}
paused: false
nodeSelector:
matchLabels:
node-role.kubernetes.io/worker-cnf: "" <3>
----
<1> Specify a name for the `MachineConfigPool` resource.
<2> Specify a unique label for the machine config pool.
<3> Specify the nodes with the target label that you defined.
.. Apply the `MachineConfigPool` resource by running the following command:
+
[source,terminal]
----
$ oc apply -f mcp-worker-cnf.yaml
----
+
.Example output
[source,terminal]
----
machineconfigpool.machineconfiguration.openshift.io/worker-cnf created
----
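The `machineConfigSelector` in the `MachineConfigPool` above uses the Kubernetes `In` operator: a machine config matches when its role label value is one of the listed values, which is why `worker-cnf` nodes inherit the base `worker` configuration plus their own. A minimal sketch of that operator's semantics (illustrative only, not Operator code):

```python
def matches_in_expr(labels: dict, expr: dict) -> bool:
    """Evaluate a single matchExpressions entry that uses the In operator."""
    return labels.get(expr["key"]) in expr["values"]

# The selector from the example MachineConfigPool above
selector = {
    "key": "machineconfiguration.openshift.io/role",
    "operator": "In",
    "values": ["worker", "worker-cnf"],
}

# A machine config labeled for the base worker pool still matches, so the
# worker-cnf pool picks up base worker configs as well as its own.
print(matches_in_expr({"machineconfiguration.openshift.io/role": "worker"}, selector))   # True
print(matches_in_expr({"machineconfiguration.openshift.io/role": "master"}, selector))   # False
```

This mirrors why the example lists both `worker` and `worker-cnf` in `values`: omitting `worker` would exclude all base worker machine configs from the new pool.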
.Verification
* Check the machine config pools in your cluster by running the following command:
+
[source,terminal]
----
$ oc get mcp
----
+
.Example output
[source,terminal]
----
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m
worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m
worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s
----

View File

@@ -4,39 +4,18 @@
:_mod-docs-content-type: PROCEDURE
[id="gathering-data-about-your-cluster-using-must-gather_{context}"]
= Gathering data about your cluster using the must-gather command
= Gathering data about your cluster for the PPC
The Performance Profile Creator (PPC) tool requires `must-gather` data. As a cluster administrator, run the `must-gather` command to capture information about your cluster.
.Prerequisites
* Access to the cluster as a user with the `cluster-admin` role.
* The OpenShift CLI (`oc`) installed.
* You installed the OpenShift CLI (`oc`).
* You identified a target MCP that you want to configure with a performance profile.
.Procedure
. Optional: Verify that a matching machine config pool exists with a label:
+
[source,terminal]
----
$ oc describe mcp/worker-rt
----
+
.Example output
[source,terminal]
----
Name: worker-rt
Namespace:
Labels: machineconfiguration.openshift.io/role=worker-rt
----
. If a matching label does not exist, add a label for a machine config pool (MCP) that matches the MCP name:
+
[source,terminal]
----
$ oc label mcp <mcp_name> machineconfiguration.openshift.io/role=<mcp_name>
----
. Navigate to the directory where you want to store the `must-gather` data.
. Collect cluster information by running the following command:
@@ -45,13 +24,16 @@ $ oc label mcp <mcp_name> machineconfiguration.openshift.io/role=<mcp_name>
----
$ oc adm must-gather
----
+
The command creates a folder with the `must-gather` data in your local directory with a naming format similar to the following: `must-gather.local.1971646453781853027`.
. Optional: Create a compressed file from the `must-gather` directory:
+
[source,terminal]
----
$ tar cvaf must-gather.tar.gz must-gather/
$ tar cvaf must-gather.tar.gz <must_gather_folder> <1>
----
<1> Replace `<must_gather_folder>` with the name of the `must-gather` data folder.
+
[NOTE]
====

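As a quick local sanity check of the archive step, the following throwaway sketch packages a stand-in folder the same way; the folder name here is a made-up placeholder, not a real `must-gather` output:

```shell
# Create a stand-in for a must-gather output folder (placeholder name)
mkdir -p must-gather.local.0000000000000000000/sample
echo "node data" > must-gather.local.0000000000000000000/sample/nodes.txt

# Package it the same way as the procedure step; GNU tar's -a flag picks
# gzip compression from the .tar.gz suffix
tar cvaf must-gather.tar.gz must-gather.local.0000000000000000000/

# List the archive contents to confirm the folder was captured
tar tf must-gather.tar.gz
```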
View File

@@ -4,7 +4,7 @@
[id="how-to-run-podman-to-create-a-profile_{context}"]
= How to run podman to create a performance profile
// Is this example required for a specific reason?
The following example illustrates how to run `podman` to create a performance profile with 20 reserved CPUs that are to be split across the NUMA nodes.
Node hardware configuration:
@@ -18,7 +18,7 @@ Run `podman` to create the performance profile:
[source,terminal,subs="attributes+"]
----
$ podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v{product-version} --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml
$ podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:latest --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml
----
The created profile is described in the following YAML:

View File

@@ -6,13 +6,36 @@
[id="performance-profile-creator-arguments_{context}"]
= Performance Profile Creator arguments
.Performance Profile Creator arguments
.Required Performance Profile Creator arguments
[cols="30%,70%",options="header"]
|===
| Argument | Description
| `mcp-name`
|Name for MCP; for example, `worker-cnf` corresponding to the target machines.
| `must-gather-dir-path`
| The path of the `must-gather` directory.
This argument is only required if you run the PPC tool by using Podman. If you use the PPC with the wrapper script, do not use this argument. Instead, specify the directory path to the `must-gather` tarball by using the `-t` option for the wrapper script.
| `reserved-cpu-count`
| Number of reserved CPUs. Use a natural number greater than zero.
| `rt-kernel`
| Enables real-time kernel.
Possible values: `true` or `false`.
|===
.Optional Performance Profile Creator arguments
[cols="30%,70%",options="header"]
|===
| Argument | Description
| `disable-ht`
a|Disable hyperthreading.
a|Disable Hyper-Threading.
Possible values: `true` or `false`.
@@ -20,48 +43,35 @@ Default: `false`.
[WARNING]
====
If this argument is set to `true` you should not disable hyperthreading in the BIOS. Disabling hyperthreading is accomplished with a kernel command line argument.
If this argument is set to `true`, you should not disable Hyper-Threading in the BIOS. Disabling Hyper-Threading is accomplished with a kernel command-line argument.
====
| --enable-hardware-tuning
|enable-hardware-tuning
a|Enable the setting of maximum CPU frequencies.
This parameter is optional.
To enable this feature, set the maximum frequency for applications running on isolated and reserved CPUs for both of the following:
To enable this feature, set the maximum frequency for applications running on isolated and reserved CPUs for both of the following fields:
* `spec.hardwareTuning.isolatedCpuFreq`
* `spec.hardwareTuning.reservedCpuFreq`
This is an advanced feature. If you configure hardware tuning, the generated `PerformanceProfile` includes warnings and guidance on how to set frequency settings.
| `info`
a| This captures cluster information and is used in discovery mode only. Discovery mode also requires the `must-gather-dir-path` argument. If any other arguments are set they are ignored.
a| This captures cluster information. This argument also requires the `must-gather-dir-path` argument. If any other arguments are set, they are ignored.
Possible values:
* `log`
* `JSON`
+
[NOTE]
====
These options define the output format with the JSON format being reserved for debugging.
====
Default: `log`.
| `mcp-name`
|MCP name; for example, `worker-cnf`, corresponding to the target machines. This parameter is required.
| `must-gather-dir-path`
| The `must-gather` directory path. This parameter is required.
When you run the tool with the wrapper script, the `must-gather` path is supplied by the script itself and you must not specify it.
| `offlined-cpu-count`
a| Number of offlined CPUs.
[NOTE]
====
This must be a natural number greater than 0. If not enough logical processors are offlined, then error messages are logged. The messages are:
Use a natural number greater than zero. If not enough logical processors are offlined, then error messages are logged. The messages are:
[source,terminal]
----
Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]
@@ -77,8 +87,8 @@ a|The power consumption mode.
Possible values:
* `default`: CPU partitioning with enabled power management and basic low-latency.
* `low-latency`: Enhanced measures to improve latency figures.
* `default`: Performance achieved through CPU partitioning only.
* `low-latency`: Enhanced measures to improve latency.
* `ultra-low-latency`: Priority given to optimal latency, at the expense of power management.
Default: `default`.
@@ -92,21 +102,9 @@ Default: `false`.
| `profile-name`
| Name of the performance profile to create.
Default: `performance`.
| `reserved-cpu-count`
a| Number of reserved CPUs. This parameter is required.
[NOTE]
====
This must be a natural number. A value of 0 is not allowed.
====
| `rt-kernel`
| Enable real-time kernel. This parameter is required.
Possible values: `true` or `false`.
| `split-reserved-cpus-across-numa`
| Split the reserved CPUs across NUMA nodes.
@@ -131,4 +129,4 @@ Default: `restricted`.
Possible values: `true` or `false`.
Default: `false`.
|===
|===

View File

@@ -6,11 +6,22 @@
[id="running-the-performance-profile-creator-wrapper-script_{context}"]
= Running the Performance Profile Creator wrapper script
The performance profile wrapper script simplifies the running of the Performance Profile Creator (PPC) tool. It hides the complexities associated with running `podman` and specifying the mapping directories and it enables the creation of the performance profile.
The wrapper script simplifies the process of creating a performance profile with the Performance Profile Creator (PPC) tool. The script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman.
For more information about the Performance Profile Creator arguments, see the section _"Performance Profile Creator arguments"_.
[IMPORTANT]
====
The PPC uses the `must-gather` data from your cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the `must-gather` data before running PPC again.
====
.Prerequisites
* Access to the cluster as a user with the `cluster-admin` role.
* A cluster installed on bare-metal hardware.
* You installed `podman` and the OpenShift CLI (`oc`).
* Access to the Node Tuning Operator image.
* You identified a machine config pool containing target nodes for configuration.
* Access to the `must-gather` tarball.
.Procedure
@@ -35,7 +46,7 @@ readonly IMG_EXISTS_CMD="${CONTAINER_RUNTIME} image exists"
readonly IMG_PULL_CMD="${CONTAINER_RUNTIME} image pull"
readonly MUST_GATHER_VOL="/must-gather"
NTO_IMG="registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v{product-version}"
NTO_IMG="registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version}"
MG_TARBALL=""
DATA_DIR=""
@@ -116,14 +127,27 @@ main "$@"
$ chmod a+x run-perf-profile-creator.sh
----
. Optional: Display the `run-perf-profile-creator.sh` command usage:
. Use Podman to authenticate to `registry.redhat.io` by running the following command:
+
[source,terminal]
----
$ podman login registry.redhat.io
----
+
[source,bash]
----
Username: <user_name>
Password: <password>
----
. Optional: Display help for the PPC tool by running the following command:
+
[source,terminal]
----
$ ./run-perf-profile-creator.sh -h
----
+
.Expected output
.Example output
+
[source,terminal]
----
@@ -132,8 +156,8 @@ Wrapper usage:
Options:
-h help for run-perf-profile-creator.sh
-p Node Tuning Operator image <1>
-t path to a must-gather tarball <2>
-p Node Tuning Operator image
-t path to a must-gather tarball
A tool that automates creation of Performance Profiles
Usage:
@@ -159,72 +183,62 @@ Flags:
+
[NOTE]
====
There are two types of arguments:
* Wrapper arguments namely `-h`, `-p` and `-t`
* PPC arguments
You can optionally set a path for the Node Tuning Operator image using the `-p` option. If you do not set a path, the wrapper script uses the default image: `registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version}`.
====
+
<1> Optional: Specify the Node Tuning Operator image. If not set, the default upstream image is used: `registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v{product-version}`.
<2> `-t` is a required wrapper script argument and specifies the path to a `must-gather` tarball.
. Run the performance profile creator tool in discovery mode:
+
[NOTE]
====
Discovery mode inspects your cluster using the output from `must-gather`. The output produced includes information on:
* The NUMA cell partitioning with the allocated CPU IDs
* Whether Hyper-Threading is enabled
Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool.
====
. To display information about the cluster, run the PPC tool with the `log` argument by running the following command:
+
[source,terminal]
----
$ ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log
$ ./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log
----
+
[NOTE]
====
The `info` option requires a value that specifies the output format. Possible values are `log` and `JSON`. The JSON format is reserved for debugging.
====
. Check the machine config pool:
+
[source,terminal]
----
$ oc get mcp
----
* `-t /<path_to_must_gather_dir>/must-gather.tar.gz` specifies the path to the directory containing the `must-gather` tarball. This is a required argument for the wrapper script.
+
.Example output
[source,terminal]
----
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h
worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h
level=info msg="Cluster info:"
level=info msg="MCP 'master' nodes:"
level=info msg=---
level=info msg="MCP 'worker' nodes:"
level=info msg="Node: host.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg="Node: host1.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg=---
level=info msg="MCP 'worker-cnf' nodes:"
level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg=---
----
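The discovery output above has a regular shape, so, as a side illustration only (this is not part of the PPC tooling), the per-node lines can be pulled into a structure with a few lines of Python:

```python
import re

# Matches lines such as:
# level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)"
NODE_RE = re.compile(r'Node: (\S+) \(NUMA cells: (\d+), HT: (true|false)\)')

def parse_nodes(log_text: str) -> dict:
    """Return {hostname: {"numa_cells": int, "ht": bool}} from PPC discovery output."""
    nodes = {}
    for m in NODE_RE.finditer(log_text):
        nodes[m.group(1)] = {"numa_cells": int(m.group(2)), "ht": m.group(3) == "true"}
    return nodes

# A fragment of the example output shown above
sample = '''
level=info msg="MCP 'worker-cnf' nodes:"
level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
'''
print(parse_nodes(sample))
# {'host2.example.com': {'numa_cells': 1, 'ht': True}}
```

The NUMA cell count and Hyper-Threading flag recovered this way are exactly the values you feed back into arguments such as `split-reserved-cpus-across-numa` and `disable-ht`.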
. Create a performance profile:
. Create a performance profile by running the following command:
+
[source,terminal]
----
$ ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml
$ ./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml
----
+
This example uses sample PPC arguments and values.
+
* `--mcp-name=worker-cnf` specifies the `worker-cnf` machine config pool.
* `--reserved-cpu-count=1` specifies one reserved CPU.
* `--rt-kernel=true` enables the real-time kernel.
* `--split-reserved-cpus-across-numa=false` disables reserved CPUs splitting across NUMA nodes.
* `--power-consumption-mode=ultra-low-latency` specifies minimal latency at the cost of increased power consumption.
* `--offlined-cpu-count=1` specifies one offlined CPU.
+
[NOTE]
====
The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required:
* `reserved-cpu-count`
* `mcp-name`
* `rt-kernel`
The `mcp-name` argument in this example is set to `worker-cnf` based on the output of the command `oc get mcp`. For {sno}, use `--mcp-name=master`.
====
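The CPU layout in the generated profile follows from these counts. A minimal sketch of the arithmetic, assuming (as the example outputs in this document suggest) that reserved CPUs are taken from the lowest IDs, offlined CPUs come next, and the remainder is isolated; the real PPC derives this from the `must-gather` topology data:

```python
def split_cpus(total: int, reserved_count: int, offlined_count: int):
    """Illustrative split of CPU IDs into reserved, offlined, and isolated sets."""
    if reserved_count + offlined_count >= total:
        raise ValueError("reserved + offlined must leave at least one isolated CPU")
    cpus = list(range(total))
    reserved = cpus[:reserved_count]
    offlined = cpus[reserved_count:reserved_count + offlined_count]
    isolated = cpus[reserved_count + offlined_count:]
    return reserved, offlined, isolated

# The 4-CPU node from the discovery output, with the sample flags
# --reserved-cpu-count=1 and --offlined-cpu-count=1:
print(split_cpus(4, 1, 1))
# ([0], [1], [2, 3]) -> reserved "0", offlined "1", isolated 2-3
```

This matches the `cpu` stanza of the example profile (`reserved: "0"`, `offlined: "1"`, `isolated: 2-3`).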
// Can't the MCP name be whatever the user wants, regardless of SNO vs multi-mode?
. Review the created YAML file:
. Review the created YAML file by running the following command:
+
[source,terminal]
----
@@ -232,56 +246,41 @@ $ cat my-performance-profile.yaml
----
.Example output
+
[source,terminal]
[source,yaml]
----
---
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
name: performance
spec:
cpu:
isolated: 1-39,41-79
reserved: 0,40
isolated: 2-3
offlined: "1"
reserved: "0"
machineConfigPoolSelector:
machineconfiguration.openshift.io/role: worker-cnf
nodeSelector:
node-role.kubernetes.io/worker-cnf: ""
numa:
topologyPolicy: restricted
realTimeKernel:
enabled: false
enabled: true
workloadHints:
highPowerConsumption: true
perPodPowerManagement: false
realTime: true
----
+
[NOTE]
--
When you pass the argument `--enable-hardware-tuning` as a flag to the Performance Profile Creator, the generated `PerformanceProfile` includes guidance on how to set frequency settings as follows:
[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
name: performance
spec:
……………………
……………………
#HardwareTuning is an advanced feature, and only intended to be used if
#user is aware of the vendor recommendation on maximum cpu frequency.
#The structure must follow
#
# hardwareTuning:
# isolatedCpuFreq: <Maximum frequency for applications running on isolated CPUs>
# reservedCpuFreq: <Maximum frequency for platform software running on reserved CPUs>
----
--
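Filled in, the commented structure looks like the following fragment. The frequency values here are invented placeholders for illustration only; take real limits from your CPU vendor's recommendations before setting them:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: performance
spec:
  # Illustrative placeholder values; consult vendor guidance before use
  hardwareTuning:
    isolatedCpuFreq: 2500000
    reservedCpuFreq: 2800000
```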
. Apply the generated profile:
+
[NOTE]
====
Install the Node Tuning Operator before applying the profile.
====
+
[source,terminal]
----
$ oc apply -f my-performance-profile.yaml
----
+
.Example output
[source,terminal]
----
performanceprofile.performance.openshift.io/performance created
----

View File

@@ -6,33 +6,43 @@
[id="running-the-performance-profile-profile-cluster-using-podman_{context}"]
= Running the Performance Profile Creator using Podman
As a cluster administrator, you can run `podman` and the Performance Profile Creator to create a performance profile.
As a cluster administrator, you can use Podman with the Performance Profile Creator (PPC) to create a performance profile.
For more information about the PPC arguments, see the section _"Performance Profile Creator arguments"_.
[IMPORTANT]
====
The PPC uses the `must-gather` data from your cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the `must-gather` data before running PPC again.
====
.Prerequisites
* Access to the cluster as a user with the `cluster-admin` role.
* A cluster installed on bare-metal hardware.
* A node with `podman` and OpenShift CLI (`oc`) installed.
* You installed `podman` and the OpenShift CLI (`oc`).
* Access to the Node Tuning Operator image.
* You identified a machine config pool containing target nodes for configuration.
* You have access to the `must-gather` data for your cluster.
.Procedure
. Check the machine config pool:
. Check the machine config pool by running the following command:
+
[source,terminal]
----
$ oc get mcp
----
.Example output
+
.Example output
[source,terminal]
----
NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h
worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h
master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h
worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h
worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m
----
. Use Podman to authenticate to `registry.redhat.io`:
. Use Podman to authenticate to `registry.redhat.io` by running the following command:
+
[source,terminal]
----
@@ -41,15 +51,15 @@ $ podman login registry.redhat.io
+
[source,bash]
----
Username: <username>
Username: <user_name>
Password: <password>
----
. Optional: Display help for the PPC tool:
. Optional: Display help for the PPC tool by running the following command:
+
[source,terminal,subs="attributes+"]
----
$ podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v{product-version} -h
$ podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version} -h
----
+
.Example output
@@ -78,55 +88,76 @@ Flags:
--user-level-networking Run with User level Networking(DPDK) enabled
----
. Run the Performance Profile Creator tool in discovery mode:
+
[NOTE]
====
Discovery mode inspects your cluster by using the output from `must-gather`.
The output produced includes information on the following conditions:
* The NUMA cell partitioning with the allocated CPU ids
* Whether Hyper-Threading is enabled
Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool.
====
. To display information about the cluster, run the PPC tool with the `log` argument by running the following command:
+
[source,terminal,subs="attributes+"]
----
$ podman run --entrypoint performance-profile-creator -v <path_to_must-gather>/must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v{product-version} --info log --must-gather-dir-path /must-gather
$ podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version} --info log --must-gather-dir-path /must-gather
----
+
[NOTE]
====
This command uses the performance profile creator as a new entry point to `podman`. It maps the `must-gather` data from the host into the container image and invokes the required user-supplied profile arguments to produce the `my-performance-profile.yaml` file.
* `--entrypoint performance-profile-creator` defines the performance profile creator as a new entry point to `podman`.
* `-v <path_to_must_gather>` specifies the path to either of the following components:
** The directory containing the `must-gather` data.
** An existing directory containing the `must-gather` decompressed .tar file.
* `--info log` specifies a value for the output format.
+
.Example output
[source,terminal]
----
level=info msg="Cluster info:"
level=info msg="MCP 'master' nodes:"
level=info msg=---
level=info msg="MCP 'worker' nodes:"
level=info msg="Node: host.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg="Node: host1.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg=---
level=info msg="MCP 'worker-cnf' nodes:"
level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg=---
----
The `-v` option can be the path to either of the following components:
* The `must-gather` output directory
* An existing directory containing the `must-gather` decompressed .tar file
The `info` option requires a value that specifies the output format. Possible values are `log` and `JSON`. The JSON format is reserved for debugging.
====
. Run `podman`:
. Create a performance profile by running the following command. The example uses sample PPC arguments and values:
+
[source,terminal,subs="attributes+"]
----
$ podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v{product-version} --mcp-name=worker-cnf --reserved-cpu-count=4 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=6 > my-performance-profile.yaml
$ podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v{product-version} --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml
----
+
* `-v <path_to_must_gather>` specifies the path to either of the following components:
** The directory containing the `must-gather` data.
** The directory containing the `must-gather` decompressed .tar file.
* `--mcp-name=worker-cnf` specifies the `worker-cnf` machine config pool.
* `--reserved-cpu-count=1` specifies one reserved CPU.
* `--rt-kernel=true` enables the real-time kernel.
* `--split-reserved-cpus-across-numa=false` disables reserved CPUs splitting across NUMA nodes.
* `--power-consumption-mode=ultra-low-latency` specifies minimal latency at the cost of increased power consumption.
* `--offlined-cpu-count=1` specifies one offlined CPU.
+
[NOTE]
====
The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required:
* `reserved-cpu-count`
* `mcp-name`
* `rt-kernel`
The `mcp-name` argument in this example is set to `worker-cnf` based on the output of the command `oc get mcp`. For {sno}, use `--mcp-name=master`.
====
+
.Example output
[source,terminal]
----
level=info msg="Nodes targeted by worker-cnf MCP are: [worker-2]"
level=info msg="NUMA cell(s): 1"
level=info msg="NUMA cell 0 : [0 1 2 3]"
level=info msg="CPU(s): 4"
level=info msg="1 reserved CPUs allocated: 0 "
level=info msg="2 isolated CPUs allocated: 2-3"
level=info msg="Additional Kernel Args based on configuration: []"
----
// Can't the MCP name be whatever the user wants, regardless of SNO vs multi-mode?
. Review the created YAML file:
. Review the created YAML file by running the following command:
+
[source,terminal]
----
@@ -136,15 +167,16 @@ $ cat my-performance-profile.yaml
+
[source,yaml]
----
---
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
name: performance
spec:
cpu:
isolated: 2-39,48-79
offlined: 42-47
reserved: 0-1,40-41
isolated: 2-3
offlined: "1"
reserved: "0"
machineConfigPoolSelector:
machineconfiguration.openshift.io/role: worker-cnf
nodeSelector:
@@ -155,6 +187,7 @@ spec:
enabled: true
workloadHints:
highPowerConsumption: true
perPodPowerManagement: false
realTime: true
----
@@ -164,3 +197,9 @@ spec:
----
$ oc apply -f my-performance-profile.yaml
----
+
.Example output
[source,terminal]
----
performanceprofile.performance.openshift.io/performance created
----

View File

@@ -4,7 +4,7 @@
:_mod-docs-content-type: REFERENCE
[id="cnf-telco-core-reference-design-performance-profile-template_{context}"]
= Telco core reference design performance profile template
= Telco core reference design performance profile
The following performance profile configures node-level performance settings for {product-title} clusters on commodity hardware to host telco core workloads.

View File

@@ -4,7 +4,7 @@
:_mod-docs-content-type: REFERENCE
[id="cnf-telco-ran-reference-design-performance-profile-template_{context}"]
= Telco RAN DU reference design performance profile template
= Telco RAN DU reference design performance profile
The following performance profile configures node-level performance settings for {product-title} clusters on commodity hardware to host telco RAN DU workloads.

View File

@@ -9,21 +9,37 @@ toc::[]
Tune nodes for low latency by using the cluster performance profile.
You can restrict CPUs for infra and application containers, configure huge pages and Hyper-Threading, and configure CPU partitions for latency-sensitive processes.
[role="_additional-resources"]
.Additional resources
* xref:../../scalability_and_performance/low_latency_tuning/cnf-provisioning-low-latency-workloads.adoc#cnf-provisioning-low-latency-workloads[Provisioning real-time and low latency workloads]
[id="cnf-create-performance-profiles"]
== Creating a performance profile
Learn about the Performance Profile Creator (PPC) and how you can use it to create a performance profile.
You can create a cluster performance profile by using the Performance Profile Creator (PPC) tool. The PPC is a function of the Node Tuning Operator.
The PPC combines information about your cluster with user-supplied configurations to generate a performance profile that is appropriate for your hardware, topology, and use case.
[NOTE]
====
Performance profiles are applicable only to bare-metal environments where the cluster has direct access to the underlying hardware resources. You can configure performance profiles for both {sno} and multi-node clusters.
====
The following is a high-level workflow for creating and applying a performance profile in your cluster:
* Create a machine config pool (MCP) for nodes that you want to target with performance configurations. In {sno} clusters, you must use the `master` MCP because there is only one node in the cluster.
* Gather information about your cluster using the `must-gather` command.
* Use the PPC tool to create a performance profile by using either of the following methods:
** Run the PPC tool by using Podman.
** Run the PPC tool by using a wrapper script.
* Configure the performance profile for your use case and apply the performance profile to your cluster.
include::modules/cnf-about-the-profile-creator-tool.adoc[leveloffset=+2]
include::modules/cnf-gathering-data-about-cluster-using-must-gather.adoc[leveloffset=+2]
include::modules/cnf-creating-mcp-for-ppc.adoc[leveloffset=+2]
include::modules/cnf-running-the-performance-creator-profile.adoc[leveloffset=+2]
include::modules/cnf-gathering-data-about-cluster-using-must-gather.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
@@ -31,11 +47,11 @@ include::modules/cnf-running-the-performance-creator-profile.adoc[leveloffset=+2
* For more information about the `must-gather` tool,
see xref:../../support/gathering-cluster-data.adoc#nodes-nodes-managing[Gathering data about your cluster].
include::modules/cnf-how-run-podman-to-create-profile.adoc[leveloffset=+3]
include::modules/cnf-running-the-performance-creator-profile.adoc[leveloffset=+2]
include::modules/cnf-running-the-performance-creator-profile-offline.adoc[leveloffset=+3]
include::modules/cnf-running-the-performance-creator-profile-offline.adoc[leveloffset=+2]
include::modules/cnf-performance-profile-creator-arguments.adoc[leveloffset=+3]
include::modules/cnf-performance-profile-creator-arguments.adoc[leveloffset=+2]
[id="cnf-create-performance-profiles-reference"]
=== Reference performance profiles