Reducing resource consumption
committed by openshift-cherrypick-robot
parent 6b609775ad, commit f52cf22af6
@@ -1308,7 +1308,9 @@ Topics:
   File: uninstalling-pipelines
 - Name: Creating CI/CD solutions for applications using OpenShift Pipelines
   File: creating-applications-with-cicd-pipelines
-- Name: Working with Pipelines using the Developer perspective
+- Name: Reducing resource consumption of OpenShift Pipelines
+  File: reducing-pipelines-resource-consumption
+- Name: Working with OpenShift Pipelines using the Developer perspective
   File: working-with-pipelines-using-the-developer-perspective
 - Name: OpenShift Pipelines release notes
   File: op-release-notes
cicd/pipelines/reducing-pipelines-resource-consumption.adoc (new file, 27 lines)
@@ -0,0 +1,27 @@
[id="reducing-pipelines-resource-consumption"]
= Reducing resource consumption of pipelines
include::modules/common-attributes.adoc[]
include::modules/pipelines-document-attributes.adoc[]
:context: reducing-pipelines-resource-consumption

toc::[]

If you use clusters in multi-tenant environments, you must control the consumption of CPU, memory, and storage resources for each project and Kubernetes object. This helps prevent any one application from consuming too many resources and affecting other applications.

To define the final resource limits that are set on the resulting pods, {pipelines-title} uses the resource quota limits and limit ranges of the project in which the pipelines are executed.

To restrict resource consumption in your project, you can:

* xref:../../applications/quotas/quotas-setting-per-project.html[Set and manage resource quotas] to limit the aggregate resource consumption. A minimal quota sketch follows this list.
* Use xref:../../nodes/clusters/nodes-cluster-limit-ranges.html[limit ranges to restrict resource consumption] for specific objects, such as pods, images, image streams, and persistent volume claims.
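
For instance, a project-scoped resource quota caps the aggregate requests and limits of all pods in the project. The following is a minimal sketch only; the object name and the values are illustrative and not part of this commit:

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pipelines-quota        # illustrative name
spec:
  hard:
    requests.cpu: "2"          # total CPU that all pods in the project may request
    requests.memory: 4Gi       # total memory that all pods in the project may request
    limits.cpu: "4"            # aggregate CPU limit across all pods
    limits.memory: 8Gi         # aggregate memory limit across all pods
----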

include::modules/op-understanding-pipelines-resource-consumption.adoc[leveloffset=+1]

include::modules/op-mitigating-extra-pipeline-resource-consumption.adoc[leveloffset=+1]

== Additional resources

* xref:../../applications/quotas/quotas-setting-per-project.html[Resource quotas]
* xref:../../nodes/clusters/nodes-cluster-limit-ranges.html[Restricting resource consumption using limit ranges]
* link:https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#resources[Resource requests and limits in Kubernetes]
modules/op-mitigating-extra-pipeline-resource-consumption.adoc (new file, 41 lines)
@@ -0,0 +1,41 @@

// Module included in the following assemblies:
//
// */cicd/pipelines/reducing-pipelines-resource-consumption.adoc

[id="op-mitigating-extra-pipeline-resource-consumption_{context}"]
= Mitigating extra resource consumption in pipelines

When you set resource limits on the containers in your pod, {product-title} sums up the resource limits requested, because all containers in the pod can run simultaneously.

To consume the minimum amount of resources needed to execute one step at a time in the invoked task, {pipelines-title} requests the maximum CPU, memory, and ephemeral storage, as specified in the step that requires the most resources. This ensures that the resource requirements of all the steps are met. Requests other than the maximum values are set to zero.

However, this behavior can lead to higher resource usage than required. If you use resource quotas, this could also lead to unschedulable pods.

For example, consider a task with two steps that use scripts and that do not define any resource limits or requests. The resulting pod has two init containers (one that copies the entrypoint, the other that writes the scripts) and two containers, one for each step.
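
The following is a minimal sketch of such a task; the task name, image, and script contents are illustrative and not part of this commit:

[source,yaml]
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: two-step-task                  # illustrative name
spec:
  steps:                               # each step runs as a container in the task pod
  - name: first-step
    image: registry.access.redhat.com/ubi8/ubi-minimal
    script: |
      #!/usr/bin/env bash
      echo "step one"
  - name: second-step
    image: registry.access.redhat.com/ubi8/ubi-minimal
    script: |
      #!/usr/bin/env bash
      echo "step two"
----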

{product-title} uses the limit range set up for the project to compute required resource requests and limits.
For this example, set the following limit range in the project:

[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr
spec:
  limits:
  - max:
      memory: 1Gi
    min:
      memory: 500Mi
    type: Container
----

In this scenario, each init container uses a memory request of 1Gi (the max limit of the limit range), and each container uses a memory request of 500Mi. Thus, the total memory request for the pod is 2Gi.

If the same limit range is used with a task of ten steps, the final memory request is 5Gi, which is higher than what each step actually needs, that is, 500Mi, because the steps run one after the other.

Thus, to reduce resource consumption, you can:

* Reduce the number of steps in a given task by grouping different steps into one bigger step, using the script feature and the same image. This reduces the minimum requested resources; a sketch follows this list.
* Distribute steps that are relatively independent of each other and can run on their own across multiple tasks instead of a single task. This lowers the number of steps in each task, makes the request for each task smaller, and allows the scheduler to run the tasks when the resources are available.
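
As a sketch of the first option, the two steps from the earlier example could be collapsed into a single, bigger script step that runs the same commands in one container. The name, image, and script contents are again illustrative:

[source,yaml]
----
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: single-step-task               # illustrative name
spec:
  steps:
  - name: combined-step                # one bigger step replaces the two smaller ones
    image: registry.access.redhat.com/ubi8/ubi-minimal
    script: |
      #!/usr/bin/env bash
      echo "step one"                  # work formerly done by the first step
      echo "step two"                  # work formerly done by the second step
----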
modules/op-understanding-pipelines-resource-consumption.adoc (new file, 55 lines)
@@ -0,0 +1,55 @@

// Module included in the following assemblies:
//
// */cicd/pipelines/reducing-pipelines-resource-consumption.adoc

[id="op-understanding-pipelines-resource-consumption_{context}"]
= Understanding resource consumption in pipelines

Each task consists of a number of required steps that are executed in a particular order defined in the `steps` field of the `Task` resource. Every task runs as a pod, and each step runs as a container within that pod.

Steps are executed one at a time. The pod that executes the task only requests enough resources to run a single container image (step) in the task at a time, and thus does not reserve resources for all the steps in the task at once.

The `resources` field in the `steps` spec specifies the limits for resource consumption. By default, the resource requests for the CPU, memory, and ephemeral storage are set to `BestEffort` (zero) values, or to the minimums set through limit ranges in that project.

.Example configuration of resource requests and limits for a step
[source,yaml]
----
spec:
  steps:
  - name: <step_name>
    resources:
      requests:
        memory: 2Gi
        cpu: 600m
      limits:
        memory: 4Gi
        cpu: 900m
----

When the `LimitRange` parameter and the minimum values for container resource requests are specified in the project in which the pipeline and task runs are executed, {pipelines-title} looks at all the `LimitRange` values in the project and uses the minimum values instead of zero.

.Example configuration of limit range parameters at a project level
[source,yaml]
----
apiVersion: v1
kind: LimitRange
metadata:
  name: <limit_container_resource>
spec:
  limits:
  - max:
      cpu: "600m"
      memory: "2Gi"
    min:
      cpu: "200m"
      memory: "100Mi"
    default:
      cpu: "500m"
      memory: "800Mi"
    defaultRequest:
      cpu: "100m"
      memory: "100Mi"
    type: Container
...
----
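
To verify the requests that are actually applied, you can inspect the pod that was created for the task run. This is a sketch only; the pod name placeholder is illustrative:

[source,terminal]
----
$ oc get pod <taskrun_pod_name> \
    -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests}{"\n"}{end}'
----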