// Module included in the following assemblies:
//
// ../scalability_and_performance/compute-resource-quotas.adoc
:_mod-docs-content-type: PROCEDURE
[id="admin-quota-usage_{context}"]
= Admin quota usage

== Quota enforcement

After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics.

After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource.

When you delete a resource, your quota use is decremented during the next full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value.

If project modifications exceed a quota usage limit, the server denies the action and returns an appropriate error message that explains the quota constraint that was violated and the currently observed usage statistics in the system.
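For example, with a quota that caps `limits.memory`, a request that would push usage past the cap is denied with a message similar to the following. The pod, file, and quota names shown here are illustrative:

.Example output
[source,terminal]
----
Error from server (Forbidden): error when creating "pod.yaml": pods "example-pod" is forbidden:
exceeded quota: compute-resources, requested: limits.memory=2Gi, used: limits.memory=1Gi,
limited: limits.memory=2Gi
----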
== Requests compared to limits

When allocating compute resources by quota, each container can specify a request and a limit value for CPU, memory, and ephemeral storage. Quotas can restrict any of these values.

If the quota has a value specified for `requests.cpu` or `requests.memory`, it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for `limits.cpu` or `limits.memory`, it requires that every incoming container specify an explicit limit for those resources.
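As a minimal illustration, the following pod specification (with hypothetical names) sets explicit requests and limits for its container, so it satisfies a quota that restricts `requests.cpu`, `requests.memory`, `limits.cpu`, and `limits.memory`:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: quota-aware-pod # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest # illustrative image
    resources:
      requests:
        cpu: 100m # counted against requests.cpu
        memory: 128Mi # counted against requests.memory
      limits:
        cpu: 200m # counted against limits.cpu
        memory: 256Mi # counted against limits.memory
----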
== Sample resource quota definitions

.Example core-object-counts.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    configmaps: "10" # <1>
    persistentvolumeclaims: "4" # <2>
    replicationcontrollers: "20" # <3>
    secrets: "10" # <4>
    services: "10" # <5>
----
<1> The total number of `ConfigMap` objects that can exist in the project.
<2> The total number of persistent volume claims (PVCs) that can exist in the project.
<3> The total number of replication controllers that can exist in the project.
<4> The total number of secrets that can exist in the project.
<5> The total number of services that can exist in the project.
.Example openshift-object-counts.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: openshift-object-counts
spec:
  hard:
    openshift.io/imagestreams: "10" # <1>
----
<1> The total number of image streams that can exist in the project.
.Example compute-resources.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4" # <1>
    requests.cpu: "1" # <2>
    requests.memory: 1Gi # <3>
    requests.ephemeral-storage: 2Gi # <4>
    limits.cpu: "2" # <5>
    limits.memory: 2Gi # <6>
    limits.ephemeral-storage: 4Gi # <7>
----
<1> The total number of pods in a non-terminal state that can exist in the project.
<2> Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core.
<3> Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi.
<4> Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi.
<5> Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores.
<6> Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi.
<7> Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi.
.Example besteffort.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort
spec:
  hard:
    pods: "1" # <1>
  scopes:
  - BestEffort # <2>
----
<1> The total number of pods in a non-terminal state with *BestEffort* quality of service that can exist in the project.
<2> Restricts the quota to only matching pods that have *BestEffort* quality of service for either memory or CPU.
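For illustration, a pod whose containers set no CPU or memory requests or limits receives the *BestEffort* quality of service class and is therefore counted by this quota. The names below are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest # illustrative image
    # no resources stanza, so the pod is assigned the BestEffort QoS class
----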
.Example compute-resources-long-running.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-long-running
spec:
  hard:
    pods: "4" # <1>
    limits.cpu: "4" # <2>
    limits.memory: "2Gi" # <3>
    limits.ephemeral-storage: "4Gi" # <4>
  scopes:
  - NotTerminating # <5>
----
<1> The total number of pods in a non-terminal state.
<2> Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
<3> Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
<4> Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value.
<5> Restricts the quota to only matching pods where `spec.activeDeadlineSeconds` is set to `nil`. Build pods fall under `NotTerminating` unless the `RestartNever` policy is applied.
.Example compute-resources-time-bound.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-time-bound
spec:
  hard:
    pods: "2" # <1>
    limits.cpu: "1" # <2>
    limits.memory: "1Gi" # <3>
    limits.ephemeral-storage: "1Gi" # <4>
  scopes:
  - Terminating # <5>
----
<1> The total number of pods in a non-terminal state.
<2> Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
<3> Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
<4> Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value.
<5> Restricts the quota to only matching pods where `spec.activeDeadlineSeconds >= 0`. For example, this quota charges for build pods, but not for long-running pods such as a web server or a database.
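As a hypothetical example, a pod that sets `spec.activeDeadlineSeconds` is matched by the `Terminating` scope and is charged against this quota:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: time-bound-pod # illustrative name
spec:
  activeDeadlineSeconds: 3600 # a non-nil deadline matches the Terminating scope
  containers:
  - name: job
    image: registry.example.com/job:latest # illustrative image
    resources:
      limits:
        cpu: 500m
        memory: 256Mi
----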
.Example storage-consumption.yaml
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-consumption
spec:
  hard:
    persistentvolumeclaims: "10" # <1>
    requests.storage: "50Gi" # <2>
    gold.storageclass.storage.k8s.io/requests.storage: "10Gi" # <3>
    silver.storageclass.storage.k8s.io/requests.storage: "20Gi" # <4>
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" # <5>
    bronze.storageclass.storage.k8s.io/requests.storage: "0" # <6>
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" # <7>
----
<1> The total number of persistent volume claims in a project.
<2> Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value.
<3> Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value.
<4> Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value.
<5> Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value.
<6> Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to `0`, the bronze storage class cannot request storage.
<7> Across all persistent volume claims in a project, the total number of claims in the bronze storage class cannot exceed this value. When this is set to `0`, the bronze storage class cannot create claims.
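As an illustration with hypothetical names, a persistent volume claim such as the following is charged against both `requests.storage` and the gold storage class entries of the quota above:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gold-claim # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gold # charged against the gold storage class quota
  resources:
    requests:
      storage: 5Gi # charged against requests.storage and the gold class total
----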
== Creating a quota

To create a quota, first define the quota in a file, and then use that file to apply the quota to a project. See the Additional resources section for a link that describes this process.
[source,terminal]
----
$ oc create -f <resource_quota_definition> [-n <project_name>]
----
Here is an example using the `core-object-counts.yaml` resource quota definition and the `demoproject` project name:
[source,terminal]
----
$ oc create -f core-object-counts.yaml -n demoproject
----
== Creating object count quotas

You can create an object count quota for all {product-title} standard namespaced resource types, such as `BuildConfig` and `DeploymentConfig`. An object count quota places a defined quota on all standard namespaced resource types.

When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources.
To configure an object count quota for a resource, run the following command:
[source,terminal]
----
$ oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>
----
.Example showing an object count quota
[source,terminal]
----
$ oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
----

.Example output
[source,terminal]
----
resourcequota "test" created
----
[source,terminal]
----
$ oc describe quota test
----
.Example output
[source,terminal]
----
Name:                         test
Namespace:                    quota
Resource                      Used  Hard
--------                      ----  ----
count/deployments.extensions  0     2
count/pods                    0     3
count/replicasets.extensions  0     4
count/secrets                 0     4
----
This example limits the listed resources to the hard limit in each project in the cluster.
== Viewing a quota

You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's *Quota* page.

You can also use the CLI to view quota details:
. First, get the list of quotas defined in the project. For example, for a project called `demoproject`:
+
[source,terminal]
----
$ oc get quota -n demoproject
----
+
.Example output
[source,terminal]
----
NAME                 AGE
besteffort           11m
compute-resources    2m
core-object-counts   29m
----
. Describe the quota you are interested in, for example the `core-object-counts` quota:
+
[source,terminal]
----
$ oc describe quota core-object-counts -n demoproject
----
+
.Example output
[source,terminal]
----
Name:                   core-object-counts
Namespace:              demoproject
Resource                Used  Hard
--------                ----  ----
configmaps              3     10
persistentvolumeclaims  0     4
replicationcontrollers  3     20
secrets                 9     10
services                2     10
----
ifdef::openshift-origin,openshift-enterprise[]
== Configuring quota synchronization period

When a set of resources is deleted, the synchronization time frame of resources is determined by the `resource-quota-sync-period` setting in the `/etc/origin/master/master-config.yaml` file.

Before quota usage is restored, a user can encounter problems when attempting to reuse the resources. You can change the `resource-quota-sync-period` setting so that the set of resources regenerates in the required amount of time, in seconds, and the resources become available again:
.Example `resource-quota-sync-period` setting
[source,yaml]
----
kubernetesMasterConfig:
  apiLevels:
  - v1beta3
  - v1
  apiServerArguments: null
  controllerArguments:
    resource-quota-sync-period:
    - "10s"
----
After making any changes, restart the controller services to apply them:

[source,terminal]
----
$ master-restart api
----

[source,terminal]
----
$ master-restart controllers
----
Adjusting the regeneration time can be helpful for creating resources and determining resource usage when automation is used.
[NOTE]
====
The `resource-quota-sync-period` setting balances system performance. Reducing the sync period can result in a heavy load on the controller.
====
endif::[]
ifdef::openshift-origin,openshift-enterprise,openshift-dedicated[]
== Explicit quota to consume a resource

If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded.

For high-cost compute or storage resources, administrators can require that an explicit quota be granted to consume a resource. For example, if a project was not explicitly given quota for storage related to the gold storage class, users of that project would not be able to create any storage of that type.

To require explicit quota to consume a particular resource, add the following stanza to the master-config.yaml file:
[source,yaml]
----
admissionConfig:
  pluginConfig:
    ResourceQuota:
      configuration:
        apiVersion: resourcequota.admission.k8s.io/v1alpha1
        kind: Configuration
        limitedResources:
        - resource: persistentvolumeclaims # <1>
          matchContains:
          - gold.storageclass.storage.k8s.io/requests.storage # <2>
----
<1> The group or resource whose consumption is limited by default.
<2> The name of the resource that is tracked by quota and associated with the group or resource to limit by default.
In the above example, the quota system intercepts every operation that creates or updates a `PersistentVolumeClaim`. It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a `PersistentVolumeClaim` that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied.
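For the request to succeed, the project must contain a quota that covers the tracked resource. A minimal covering quota could look like the following, with an illustrative name and value:

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gold-storage # illustrative name
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: "10Gi" # covering quota for gold storage requests
----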
endif::[]