mirror of https://github.com/openshift/openshift-docs.git
Add quota docs to v4
@@ -144,6 +144,15 @@ Topics:
   File: creating-project-other-user
   Distros: openshift-enterprise,openshift-origin
 ---
+Name: Administering clusters
+Dir: administering_clusters
+Distros: openshift-origin, openshift-enterprise
+Topics:
+- Name: Setting quotas per project
+  File: quotas-setting-per-project
+- Name: Setting quotas across multiple projects
+  File: quotas-setting-across-multiple-projects
+---
 Name: Networking
 Dir: networking
 Distros: openshift-*
23 administering_clusters/quotas-setting-across-multiple-projects.adoc Normal file
@@ -0,0 +1,23 @@
[id='setting-quotas-across-multiple-projects']
= Setting quotas across multiple projects
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
:toc: macro
:toc-title:
:prewrap!:
:context: setting-quotas-across-multiple-projects

toc::[]

{nbsp} +
A multi-project quota, defined by a `ClusterResourceQuota` object, allows quotas
to be shared across multiple projects. Resources used in each selected project
are aggregated, and that aggregate is used to limit resources across all of the
selected projects.

include::modules/quotas-selecting-projects.adoc[leveloffset=+1]
include::modules/quotas-viewing-clusterresourcequotas.adoc[leveloffset=+1]
include::modules/quotas-selection-granularity.adoc[leveloffset=+1]
31 administering_clusters/quotas-setting-per-project.adoc Normal file
@@ -0,0 +1,31 @@
[id='quotas-setting-per-project']
= Setting quotas per project
{product-author}
{product-version}
:data-uri:
:icons:
:experimental:
:toc: macro
:toc-title:
:prewrap!:
:context: quotas-setting-per-project

toc::[]

{nbsp} +
A resource quota, defined by a `ResourceQuota` object, provides constraints that
limit aggregate resource consumption per project. It can limit the quantity of
objects that can be created in a project by type, as well as the total amount of
compute resources and storage that might be consumed by resources in that
project.

include::modules/quotas-resources-managed.adoc[leveloffset=+1]
include::modules/quotas-scopes.adoc[leveloffset=+1]
include::modules/quotas-enforcement.adoc[leveloffset=+1]
include::modules/quotas-requests-vs-limits.adoc[leveloffset=+1]
include::modules/quotas-sample-resource-quotas-def.adoc[leveloffset=+1]
include::modules/quotas-creating-a-quota.adoc[leveloffset=+1]
include::modules/quotas-creating-object-count-quotas.adoc[leveloffset=+2]
include::modules/setting-resource-quota-for-extended-resources.adoc[leveloffset=+2]
include::modules/quotas-viewing-quotas.adoc[leveloffset=+1]
include::modules/quotas-configuring-quota-sync-period.adoc[leveloffset=+1]
include::modules/quotas-requiring-explicit-quota.adoc[leveloffset=+1]
@@ -1,2 +0,0 @@
-Please delete this file once you have assemblies here.
-
@@ -1 +0,0 @@
-../images
@@ -1 +0,0 @@
-../modules
49 modules/quotas-configuring-quota-sync-period.adoc Normal file
@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quotas-configuring-quota-sync-period']
= Configuring quota synchronization period

When a set of resources is deleted, but before quota usage is restored, a user
might encounter problems when attempting to reuse the resources. The
synchronization time frame of resources is determined by the
`resource-quota-sync-period` setting, which can be configured by a cluster
administrator.

Adjusting the regeneration time can be helpful for creating resources and
determining resource usage when automation is used.

[NOTE]
====
The `resource-quota-sync-period` setting is designed to balance system
performance. Reducing the sync period can result in a heavy load on the master.
====

.Procedure

To configure the quota synchronization period:

. Change the `resource-quota-sync-period` setting so that the set of resources
regenerates after the desired amount of time (in seconds) and the resources
become available again:
+
[source,yaml]
----
kubernetesMasterConfig:
  apiLevels:
  - v1beta3
  - v1
  apiServerArguments: null
  controllerArguments:
    resource-quota-sync-period:
    - "10s"
----

. Restart the master services to apply the changes:
+
----
# master-restart api
# master-restart controllers
----
26 modules/quotas-creating-a-quota.adoc Normal file
@@ -0,0 +1,26 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quotas-creating-a-quota']
= Creating a quota

You can create a quota to constrain resource usage in a given project.

.Procedure

. Define the quota in a file. See
xref:../administering_clusters/quotas-setting-per-project.adoc#quotas-sample-resource-quota-definitions[Sample resource quota definitions]
for examples.

. Use the file to create the quota and apply it to a project:
+
----
$ oc create -f <file> [-n <project_name>]
----
+
For example:
+
----
$ oc create -f core-object-counts.yaml -n demoproject
----
53 modules/quotas-creating-object-count-quotas.adoc Normal file
@@ -0,0 +1,53 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quota-creating-object-count-quotas-{context}']
= Creating object count quotas

You can create an object count quota for all {product-title} standard namespaced
resource types, such as `BuildConfig` and `DeploymentConfig`. An object count
quota places a defined quota on all standard namespaced resource types.

When using a resource quota, an object is charged against the quota if it exists
in server storage. These types of quotas are useful to protect against
exhaustion of storage resources.

.Procedure

To configure an object count quota for a resource:

. Run the following command:
+
----
$ oc create quota <name> \
    --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> <1>
----
<1> `<resource>` is the name of the resource, and `<group>` is the API group, if
applicable. Use the `kubectl api-resources` command for a list of resources and
their associated API groups.
+
For example:
+
----
$ oc create quota test \
    --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
resourcequota "test" created
----
+
This example limits the listed resources to the hard limit in each project in
the cluster.

. Verify that the quota was created:
+
----
$ oc describe quota test
Name:                         test
Namespace:                    quota
Resource                      Used  Hard
--------                      ----  ----
count/deployments.extensions  0     2
count/pods                    0     3
count/replicasets.extensions  0     4
count/secrets                 0     4
----
25 modules/quotas-enforcement.adoc Normal file
@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quota-enforcement-{context}']
= Quota enforcement

After a resource quota for a project is first created, the project restricts the
ability to create any new resources that might violate a quota constraint until
it has calculated updated usage statistics.

After a quota is created and usage statistics are updated, the project accepts
the creation of new content. When you create or modify resources, your quota
usage is incremented immediately upon the request to create or modify the
resource.

When you delete a resource, your quota use is decremented during the next full
recalculation of quota statistics for the project. A configurable amount of time
determines how long it takes to reduce quota usage statistics to their current
observed system value.

If project modifications exceed a quota usage limit, the server denies the
action and returns an appropriate error message explaining the quota constraint
that was violated and what the currently observed usage statistics are in the
system.
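
For illustration, the denial surfaces as a *Forbidden* error from the server. A
sketch, assuming a hypothetical project whose `compute-resources` quota already
has all four allowed pods in use (the message format follows the `exceeded
quota` error shown in the extended resources module):

----
$ oc create -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "example-pod" is forbidden: exceeded quota: compute-resources, requested: pods=1, used: pods=4, limited: pods=4
----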
16 modules/quotas-requests-vs-limits.adoc Normal file
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quotas-requests-vs-limits']
= Requests versus limits

When allocating compute resources, each container can specify a request and a
limit value for CPU, memory, and ephemeral storage. Quotas can restrict any of
these values.

If the quota has a value specified for `requests.cpu` or `requests.memory`,
then it requires that every incoming container make an explicit request for
those resources. If the quota has a value specified for `limits.cpu` or
`limits.memory`, then it requires that every incoming container specify an
explicit limit for those resources.
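
As a minimal sketch (the pod name, container name, and values are hypothetical),
a container that satisfies a quota covering both requests and limits declares
all of them explicitly:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: rhel7
    resources:
      requests:       # charged against requests.cpu and requests.memory
        cpu: 100m
        memory: 128Mi
      limits:         # charged against limits.cpu and limits.memory
        cpu: 500m
        memory: 256Mi
----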
49 modules/quotas-requiring-explicit-quota.adoc Normal file
@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quota-requiring-explicit-quota-{context}']
= Requiring explicit quota to consume a resource

If a resource is not managed by quota, a user has no restriction on the amount
of resource that can be consumed. For example, if there is no quota on storage
related to the gold storage class, the amount of gold storage a project can
create is unbounded.

For high-cost compute or storage resources, administrators might want to require
that an explicit quota be granted before a resource can be consumed. For
example, if a project was not explicitly given quota for storage related to the
gold storage class, users of that project would not be able to create any
storage of that type.

.Procedure

To require explicit quota to consume a particular resource:

. Add the following stanza to the master configuration:
+
[source,yaml]
----
admissionConfig:
  pluginConfig:
    ResourceQuota:
      configuration:
        apiVersion: resourcequota.admission.k8s.io/v1alpha1
        kind: Configuration
        limitedResources:
        - resource: persistentvolumeclaims <1>
          matchContains:
          - gold.storageclass.storage.k8s.io/requests.storage <2>
----
<1> The group/resource whose consumption is limited by default.
<2> The name of the resource tracked by quota associated with the group/resource
to limit by default.
+
In the above example, the quota system intercepts every operation that creates
or updates a `PersistentVolumeClaim`. It checks which resources understood by
quota would be consumed, and if there is no covering quota for those resources
in the project, the request is denied.
+
In this example, if a user creates a `PersistentVolumeClaim` that uses storage
associated with the gold storage class, and there is no matching quota in the
project, the request is denied.
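
To later allow a project to consume gold storage, a cluster administrator grants
that project a covering quota. A sketch, reusing the pattern from the
`storage-consumption.yaml` sample definition (the quota name and the `10Gi`
value are illustrative):

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gold-storage
spec:
  hard:
    gold.storageclass.storage.k8s.io/requests.storage: "10Gi"
----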
123 modules/quotas-resources-managed.adoc Normal file
@@ -0,0 +1,123 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quotas-resources-managed-{context}']
= Resources managed by quotas

The following describes the set of compute resources and object types that can
be managed by a quota.

[NOTE]
====
A pod is in a terminal state if `status.phase in (Failed, Succeeded)` is true.
====

.Compute resources managed by quota
[cols="3a,8a",options="header"]
|===

|Resource Name |Description

|`cpu`
|The sum of CPU requests across all pods in a non-terminal state cannot exceed
this value. `cpu` and `requests.cpu` are the same value and can be used
interchangeably.

|`memory`
|The sum of memory requests across all pods in a non-terminal state cannot
exceed this value. `memory` and `requests.memory` are the same value and can
be used interchangeably.

|`ephemeral-storage`
|The sum of local ephemeral storage requests across all pods in a non-terminal
state cannot exceed this value. `ephemeral-storage` and
`requests.ephemeral-storage` are the same value and can be used
interchangeably. This resource is available only if you enabled the ephemeral
storage technology preview. This feature is disabled by default.

|`requests.cpu`
|The sum of CPU requests across all pods in a non-terminal state cannot exceed
this value. `cpu` and `requests.cpu` are the same value and can be used
interchangeably.

|`requests.memory`
|The sum of memory requests across all pods in a non-terminal state cannot
exceed this value. `memory` and `requests.memory` are the same value and can
be used interchangeably.

|`requests.ephemeral-storage`
|The sum of ephemeral storage requests across all pods in a non-terminal state
cannot exceed this value. `ephemeral-storage` and
`requests.ephemeral-storage` are the same value and can be used
interchangeably. This resource is available only if you enabled the ephemeral
storage technology preview. This feature is disabled by default.

|`limits.cpu`
|The sum of CPU limits across all pods in a non-terminal state cannot exceed
this value.

|`limits.memory`
|The sum of memory limits across all pods in a non-terminal state cannot exceed
this value.

|`limits.ephemeral-storage`
|The sum of ephemeral storage limits across all pods in a non-terminal state
cannot exceed this value. This resource is available only if you enabled the
ephemeral storage technology preview. This feature is disabled by default.
|===

.Storage resources managed by quota
[cols="3a,8a",options="header"]
|===

|Resource Name |Description

|`requests.storage`
|The sum of storage requests across all persistent volume claims in any state
cannot exceed this value.

|`persistentvolumeclaims`
|The total number of persistent volume claims that can exist in the project.

|`<storage-class-name>.storageclass.storage.k8s.io/requests.storage`
|The sum of storage requests across all persistent volume claims in any state
that have a matching storage class cannot exceed this value.

|`<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims`
|The total number of persistent volume claims with a matching storage class that
can exist in the project.
|===

[id='quotas-object-counts-managed-{context}']
.Object counts managed by quota
[cols="3a,8a",options="header"]
|===

|Resource Name |Description

|`pods`
|The total number of pods in a non-terminal state that can exist in the project.

|`replicationcontrollers`
|The total number of replication controllers that can exist in the project.

|`resourcequotas`
|The total number of resource quotas that can exist in the project.

|`services`
|The total number of services that can exist in the project.

|`secrets`
|The total number of secrets that can exist in the project.

|`configmaps`
|The total number of `ConfigMap` objects that can exist in the project.

|`persistentvolumeclaims`
|The total number of persistent volume claims that can exist in the project.

|`openshift.io/imagestreams`
|The total number of image streams that can exist in the project.
|===
166 modules/quotas-sample-resource-quotas-def.adoc Normal file
@@ -0,0 +1,166 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quotas-sample-resource-quota-definitions']
= Sample resource quota definitions

.`core-object-counts.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    configmaps: "10" <1>
    persistentvolumeclaims: "4" <2>
    replicationcontrollers: "20" <3>
    secrets: "10" <4>
    services: "10" <5>
----
<1> The total number of `ConfigMap` objects that can exist in the project.
<2> The total number of persistent volume claims (PVCs) that can exist in the
project.
<3> The total number of replication controllers that can exist in the project.
<4> The total number of secrets that can exist in the project.
<5> The total number of services that can exist in the project.

.`openshift-object-counts.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: openshift-object-counts
spec:
  hard:
    openshift.io/imagestreams: "10" <1>
----
<1> The total number of image streams that can exist in the project.

.`compute-resources.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "4" <1>
    requests.cpu: "1" <2>
    requests.memory: 1Gi <3>
    requests.ephemeral-storage: 2Gi <4>
    limits.cpu: "2" <5>
    limits.memory: 2Gi <6>
    limits.ephemeral-storage: 4Gi <7>
----
<1> The total number of pods in a non-terminal state that can exist in the
project.
<2> Across all pods in a non-terminal state, the sum of CPU requests cannot
exceed 1 core.
<3> Across all pods in a non-terminal state, the sum of memory requests cannot
exceed 1Gi.
<4> Across all pods in a non-terminal state, the sum of ephemeral storage
requests cannot exceed 2Gi.
<5> Across all pods in a non-terminal state, the sum of CPU limits cannot exceed
2 cores.
<6> Across all pods in a non-terminal state, the sum of memory limits cannot
exceed 2Gi.
<7> Across all pods in a non-terminal state, the sum of ephemeral storage limits
cannot exceed 4Gi.

.`besteffort.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort
spec:
  hard:
    pods: "1" <1>
  scopes:
  - BestEffort <2>
----
<1> The total number of pods in a non-terminal state with `BestEffort` quality
of service that can exist in the project.
<2> Restricts the quota to only matching pods that have `BestEffort` quality of
service for either memory or CPU.

.`compute-resources-long-running.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-long-running
spec:
  hard:
    pods: "4" <1>
    limits.cpu: "4" <2>
    limits.memory: "2Gi" <3>
    limits.ephemeral-storage: "4Gi" <4>
  scopes:
  - NotTerminating <5>
----
<1> The total number of pods in a non-terminal state.
<2> Across all pods in a non-terminal state, the sum of CPU limits cannot exceed
this value.
<3> Across all pods in a non-terminal state, the sum of memory limits cannot
exceed this value.
<4> Across all pods in a non-terminal state, the sum of ephemeral storage limits
cannot exceed this value.
<5> Restricts the quota to only matching pods where `spec.activeDeadlineSeconds`
is set to `nil`. Build pods will fall under `NotTerminating` unless the
`RestartNever` policy is applied.

.`compute-resources-time-bound.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources-time-bound
spec:
  hard:
    pods: "2" <1>
    limits.cpu: "1" <2>
    limits.memory: "1Gi" <3>
    limits.ephemeral-storage: "1Gi" <4>
  scopes:
  - Terminating <5>
----
<1> The total number of pods in a non-terminal state.
<2> Across all pods in a non-terminal state, the sum of CPU limits cannot exceed
this value.
<3> Across all pods in a non-terminal state, the sum of memory limits cannot
exceed this value.
<4> Across all pods in a non-terminal state, the sum of ephemeral storage limits
cannot exceed this value.
<5> Restricts the quota to only matching pods where
`spec.activeDeadlineSeconds >= 0`. For example, this quota would charge for
build or deployer pods, but not long-running pods like a web server or database.

.`storage-consumption.yaml`
[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-consumption
spec:
  hard:
    persistentvolumeclaims: "10" <1>
    requests.storage: "50Gi" <2>
    gold.storageclass.storage.k8s.io/requests.storage: "10Gi" <3>
    silver.storageclass.storage.k8s.io/requests.storage: "20Gi" <4>
    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" <5>
    bronze.storageclass.storage.k8s.io/requests.storage: "0" <6>
    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" <7>
----
<1> The total number of persistent volume claims in a project.
<2> Across all persistent volume claims in a project, the sum of storage
requested cannot exceed this value.
<3> Across all persistent volume claims in a project, the sum of storage
requested in the gold storage class cannot exceed this value.
<4> Across all persistent volume claims in a project, the sum of storage
requested in the silver storage class cannot exceed this value.
<5> Across all persistent volume claims in a project, the total number of claims
in the silver storage class cannot exceed this value.
<6> Across all persistent volume claims in a project, the sum of storage
requested in the bronze storage class cannot exceed this value. When this is set
to `0`, it means the bronze storage class cannot request storage.
<7> Across all persistent volume claims in a project, the total number of claims
in the bronze storage class cannot exceed this value. When this is set to `0`,
it means the bronze storage class cannot create claims.
57 modules/quotas-scopes.adoc Normal file
@@ -0,0 +1,57 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quotas-scopes-{context}']
= Quota scopes

Each quota can have an associated set of _scopes_. A quota only measures usage
for a resource if it matches the intersection of enumerated scopes.

Adding a scope to a quota restricts the set of resources to which that quota can
apply. Specifying a resource outside of the allowed set results in a validation
error.

[cols="3a,8a",options="header"]
|===

|Scope |Description

|`Terminating`
|Match pods where `spec.activeDeadlineSeconds >= 0`.

|`NotTerminating`
|Match pods where `spec.activeDeadlineSeconds` is `nil`.

|`BestEffort`
|Match pods that have best effort quality of service for either `cpu` or
`memory`.

|`NotBestEffort`
|Match pods that do not have best effort quality of service for `cpu` and
`memory`.
|===

A `BestEffort` scope restricts a quota to limiting the following resources:

- `pods`

The `Terminating`, `NotTerminating`, and `NotBestEffort` scopes restrict a quota
to tracking the following resources:

- `pods`
- `memory`
- `requests.memory`
- `limits.memory`
- `cpu`
- `requests.cpu`
- `limits.cpu`
- `ephemeral-storage`
- `requests.ephemeral-storage`
- `limits.ephemeral-storage`

[NOTE]
====
Ephemeral storage requests and limits apply only if you enabled the ephemeral
storage technology preview. This feature is disabled by default.
====
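
As a sketch of how a scope is attached (the quota name and values are
illustrative; compare the `besteffort.yaml` sample definition), a quota that
tracks only pods that are not best effort could look like:

[source,yaml]
----
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-besteffort
spec:
  hard:
    pods: "4"
    requests.cpu: "2"
    requests.memory: 2Gi
  scopes:
  - NotBestEffort
----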
98 modules/quotas-selecting-projects.adoc Normal file
@@ -0,0 +1,98 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-across-multiple-projects.adoc

[id='quotas-setting-projects-{context}']
= Selecting multiple projects during quota creation

When creating quotas, you can select multiple projects based on annotation
selection, label selection, or both.

.Procedure

. To select projects based on annotations, run the following command:
+
----
$ oc create clusterquota for-user \
     --project-annotation-selector openshift.io/requester=<user_name> \
     --hard pods=10 \
     --hard secrets=20
----
+
This creates the following `ClusterResourceQuota` object:
+
[source,yaml]
----
apiVersion: v1
kind: ClusterResourceQuota
metadata:
  name: for-user
spec:
  quota: <1>
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations: <2>
      openshift.io/requester: <user_name>
    labels: null <3>
status:
  namespaces: <4>
  - namespace: ns-one
    status:
      hard:
        pods: "10"
        secrets: "20"
      used:
        pods: "1"
        secrets: "9"
  total: <5>
    hard:
      pods: "10"
      secrets: "20"
    used:
      pods: "1"
      secrets: "9"
----
<1> The `ResourceQuotaSpec` object that will be enforced over the selected
projects.
<2> A simple key/value selector for annotations.
<3> A label selector that can be used to select projects.
<4> A per-namespace map that describes current quota usage in each selected
project.
<5> The aggregate usage across all selected projects.
+
This multi-project quota document controls all projects requested by
`<user_name>` using the default project request endpoint. The selected projects
are together limited to 10 pods and 20 secrets.

. Similarly, to select projects based on labels, run this command:
+
----
$ oc create clusterresourcequota for-name \ <1>
    --project-label-selector=name=frontend \ <2>
    --hard=pods=10 --hard=secrets=20
----
<1> Both `clusterresourcequota` and `clusterquota` are aliases of the same
command. `for-name` is the name of the `ClusterResourceQuota` object.
<2> To select projects by label, provide a key-value pair by using the format
`--project-label-selector=key=value`.
+
This creates the following `ClusterResourceQuota` object definition:
+
[source,yaml]
----
apiVersion: v1
kind: ClusterResourceQuota
metadata:
  creationTimestamp: null
  name: for-name
spec:
  quota:
    hard:
      pods: "10"
      secrets: "20"
  selector:
    annotations: null
    labels:
      matchLabels:
        name: frontend
----
11 modules/quotas-selection-granularity.adoc Normal file
@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-across-multiple-projects.adoc

[id='quotas-selection-granularity-{context}']
= Selection granularity

Because of the locking that occurs when claiming quota allocations, the number
of active projects selected by a multi-project quota is an important
consideration. Selecting more than 100 projects under a single multi-project
quota can have detrimental effects on API server responsiveness in those
projects.
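
Before creating a multi-project quota, it can help to estimate how many projects
a selector would match. A sketch, assuming the same illustrative `name=frontend`
label selector used in the selection module (`wc -l` counts the returned rows):

----
$ oc get projects -l name=frontend --no-headers | wc -l
----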
36 modules/quotas-viewing-clusterresourcequotas.adoc Normal file
@@ -0,0 +1,36 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-across-multiple-projects.adoc

[id='quotas-viewing-clusterresourcequotas-{context}']
= Viewing applicable ClusterResourceQuotas

A project administrator is not allowed to create or modify the multi-project
quota that limits their project, but the administrator is allowed to view the
multi-project quota documents that are applied to their project. The project
administrator can do this by using the `AppliedClusterResourceQuota` resource.

.Procedure

. To view quotas applied to a project, run:
+
----
$ oc describe AppliedClusterResourceQuota
----
+
For example:
+
----
Name:               for-user
Namespace:          <none>
Created:            19 hours ago
Labels:             <none>
Annotations:        <none>
Label Selector:     <null>
AnnotationSelector: map[openshift.io/requester:<user-name>]
Resource            Used  Hard
--------            ----  ----
pods                1     10
secrets             9     20
----
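
The applied quotas can also be listed without detail; a minimal sketch:

----
$ oc get appliedclusterresourcequota
----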
40 modules/quotas-viewing-quotas.adoc Normal file
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * administering_clusters/quotas-setting-per-project.adoc

[id='quota-viewing-quotas-{context}']
= Viewing a quota

You can view usage statistics related to any hard limits defined in a project's
quota by navigating in the web console to the project's *Quota* page.

You can also use the CLI to view quota details.

.Procedure

. Get the list of quotas defined in the project. For example, for a project
called `demoproject`:
+
----
$ oc get quota -n demoproject
NAME                 AGE
besteffort           11m
compute-resources    2m
core-object-counts   29m
----

. Describe the quota you are interested in, for example the `core-object-counts`
quota:
+
----
$ oc describe quota core-object-counts -n demoproject
Name:                   core-object-counts
Namespace:              demoproject
Resource                Used  Hard
--------                ----  ----
configmaps              3     10
persistentvolumeclaims  0     4
replicationcontrollers  3     20
secrets                 9     10
services                2     10
----
121 modules/setting-resource-quota-for-extended-resources.adoc Normal file
@@ -0,0 +1,121 @@
// Module included in the following assemblies:
//
// * administering_clusters/setting-quotas-per-project.adoc

[id='setting-resource-quota-for-extended-resources-{context}']
= Setting resource quota for extended resources

Overcommitment of resources is not allowed for extended resources, so you must
specify `requests` and `limits` for the same extended resource in a quota.
Currently, only quota items with the prefix `requests.` are allowed for extended
resources. The following is an example scenario of how to set resource quota for
the GPU resource `nvidia.com/gpu`.

.Procedure

. Determine how many GPUs are available on a node in your cluster. For example:
+
----
# oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'
                    openshift.com/gpu-accelerator=true
Capacity:
 nvidia.com/gpu:  2
Allocatable:
 nvidia.com/gpu:  2
  nvidia.com/gpu  0           0
----
+
In this example, 2 GPUs are available.

. Set a quota in the namespace `nvidia`. In this example, the quota is `1`:
+
----
# cat gpu-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: nvidia
spec:
  hard:
    requests.nvidia.com/gpu: 1
----

. Create the quota:
+
----
# oc create -f gpu-quota.yaml
resourcequota/gpu-quota created
----

. Verify that the namespace has the correct quota set:
+
----
# oc describe quota gpu-quota -n nvidia
Name:                    gpu-quota
Namespace:               nvidia
Resource                 Used  Hard
--------                 ----  ----
requests.nvidia.com/gpu  0     1
----

. Define a pod that asks for a single GPU in a file called `gpu-pod.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  generateName: gpu-pod-
  namespace: nvidia
spec:
  restartPolicy: OnFailure
  containers:
  - name: rhel7-gpu-pod
    image: rhel7
    env:
      - name: NVIDIA_VISIBLE_DEVICES
        value: all
      - name: NVIDIA_DRIVER_CAPABILITIES
        value: "compute,utility"
      - name: NVIDIA_REQUIRE_CUDA
        value: "cuda>=5.0"
    command: ["sleep"]
    args: ["infinity"]
    resources:
      limits:
        nvidia.com/gpu: 1
----

. Run the pod:
+
----
# oc create -f gpu-pod.yaml
----

. Verify that the pod is running:
+
----
# oc get pods
NAME            READY   STATUS    RESTARTS   AGE
gpu-pod-s46h7   1/1     Running   0          1m
----

. Verify that the quota `Used` counter is correct:
+
----
# oc describe quota gpu-quota -n nvidia
Name:                    gpu-quota
Namespace:               nvidia
Resource                 Used  Hard
--------                 ----  ----
requests.nvidia.com/gpu  1     1
----

. Attempt to create a second GPU pod in the `nvidia` namespace. This is
technically available on the node because it has 2 GPUs:
+
----
# oc create -f gpu-pod.yaml
Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1
----
+
This *Forbidden* error message is expected because you have a quota of 1 GPU and
this pod tried to allocate a second GPU, which exceeds its quota.