Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-06 15:46:57 +01:00
@@ -3,7 +3,7 @@
 // * serverless/eventing/event-sources/serverless-apiserversource.adoc
 
 :_content-type: PROCEDURE
-[id="apiserversource-yaml_context"]
+[id="apiserversource-yaml_{context}"]
 = Creating an API server source by using YAML files
 
 Creating Knative resources by using YAML files uses a declarative API, which enables you to describe event sources declaratively and in a reproducible manner. To create an API server source by using YAML, you must create a YAML file that defines an `ApiServerSource` object, then apply it by using the `oc apply` command.
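The module this hunk touches describes creating an `ApiServerSource` object from a YAML file. A minimal sketch of such a manifest, assuming an `ApiServerSource` watching core `Event` resources; the names, namespace, service account, and sink are illustrative, not from this change:

```yaml
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
  name: testevents        # illustrative name
  namespace: default
spec:
  serviceAccountName: events-sa   # assumed service account with RBAC to watch events
  mode: Resource
  resources:
    - apiVersion: v1
      kind: Event
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # assumed sink service
```

As the paragraph states, such a file is applied with `oc apply -f <filename>.yaml`.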
@@ -3,7 +3,7 @@
 // * nodes/nodes-dashboard-using.adoc
 
 :_content-type: CONCEPT
-[id="nodes-dashboard-using-identify-critical-cpu-kubelet"]
+[id="nodes-dashboard-using-identify-critical-cpu-kubelet_{context}"]
 = Nodes with Kubelet system reserved CPU utilization > 50%
 
 The *Nodes with Kubelet system reserved CPU utilization > 50%* query calculates the percentage of the CPU that the Kubelet system is currently using from system reserved.
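The hunk does not show the dashboard's actual expression. A PromQL sketch of the idea, assuming cAdvisor's `container_cpu_usage_seconds_total` series for the kubelet systemd slice and kube-state-metrics capacity/allocatable gauges (metric and label names are assumptions):

```promql
# Kubelet CPU usage as a percentage of system reserved CPU,
# where system reserved = capacity - allocatable; flags nodes above 50%
sum by (node) (
  rate(container_cpu_usage_seconds_total{id="/system.slice/kubelet.service"}[5m])
)
/ on (node)
(
  kube_node_status_capacity{resource="cpu"}
  - kube_node_status_allocatable{resource="cpu"}
) * 100 > 50
```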
@@ -3,7 +3,7 @@
 // * nodes/nodes-dashboard-using.adoc
 
 :_content-type: CONCEPT
-[id="nodes-dashboard-using-identify-critical-memory-crio"]
+[id="nodes-dashboard-using-identify-critical-memory-crio_{context}"]
 = Nodes with CRI-O system reserved memory utilization > 50%
 
 The *Nodes with CRI-O system reserved memory utilization > 50%* query calculates all nodes where the percentage of used memory reserved for the CRI-O system is greater than or equal to 50%. In this case, memory usage is defined by the resident set size (RSS), which is the portion of the CRI-O system's memory held in RAM.
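A PromQL sketch of this RSS-over-reserved calculation, again with assumed metric and label names (cAdvisor's `container_memory_rss` for the CRI-O systemd slice, kube-state-metrics for capacity and allocatable); the Kubelet memory variant in the next hunk has the same shape with `/system.slice/kubelet.service` as the slice:

```promql
# CRI-O RSS as a percentage of system reserved memory (capacity - allocatable),
# flagging nodes at or above 50%
sum by (node) (container_memory_rss{id="/system.slice/crio.service"})
/ on (node)
(
  kube_node_status_capacity{resource="memory"}
  - kube_node_status_allocatable{resource="memory"}
) * 100 >= 50
```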
@@ -3,7 +3,7 @@
 // * nodes/nodes-dashboard-using.adoc
 
 :_content-type: CONCEPT
-[id="nodes-dashboard-using-identify-critical-memory-kubelet"]
+[id="nodes-dashboard-using-identify-critical-memory-kubelet_{context}"]
 = Nodes with Kubelet system reserved memory utilization > 50%
 
 The *Nodes with Kubelet system reserved memory utilization > 50%* query indicates nodes where the Kubelet's system reserved memory utilization exceeds 50%. The query examines the memory that the Kubelet process itself is consuming on a node.
@@ -3,7 +3,7 @@
 // * nodes/nodes-dashboard-using.adoc
 
 :_content-type: CONCEPT
-[id="nodes-dashboard-using-identify-critical-memory"]
+[id="nodes-dashboard-using-identify-critical-memory_{context}"]
 = Nodes with system reserved memory utilization > 80%
 
 The *Nodes with system reserved memory utilization > 80%* query calculates the percentage of system reserved memory that is utilized for each node. The calculation divides the total resident set size (RSS) by the total memory capacity of the node subtracted from the allocatable memory. RSS is the portion of the system's memory occupied by a process that is held in main memory (RAM). Nodes are flagged if their resulting value equals or exceeds an 80% threshold.
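The paragraph spells out the calculation: total RSS divided by (capacity - allocatable), flagged at 80%. A PromQL sketch of that formula, with assumed metric names as in the earlier query sketches:

```promql
# Total system-slice RSS over system reserved memory (capacity - allocatable),
# flagging nodes at or above 80%
sum by (node) (container_memory_rss{id="/system.slice"})
/ on (node)
(
  kube_node_status_capacity{resource="memory"}
  - kube_node_status_allocatable{resource="memory"}
) * 100 >= 80
```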
@@ -3,7 +3,7 @@
 // * nodes/nodes-dashboard-using.adoc
 
 :_content-type: CONCEPT
-[id="nodes-dashboard-using-identify-critical-pulls"]
+[id="nodes-dashboard-using-identify-critical-pulls_{context}"]
 = Failure rate for image pulls in the last hour
 
 The *Failure rate for image pulls in the last hour* query divides the total number of failed image pulls by the sum of successful and failed image pulls to provide a ratio of failures.
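The described ratio, failed / (successful + failed), can be sketched in PromQL; the counter names here are hypothetical placeholders for whichever image-pull metrics the dashboard actually uses:

```promql
# Ratio of failed image pulls over the last hour
# (image_pulls_failure_total and image_pulls_success_total are placeholder names)
sum(increase(image_pulls_failure_total[1h]))
/
(
  sum(increase(image_pulls_success_total[1h]))
  + sum(increase(image_pulls_failure_total[1h]))
)
```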
@@ -3,7 +3,7 @@
 // * nodes/nodes-dashboard-using.adoc
 
 :_content-type: CONCEPT
-[id="nodes-dashboard-using-identify-critical-top3"]
+[id="nodes-dashboard-using-identify-critical-top3_{context}"]
 = Top 3 containers with the most OOM kills in the last day
 
 The *Top 3 containers with the most OOM kills in the last day* query fetches details regarding the top three containers that have experienced the most Out-Of-Memory (OOM) kills in the previous day.
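A "top N over a window" query of this kind is typically expressed with `topk` over a range increase. A sketch assuming cAdvisor's `container_oom_events_total` counter (an assumption, not the dashboard's confirmed expression):

```promql
# Top three containers by OOM kill count over the past day
topk(3,
  sum by (namespace, pod, container) (
    increase(container_oom_events_total[1d])
  )
)
```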
@@ -3,7 +3,7 @@
 // * /serverless/eventing/event-sources/serverless-custom-event-sources.adoc
 
 :_content-type: CONCEPT
-[id="serverless-sinkbinding-intro_context"]
+[id="serverless-sinkbinding-intro_{context}"]
 = Sink binding
 
 The `SinkBinding` object supports decoupling event production from delivery addressing. Sink binding is used to connect _event producers_ to an event consumer, or _sink_. An event producer is a Kubernetes resource that embeds a `PodSpec` template and produces events. A sink is an addressable Kubernetes object that can receive events.
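To make the producer/sink pairing concrete, a minimal `SinkBinding` sketch: the subject is a `PodSpec`-embedding resource (here a `Job`, matched by label) and the sink is an addressable object (here a Knative `Service`). All names and labels are illustrative:

```yaml
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
  name: bind-heartbeat    # illustrative name
spec:
  subject:                # the event producer: a resource embedding a PodSpec
    apiVersion: batch/v1
    kind: Job
    selector:
      matchLabels:
        app: heartbeat-cron
  sink:                   # the addressable event consumer
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # assumed sink service
```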