Nodes docs fixes during ROSA review
committed by openshift-cherrypick-robot
parent 75bebd51e9
commit 96782f720b
@@ -2351,7 +2351,7 @@ Topics:
    File: nodes-nodes-resources-configuring
  - Name: Allocating specific CPUs for nodes in a cluster
    File: nodes-nodes-resources-cpus
  - Name: Configuring the TLS security profile for the kubelet
  - Name: Enabling TLS security profiles for the kubelet
    File: nodes-nodes-tls
    Distros: openshift-enterprise,openshift-origin
# - Name: Monitoring for problems in your nodes

@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * post_installation_configuration/cluster-tasks.adoc
// * machine_management/creating-infrastructure-machinesets.adoc
// * nodes/nodes/nodes-nodes-creating-infrastructure-nodes.adoc

:_content-type: PROCEDURE
[id="creating-an-infra-node_{context}"]

@@ -62,10 +64,9 @@ apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
...
spec:
  defaultNodeSelector: topology.kubernetes.io/region=us-east-1 <1>
...
# ...
----
<1> This example node selector deploys pods on nodes in the `us-east-1` region by default.

@@ -2,7 +2,7 @@
// * nodes/nodes-nodes-graceful-shutdown

:_content-type: PROCEDURE
[id="nodes-nodes-activating-graceful-shutdown_{context}"]
[id="nodes-nodes-configuring-graceful-shutdown_{context}"]
= Configuring graceful node shutdown

To configure graceful node shutdown, create a `KubeletConfig` custom resource (CR) to specify a shutdown grace period for pods on a set of nodes. The graceful node shutdown feature minimizes interruption to workloads that run in these pods.

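For reference, a minimal sketch of the complete CR described here; the resource name and the worker pool label are assumptions, not taken from this diff, and the grace periods mirror the fragment in the next hunk:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: graceful-shutdown # assumed name, not from this commit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # assumed worker pool label
  kubeletConfig:
    shutdownGracePeriod: "3m"
    shutdownGracePeriodCriticalPods: "2m"
----
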
@@ -35,6 +35,7 @@ spec:
  kubeletConfig:
    shutdownGracePeriod: "3m" <2>
    shutdownGracePeriodCriticalPods: "2m" <3>
#...
----
<1> This example applies shutdown grace periods to nodes with the `worker` role.
<2> Define a time period for regular pods to shut down.

@@ -109,7 +110,7 @@ $ cat /etc/kubernetes/kubelet.conf
.Example output
[source,terminal]
----
...
#...
  "memorySwap": {},
  "containerLogMaxSize": "50Mi",
  "logging": {
@@ -124,6 +125,7 @@ $ cat /etc/kubernetes/kubelet.conf
  "shutdownGracePeriod": "10m0s", <1>
  "shutdownGracePeriodCriticalPods": "3m0s"
}
#...
----
+
<1> Ensure that the log messages for `shutdownGracePeriodRequested` and `shutdownGracePeriodCriticalPods` match the values set in the `KubeletConfig` CR.

@@ -50,6 +50,7 @@ metadata:
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: "" <1>
  name: worker
#...
----
<1> The label appears under Labels.
+

@@ -105,6 +106,7 @@ spec:
    imageMinimumGCAge: 5m <8>
    imageGCHighThresholdPercent: 80 <9>
    imageGCLowThresholdPercent: 75 <10>
#...
----
<1> Name for the object.
<2> Specify the label from the machine config pool.

@@ -117,6 +117,7 @@ spec:
  kernelArguments:
  - enforcing=0 <3>
  - systemd.unified_cgroup_hierarchy=0 <4>
#...
----
+
<1> Applies the new kernel argument only to worker nodes.

@@ -66,6 +66,7 @@ spec:
  systemReserved:
    cpu: 2000m
    memory: 1Gi
#...
----
<1> Assign a name to the CR.
<2> Specify the label to apply the configuration change. This is the label you added to the machine config pool.

@@ -4,7 +4,7 @@
// * post_installation_configuration/node-tasks.adoc

:_content-type: PROCEDURE
[id="nodes-nodes-managing-max-pods-about_{context}"]
[id="nodes-nodes-managing-max-pods-proc_{context}"]
= Configuring the maximum number of pods per node

Two parameters control the maximum number of pods that can be scheduled to a node: `podsPerCore` and `maxPods`. If you use both options, the lower of the two limits the number of pods on a node. For example, if `podsPerCore` is set to `10` on a node with 4 processor cores, the maximum number of pods allowed on the node is 40.

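Assembled from the fragments in the hunks that follow, a minimal sketch of such a CR; the resource name is an assumption, and the values mirror the example below:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods # assumed name, not from this commit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # assumed worker pool label
  kubeletConfig:
    podsPerCore: 10
    maxPods: 250
----
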
@@ -38,6 +38,7 @@ metadata:
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: "" <1>
  name: worker
#...
----
<1> The label appears under Labels.
+

@@ -68,6 +69,7 @@ spec:
  kubeletConfig:
    podsPerCore: 10 <3>
    maxPods: 250 <4>
#...
----
<1> Assign a name to the CR.
<2> Specify the label from the machine config pool.

@@ -39,6 +39,7 @@ spec:
        values:
        - default
      topologyKey: kubernetes.io/hostname
#...
----
<1> Stanza to configure pod anti-affinity.
<2> Defines a preferred rule.

@@ -41,7 +41,7 @@ metadata:
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: "" <1>
  name: worker
...
#...
----
<1> The label appears under `Labels`.
+

@@ -70,6 +70,7 @@ spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" <3>
#...
----
<1> Assign a name to the CR.
<2> Add the `autoSizingReserved` parameter set to `true` to allow {product-title} to automatically determine and allocate the `system-reserved` resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to `false`.

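A minimal sketch of a CR that enables this behavior; the resource name is an assumption, and the pool label is the standard worker label rather than a value from this commit:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node # assumed name
spec:
  autoSizingReserved: true # let the platform size system-reserved automatically
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # assumed worker pool label
----
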
@@ -45,6 +45,7 @@ metadata:
  labels:
    pools.operator.machineconfiguration.openshift.io/worker: "" <1>
  name: worker
#...
----
<1> The label appears under Labels.
+

@@ -76,6 +77,7 @@ spec:
    systemReserved: <3>
      cpu: 1000m
      memory: 1Gi
#...
----
<1> Assign a name to the CR.
<2> Specify the label from the machine config pool.

@@ -34,7 +34,7 @@ Labels: machineconfiguration.openshift.io/mco-built-in=
Annotations: <none>
API Version: machineconfiguration.openshift.io/v1
Kind: MachineConfigPool
...
#...
----
<1> Get the MCP label.

@@ -52,6 +52,7 @@ spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" <3>
#...
----
<1> Specify a name for the CR.
<2> Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP.

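The hunk above shows only the pool selector; a sketch of the full CR, assuming the `reservedSystemCPUs` kubelet setting is the one being configured and using illustrative core IDs:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-reserved-cpus # assumed name
spec:
  kubeletConfig:
    reservedSystemCPUs: "0,1,2,3" # illustrative core IDs, not from this commit
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # assumed worker pool label
----
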
@@ -1,3 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes/nodes-nodes-managing.adoc

:_content-type: PROCEDURE
[id="nodes-nodes-swap-memory_{context}"]

@@ -63,6 +67,7 @@ spec:
    failSwapOn: false <1>
    memorySwap:
      swapBehavior: LimitedSwap <2>
#...
----
<1> Set to `false` to enable swap memory use on the associated nodes. Set to `true` to disable swap memory use.
<2> Specify the swap memory behavior. If unspecified, the default is `LimitedSwap`.

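Putting that fragment into a complete CR, as a sketch; the resource name and pool label are assumptions, and any cluster-level enablement that swap use might additionally require is not shown in this diff:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: enable-swap # assumed name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # assumed worker pool label
  kubeletConfig:
    failSwapOn: false
    memorySwap:
      swapBehavior: LimitedSwap
----
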
@@ -195,7 +195,7 @@ Events: <11>
Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk
Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID
Normal Starting 6d kubelet, m01.example.com Starting kubelet.
...
#...
----
<1> The name of the node.
<2> The role of the node, either `master` or `worker`.

@@ -59,6 +59,7 @@ metadata:
  namespace: openshift-machine-api
spec:
  replicas: 2
#...
----
====

@@ -50,6 +50,7 @@ metadata:
spec:
  mastersSchedulable: false <1>
status: {}
#...
----
<1> Set to `true` to allow control plane nodes to be schedulable, or `false` to disallow control plane nodes to be schedulable.

@@ -4,7 +4,7 @@


:_content-type: PROCEDURE
[id="nodes-nodes-working-setting-booleans"]
[id="nodes-nodes-working-setting-booleans_{context}"]

= Setting SELinux booleans

@@ -46,6 +46,7 @@ spec:
          WantedBy=multi-user.target graphical.target
        enabled: true
        name: setsebool.service
#...
----
+

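The hunk above shows only the tail of the systemd unit; a sketch of an enclosing MachineConfig follows. The file name, Ignition version, and unit contents (including the `container_manage_cgroup` boolean) are illustrative assumptions, not values from this commit:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-setsebool # assumed name
spec:
  config:
    ignition:
      version: 3.2.0 # assumed Ignition version
    systemd:
      units:
      - contents: |
          [Unit]
          Description=Set SELinux booleans
          Before=kubelet.service

          [Service]
          Type=oneshot
          # The boolean below is illustrative only
          ExecStart=/sbin/setsebool container_manage_cgroup=on
          RemainAfterExit=true

          [Install]
          WantedBy=multi-user.target graphical.target
        enabled: true
        name: setsebool.service
----
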
@@ -43,6 +43,7 @@ metadata:
  name: webconsole-7f7f6
  labels:
    unhealthy: 'true'
#...
----
====

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * security/tls-profiles.adoc
// * nodes/nodes/nodes-nodes-tls.adoc

ifeval::["{context}" == "tls-security-profiles"]
:tls:

@@ -29,6 +30,7 @@ spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
#...
----

You can see the ciphers and the minimum TLS version of the configured TLS security profile in the `kubelet.conf` file on a configured node.

@@ -61,6 +63,7 @@ spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" <4>
#...
----
+
<1> Specify the TLS security profile type (`Old`, `Intermediate`, or `Custom`). The default is `Intermediate`.

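For context, a sketch of a `KubeletConfig` that sets a `Custom` profile; the resource name, cipher list, and minimum TLS version are illustrative assumptions rather than values from this commit:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-tls # assumed name
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers: # illustrative cipher names
      - ECDHE-ECDSA-CHACHA20-POLY1305
      - ECDHE-RSA-CHACHA20-POLY1305
      minTLSVersion: VersionTLS12
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # assumed worker pool label
----
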
@@ -108,9 +111,9 @@ sh-4.4# cat /etc/kubernetes/kubelet.conf
.Example output
[source,terminal]
----
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
...
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
#...
  "tlsCipherSuites": [
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
@@ -120,6 +123,7 @@ apiVersion: kubelet.config.k8s.io/v1beta1
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
  ],
  "tlsMinVersion": "VersionTLS12",
#...
----

ifeval::["{context}" == "tls-security-profiles"]

@@ -48,7 +48,7 @@ through several tasks:
* Change node configuration using a custom resource definition (CRD), or the `kubeletConfig` object.
* Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a `Ready` status allow pod placement by default while the control plane nodes do not; you can change this default behavior by xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-marking_nodes-nodes-working[configuring the worker nodes to be unschedulable] and xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-marking_nodes-nodes-working[the control plane nodes to be schedulable].
* xref:../nodes/nodes/nodes-nodes-resources-configuring.adoc#nodes-nodes-resources-configuring[Allocate resources for nodes] using the `system-reserved` setting. You can allow {product-title} to automatically determine the optimal `system-reserved` CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes.
* xref:../nodes/nodes/nodes-nodes-managing-max-pods.adoc#nodes-nodes-managing-max-pods-about_nodes-nodes-managing-max-pods[Configure the number of pods that can run on a node] based on the number of processor cores on the node, a hard limit, or both.
* xref:../nodes/nodes/nodes-nodes-managing-max-pods.adoc#nodes-nodes-managing-max-pods-proc_nodes-nodes-managing-max-pods[Configure the number of pods that can run on a node] based on the number of processor cores on the node, a hard limit, or both.
* Reboot a node gracefully using xref:../nodes/nodes/nodes-nodes-rebooting.adoc#nodes-nodes-rebooting-affinity_nodes-nodes-rebooting[pod anti-affinity].
* xref:../nodes/nodes/nodes-nodes-working.adoc#deleting-nodes[Delete a node from a cluster] by scaling down the cluster using a compute machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node.