diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 10e73093ac..9064a6e4a7 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -2351,7 +2351,7 @@ Topics:
     File: nodes-nodes-resources-configuring
   - Name: Allocating specific CPUs for nodes in a cluster
     File: nodes-nodes-resources-cpus
-  - Name: Configuring the TLS security profile for the kubelet
+  - Name: Enabling TLS security profiles for the kubelet
     File: nodes-nodes-tls
     Distros: openshift-enterprise,openshift-origin
 # - Name: Monitoring for problems in your nodes
diff --git a/modules/creating-an-infra-node.adoc b/modules/creating-an-infra-node.adoc
index 397957a500..dfc6e9ea91 100644
--- a/modules/creating-an-infra-node.adoc
+++ b/modules/creating-an-infra-node.adoc
@@ -1,6 +1,8 @@
 // Module included in the following assemblies:
 //
 // * post_installation_configuration/cluster-tasks.adoc
+// * machine_management/creating-infrastructure-machinesets.adoc
+// * nodes/nodes/nodes-nodes-creating-infrastructure-nodes.adoc
 
 :_content-type: PROCEDURE
 [id="creating-an-infra-node_{context}"]
@@ -62,10 +64,9 @@
 apiVersion: config.openshift.io/v1
 kind: Scheduler
 metadata:
   name: cluster
-...
 spec:
   defaultNodeSelector: topology.kubernetes.io/region=us-east-1 <1>
-...
+# ...
 ----
 <1> This example node selector deploys pods on nodes in the `us-east-1` region by default.
diff --git a/modules/nodes-nodes-configuring-graceful-shutdown.adoc b/modules/nodes-nodes-configuring-graceful-shutdown.adoc
index bbedc21481..dd2bbe4413 100644
--- a/modules/nodes-nodes-configuring-graceful-shutdown.adoc
+++ b/modules/nodes-nodes-configuring-graceful-shutdown.adoc
@@ -2,7 +2,7 @@
 // * nodes/nodes-nodes-graceful-shutdown
 
 :_content-type: PROCEDURE
-[id="nodes-nodes-activating-graceful-shutdown_{context}"]
+[id="nodes-nodes-configuring-graceful-shutdown_{context}"]
 = Configuring graceful node shutdown
 
 To configure graceful node shutdown, create a `KubeletConfig` custom resource (CR) to specify a shutdown grace period for pods on a set of nodes. The graceful node shutdown feature minimizes interruption to workloads that run on these pods.
@@ -35,6 +35,7 @@ spec:
   kubeletConfig:
     shutdownGracePeriod: "3m" <2>
     shutdownGracePeriodCriticalPods: "2m" <3>
+#...
 ----
 <1> This example applies shutdown grace periods to nodes with the `worker` role.
 <2> Define a time period for regular pods to shut down.
@@ -109,7 +110,7 @@ $ cat /etc/kubernetes/kubelet.conf
 .Example output
 [source,terminal]
 ----
-...
+#...
 “memorySwap”: {},
 “containerLogMaxSize”: “50Mi”,
 “logging”: {
@@ -124,6 +125,7 @@ $ cat /etc/kubernetes/kubelet.conf
 “shutdownGracePeriod”: “10m0s”, <1>
 “shutdownGracePeriodCriticalPods”: “3m0s”
 }
+#...
 ----
 +
 <1> Ensure that the log messages for `shutdownGracePeriodRequested` and `shutdownGracePeriodCriticalPods` match the values set in the `KubeletConfig` CR.
diff --git a/modules/nodes-nodes-garbage-collection-configuring.adoc b/modules/nodes-nodes-garbage-collection-configuring.adoc
index 7c1f863c8c..042b786c38 100644
--- a/modules/nodes-nodes-garbage-collection-configuring.adoc
+++ b/modules/nodes-nodes-garbage-collection-configuring.adoc
@@ -50,6 +50,7 @@ metadata:
   labels:
     pools.operator.machineconfiguration.openshift.io/worker: "" <1>
   name: worker
+#...
 ----
 <1> The label appears under Labels.
 +
@@ -105,6 +106,7 @@ spec:
     imageMinimumGCAge: 5m <8>
     imageGCHighThresholdPercent: 80 <9>
     imageGCLowThresholdPercent: 75 <10>
+#...
 ----
 <1> Name for the object.
 <2> Specify the label from the machine config pool.
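
For reviewers, a complete `KubeletConfig` CR of the shape the graceful-shutdown hunks truncate with `#...` might look like the following minimal sketch. The `metadata.name` is hypothetical; the selector label and grace-period values are taken from the hunks above:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: graceful-shutdown-config # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: "" # label from the worker MCP
  kubeletConfig:
    shutdownGracePeriod: "3m" # total time the kubelet delays node shutdown for pods
    shutdownGracePeriodCriticalPods: "2m" # portion of that window reserved for critical pods
----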
diff --git a/modules/nodes-nodes-kernel-arguments.adoc b/modules/nodes-nodes-kernel-arguments.adoc
index 8bc9b301a7..77becf727b 100644
--- a/modules/nodes-nodes-kernel-arguments.adoc
+++ b/modules/nodes-nodes-kernel-arguments.adoc
@@ -117,6 +117,7 @@ spec:
   kernelArguments:
     - enforcing=0 <3>
       systemd.unified_cgroup_hierarchy=0 <4>
+#...
 ----
 +
 <1> Applies the new kernel argument only to worker nodes.
diff --git a/modules/nodes-nodes-managing-about.adoc b/modules/nodes-nodes-managing-about.adoc
index 69551ea539..4c45ef2110 100644
--- a/modules/nodes-nodes-managing-about.adoc
+++ b/modules/nodes-nodes-managing-about.adoc
@@ -66,6 +66,7 @@ spec:
     systemReserved:
       cpu: 2000m
       memory: 1Gi
+#...
 ----
 <1> Assign a name to CR.
 <2> Specify the label to apply the configuration change, this is the label you added to the machine config pool.
diff --git a/modules/nodes-nodes-managing-max-pods-proc.adoc b/modules/nodes-nodes-managing-max-pods-proc.adoc
index 24581aa55e..b0d4a72adb 100644
--- a/modules/nodes-nodes-managing-max-pods-proc.adoc
+++ b/modules/nodes-nodes-managing-max-pods-proc.adoc
@@ -4,7 +4,7 @@
 // * post_installation_configuration/node-tasks.adoc
 
 :_content-type: PROCEDURE
-[id="nodes-nodes-managing-max-pods-about_{context}"]
+[id="nodes-nodes-managing-max-pods-proc_{context}"]
 = Configuring the maximum number of pods per node
 
 Two parameters control the maximum number of pods that can be scheduled to a node: `podsPerCore` and `maxPods`. If you use both options, the lower of the two limits the number of pods on a node.
@@ -38,6 +38,7 @@ metadata:
   labels:
     pools.operator.machineconfiguration.openshift.io/worker: "" <1>
   name: worker
+#...
 ----
 <1> The label appears under Labels.
 +
@@ -68,6 +69,7 @@ spec:
   kubeletConfig:
     podsPerCore: 10 <3>
     maxPods: 250 <4>
+#...
 ----
 <1> Assign a name to CR.
 <2> Specify the label from the machine config pool.
diff --git a/modules/nodes-nodes-rebooting-affinity.adoc b/modules/nodes-nodes-rebooting-affinity.adoc
index bef8c7ee51..2afff38bf6 100644
--- a/modules/nodes-nodes-rebooting-affinity.adoc
+++ b/modules/nodes-nodes-rebooting-affinity.adoc
@@ -39,6 +39,7 @@ spec:
           values:
           - default
         topologyKey: kubernetes.io/hostname
+#...
 ----
 <1> Stanza to configure pod anti-affinity.
 <2> Defines a preferred rule.
diff --git a/modules/nodes-nodes-resources-configuring-auto.adoc b/modules/nodes-nodes-resources-configuring-auto.adoc
index 00920919f0..d80311e24e 100644
--- a/modules/nodes-nodes-resources-configuring-auto.adoc
+++ b/modules/nodes-nodes-resources-configuring-auto.adoc
@@ -41,7 +41,7 @@ metadata:
   labels:
     pools.operator.machineconfiguration.openshift.io/worker: "" <1>
   name: worker
-  ...
+#...
 ----
 <1> The label appears under `Labels`.
 +
@@ -70,6 +70,7 @@ spec:
   machineConfigPoolSelector:
     matchLabels:
       pools.operator.machineconfiguration.openshift.io/worker: "" <3>
+#...
 ----
 <1> Assign a name to CR.
 <2> Add the `autoSizingReserved` parameter set to `true` to allow {product-title} to automatically determine and allocate the `system-reserved` resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to `false`.
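
The interaction between `podsPerCore` and `maxPods` described in the max-pods module is worth a worked example: on a node with 4 processor cores, `podsPerCore: 10` yields a limit of 40 pods, which is lower than `maxPods: 250`, so 40 is the effective limit. A minimal CR sketch using the values from the hunks above, assuming the same worker MCP label (the `metadata.name` is hypothetical):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    podsPerCore: 10 # 10 pods per core; 40 pods on a 4-core node
    maxPods: 250 # hard cap; the lower of the two values wins
----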
diff --git a/modules/nodes-nodes-resources-configuring-setting.adoc b/modules/nodes-nodes-resources-configuring-setting.adoc
index 6f38c84fcd..d90963039d 100644
--- a/modules/nodes-nodes-resources-configuring-setting.adoc
+++ b/modules/nodes-nodes-resources-configuring-setting.adoc
@@ -45,6 +45,7 @@ metadata:
   labels:
     pools.operator.machineconfiguration.openshift.io/worker: "" <1>
   name: worker
+#...
 ----
 <1> The label appears under Labels.
 +
@@ -76,6 +77,7 @@ spec:
     systemReserved: <3>
       cpu: 1000m
      memory: 1Gi
+#...
 ----
 <1> Assign a name to CR.
 <2> Specify the label from the machine config pool.
diff --git a/modules/nodes-nodes-resources-cpus-reserve.adoc b/modules/nodes-nodes-resources-cpus-reserve.adoc
index ea660634fd..69bbe25ff0 100644
--- a/modules/nodes-nodes-resources-cpus-reserve.adoc
+++ b/modules/nodes-nodes-resources-cpus-reserve.adoc
@@ -34,7 +34,7 @@
 Labels:       machineconfiguration.openshift.io/mco-built-in=
 Annotations:
 API Version:  machineconfiguration.openshift.io/v1
 Kind:         MachineConfigPool
-...
+#...
 ----
 <1> Get the MCP label.
@@ -52,6 +52,7 @@ spec:
   machineConfigPoolSelector:
     matchLabels:
       pools.operator.machineconfiguration.openshift.io/worker: "" <3>
+#...
 ----
 <1> Specify a name for the CR.
 <2> Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP.
diff --git a/modules/nodes-nodes-swap-memory.adoc b/modules/nodes-nodes-swap-memory.adoc
index 2a964d22fe..e6c290c45c 100644
--- a/modules/nodes-nodes-swap-memory.adoc
+++ b/modules/nodes-nodes-swap-memory.adoc
@@ -1,3 +1,7 @@
+// Module included in the following assemblies:
+//
+// * nodes/nodes/nodes-nodes-managing.adoc
+
 :_content-type: PROCEDURE
 
 [id="nodes-nodes-swap-memory_{context}"]
@@ -63,6 +67,7 @@ spec:
     failSwapOn: false <1>
     memorySwap:
       swapBehavior: LimitedSwap <2>
+#...
 ----
 <1> Set to `false` to enable swap memory use on the associated nodes. Set to `true` to disable swap memory use.
 <2> Specify the swap memory behavior. If unspecified, the default is `LimitedSwap`.
diff --git a/modules/nodes-nodes-viewing-listing.adoc b/modules/nodes-nodes-viewing-listing.adoc
index 480c148498..c485f0ef97 100644
--- a/modules/nodes-nodes-viewing-listing.adoc
+++ b/modules/nodes-nodes-viewing-listing.adoc
@@ -195,7 +195,7 @@ Events: <11>
   Normal  NodeHasSufficientDisk  6d (x6 over 6d)  kubelet, m01.example.com  Node m01.example.com status is now: NodeHasSufficientDisk
   Normal  NodeHasSufficientPID   6d               kubelet, m01.example.com  Node m01.example.com status is now: NodeHasSufficientPID
   Normal  Starting               6d               kubelet, m01.example.com  Starting kubelet.
-...
+#...
 ----
 <1> The name of the node.
 <2> The role of the node, either `master` or `worker`.
diff --git a/modules/nodes-nodes-working-deleting.adoc b/modules/nodes-nodes-working-deleting.adoc
index dab38934d2..185486e5a3 100644
--- a/modules/nodes-nodes-working-deleting.adoc
+++ b/modules/nodes-nodes-working-deleting.adoc
@@ -59,6 +59,7 @@ metadata:
   namespace: openshift-machine-api
 spec:
   replicas: 2
+#...
 ----
 ====
 
diff --git a/modules/nodes-nodes-working-master-schedulable.adoc b/modules/nodes-nodes-working-master-schedulable.adoc
index c57aa4ac64..cf37a145bf 100644
--- a/modules/nodes-nodes-working-master-schedulable.adoc
+++ b/modules/nodes-nodes-working-master-schedulable.adoc
@@ -50,6 +50,7 @@ metadata:
 spec:
   mastersSchedulable: false <1>
 status: {}
+#...
 ----
 <1> Set to `true` to allow control plane nodes to be schedulable, or `false` to disallow control plane nodes to be schedulable.
 
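
The swap-memory hunks likewise abbreviate the surrounding CR. A sketch of the whole object, assuming the same worker-pool selector used elsewhere in this patch (the `metadata.name` is hypothetical, and the field values are those shown in the hunks above):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: enable-swap # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    failSwapOn: false # false lets the kubelet start on nodes with swap enabled
    memorySwap:
      swapBehavior: LimitedSwap # restricts how much swap workloads can use
----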
diff --git a/modules/nodes-nodes-working-setting-booleans.adoc b/modules/nodes-nodes-working-setting-booleans.adoc
index 8b3956c758..4bfcf5629a 100644
--- a/modules/nodes-nodes-working-setting-booleans.adoc
+++ b/modules/nodes-nodes-working-setting-booleans.adoc
@@ -4,7 +4,7 @@
 
 :_content-type: PROCEDURE
 
-[id="nodes-nodes-working-setting-booleans"]
+[id="nodes-nodes-working-setting-booleans_{context}"]
 = Setting SELinux booleans
 
 
@@ -46,6 +46,7 @@ spec:
           WantedBy=multi-user.target graphical.target
         enabled: true
         name: setsebool.service
+#...
 ----
 +
 
diff --git a/modules/nodes-nodes-working-updating.adoc b/modules/nodes-nodes-working-updating.adoc
index 96ad8dff6f..a1c6e636c5 100644
--- a/modules/nodes-nodes-working-updating.adoc
+++ b/modules/nodes-nodes-working-updating.adoc
@@ -43,6 +43,7 @@ metadata:
   name: webconsole-7f7f6
   labels:
     unhealthy: 'true'
+#...
 ----
 ====
 
diff --git a/modules/tls-profiles-kubelet-configuring.adoc b/modules/tls-profiles-kubelet-configuring.adoc
index ce6b5db609..e571d51123 100644
--- a/modules/tls-profiles-kubelet-configuring.adoc
+++ b/modules/tls-profiles-kubelet-configuring.adoc
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
 // * security/tls-profiles.adoc
+// * nodes/nodes/nodes-nodes-tls.adoc
 
 ifeval::["{context}" == "tls-security-profiles"]
 :tls:
@@ -29,6 +30,7 @@ spec:
   machineConfigPoolSelector:
     matchLabels:
       pools.operator.machineconfiguration.openshift.io/worker: ""
+#...
 ----
 
 You can see the ciphers and the minimum TLS version of the configured TLS security profile in the `kubelet.conf` file on a configured node.
@@ -61,6 +63,7 @@ spec:
   machineConfigPoolSelector:
     matchLabels:
       pools.operator.machineconfiguration.openshift.io/worker: "" <4>
+#...
 ----
 +
 <1> Specify the TLS security profile type (`Old`, `Intermediate`, or `Custom`). The default is `Intermediate`.
@@ -108,9 +111,9 @@ sh-4.4# cat /etc/kubernetes/kubelet.conf
 .Example output
 [source,terminal]
 ----
-kind: KubeletConfiguration
-apiVersion: kubelet.config.k8s.io/v1beta1
-  ...
+  "kind": "KubeletConfiguration",
+  "apiVersion": "kubelet.config.k8s.io/v1beta1",
+#...
   "tlsCipherSuites": [
     "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
     "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
@@ -120,6 +123,7 @@ apiVersion: kubelet.config.k8s.io/v1beta1
     "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
   ],
   "tlsMinVersion": "VersionTLS12",
+#...
 ----
 
 ifeval::["{context}" == "tls-security-profiles"]
diff --git a/nodes/index.adoc b/nodes/index.adoc
index 4f3f658934..d0292a5f60 100644
--- a/nodes/index.adoc
+++ b/nodes/index.adoc
@@ -48,7 +48,7 @@ through several tasks:
 * Change node configuration using a custom resource definition (CRD), or the `kubeletConfig` object.
 * Configure nodes to allow or disallow the scheduling of pods. Healthy worker nodes with a `Ready` status allow pod placement by default while the control plane nodes do not; you can change this default behavior by xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-marking_nodes-nodes-working[configuring the worker nodes to be unschedulable] and xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-marking_nodes-nodes-working[the control plane nodes to be schedulable].
 * xref:../nodes/nodes/nodes-nodes-resources-configuring.adoc#nodes-nodes-resources-configuring[Allocate resources for nodes] using the `system-reserved` setting. You can allow {product-title} to automatically determine the optimal `system-reserved` CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes.
-* xref:../nodes/nodes/nodes-nodes-managing-max-pods.adoc#nodes-nodes-managing-max-pods-about_nodes-nodes-managing-max-pods[Configure the number of pods that can run on a node] based on the number of processor cores on the node, a hard limit, or both.
+* xref:../nodes/nodes/nodes-nodes-managing-max-pods.adoc#nodes-nodes-managing-max-pods-proc_nodes-nodes-managing-max-pods[Configure the number of pods that can run on a node] based on the number of processor cores on the node, a hard limit, or both.
 * Reboot a node gracefully using xref:../nodes/nodes/nodes-nodes-rebooting.adoc#nodes-nodes-rebooting-affinity_nodes-nodes-rebooting[pod anti-affinity].
 * xref:../nodes/nodes/nodes-nodes-working.adoc#deleting-nodes[Delete a node from a cluster] by scaling down the cluster using a compute machine set. To delete a node from a bare-metal cluster, you must first drain all pods on the node and then manually delete the node.
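
Because the TLS module's YAML is also elided with `#...`, the following minimal sketch shows a full kubelet TLS profile CR of the shape those hunks imply. The `Old` profile is one of the three types the module names (`Old`, `Intermediate`, or `Custom`, with `Intermediate` as the default), and the `metadata.name` is hypothetical:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-tls # hypothetical name
spec:
  tlsSecurityProfile:
    type: Old # Old, Intermediate, or Custom
    old: {}
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
----

As the verification hunks show, the resulting `tlsCipherSuites` and `tlsMinVersion` values can be checked in `/etc/kubernetes/kubelet.conf` on a configured node.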