Merge pull request #105727 from rohennes/TELCODOCS-2620-4-21
[enterprise-4.21] Core RDS 4.21 updates
@@ -4,7 +4,6 @@
:_mod-docs-content-type: REFERENCE
[id="telco-core-additional-storage-solutions_{context}"]
= Additional storage solutions

You can use other storage solutions to provide persistent storage for telco core clusters.
The configuration and integration of these solutions is outside the scope of the reference design specifications (RDS).
26 modules/telco-core-cert-manager-operator.adoc Normal file
@@ -0,0 +1,26 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc

:_mod-docs-content-type: REFERENCE
[id="telco-core-cert-manager-operator_{context}"]
= cert-manager Operator

New in this release::
* The cert-manager Operator is a new optional component in this release.

Description::
+
--
The cert-manager Operator for {product-title} manages the lifecycle of TLS certificates for cluster components and workloads.
The cert-manager Operator automates certificate issuance, renewal, and rotation, eliminating manual certificate management.
The reference configuration includes the cert-manager Operator to optionally manage certificates for the API server and ingress controller endpoints.
--

Limits and requirements::

* The reference configuration includes only the ACME DNS01 challenge type for platform certificate issuance.

Engineering considerations::

* Use {rh-rhacm} `CertificatePolicy` resources on the hub cluster to monitor certificate expiration and compliance across managed clusters.
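The following is a minimal sketch of an {rh-rhacm} `CertificatePolicy` that a hub cluster could use to flag certificates that are close to expiry on managed clusters. The policy name, namespace selection, and duration threshold are illustrative assumptions, not part of the reference configuration.

[source,yaml]
----
apiVersion: policy.open-cluster-management.io/v1
kind: CertificatePolicy
metadata:
  name: check-cert-expiry          # hypothetical name
spec:
  # Namespaces on the managed cluster to inspect for certificate secrets.
  namespaceSelector:
    include:
      - openshift-ingress
  remediationAction: inform        # report only; do not attempt changes
  severity: medium
  # Raise a violation when a certificate expires in less than 720 hours (30 days).
  minimumDuration: 720h
----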
@@ -43,7 +43,7 @@ This parameter determines the frequency of probes used to detect when the select
The recommended value of this parameter is `1` second.
** When EgressIP is configured with multiple egress nodes, the failover time is expected to be on the order of seconds or longer.
** On nodes with additional network interfaces, EgressIP traffic egresses through the interface on which the EgressIP address has been assigned.
For more information, see "Configuring an egress IP address".
* Pod-level SR-IOV bonding mode must be set to `active-backup` and a value for `miimon` must be set (`100` is recommended).

Engineering considerations::
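As an illustration of the pod-level SR-IOV bonding requirement above, the following `NetworkAttachmentDefinition` sketch uses the Bond CNI in `active-backup` mode with `miimon` set to `100`. The attachment name, namespace, and member interface names (`net1`, `net2`) are assumptions for the example only.

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-bond            # hypothetical name
  namespace: example-cnf      # hypothetical namespace
spec:
  config: |
    {
      "type": "bond",
      "cniVersion": "0.3.1",
      "name": "sriov-bond",
      "mode": "active-backup",
      "failOverMac": 1,
      "linksInContainer": true,
      "miimon": "100",
      "links": [
        { "name": "net1" },
        { "name": "net2" }
      ],
      "ipam": { "type": "static" }
    }
----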
@@ -7,14 +7,13 @@
= CPU partitioning and performance tuning

New in this release::
* Disable RPS: resource use for pod networking should be accounted for on application CPUs.
* Better isolation of the control plane on schedulable control-plane nodes.
* Support for schedulable control planes in the NUMA Resources Operator.
* Additional guidance on upgrades for telco core clusters.
* Optional support for the `acpi_idle` CPUIdle driver.
* The `systemReserved` field replaces the `autoSizingReserved` field to specify 11Gi memory for worker nodes and 30Gi for control plane nodes.
* Enable triggering a kernel panic through a non-maskable interrupt for system recovery and diagnostic purposes when `x86_64` architecture nodes become unresponsive.

Description::
CPU partitioning improves performance and reduces latency by separating sensitive workloads from general-purpose tasks, interrupts, and driver work queues.
The CPUs allocated to those auxiliary processes are referred to as *reserved* in the following sections.
In a system with Hyper-Threading enabled, a CPU is one hyper-thread.

Limits and requirements::
@@ -32,6 +31,8 @@ Engineering considerations::
All workloads must now be compatible with `cgroup v2`.
For more information, see link:https://www.redhat.com/en/blog/rhel-9-changes-context-red-hat-openshift-workloads[Red Hat Enterprise Linux 9 changes in the context of Red Hat OpenShift workloads].
* The minimum reserved capacity (`systemReserved`) required can be found by following the guidance in link:https://access.redhat.com/solutions/5843241[Which amount of CPU and memory are recommended to reserve for the system in OCP 4 nodes?].
** The specific values must be custom-tuned for each cluster based on its size and application workload.
** The minimum recommended `systemReserved` memory is 11Gi for worker nodes and 30Gi for control plane nodes.
* For schedulable control planes, the minimum recommended reserved capacity is at least 16 CPUs.
* The actual required reserved CPU capacity depends on the cluster configuration and workload attributes.
* The reserved CPU value must be rounded up to a full core (2 hyper-threads) alignment.
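The reserved and isolated CPU pools described above are declared in the `PerformanceProfile` resource. The following is a minimal sketch only; the profile name and CPU ranges are placeholder assumptions and must be sized per cluster, with the reserved set rounded to full cores (hyper-thread siblings kept together).

[source,yaml]
----
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile   # hypothetical name
spec:
  cpu:
    # Reserved CPUs host OS daemons, interrupts, and driver work queues.
    # Keep hyper-thread siblings together (full-core alignment).
    reserved: "0-1,32-33"
    # Isolated CPUs are dedicated to latency-sensitive application workloads.
    isolated: "2-31,34-63"
  nodeSelector:
    node-role.kubernetes.io/worker: ""
----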
@@ -52,6 +53,9 @@ With no configuration, the default queue count is one RX/TX queue per online CPU
* The irdma kernel module might result in the allocation of too many interrupt vectors on systems with high core counts.
To prevent this condition, the reference configuration excludes this kernel module from loading through a kernel command-line argument in the `PerformanceProfile` resource.
Typically, core workloads do not require this kernel module.
* To enable the `acpi_idle` CPUIdle driver, for example for Intel FlexRAN workloads, add `intel_idle.max_cstate=0` to the `additionalKernelArgs` list in the `PerformanceProfile` resource.
* The `TunedPerformancePatch.yaml` file in the reference configures the `kernel.panic_on_unrecovered_nmi` sysctl parameter to enable triggering a kernel panic through a BMC non-maskable interrupt (NMI) on `x86_64` architectures.
This provides a mechanism to force a kernel panic for system recovery and diagnostic purposes when nodes become unresponsive.
+
[NOTE]
====
@@ -22,4 +22,5 @@ Disconnected configuration,`idms.yaml`,Defines a list of mirrored repository dig
Disconnected configuration,`operator-hub.yaml`,Defines an OperatorHub configuration which disables all default sources.,No
Monitoring and observability,`monitoring-config-cm.yaml`,Configures storage and retention for Prometheus and Alertmanager.,Yes
Power management,`PerformanceProfile.yaml`,"Defines a performance profile resource, specifying CPU isolation, hugepages configuration, and workload hints for performance optimization on selected nodes.",No
Power management,`TunedPerformancePatch.yaml`,"Applies performance tuning overrides for worker profiles and enables kernel panic on non-maskable interrupts (NMI) for system recovery on unresponsive nodes.",No
|====
@@ -11,7 +11,7 @@
|====
Component,Reference CR,Description,Optional
Baseline,`Network.yaml`,"Configures the default cluster network, specifying OVN Kubernetes settings like routing via the host. It also allows the definition of additional networks, including custom CNI configurations, and enables the use of MultiNetworkPolicy CRs for network policies across multiple networks.",No
Baseline,`networkAttachmentDefinition.yaml`,Optional. Defines a NetworkAttachmentDefinition resource specifying network configuration details such as node selector and CNI configuration.,No
Load Balancer,`addr-pool.yaml`,Configures MetalLB to manage a pool of IP addresses with auto-assign enabled for dynamic allocation of IPs from the specified range.,No
Load Balancer,`bfd-profile.yaml`,"Configures bidirectional forwarding detection (BFD) with customized intervals, detection multiplier, and modes for quicker network fault detection and load balancing failover.",No
Load Balancer,`bgp-advr.yaml`,"Defines a BGP advertisement resource for MetalLB, specifying how an IP address pool is advertised to BGP peers. This enables fine-grained control over traffic routing and announcements.",No
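As a sketch of the load balancer CRs listed above, the following `IPAddressPool` shows a MetalLB address pool with auto-assign enabled. The pool name, namespace, and address range are illustrative assumptions rather than reference values.

[source,yaml]
----
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: addr-pool-example       # hypothetical name
  namespace: metallb-system
spec:
  # Addresses handed out to LoadBalancer services; replace with your own range.
  addresses:
    - 192.0.2.10-192.0.2.20
  autoAssign: true
----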
@@ -9,6 +9,6 @@
.Resource tuning CRs
[cols="4*", options="header", format=csv]
|====
Component,Reference CR,Description,Optional
System reserved capacity,`control-plane-system-reserved.yaml`,"Optional. Configures kubelet, enabling auto-sizing reserved resources for the control plane node pool.",No
|====
21 modules/telco-core-crs-security.adoc Normal file
@@ -0,0 +1,21 @@
// Module included in the following assemblies:
//
// *

:_mod-docs-content-type: REFERENCE
[id="security-crs_{context}"]
= Security reference CRs

.Security CRs
[cols="4*", options="header", format=csv]
|====
Component,Reference CR,Description,Optional
Cert-Manager,`certManagerNS.yaml`,Defines the cert-manager-operator namespace.,Yes
Cert-Manager,`certManagerOperatorgroup.yaml`,Defines the OperatorGroup for cert-manager.,Yes
Cert-Manager,`certManagerSubscription.yaml`,Installs the OpenShift cert-manager operator.,Yes
Cert-Manager,`certManagerClusterIssuer.yaml`,Configures an ACME ClusterIssuer using Let's Encrypt with DNS-01 challenge.,Yes
Cert-Manager,`apiServerCertificate.yaml`,Creates a certificate for the API Server endpoint.,Yes
Cert-Manager,`ingressCertificate.yaml`,Creates a wildcard certificate for the Ingress/Router.,Yes
Cert-Manager,`apiServerConfig.yaml`,Configures OpenShift to use the cert-manager generated API Server certificate.,Yes
Cert-Manager,`ingressControllerConfig.yaml`,Configures OpenShift to use the cert-manager generated Ingress certificate.,Yes
|====
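To illustrate the `certManagerClusterIssuer.yaml` entry above, the following is a minimal cert-manager `ClusterIssuer` sketch for ACME with a DNS-01 solver. The issuer name, contact email, and the Route 53 solver are assumptions for the example, since the actual reference CR content is not reproduced here.

[source,yaml]
----
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: acme-dns01-issuer          # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder contact
    privateKeySecretRef:
      name: acme-account-key
    solvers:
      - dns01:
          # Any supported DNS-01 provider can be used; Route 53 is shown as an example.
          route53:
            region: us-east-1
----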
@@ -9,9 +9,10 @@
.Storage CRs
[cols="4*", options="header", format=csv]
|====
Component,Reference CR,Description,Optional
External ODF configuration,`01-rook-ceph-external-cluster-details.secret.yaml`,Defines a Secret resource containing base64-encoded configuration data for an external Ceph cluster in the `openshift-storage` namespace.,No
External ODF configuration,`02-ocs-external-storagecluster.yaml`,Defines an OpenShift Container Storage (OCS) storage resource which configures the cluster to use an external storage back end.,No
External ODF configuration,`odfNS.yaml`,Creates the monitored `openshift-storage` namespace for the OpenShift Data Foundation Operator.,No
External ODF configuration,`odfOperGroup.yaml`,"Creates the Operator group in the `openshift-storage` namespace, allowing the OpenShift Data Foundation Operator to watch and manage resources.",No
External ODF configuration,`odfSubscription.yaml`,"Creates the OpenShift Data Foundation Operator subscription, managed through OLM.",No
|====
@@ -6,44 +6,4 @@
[id="telco-core-deployment-planning_{context}"]
= Deployment planning

`MachineConfigPool` (MCP) custom resources (CRs) enable the subdivision of worker nodes in telco core clusters into different node groups based on customer planning parameters.
Careful deployment planning using MCPs is crucial to minimize deployment and upgrade time and, more importantly, to minimize interruption of telco-grade services during cluster upgrades.

*Description*

Telco core clusters can use MachineConfigPools (MCPs) to split worker nodes into additional separate roles, for example, due to different hardware profiles.
This allows custom tuning for each role and also plays a critical function in speeding up a telco core cluster deployment or upgrade.
Multiple MCPs can be used to properly plan cluster upgrades across one or multiple maintenance windows.
This is crucial because telco-grade services might otherwise be affected if careful planning is not considered.

During cluster upgrades, you can pause MCPs while you upgrade the control plane. See "Performing a canary rollout update" for more information. This ensures that worker nodes are not rebooted and running workloads remain unaffected until the MCP is unpaused.

Using careful MCP planning, you can control the timing and order of which set of nodes are upgraded at any time. For more information on how to use MCPs to plan telco upgrades, see "Applying MachineConfigPool labels to nodes before the update".

Before beginning the initial deployment, keep the following engineering considerations in mind regarding MCPs:

**PerformanceProfile and Tuned profile association:**

When using PerformanceProfiles, remember that each Machine Config Pool (MCP) must be linked to exactly one PerformanceProfile or Tuned profile definition.
Consequently, even if the desired configuration is identical for multiple MCPs, each MCP still requires its own dedicated PerformanceProfile definition.

**Planning your MCP labeling strategy:**

Plan your MCP labeling with an appropriate strategy to split your worker nodes depending on parameters such as:

* The worker node type: identifying a group of nodes with an equivalent hardware profile, for example, workers for control plane Network Functions (NFs) and workers for user data plane NFs.
* The number of worker nodes per worker node type.
* The minimum number of MCPs required for an equivalent hardware profile is 1, but could be larger for larger clusters.
For example, you may design for more MCPs per hardware profile to support a more granular upgrade where a smaller percentage of the cluster capacity is affected with each step.
* The update strategy for nodes within an MCP is determined by upgrade requirements and the chosen `maxUnavailable` value:
** Number of maintenance windows allowed.
** Duration of a maintenance window.
** Total number of worker nodes.
** Desired `maxUnavailable` (number of nodes updated concurrently) for the MCP.
* CNF requirements for worker nodes, in terms of:
** Minimum availability per pod required during an upgrade, configured with a pod disruption budget (PDB). PDBs are crucial to maintain telco service-level agreements (SLAs) during upgrades. For more information about PDBs, see "Understanding how to use pod disruption budgets to specify the number of pods that must be up".
** Minimum true high availability required per pod, such that each replica runs on separate hardware.
** Pod affinity and anti-affinity: For more information about how to use pod affinity and anti-affinity, see "Placing pods relative to other pods using affinity and anti-affinity rules".
* Duration and number of upgrade maintenance windows during which telco-grade services might be affected.

Proper deployment planning is essential for telco core clusters to ensure high availability, minimize service disruption during upgrades, and optimize cluster performance.
@@ -4,5 +4,5 @@

:_mod-docs-content-type: REFERENCE
[id="telco-core-deployment_{context}"]
= Deployment components
@@ -4,7 +4,7 @@

:_mod-docs-content-type: REFERENCE
[id="telco-core-gitops-operator-and-ztp-plugins_{context}"]
= GitOps Operator and {ztp} plugins

New in this release::
* No reference design updates in this release.
@@ -15,45 +15,31 @@ Description::
The GitOps Operator provides a GitOps driven infrastructure for managing cluster deployment and configuration.
Cluster definitions and configuration are maintained in a Git repository.

The SiteConfig Operator generates installation CRs from `ClusterInstance` CRs.

[IMPORTANT]
====
From {product-title} 4.21, you must use `ClusterInstance` CRs and the SiteConfig Operator to define managed cluster installations.
With this release, support for defining managed cluster installations with the `SiteConfig` CR is removed.
====

{ztp} plugins provide support for automatically wrapping configuration CRs in policies based on {rh-rhacm} `PolicyGenerator` CRs.

You should structure the Git repository according to the release version, with all necessary artifacts added to the repository, such as the `ClusterInstance`, `PolicyGenerator`, and supporting reference CRs.
This structure enables deploying and managing multiple versions of the {product-title} and configuration versions to clusters simultaneously and through upgrades.

The recommended Git structure keeps reference CRs in a directory separate from customer or partner provided content.
This means that you can import reference updates by simply overwriting existing content.
Customer or partner supplied CRs can be provided in a parallel directory to the reference CRs for easy inclusion in the generated configuration policies.

--

Limits and requirements::
// Scale results ACM-17868
* Each ArgoCD application supports up to 1000 nodes on a hub cluster conforming to the Hub RDS.
Multiple ArgoCD applications can be used to achieve the maximum number of clusters supported by a single hub cluster.
* Use the `extraManifestsRefs` field in the `ClusterInstance` CR to reference `ConfigMap` resources that contain additional manifests to apply at install time.

Engineering considerations::
* Set the `MachineConfigPool` (`MCP`) CR `paused` field to true during a cluster upgrade maintenance window and set the `maxUnavailable` field to the maximum tolerable value.
This prevents multiple cluster node reboots during upgrade, which results in a shorter overall upgrade.
When you unpause the `MCP` CR, all the configuration changes are applied with a single reboot.
+
[NOTE]
====
During installation, custom `MCP` CRs can be paused along with setting `maxUnavailable` to 100% to improve installation times.
====

* To avoid confusion or unintentional overwriting when updating content, you should use unique and distinguishable names for custom CRs in the `reference-crs/` directory under core-overlay and extra manifests in Git.
* The `ClusterInstance` CR allows multiple `ConfigMap` references in the `extraManifestsRefs` field for additional install-time manifests.
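The following fragment sketches how a `ClusterInstance` CR can reference extra install-time manifests through `extraManifestsRefs`, as described above. The cluster name, namespace, and ConfigMap name are placeholder assumptions, and the fragment omits the other required installation fields.

[source,yaml]
----
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: core-cluster-1            # hypothetical cluster name
  namespace: core-cluster-1
spec:
  clusterName: core-cluster-1
  # ConfigMaps that hold additional manifests applied at install time.
  extraManifestsRefs:
    - name: extra-manifests-cm    # hypothetical ConfigMap name
  # ...other required ClusterInstance fields omitted for brevity
----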
@@ -3,13 +3,15 @@
// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc

:_mod-docs-content-type: REFERENCE

[id="telco-core-kubelet-settings_{context}"]
= Kubelet Settings

Some CNF workloads make use of sysctls which are not in the list of system-wide safe `sysctls`.
Generally, network sysctls are namespaced and can be enabled by using the `kubeletconfig.experimental` annotation in the PerformanceProfile as a string of JSON in the form `allowedUnsafeSysctls`.

Additionally, the `systemReserved` memory can be configured through the same `kubeletconfig.experimental` annotation to reserve memory for system daemons and kernel processes.

.Example snippet showing allowedUnsafeSysctls and systemReserved

[source,yaml]
----
@@ -17,13 +19,18 @@ apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: {{ .metadata.name }}
  annotations:
    # allowedUnsafeSysctls: some pods want the kernel stack to ignore IPv6 Router Advertisements.
    # systemReserved: when used, it should be tailored for each environment.
    kubeletconfig.experimental: |
      {
        "allowedUnsafeSysctls":["net.ipv6.conf.all.accept_ra"],
        "systemReserved":{"memory":"11Gi"}
      }
----

+
[NOTE]
====
Although these sysctls are namespaced, they may allow a pod to consume memory or other resources beyond any limits specified in the pod description. You must ensure that these `sysctls` do not exhaust platform resources.
====
@@ -11,7 +11,8 @@ New in this release::
* No reference design updates in this release.

Description::
The Cluster Logging Operator enables collection and shipping of logs off the node for remote archival and analysis.
The reference configuration uses Kafka to ship audit and infrastructure logs to a remote archive.

Limits and requirements::
Not applicable
@@ -12,7 +12,7 @@ New in this release::
Description::
The Cluster Monitoring Operator (CMO) is included by default in {product-title} and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects.
You can customize the default retention period, add custom alert rules, and so on.
+

Configuration of the monitoring stack is done through a single string value in the `cluster-monitoring-config` ConfigMap. The reference tuning merges content from two requirements:

* Prometheus configuration is extended to forward alerts to the ACM hub cluster for alert aggregation.
@@ -20,7 +20,7 @@ If desired this configuration can be extended to forward to additional locations
* Prometheus retention period is reduced from the default.
The primary metrics storage is expected to be external to the cluster.
Metrics storage on the Core cluster is expected to be a backup to that central store and available for local troubleshooting purposes.
+

In addition to the default configuration, the following metrics are expected to be configured for telco core clusters:

* Pod CPU and memory metrics and alerts for user workloads
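A minimal sketch of the kind of `cluster-monitoring-config` ConfigMap described above, assuming a reduced retention period and alert forwarding to a hub Alertmanager. The retention value and the Alertmanager host are placeholders, not reference values.

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      # Reduced local retention; primary metrics storage is external to the cluster.
      retention: 24h
      # Forward alerts to the hub cluster Alertmanager for aggregation.
      additionalAlertmanagerConfigs:
        - apiVersion: v2
          scheme: https
          staticConfigs:
            - alertmanager.hub.example.com    # placeholder hub endpoint
----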
@@ -7,7 +7,7 @@
= Node Configuration

New in this release::
* No reference design updates in this release.
* Consult the 4.21 release notes regarding the decrease in the default maximum open files soft limit for containers in this release.

Limits and requirements::
* Analyze additional kernel modules to determine impact on CPU load, system performance, and ability to meet KPIs.
@@ -11,30 +11,23 @@ New in this release::
* No reference design updates in this release.

Description::
{rh-storage} is a software-defined storage service for containers.
{rh-storage} can be deployed in one of two modes:

* Internal mode, where {rh-storage} software components are deployed as software containers directly on the OpenShift cluster nodes, together with other containerized applications.
* External mode, where {rh-storage} is deployed on a dedicated storage cluster, which is usually a separate Red Hat Ceph Storage cluster running on Red{nbsp}Hat Enterprise Linux.
These storage services are running externally to the application workload cluster.
+

For telco core clusters, storage support is provided by {rh-storage} storage services running in external mode, for several reasons:
+
--

* Separating dependencies between {product-title} and Ceph operations allows for independent {product-title} and {rh-storage} updates.
* Separation of operations functions for the Storage and {product-title} infrastructure layers is a typical customer requirement for telco core use cases.
* External Red Hat Ceph Storage clusters can be re-used by multiple {product-title} clusters deployed in the same region.
--
+
--

{rh-storage} supports separation of storage traffic using secondary CNI networks.
--

Limits and requirements::
* In an IPv4/IPv6 dual-stack networking environment, {rh-storage} uses IPv4 addressing.
For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.21/html/planning_your_deployment/network-requirements_rhodf#ipv6-support_rhodf[IPv6 support].

Engineering considerations::
* {rh-storage} network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation.
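For illustration of the external mode deployment described above, the following is a minimal sketch of an external `StorageCluster` resource, corresponding to the `02-ocs-external-storagecluster.yaml` reference CR. Treat the exact spec as an assumption, since the reference CR content is not reproduced here.

[source,yaml]
----
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-external-storagecluster
  namespace: openshift-storage
spec:
  # Consume an external Red Hat Ceph Storage cluster instead of deploying Ceph internally.
  externalStorage:
    enable: true
  labelSelector: {}
----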
@@ -31,7 +31,7 @@ $ mkdir -p ./out
+
[source,terminal]
----
$ podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.21 | base64 -d | tar xv -C out
----

.Verification
@@ -4,10 +4,8 @@

:_mod-docs-content-type: REFERENCE
[id="telco-core-rds-product-version-use-model-overview_{context}"]
= Telco core RDS use model overview

The Telco core reference design specification (RDS) describes a platform that supports large-scale telco applications, including control plane functions such as signaling and aggregation.
It also includes some centralized data plane functions, for example, user plane functions (UPF).
These functions generally require scalability, complex networking support, resilient software-defined storage, and have performance requirements that are less stringent and constrained than far-edge deployments such as RAN.
@@ -7,8 +7,7 @@
= Red Hat Advanced Cluster Management

New in this release::
* No reference design updates in this release.

Description::
+
@@ -20,7 +19,7 @@ You apply policies with the {rh-rhacm} policy controller as managed by {cgu-oper
Configuration, upgrades, and cluster status are managed through the policy controller.

When installing managed clusters, {rh-rhacm} applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools.
You define these configurations with `ClusterInstance` CRs.
--

Limits and requirements::
@@ -4,8 +4,10 @@

:_mod-docs-content-type: REFERENCE
[id="telco-core-reference-design-specification-for-product-title-product-version_{context}"]

= Telco core reference design specifications

The telco core reference design specification (RDS) configures an {product-title} cluster running on commodity hardware to host telco core workloads.
@@ -14,7 +14,8 @@ Scale clusters as described in "Limits and requirements".
Scaling of workloads is described in "Application workloads".

Limits and requirements::

* A telco core cluster can scale to at least 120 nodes.

:leveloffset!:
@@ -19,29 +19,36 @@ The Red{nbsp}Hat telco core {product-version} solution has been validated using
|Component |Software version

|{rh-rhacm-first}
|2.15

|{gitops-title}
|1.19

|cert-manager Operator
|1.18

|Cluster Logging Operator
|6.2

|{rh-storage}
|4.20

|SR-IOV Network Operator
|4.21

|MetalLB
|4.21

|NMState Operator
|4.21

|NUMA-aware scheduler
|4.21
|====

* {rh-rhacm-first} will be updated to 2.16 when the aligned {rh-rhacm-first} version is released.
* {rh-storage} will be updated to 4.21 when the aligned {rh-storage} version is released.
* The cert-manager Operator and {gitops-title} Operator are platform agnostic operators.
The support lifecycle for these operators is independent from the support lifecycle for {product-title}.
You might need to update to a newer minor version of these operators at the end of an operator lifecycle, or when planning to update the {product-title} cluster to continue support.
For support lifecycle details for platform agnostic operators, see link:https://access.redhat.com/support/policy/updates/openshift_operators[OpenShift Operator Life Cycles].
@@ -33,6 +33,6 @@ When you unpause the `mcp` CR, all the configuration changes are applied with a
During installation, custom `mcp` CRs can be paused along with setting `maxUnavailable` to 100% to improve installation times.
====

* Orchestration of an upgrade, including {product-title}, day-2 OLM operators, and custom configuration can be done using a `ClusterGroupUpgrade` (CGU) CR containing policies describing these updates.
** An EUS to EUS upgrade can be orchestrated using chained CGU CRs.
** Control of MCP pause can be managed through policy in the CGU CRs for a full control plane and worker node rollout of upgrades.
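The following is a minimal sketch of a `ClusterGroupUpgrade` CR of the kind referenced above, as used with the {cgu-operator-full}. The cluster names, policy names, namespace, and concurrency values are placeholder assumptions.

[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: core-upgrade              # hypothetical name
  namespace: default              # hypothetical namespace
spec:
  clusters:
    - core-cluster-1              # hypothetical managed cluster
  managedPolicies:
    - ocp-upgrade-policy          # hypothetical policies describing the updates
    - olm-operator-upgrade-policy
  remediationStrategy:
    maxConcurrency: 1
    timeout: 240
  enable: true
----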
49 modules/telco-core-worker-nodes-and-machineconfigpools.adoc Normal file
@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc

:_mod-docs-content-type: REFERENCE
[id="telco-core-worker-nodes-and-machineconfigpools_{context}"]
= Worker Nodes and MachineConfigPools

`MachineConfigPool` (MCP) custom resources (CRs) enable the subdivision of worker nodes in telco core clusters into different node groups based on customer planning parameters.
Careful deployment planning using MCPs is crucial to minimize deployment and upgrade time and, more importantly, to minimize interruption of telco-grade services during cluster upgrades.

*Description*

Telco core clusters can use `MachineConfigPools` (MCPs) to split worker nodes into additional separate roles, for example, due to different hardware profiles.
This allows custom tuning for each role and also plays a critical function in speeding up a telco core cluster deployment or upgrade.
Multiple MCPs can be used to properly plan cluster upgrades across one or multiple maintenance windows.
This is crucial because telco-grade services might otherwise be affected if careful planning is not considered.

During cluster upgrades, you can pause MCPs while you upgrade the control plane. See "Performing a canary rollout update" for more information. This ensures that worker nodes are not rebooted and running workloads remain unaffected until the MCP is unpaused.

Using careful MCP planning, you can control the timing and order of which set of nodes are upgraded at any time. For more information on how to use MCPs to plan telco upgrades, see "Applying MachineConfigPool labels to nodes before the update".

Before beginning the initial deployment, review the following engineering considerations:

**PerformanceProfile and Tuned profile association:**

When using PerformanceProfiles, remember that each Machine Config Pool (MCP) must be linked to exactly one PerformanceProfile or Tuned profile definition.
Consequently, even if the desired configuration is identical for multiple MCPs, each MCP still requires its own dedicated PerformanceProfile definition.

**Planning your MCP labeling strategy:**

Plan your MCP labeling with an appropriate strategy to split your worker nodes depending on parameters such as:

* The worker node type: identifying a group of nodes with an equivalent hardware profile, for example, workers for control plane Network Functions (NFs) and workers for user data plane NFs.
* The number of worker nodes per worker node type.
* The minimum number of MCPs required for an equivalent hardware profile is 1, but could be larger for larger clusters.
For example, you may design for more MCPs per hardware profile to support a more granular upgrade where a smaller percentage of the cluster capacity is affected with each step.
* The update strategy for nodes within an MCP is determined by upgrade requirements and the chosen `maxUnavailable` value:
** Number of maintenance windows allowed.
** Duration of a maintenance window.
** Total number of worker nodes.
** Desired `maxUnavailable` (number of nodes updated concurrently) for the MCP.
* CNF requirements for worker nodes, in terms of:
** Minimum availability per pod required during an upgrade, configured with a pod disruption budget (PDB). PDBs are crucial to maintain telco service-level agreements (SLAs) during upgrades. For more information about PDBs, see "Understanding how to use pod disruption budgets to specify the number of pods that must be up".
** Minimum true high availability required per pod, such that each replica runs on separate hardware.
** Pod affinity and anti-affinity: For more information about how to use pod affinity and anti-affinity, see "Placing pods relative to other pods using affinity and anti-affinity rules".
* Duration and number of upgrade maintenance windows during which telco-grade services might be affected.
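To make the MCP planning concrete, the following sketch defines a custom worker pool. The pool name, role label, and `maxUnavailable` value are assumptions chosen for the example and must be aligned with your maintenance window planning.

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-dataplane            # hypothetical pool for data plane NF workers
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, worker-dataplane]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-dataplane: ""
  # Number of nodes in this pool that can be updated concurrently during an upgrade.
  maxUnavailable: 1
  # Set to true during a maintenance window to defer reboots until unpaused.
  paused: false
----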
@@ -3,13 +3,12 @@
// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc

:_mod-docs-content-type: REFERENCE

[id="telco-core-workloads-on-schedulable-control-planes_{context}"]
= Workloads on schedulable control planes

Enabling workloads on control plane nodes::

You can enable schedulable control planes to run workloads on control plane nodes, utilizing idle CPU capacity on bare metal machines for potential cost savings. This feature is only applicable to clusters with bare metal control plane nodes.
+
There are two distinct parts to this functionality:

@@ -22,15 +21,15 @@ Workload characterization and limitations::

You must test and verify workloads to ensure that applications do not interfere with core cluster functions. It is recommended that you start with lightweight containers that do not heavily load the CPU or networking.
+
Certain workloads are not permitted on control plane nodes due to the risk to cluster stability. This includes any workload that reconfigures kernel arguments or system global `sysctls`, as this can lead to unpredictable outcomes for the cluster.
+
To ensure stability, you must adhere to the following:

* Make sure all non-trivial workloads have memory limits defined. This protects the control plane in case of a memory leak.
* Avoid excessively loading reserved CPUs, for example, by heavy use of exec probes.
* Avoid heavy kernel-based networking usage, as it can increase reserved CPU load through software networking components like OVS.

NUMA Resources Operator support::

As part of this enablement, the NUMA Resources Operator is supported on control plane nodes. The functional behavior of the NUMA Resources Operator remains the same, but its use on control plane nodes is explicitly permitted.
@@ -3,24 +3,23 @@
// * scalability_and_performance/telco_core_ref_design_specs/telco-core-rds.adoc

:_mod-docs-content-type: REFERENCE

[id="telco-core-zones_{context}"]
= Zones

Designing the cluster to support disruption of multiple nodes simultaneously is critical for high availability (HA) and reduced upgrade times.
{product-title} and Kubernetes use the label `topology.kubernetes.io/zone` to create pools of nodes that are subject to a common failure domain.
Annotating nodes for topology (availability) zones allows high-availability workloads to spread such that each zone holds only one replica from a set of HA replicated pods.
With this spread, the loss of a single zone will not violate HA constraints and minimum service availability will be maintained.
{product-title} and Kubernetes apply a default `TopologySpreadConstraint` to all replica constructs, such as `Service`, `ReplicaSet`, `StatefulSet`, or `ReplicationController` resources, which spreads the replicas based on the `topology.kubernetes.io/zone` label.
This default allows zone based spread to apply without any change to your workload pod specs.

Cluster upgrades typically result in node disruption as the underlying OS is updated.
In large clusters, it is necessary to update multiple nodes concurrently to complete upgrades quickly and in as few maintenance windows as possible.
By using zones to ensure pod spread, an upgrade can be applied to all nodes in a zone simultaneously (assuming sufficient spare capacity) while maintaining high availability and service availability.
The recommended cluster design is to partition nodes into multiple MCPs based on the considerations described earlier and label all nodes in a single MCP as a single zone which is distinct from zones attached to other MCPs.
Using this strategy, all nodes in an MCP can be updated simultaneously.

Lifecycle hooks (readiness, liveness, startup, and pre-stop) play an important role in ensuring application availability. For upgrades in particular, the pre-stop hook allows applications to take necessary steps to prepare for disruption before being evicted from the node.

Limits and requirements::
* The default TopologySpreadConstraints (TSC) only apply when an explicit TSC is not given. If your pods have an explicit TSC, ensure that spread based on zones is included.
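The following pod template fragment sketches an explicit zone-based topology spread constraint of the kind required above. The app label and skew value are illustrative assumptions.

[source,yaml]
----
# Fragment of a Deployment or StatefulSet pod template spec.
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: example-ha-workload    # hypothetical app label
----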
@@ -30,10 +29,5 @@ Limits and requirements::
Engineering Considerations::
* Pod drain times can significantly impact node update times. Ensure the workload design allows pods to be drained quickly.
* PodDisruptionBudgets (PDB) are used to enforce high availability requirements.
** With sufficient zones in the cluster design, a zone can be disrupted or lost without violating the PDB. If there are insufficient zones, or other scheduling constraints restrict the set of available nodes, the scheduling of pods might violate the PDB when the zone is disrupted or lost. During upgrades with simultaneous node updates this might lead to partial serialization of updates.
** PDB with 0 disruptable pods will block node drain and require administrator intervention. This pattern should be avoided for fast and automated upgrades.
@@ -21,7 +21,9 @@ include::modules/telco-core-common-baseline-model.adoc[leveloffset=+1]

include::modules/telco-core-deployment-planning.adoc[leveloffset=+1]

include::modules/telco-core-worker-nodes-and-machineconfigpools.adoc[leveloffset=+2]

include::modules/telco-core-zones.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
@@ -133,11 +135,11 @@ include::modules/telco-core-openshift-data-foundation.adoc[leveloffset=+3]
include::modules/telco-core-additional-storage-solutions.adoc[leveloffset=+3]

[id="telco-reference-core-deployment-components_{context}"]
== Telco core deployment components

The following sections describe the various {product-title} components and configurations that you use to configure the hub cluster with {rh-rhacm-first}.

include::modules/telco-core-red-hat-advanced-cluster-management.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
@@ -146,14 +148,14 @@ include::modules/telco-core-red-hat-advanced-cluster-management.adoc[leveloffset

* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes[Red Hat Advanced Cluster Management for Kubernetes]

include::modules/telco-core-topology-aware-lifecycle-manager.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* xref:../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[Updating managed clusters with the {cgu-operator-full}]

include::modules/telco-core-gitops-operator-and-ztp-plugins.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
@@ -162,14 +164,21 @@ include::modules/telco-core-gitops-operator-and-ztp-plugins.adoc[leveloffset=+3]

* xref:../edge_computing/policygentemplate_for_ztp/ztp-advanced-policy-config.adoc#ztp-adding-new-content-to-gitops-ztp_ztp-advanced-policy-config[Adding custom content to the {ztp} pipeline]

include::modules/telco-core-agent-based-installer.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* xref:../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing an {product-title} cluster with the Agent-based Installer]

include::modules/telco-core-monitoring.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/about_monitoring/about-ocp-monitoring[About {product-title} monitoring]

include::modules/telco-core-scheduling.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
@@ -180,7 +189,7 @@ include::modules/telco-core-scheduling.adoc[leveloffset=+2]

* xref:../scalability_and_performance/using-cpu-manager.adoc#topology-manager-policies_using-cpu-manager-and-topology-manager[Topology Manager policies]

include::modules/telco-core-node-configuration.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
@@ -193,7 +202,7 @@ include::modules/telco-core-host-firmware-and-boot-loader-configuration.adoc[lev

include::modules/telco-core-kubelet-settings.adoc[leveloffset=+2]

include::modules/telco-core-disconnected-environment.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
@@ -202,14 +211,7 @@ include::modules/telco-core-disconnected-environment.adoc[leveloffset=+2]

* xref:../nodes/containers/nodes-containers-sysctls.adoc#nodes-containers-sysctls[Using sysctl in containers]

include::modules/telco-core-security.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
@@ -222,7 +224,9 @@ include::modules/telco-core-security.adoc[leveloffset=+2]

* xref:../machine_configuration/machine-config-node-disruption.adoc#machine-config-node-disruption_machine-configs-configure[Using node disruption policies to minimize disruption from machine config changes]

include::modules/telco-core-cert-manager-operator.adoc[leveloffset=+1]

include::modules/telco-core-scalability.adoc[leveloffset=+1]

[id="telco-core-reference-configuration-crs"]
== Telco core reference configuration CRs
@@ -250,6 +254,8 @@ include::modules/telco-core-crs-scheduling.adoc[leveloffset=+2]

include::modules/telco-core-crs-storage.adoc[leveloffset=+2]

include::modules/telco-core-crs-security.adoc[leveloffset=+2]

include::modules/telco-core-software-stack.adoc[leveloffset=+1]

:!telco-core: