
TELCODOCS-1874: Adding hub cluster RDS - Tech Preview

Author: Ronan Hennessy
Date: 2025-05-07 14:36:21 +01:00
Parent: 6d67576ddb
Commit: c159114e2a
33 changed files with 1284 additions and 0 deletions


@@ -3366,6 +3366,8 @@ Topics:
File: telco-core-rds
- Name: Telco RAN DU reference design specifications
File: telco-ran-du-rds
- Name: Telco hub reference design specifications
File: telco-hub-rds
- Name: Comparing cluster configurations
Dir: cluster-compare
Distros: openshift-origin,openshift-enterprise

Binary file not shown (new image, 89 KiB)

Binary file not shown (new image, 172 KiB)


@@ -0,0 +1,78 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-acm-observability_{context}"]
= {rh-rhacm} Observability
Cluster Observability is provided by the multicluster engine and {rh-rhacm-first}.
* Observability storage requires several `PV` resources and an S3-compatible bucket storage for long-term retention of metrics.
* Calculating storage requirements is complex and depends on the specific workloads and characteristics of the managed clusters.
Requirements for `PV` resources and the S3 bucket depend on many factors, including data retention, the number of managed clusters, managed cluster workloads, and so on.
* Estimate the required storage for observability by using the observability sizing calculator in the {rh-rhacm} capacity planning repository.
See the Red Hat Knowledgebase article link:https://access.redhat.com/articles/7103886[Calculating storage need for MultiClusterHub Observability on telco environments] for an explanation of using the calculator to estimate observability storage requirements.
The following table uses inputs derived from the telco RAN DU RDS and the hub cluster RDS as representative values.
[NOTE]
====
The following numbers are estimates.
Tune the input values for more accurate results.
Add an engineering margin, for example +20%, to the results to account for potential estimation inaccuracies.
====
.Cluster requirements
[cols="42%,42%,16%",options="header"]
|====
|Capacity planner input
|Data source
|Example value
|Number of control plane nodes
|Hub cluster RDS (scale) and telco RAN DU RDS (topology)
|3500
|Number of additional worker nodes
|Hub cluster RDS (scale) and telco RAN DU RDS (topology)
|0
|Days for storage of data
|Hub cluster RDS
|15
|Total number of pods per cluster
|Telco RAN DU RDS
|120
|Number of namespaces (excluding {product-title})
|Telco RAN DU RDS
|4
|Number of metric samples per hour
|Default value
|12
|Number of hours of retention in receiver persistent volume (PV)
|Default value
|24
|====
With these input values, the sizing calculator as described in the Red Hat Knowledgebase article link:https://access.redhat.com/articles/7103886[Calculating storage need for MultiClusterHub Observability on telco environments] indicates the following storage needs:
.Storage requirements
[options="header"]
|====
2+|`alertmanager` PV 2+|`thanos receive` PV 2+|`thanos compact` PV
|*Per replica* |*Total* |*Per replica* |*Total* 2+|*Total*
|10 GiB |30 GiB |10 GiB |30 GiB 2+|100 GiB
|====
.Storage requirements (continued)
[options="header"]
|====
2+|`thanos rule` PV 2+|`thanos store` PV 2+|Object bucket^[1]^
|*Per replica* |*Total* |*Per replica* |*Total* |*Per day* |*Total*
|30 GiB |90 GiB |100 GiB |300 GiB |15 GiB |101 GiB
|====
[1] For the object bucket, it is assumed that downsampling is disabled, so that only raw data is calculated for storage requirements.
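The following `MultiClusterObservability` CR excerpt is a minimal sketch of how the calculated sizes and the 15-day retention might be applied. The storage class and secret names are illustrative assumptions, not reference values:

[source,yaml]
----
apiVersion: observability.open-cluster-management.io/v1beta2
kind: MultiClusterObservability
metadata:
  name: observability
spec:
  advanced:
    retentionConfig:
      retentionResolutionRaw: 15d # matches the 15 days of storage used as a sizing input
  storageConfig:
    storageClass: ocs-storagecluster-ceph-rbd # illustrative storage class name
    alertmanagerStorageSize: 10Gi
    receiveStorageSize: 10Gi
    compactStorageSize: 100Gi
    ruleStorageSize: 30Gi
    storeStorageSize: 100Gi
    metricObjectStorage:
      name: thanos-object-storage # illustrative secret with the S3 bucket configuration
      key: thanos.yaml
----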


@@ -0,0 +1,7 @@
[id="telco-hub-acmMCH-yaml"]
.acmMCH.yaml
[source,yaml]
----
link:https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmMCH.yaml[role=include]
----


@@ -0,0 +1,35 @@
:_mod-docs-content-type: CONCEPT
[id="telco-hub-architecture-overview_{context}"]
= Hub cluster architecture overview
Use the features and components running on the management hub cluster to manage many other clusters in a hub-and-spoke topology.
The hub cluster provides a highly available and centralized interface for managing the configuration, lifecycle, and observability of the fleet of deployed clusters.
[NOTE]
====
All management hub functionality can be deployed on a dedicated {product-title} cluster or as applications that are co-resident on an existing cluster.
====
Managed cluster lifecycle::
Using a combination of Day 2 Operators, the hub cluster provides the necessary infrastructure to deploy and configure the fleet of clusters by using a GitOps methodology.
Over the lifetime of the deployed clusters, further management of upgrades, scaling the number of clusters, node replacement, and other lifecycle management functions can be declaratively defined and rolled out.
You can control the timing and progression of the rollout across the fleet.
Monitoring::
+
--
The hub cluster provides monitoring and status reporting for the managed clusters through the Observability pillar of the {rh-rhacm-full} Operator.
This includes aggregated metrics, alerts, and compliance monitoring through the Governance policy framework.
--
The telco management hub reference design specifications (RDS) and the associated reference custom resources (CRs) describe the telco engineering and QE validated method for deploying, configuring, and managing the lifecycle of telco managed cluster infrastructure.
The reference configuration includes the installation and configuration of the hub cluster components on top of {product-title}.
.Hub cluster reference design components
image::telco-hub-cluster-reference-design-components.png[Hub cluster reference design components]
.Hub cluster reference design architecture
image::telco-hub-cluster-rds-architecture.png[Hub cluster reference design architecture]


@@ -0,0 +1,21 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-assisted-service_{context}"]
= Assisted Service
The Assisted Service is deployed with the multicluster engine and {rh-rhacm-first}.
.Assisted Service storage requirements
[cols="1,2", options="header"]
|====
|Persistent volume resource
|Size (GB)
|`imageStorage`
|50
|`filesystemStorage`
|700
|`databaseStorage`
|20
|====
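These sizes map to the storage fields of the `AgentServiceConfig` CR, as in the following minimal sketch. The access modes shown are assumptions; adjust them to the capabilities of your storage class:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: AgentServiceConfig
metadata:
  name: agent # the AgentServiceConfig CR must be named "agent"
spec:
  imageStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 50Gi
  filesystemStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 700Gi
  databaseStorage:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
----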


@@ -0,0 +1,17 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-cluster-topology_{context}"]
= Cluster topology
In production environments, the {product-title} hub cluster must be highly available to maintain high availability of the management functions.
Limits and requirements::
Use a highly available cluster topology for the hub cluster, for example:
* Compact (3 nodes combined control plane and compute nodes)
* Standard (3 control plane nodes + N compute nodes)
Engineering considerations::
* In non-production environments, a {sno} cluster can be used for limited hub cluster functionality.
* Certain capabilities, for example {rh-storage-first}, are not supported on {sno}.
In this configuration, some hub cluster features might not be available.
* The number of optional compute nodes can vary depending on the scale of the specific use case.
* Compute nodes can be added later as required.


@@ -0,0 +1,5 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-engineering-considerations_{context}"]
= Hub cluster engineering considerations
The following sections describe the engineering considerations for hub cluster resource scaling targets and utilization.


@@ -0,0 +1,27 @@
:_mod-docs-content-type: CONCEPT
[id="telco-hub-git-repository_{context}"]
= Git repository
The telco management hub cluster supports a GitOps-driven methodology for installing and managing the configuration of OpenShift clusters for various telco applications.
This methodology requires an accessible Git repository that serves as the authoritative source of truth for cluster definitions and configuration artifacts.
Red Hat does not offer a commercially supported Git server.
An existing Git server provided in the production environment can be used.
Gitea and Gogs are examples of self-hosted Git servers that you can use.
The Git repository is typically provided in the production network external to the hub cluster.
In a large-scale deployment, multiple hub clusters can use the same Git repository for maintaining the definitions of managed clusters.
Using this approach, you can easily review the state of the complete network.
As the source of truth for cluster definitions, the Git repository should be highly available and recoverable in disaster scenarios.
[NOTE]
====
For disaster recovery and multi-hub considerations, run the Git repository separately from the hub cluster.
====
Limits and requirements::
* A Git repository is required to support the {ztp} functions of the hub cluster, including installation, configuration, and lifecycle management of the managed clusters.
* The Git repository must be accessible from the management cluster.
Engineering considerations::
* The Git repository is used by the GitOps Operator to ensure continuous deployment and a single source of truth for the applied configuration.


@@ -0,0 +1,30 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-gitops-operator-and-ztp-plugins_{context}"]
= GitOps Operator and {ztp}
New in this release::
* No reference design updates in this release
Description::
GitOps Operator and {ztp} provide a GitOps-based infrastructure for managing cluster deployment and configuration.
Cluster definitions and configurations are maintained as a declarative state in Git.
You can apply `ClusterInstance` custom resources (CRs) to the hub cluster where the `SiteConfig` Operator renders them as installation CRs.
In earlier releases, a {ztp} plugin supported the generation of installation CRs from `SiteConfig` CRs.
This plugin is now deprecated.
A separate {ztp} plugin is available to enable automatic wrapping of configuration CRs into policies based on the `PolicyGenerator` or the `PolicyGenTemplate` CRs.
+
You can deploy and manage multiple versions of {product-title} on managed clusters by using the baseline reference configuration CRs.
You can use custom CRs alongside the baseline CRs.
To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source and policy CRs by using the `PolicyGenerator` or the `PolicyGenTemplate` CRs.
Limits and requirements::
* 300 single-node `SiteConfig` CRs can be synchronized for each ArgoCD application.
You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster.
* To ensure consistent and complete cleanup of managed clusters and their associated resources during cluster or node deletion, you must configure ArgoCD to use background deletion mode.
Engineering considerations::
* To avoid confusion or unintentional overwrite when updating content, use unique and distinguishable names for custom CRs in the `source-crs` directory and extra manifests.
* Keep reference source CRs in a separate directory from custom CRs.
This facilitates easy update of reference CRs as required.
* To help with multiple versions, keep all source CRs and policy creation CRs in versioned Git repositories to ensure consistent generation of policies for each {product-title} version.
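For example, a repository layout like the following keeps reference content, custom content, and per-version policies separate. The directory names are illustrative only:

----
site-policies/
├── source-crs/            # reference source CRs, extracted from the ztp-site-generate container
├── custom-crs/            # user-maintained custom CRs with distinguishable names
├── version_4.18/
│   └── policygentemplates/
└── version_4.19/
    └── policygentemplates/
----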


@@ -0,0 +1,22 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-hub-cluster-day-2-operators_{context}"]
= Day 2 Operators in the hub cluster
The management hub cluster relies on a set of Day 2 Operators to provide critical management services and infrastructure.
Use Operator versions that match the set of managed cluster versions in your fleet.
Install Day 2 Operators using {olm-first} and `Subscription` custom resources (CRs).
`Subscription` CRs identify the specific Day 2 Operator to install, the catalog in which the Operator is found, and the appropriate version channel for the Operator.
By default, {olm} installs and attempts to keep Operators updated with the latest z-stream version available in the channel.
By default, all `Subscription` CRs are set with an `installPlanApproval: Automatic` value.
In this mode, {olm} automatically installs new Operator versions when they are available in the catalog and channel.
[NOTE]
====
Setting `installPlanApproval` to `Automatic` exposes the risk of the Operator being updated outside of defined maintenance windows if the catalog index is updated to include newer Operator versions.
In a disconnected environment where you build and maintain a curated set of Operators and versions in the catalog, and if you follow a strategy of creating a new catalog index for updated versions, the risk of Operators being inadvertently updated is largely removed.
However, to further close this risk, you can set the `Subscription` CRs to `installPlanApproval: Manual`, which prevents Operators from being updated without explicit administrator approval.
====
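The following `Subscription` CR is a minimal sketch of the manual approval mode. The Operator name, namespace, channel, and catalog source are illustrative values:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  name: advanced-cluster-management
  channel: release-2.13 # illustrative channel
  source: redhat-operator-index # illustrative curated catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual # updates require explicit administrator approval
----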
Limits and requirements::
* When upgrading a telco hub cluster, the versions of {product-title} and Operators must meet the requirements of all relevant compatibility matrices.


@@ -0,0 +1,52 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-hub-cluster-openshift-deployment_{context}"]
= {product-title} installation on the hub cluster
Description::
+
--
The reference method for installing {product-title} for the hub cluster is through the Agent-based Installer.
Agent-based Installer provides installation capabilities without additional centralized infrastructure.
The Agent-based Installer creates an ISO image, which you mount to the server to be installed.
When you boot the server, {product-title} is installed alongside optionally supplied extra manifests, such as {ztp} custom resources.
[NOTE]
====
You can also install {product-title} in the hub cluster by using other installation methods.
====
If hub cluster functions are being applied to an existing {product-title} cluster, the Agent-based Installer installation is not required.
The remaining steps to install Day 2 Operators and configure the cluster for these functions remain the same.
When {product-title} installation is complete, the set of additional Operators and their configuration must be installed on the hub cluster.
The reference configuration includes all of these custom resources (CRs), which you can apply manually, for example:
[source,terminal]
----
$ oc apply -f <reference_cr>
----
You can also add the reference configuration to the Git repository and apply it using ArgoCD.
[NOTE]
====
If applying the CRs manually, take care to apply them in the order indicated by the ArgoCD wave annotations.
Any CRs without annotations are in the initial wave.
====
--
Limits and requirements::
* Agent-based Installer requires an accessible image repository containing all required {product-title} and Day 2 Operator images.
* Agent-based Installer builds ISO images based on a specific {product-title} release and specific cluster details.
Installation of a second hub requires a separate ISO image to be built.
Engineering considerations::
* Agent-based Installer provides a baseline {product-title} installation.
You apply Day 2 Operators and other configuration CRs after the cluster is installed.
* The reference configuration supports Agent-based Installer installation in a disconnected environment.
* A limited set of additional manifests can be supplied at installation time.
* Any `MachineConfig` CRs that you require should be included as extra manifests during installation, as in the following example.
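For example, assuming an assets directory that already contains `install-config.yaml` and `agent-config.yaml`, extra manifests placed in the `openshift` subdirectory are applied at installation time. The manifest file name is illustrative:

[source,terminal]
----
$ mkdir -p <assets_dir>/openshift
$ cp 99-worker-custom.yaml <assets_dir>/openshift/ # illustrative MachineConfig manifest
$ openshift-install agent create image --dir <assets_dir>
----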


@@ -0,0 +1,4 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-hub-components_{context}"]
= Hub cluster components


@@ -0,0 +1,16 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-hub-disaster-recovery_{context}"]
= Hub cluster disaster recovery
Note that loss of the hub cluster does not typically create a service outage on the managed clusters.
Functions provided by the hub cluster, such as observability, configuration, and lifecycle management updates driven through the hub cluster, are lost.
Limits and requirements::
* Backup, restore, and disaster recovery are offered by the cluster backup and restore Operator, which depends on the {oadp-first} Operator.
Engineering considerations::
* You can extend the cluster backup and restore Operator to include third-party resources of the hub cluster, based on your configuration.
* The cluster backup and restore Operator is not enabled by default in {rh-rhacm-first}.
The reference configuration enables this feature.


@@ -0,0 +1,17 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-local-storage-operator_{context}"]
= Local Storage Operator
New in this release::
* No reference design updates in this release
Description::
With the Local Storage Operator, you can create persistent volumes that applications can use as `PVC` resources.
The number and type of `PV` resources that you create depends on your requirements.
Engineering considerations::
* Create backing storage for `PV` CRs before creating the persistent volume.
This can be a partition, a local volume, an LVM volume, or a full disk.
* In `LocalVolume` CRs, refer to devices by the hardware path used to access each device to ensure the correct allocation of disks and partitions, for example, `/dev/disk/by-path/<id>`.
Logical names (for example, `/dev/sda`) are not guaranteed to be consistent across node reboots.
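A minimal `LocalVolume` sketch following this guidance, with illustrative device path and storage class names:

[source,yaml]
----
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  storageClassDevices:
  - storageClassName: local-sc # illustrative storage class name
    volumeMode: Filesystem
    devicePaths:
    - /dev/disk/by-path/pci-0000:18:00.0-scsi-0:2:1:0 # stable hardware path, not /dev/sdX
----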


@@ -0,0 +1,21 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-logging_{context}"]
= Logging
New in this release::
* No reference design updates in this release
Description::
Use the Cluster Logging Operator to collect and ship logs off the node for remote archival and analysis.
The reference configuration uses Kafka to ship audit and infrastructure logs to a remote archive.
Limits and requirements::
* The reference configuration does not include local log storage.
* The reference configuration does not include aggregation of managed cluster logs at the hub cluster.
Engineering considerations::
* The impact on cluster CPU use is based on the number and size of logs generated and the amount of log filtering configured.
* The reference configuration does not include shipping of application logs.
The inclusion of application logs in the configuration requires you to evaluate the application logging rate and have sufficient additional CPU resources allocated to the reserved set.
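A minimal sketch of such a forwarder, using the `logging.openshift.io/v1` API with an assumed Kafka broker URL and topic:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: kafka-remote
    type: kafka
    url: tls://kafka.example.com:9093/hub-logs # illustrative broker and topic
  pipelines:
  - name: audit-and-infra-logs
    inputRefs:
    - audit
    - infrastructure
    outputRefs:
    - kafka-remote
----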


@@ -0,0 +1,20 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-managed-cluster-deployment_{context}"]
= Managed cluster deployment
Description::
As of {rh-rhacm-first} 2.12, using the SiteConfig Operator is the recommended method for deploying managed clusters.
The SiteConfig Operator introduces a unified `ClusterInstance` API that decouples the parameters that define the cluster from the manner in which it is deployed.
The SiteConfig Operator uses a set of cluster templates that are instantiated using the data from a `ClusterInstance` custom resource (CR) to dynamically generate installation manifests.
Following the GitOps methodology, the `ClusterInstance` CR is sourced from a Git repository through ArgoCD.
You can use the `ClusterInstance` CR to initiate cluster installation by using either the Assisted Installer or the image-based installation available in multicluster engine.
Limits and requirements::
* The SiteConfig ArgoCD plugin, which handles `SiteConfig` CRs, is deprecated from {product-title} 4.18.
Engineering considerations::
* You must create a `Secret` CR with the login information for the cluster baseboard management controller (BMC).
This `Secret` CR is then referenced in the `ClusterInstance` CR, as in the sketch after this list.
Integration with a secret store, such as Vault, can be used to manage the secrets.
* Besides offering deployment method isolation and unification of Git and non-Git workflows, the SiteConfig Operator provides better scalability, greater flexibility with the use of custom templates, and an enhanced troubleshooting experience.
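For example, a BMC credentials `Secret` and an abbreviated `ClusterInstance` that references it might look like the following sketch. All names and the BMC address are illustrative, and the `ClusterInstance` omits required fields such as template references:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno
type: Opaque
data:
  username: <base64_encoded_username>
  password: <base64_encoded_password>
---
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: example-sno
  namespace: example-sno
spec:
  clusterName: example-sno
  nodes:
  - hostName: example-node1.example.com
    bmcAddress: idrac-virtualmedia://192.168.122.20/redfish/v1/Systems/System.Embedded.1
    bmcCredentialsName:
      name: example-sno-bmc-secret # references the Secret above
----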


@@ -0,0 +1,45 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-managed-cluster-updates-and-upgrades_{context}"]
= Managed cluster updates
Description::
+
--
You can upgrade versions of {product-title}, Day 2 Operators, and managed cluster configurations by declaring the required version in the `Policy` custom resources (CRs) that target the clusters to be upgraded.
Policy controllers periodically check for policy compliance.
If the result is negative, a violation report is created.
If the policy remediation action is set to `enforce`, the violations are remediated according to the updated policy.
If the policy remediation action is set to `inform`, the process ends with a non-compliant status report, and the responsibility to initiate the upgrade during an appropriate maintenance window is left to the user.
The {cgu-operator-first} extends {rh-rhacm-first} with features to manage the rollout of upgrades or configuration throughout the lifecycle of the fleet of clusters.
It operates in progressive, limited-size batches of clusters.
When upgrades to {product-title} or the Day 2 Operators are required, {cgu-operator} progressively rolls out the updates by stepping through the set of policies and switching them to `enforce` to push the configuration to the managed cluster.
The custom resource (CR) that {cgu-operator} uses to build the remediation plan is the `ClusterGroupUpgrade` CR.
You can use image-based upgrade (IBU) with the Lifecycle Agent as an alternative upgrade path for the {sno} cluster platform version.
IBU uses an OCI image generated from a dedicated seed cluster to install {sno} on the target cluster.
{cgu-operator} uses the `ImageBasedGroupUpgrade` CR to roll out image-based upgrades to a set of identified clusters.
--
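A minimal `ClusterGroupUpgrade` sketch, with illustrative cluster and policy names:

[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu-platform-upgrade
  namespace: default
spec:
  clusters: # illustrative managed cluster names
  - sno-site-01
  - sno-site-02
  enable: true # set to false to build the remediation plan without starting it
  managedPolicies: # illustrative policies to enforce, in order
  - du-upgrade-platform-upgrade-prep
  - du-upgrade-platform-upgrade
  remediationStrategy:
    maxConcurrency: 2 # clusters updated in parallel in each batch
    timeout: 240 # minutes allowed for the full remediation plan
----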
Limits and requirements::
* You can perform direct upgrades for {sno} clusters using image-based upgrade for {product-title} `<4.y>` to `<4.y+2>`, and `<4.y.z>` to `<4.y.z+n>`.
* Image-based upgrade uses custom images that are specific to the hardware platform that the clusters are running on.
Different hardware platforms require separate seed images.
Engineering considerations::
* In edge deployments, you can minimize the disruption to managed clusters by managing the timing and rollout of changes.
Set all policies to `inform` to monitor compliance without triggering automatic enforcement.
Similarly, configure Day 2 Operator subscriptions to manual to prevent updates from occurring outside of scheduled maintenance windows.
* The recommended upgrade approach for {sno} clusters is the image-based upgrade.
* For multi-node cluster upgrades, consider the following `MachineConfigPool` CR configurations to reduce upgrade times:
** Pause configuration deployments to nodes during a maintenance window by setting the `paused` field to `true`.
** Adjust the `maxUnavailable` field to control how many nodes in the pool can be updated simultaneously.
The `maxUnavailable` field defines the percentage of nodes in the pool that can be simultaneously unavailable during a `MachineConfig` object update.
Set `maxUnavailable` to the maximum tolerable value.
This reduces the number of reboots in a cluster during upgrades, which results in shorter upgrade times.
** Resume configuration deployments by setting the `paused` field to `false`. The configuration changes are applied in a single reboot.
* During cluster installation, you can pause `MachineConfigPool` CRs by setting the `paused` field to `true` and setting `maxUnavailable` to 100% to improve installation times.
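For example, assuming the default `worker` pool, these settings can be toggled with `oc patch`; the `maxUnavailable` value is illustrative:

[source,terminal]
----
$ oc patch machineconfigpool/worker --type merge --patch '{"spec": {"paused": true, "maxUnavailable": 2}}'
----

Resume configuration deployments after the maintenance window:

[source,terminal]
----
$ oc patch machineconfigpool/worker --type merge --patch '{"spec": {"paused": false}}'
----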


@@ -0,0 +1,7 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-managed-clusters-lifecycle-management_{context}"]
= Managed cluster lifecycle management
To provision and manage sites at the far edge of the network, use {ztp} in a hub-and-spoke architecture, where a single hub cluster manages many managed clusters.
Lifecycle management for spoke clusters can be divided into two different stages: cluster deployment, including {product-title} installation, and cluster configuration.


@@ -0,0 +1,16 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-memory-and-cpu-requirements_{context}"]
= Hub cluster memory and CPU requirements
The memory and CPU requirements of the hub cluster vary depending on the configuration of the hub cluster, the number of resources on the cluster, and the number of managed clusters.
Limits and requirements::
* Ensure that the hub cluster meets the underlying memory and CPU requirements for {product-title} and {rh-rhacm-first}.
Engineering considerations::
+
--
* Before deploying a telco hub cluster, ensure that your cluster host meets cluster requirements.
For more information about scaling the number of managed clusters, see "Hub cluster engineering considerations".
--


@@ -0,0 +1,42 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-networking_{context}"]
= Networking
The reference hub cluster is designed to operate in a disconnected networking environment where direct access to the internet is not possible.
As with all {product-title} clusters, the hub cluster requires access to an image registry hosting all required {product-title} and Day 2 Operator images.
The hub cluster supports dual-stack networking for IPv6 and IPv4 networks.
IPv6 is typical in edge or far-edge network segments, while IPv4 is more prevalent for use with legacy equipment in the data center.
Limits and requirements::
+
--
* Regardless of the installation method, you must configure the following network types for the hub cluster:
** `clusterNetwork`
** `serviceNetwork`
** `machineNetwork`
* You must configure the following IP addresses for the hub cluster:
** `apiVIP`
** `ingressVIP`
[NOTE]
====
For the preceding networking configurations, some values are required or can be auto-assigned, depending on the chosen architecture and DHCP configuration.
====
* You must use the default {product-title} network provider, OVN-Kubernetes.
* Networking between the managed cluster and hub cluster must meet the networking requirements in the {rh-rhacm-first} documentation, for example:
** Hub cluster access to managed cluster API service, Ironic Python agent, and baseboard management controller (BMC) port.
** Managed cluster access to hub cluster API service, ingress IP and control plane node IP addresses.
** Managed cluster BMC access to hub cluster control plane node IP addresses.
* An image registry must be accessible throughout the lifetime of the hub cluster.
** All required container images must be mirrored to the disconnected registry.
** The hub cluster must be configured to use a disconnected registry.
** The hub cluster cannot host its own image registry.
For example, the registry must be available in a scenario where a power failure affects all cluster nodes.
--
Engineering considerations::
* When deploying a hub cluster, ensure that you define appropriately sized CIDR ranges, as in the following example.
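For illustration, in an Agent-based Installer `install-config.yaml`, the required networks and virtual IP addresses might be declared as follows. All CIDR ranges and addresses are example values:

[source,yaml]
----
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.122.0/24
platform:
  baremetal:
    apiVIPs:
    - 192.168.122.10
    ingressVIPs:
    - 192.168.122.11
----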


@@ -0,0 +1,30 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-oadp-operator_{context}"]
= {oadp-full}
New in this release::
* No reference design updates in this release
Description::
+
--
The {oadp-first} Operator is automatically installed and managed by {rh-rhacm-first} when the backup feature is enabled.
The {oadp-short} Operator facilitates the backup and restore of workloads in {product-title} clusters.
Based on the upstream open source project Velero, it allows you to back up and restore all Kubernetes resources for a given project, including persistent volumes.
While {oadp-short} is not mandatory on the hub cluster, it is highly recommended for cluster backup, disaster recovery, and a high-availability architecture for the hub cluster.
The {oadp-short} Operator must be enabled to use the disaster recovery solutions for {rh-rhacm}.
The reference configuration enables backup (OADP) through the `MultiClusterHub` custom resource (CR) provided by the {rh-rhacm} Operator.
--
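A minimal sketch of enabling the backup component through the `MultiClusterHub` CR:

[source,yaml]
----
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec:
  overrides:
    components:
    - name: cluster-backup # enables the cluster backup and restore Operator, which installs OADP
      enabled: true
----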
Limits and requirements::
* Only one version of {oadp-short} can be installed on a cluster.
The version installed by {rh-rhacm} must be used for {rh-rhacm} disaster recovery features.
Engineering considerations::
* No engineering consideration updates in this release.


@@ -0,0 +1,54 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-observability_{context}"]
= Observability
The {rh-rhacm-first} multicluster engine Observability component provides centralized aggregation and visualization of metrics and alerts for all managed clusters.
To balance performance and data analysis, the monitoring service maintains a subset list of aggregated metrics that are collected at a downsampled interval.
The metrics can be accessed on the hub through a set of different preconfigured dashboards.
Observability installation::
The primary custom resource (CR) to enable and configure the Observability service is the `MultiClusterObservability` CR, which defines the following settings:
* Configurable retention settings.
* Storage for the different components: `thanos receive`, `thanos compact`, `thanos rule`, `thanos store` sharding, `alertmanager`.
* The `metadata.annotations.mco-disable-alerting="true"` annotation that enables tuning for the monitoring configuration on managed clusters.
+
[NOTE]
====
Without this setting, the Observability component attempts to configure the managed cluster monitoring configuration.
With this value set, you can merge your desired configuration with the necessary Observability configuration for alert forwarding into the managed cluster monitoring `ConfigMap` object.
When the Observability service is enabled, {rh-rhacm} deploys a workload to each managed cluster that pushes metrics and alerts generated by local monitoring to the hub cluster.
The metrics and alerts to be forwarded from the managed cluster to the hub are defined by a `ConfigMap` CR in the `open-cluster-management-addon-observability` namespace.
You can also specify custom metrics. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html-single/observability/index#adding-custom-metrics[Adding custom metrics].
====
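For example, custom metrics can be added to the forwarded subset through a `ConfigMap` such as the following sketch; the metric name is illustrative:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: observability-metrics-custom-allowlist
  namespace: open-cluster-management-observability
data:
  metrics_list.yaml: |
    names:
      - node_memory_MemTotal_bytes
----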
Alertmanager configuration::
+
--
* The hub cluster provides an Observability Alertmanager that can be configured to push alerts to external systems, for example, email.
The Alertmanager is enabled by default.
* You must configure alert forwarding.
* When the Alertmanager is enabled but not configured, the hub Alertmanager does not forward alerts externally.
* When Observability is enabled, the managed clusters can be configured to send alerts to any endpoint including the hub Alertmanager.
* When a managed cluster is configured to forward alerts to external sources, alerts are not routed through the hub cluster Alertmanager.
* Alert state is available as a metric.
* When Observability is enabled, the managed cluster alert states are included in the subset of metrics forwarded to the hub cluster and are available through Observability dashboards.
--
Limits and requirements::
* Observability requires persistent object storage for long-term metrics.
For more information, see "Storage requirements".
Engineering considerations::
* Forwarding of metrics is a subset of the full metric data.
It includes only the metrics defined in the `observability-metrics-allowlist` config map and any custom metrics added by the user.
* Metrics are forwarded at a downsampled rate.
Metrics are forwarded by taking the latest data point at a 5-minute interval, or as defined by the `MultiClusterObservability` CR configuration.
* A network outage might lead to a loss of metrics forwarded to the hub cluster during that interval.
This can be mitigated if metrics are also forwarded directly from managed clusters to an external metrics collector in the provider's network.
Full resolution metrics are available on the managed cluster.
* In addition to the default metrics dashboards on the hub, you can define custom dashboards.
* The reference configuration is sized based on 15 days of metrics storage by the hub cluster for 3500 {sno} clusters.
If longer retention or a different managed cluster topology or sizing is required, you must update the storage calculations and maintain sufficient storage capacity.
For more information about calculating new values, see "Storage requirements".


@@ -0,0 +1,18 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-openshift-data-foundation_{context}"]
= {rh-storage-first}
New in this release::
* No reference design updates in this release
Description::
{rh-storage-first} provides file, block, and object storage services to the hub cluster.
Limits and requirements::
* {odf-first} in internal mode requires the Local Storage Operator to define a storage class that provides the necessary underlying storage.
* When planning for a telco management hub cluster, consider the {odf-short} infrastructure and networking requirements.
* Dual-stack support is limited: {odf-short} supports IPv4 on dual-stack clusters.
Engineering considerations::
* Address capacity warnings promptly because recovery can be difficult in case of storage capacity exhaustion. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.15/html-single/planning_your_deployment/index#capacity_planning[Capacity planning].


@@ -0,0 +1,54 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-red-hat-advanced-cluster-management-rhacm_{context}"]
= {rh-rhacm-first}
New in this release::
* No reference design updates in this release.
Description::
+
--
{rh-rhacm-first} provides multicluster engine installation and ongoing lifecycle management functionality for deployed clusters.
You can manage cluster configuration and upgrades declaratively by applying `Policy` custom resources (CRs) to clusters during maintenance windows.
{rh-rhacm} provides functionality such as the following:
* Zero touch provisioning (ZTP) and ongoing scaling of clusters using the multicluster engine component in {rh-rhacm}.
* Configuration, upgrades, and cluster status through the {rh-rhacm} policy controller.
* During managed cluster installation, {rh-rhacm} can apply labels to individual nodes as configured through the `ClusterInstance` CR.
* The {cgu-operator-full} component of {rh-rhacm} provides phased rollout of configuration changes to managed clusters.
* The {rh-rhacm} multicluster engine Observability component provides selective monitoring, dashboards, alerts, and metrics.
The recommended method for {sno} cluster installation is the image-based installation method in multicluster engine, which uses the `ClusterInstance` CR for cluster definition.
The recommended method for {sno} upgrade is the image-based upgrade method.
[NOTE]
====
The {rh-rhacm} multicluster engine Observability component brings you a centralized view of the health and status of all the managed clusters.
By default, every managed cluster is enabled to send metrics and alerts, created by their {cmo-first}, back to Observability.
For more information, see "Observability".
====
--
Limits and requirements::
* For more information about limits on the number of clusters managed by a single hub cluster, see "Telco management hub cluster use model".
* The number of managed clusters that can be effectively managed by the hub depends on various factors, including:
** Resource availability at each managed cluster
** Policy complexity and cluster size
** Network utilization
** Workload demands and distribution
* The hub and managed clusters must maintain sufficient bidirectional connectivity.
Engineering considerations::
* You can configure the cluster backup and restore Operator to include third-party resources.
* The use of {rh-rhacm} hub-side templating when defining configuration through policy is strongly recommended.
This feature reduces the number of policies needed to manage the fleet by enabling per-cluster or per-group content, for example regional or hardware-type values, to be templated in a policy and substituted on a cluster or group basis.
* Managed clusters typically have some number of configuration values that are specific to an individual cluster.
Manage these by using {rh-rhacm} policy hub-side templating with values pulled from `ConfigMap` CRs based on the cluster name, as in the following sketch.
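The following `ConfigurationPolicy` excerpt is an illustrative sketch of this pattern; the `site-data` `ConfigMap` name and the `vlan` key are assumptions:

[source,yaml]
----
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: example-site-config
spec:
  remediationAction: inform
  severity: medium
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: site-settings
        namespace: example-namespace
      data:
        # hub-side template, resolved per cluster when the policy is propagated
        vlan: '{{hub fromConfigMap "" "site-data" (printf "%s-vlan" .ManagedClusterName) hub}}'
----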


@@ -0,0 +1,17 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-resource-utilization_{context}"]
= Hub cluster resource utilization
Resource utilization was measured for hub clusters in the following scenario:
* Under reference load managing 3500 {sno} clusters.
* 3-node compact cluster for management hub running on dual socket bare-metal servers.
* Network impairment of 50 ms round-trip latency, 100 Mbps bandwidth limit, and 0.02% packet loss.
.Resource utilization values
[options="header"]
|====
|Metric |Peak measurement
|{product-title} CPU |106 cores (52 cores per node)
|{product-title} memory |504 GB (168 GB per node)
|====


@@ -0,0 +1,27 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-scaling-targets_{context}"]
= Hub cluster scaling target
The resource requirements for the hub cluster are directly dependent on the number of clusters being managed by the hub, the number of policies used for each managed cluster, and the set of features that are configured in {rh-rhacm-first}.
The hub cluster reference configuration can support up to 3500 managed {sno} clusters under the following conditions:
* 5 policies for each cluster with hub-side templating configured with a 10-minute evaluation interval.
* Only the following {rh-rhacm} add-ons are enabled:
** Policy controller
** Observability with the default configuration
* You deploy managed clusters by using {ztp} in batches of up to 500 clusters at a time.
The reference configuration is also validated for deployment and management of a mix of managed cluster topologies.
The specific limits depend on the mix of cluster topologies, enabled {rh-rhacm} features, and so on.
In a mixed topology scenario, the reference hub configuration is validated with a combination of 1200 {sno} clusters, 400 compact clusters (3 nodes combined control plane and compute nodes), and 230 standard clusters (3 control plane and 2 worker nodes).
[NOTE]
====
Specific dimensioning requirements are highly dependent on the cluster topology and workload.
For more information, see "Storage requirements".
Adjust cluster dimensions for the specific characteristics of your fleet of managed clusters.
====


@@ -0,0 +1,18 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-storage-considerations_{context}"]
= Storage considerations
Limits and requirements::
+
--
* Minimum {product-title} and {rh-rhacm-first} limits apply.
* High availability should be provided through a storage backend.
The hub cluster reference configuration provides storage through {rh-storage-first}.
* Object bucket storage is provided through {rh-storage}.
--
Engineering considerations::
* Use SSD or NVMe disks with low latency and high throughput for etcd storage.
* The storage solution for telco hub clusters is {rh-storage}.
** The Local Storage Operator provides the storage class that {rh-storage} uses to provide block, file, and object storage as needed by other components on the hub cluster.
* The Local Storage Operator `LocalVolume` configuration includes setting `forceWipeDevicesAndDestroyAllData: true` to support the reinstallation of hub cluster nodes where {rh-storage} has previously been used.


@@ -0,0 +1,12 @@
:_mod-docs-content-type: CONCEPT
[id="telco-hub-storage-requirements_{context}"]
= Storage requirements
The total amount of storage required by the management hub cluster is dependent on the storage requirements for each of the applications deployed on the cluster.
The main components that require storage through highly available `PersistentVolume` resources are described in the following sections.
[NOTE]
====
The storage required for the underlying {product-title} installation is separate from these requirements.
====


@@ -0,0 +1,5 @@
:_mod-docs-content-type: CONCEPT
[id="telco-hub-telco-management-cluster-use-model_{context}"]
= Telco management hub cluster use model
The hub cluster provides managed cluster installation, configuration, observability, and ongoing lifecycle management for telco application and workload clusters.


@@ -0,0 +1,34 @@
:_mod-docs-content-type: REFERENCE
[id="telco-hub-topology-aware-lifecycle-manager-talm_{context}"]
= {cgu-operator-full}
New in this release::
* No reference design updates in this release.
Description::
+
--
{cgu-operator} is an Operator that runs only on the hub cluster for managing how changes like cluster upgrades, Operator upgrades, and cluster configuration are rolled out to the network. {cgu-operator} supports the following features:
* Progressive rollout of policy updates to fleets of clusters in user configurable batches.
* Per-cluster actions add `ztp-done` labels or other user-configurable labels following configuration changes to managed clusters.
* {cgu-operator} supports optional pre-caching of {product-title}, {olm} Operator, and additional images to {sno} clusters before initiating an upgrade. The pre-caching feature is not applicable when using the recommended image-based upgrade method for upgrading {sno} clusters.
** Specifying optional pre-caching configurations with `PreCachingConfig` CRs.
** Configurable image filtering to exclude unused content.
** Storage validation before and after pre-caching, using defined space requirement parameters.
--
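A minimal `PreCachingConfig` sketch with illustrative values:

[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: PreCachingConfig
metadata:
  name: example-precachingconfig
  namespace: default
spec:
  spaceRequired: 45 GiB # storage validated on the cluster before pre-caching starts
  excludePrecachePatterns: # image filtering to exclude unused content
  - aws
  - vsphere
  additionalImages:
  - quay.io/example-org/example-workload@sha256:<digest> # illustrative additional image
----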
Limits and requirements::
* {cgu-operator} supports concurrent cluster upgrades in batches of 500.
* Pre-caching is limited to {sno} cluster topology.
Engineering considerations::
* The `PreCachingConfig` custom resource (CR) is optional. You do not need to create it if you want to pre-cache platform-related images only, such as {product-title} and {olm}.
* {cgu-operator} supports the use of hub-side templating with {rh-rhacm} policies.


@@ -0,0 +1,531 @@
:_mod-docs-content-type: ASSEMBLY
:telco-hub:
[id="telco-hub-ref-design-specs"]
= Telco hub reference design specifications
include::_attributes/common-attributes.adoc[]
toc::[]
The telco hub reference design specifications (RDS) describe the configuration for a hub cluster that deploys and operates fleets of {product-title} clusters in a telco environment.
:FeatureName: The telco hub RDS
include::snippets/technology-preview.adoc[]
include::modules/telco-hub-architecture-overview.adoc[leveloffset=+1]
include::modules/telco-hub-telco-management-cluster-use-model.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For more information about core clusters or far edge clusters that host RAN distributed unit (DU) workloads, see the following:
** xref:../scalability_and_performance/telco-core-rds.adoc#telco-core-ref-design-specs[Telco core RDS]
** xref:../scalability_and_performance/telco-ran-du-rds.adoc#telco-ran-du-ref-design-specs[Telco RAN DU RDS]
* For more information about lifecycle management for the fleet of managed clusters, see the following:
** xref:../edge_computing/image_based_upgrade/cnf-understanding-image-based-upgrade.adoc#cnf-understanding-image-based-upgrade[Image-based upgrade for {sno} clusters]
** xref:../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[Updating managed clusters with the {cgu-operator-full}]
** xref:../edge_computing/day_2_core_cnf_clusters/telco-day-2-welcome.adoc#telco-day-2-welcome[Upgrading a telco core CNF cluster]
* For more information about declarative cluster provisioning with {ztp}, see the following:
** xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-deploying-far-edge-sites[Installing managed clusters with {rh-rhacm} and SiteConfig resources]
* For more information about observability metrics and alerts, see:
** link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/about/index#multicluster-architecture[Multicluster architecture]
** link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/about/index#observability-arch[Observability]
include::modules/telco-hub-engineering-considerations.adoc[leveloffset=+1]
include::modules/telco-hub-scaling-targets.adoc[leveloffset=+2]
include::modules/telco-hub-resource-utilization.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/governance/index#template-comparison-table[Comparison of hub cluster and managed cluster templates]
include::modules/telco-hub-cluster-topology.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../welcome/learn_more_about_openshift.adoc#architecture[{product-title} architecture]
* xref:../post_installation_configuration/node-tasks.adoc#post-install-node-tasks[Postinstallation node tasks]
include::modules/telco-hub-networking.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../disconnected/installing.adoc#installing-disconnected-environments[Installing a cluster in a disconnected environment]
* xref:../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks]
* xref:../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-configuring-the-cluster-for-a-disconnected-environment_ztp-preparing-the-hub-cluster[Configuring the hub cluster to use a disconnected mirror registry]
* xref:../networking/cidr-range-definitions.adoc#cidr-range-definitions[CIDR range definitions]
* xref:../installing/overview/index.adoc#ocp-installation-overview[Installing {product-title}]
* xref:../networking/understanding-networking.adoc#understanding-networking[Networking in {product-title}]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/networking/index[Networking in {rh-rhacm}]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#mce-network-configuration[Network configuration in {rh-rhacm}]
include::modules/telco-hub-memory-and-cpu-requirements.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../scalability_and_performance/index.adoc#scalability-and-performance-overview[Scaling your {product-title} cluster and tuning performance in production environments]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/install/installing#sizing-your-cluster[Sizing your cluster]
include::modules/telco-hub-storage-requirements.adoc[leveloffset=+2]
include::modules/telco-hub-assisted-service.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/clusters/cluster_mce_overview#enable-cim-disconnected[Enabling central infrastructure management in disconnected environments]
include::modules/telco-hub-acm-observability.adoc[leveloffset=+3]
include::modules/telco-hub-storage-considerations.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../storage/understanding-persistent-storage.adoc#persistent-storage-overview_understanding-persistent-storage[Persistent storage overview]
* link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/index[{rh-storage} architecture]
* xref:../storage/persistent_storage/persistent_storage_local/persistent-storage-local.adoc#persistent-storage-using-local-volume[Persistent storage using local volumes]
* xref:../scalability_and_performance/recommended-performance-scale-practices/recommended-etcd-practices.adoc#recommended-etcd-practices[Recommended etcd practices]
include::modules/telco-hub-git-repository.adoc[leveloffset=+2]
include::modules/telco-hub-hub-cluster-openshift-deployment.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../installing/overview/index.adoc#installation-overview_ocp-installation-overview[{product-title} installation overview]
* xref:../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-with-agent-based-installer[Installing a cluster with customizations]
* xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#preparing-to-install-with-agent-based-installer[Preparing to install with the Agent-based Installer]
include::modules/telco-hub-hub-cluster-day-2-operators.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For more information about telco hub cluster update requirements, see:
** xref:../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-gitops-ztp-max-spoke-clusters_ztp-preparing-the-hub-cluster[Recommended hub cluster specifications and managed cluster limits for {ztp}].
** link:https://access.redhat.com/articles/7073065[Red Hat Advanced Cluster Management for Kubernetes 2.11 Support Matrix]
** link:https://access.redhat.com/support/policy/updates/openshift_operators[OpenShift Operator Life Cycles]
* For more information about updating the hub cluster, see:
** xref:../updating/understanding_updates/intro-to-updates.adoc#understanding-openshift-updates[Introduction to OpenShift updates]
** link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html-single/install/index#upgrading-hub[Upgrading your hub cluster]
** xref:../edge_computing/ztp-updating-gitops.adoc#ztp-updating-gitops[Updating {ztp}]
include::modules/telco-hub-observability.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* For more information about observability, see:
** link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/observability/index#exporting-metrics-to-external-endpoints[Exporting metrics to external endpoints]
** link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/observability/index#enabling-observability-service[Enabling the Observability service]
* For more information about custom metrics, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/observability/index#adding-custom-metrics[Adding custom metrics]
* For more information about forwarding alerts to other external systems, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/observability/index#forward-alerts[Forwarding alerts]
* For more information about CPU and memory requirements, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/observability/index#observability-pod-capacity-requests[Observability pod capacity requests]
* For more information about custom dashboards, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/observability/index#using-grafana-dashboards[Using Grafana dashboards]
include::modules/telco-hub-managed-clusters-lifecycle-management.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../edge_computing/ztp-deploying-far-edge-clusters-at-scale.adoc#ztp-deploying-far-edge-clusters-at-scale[Challenges of the network far edge]
include::modules/telco-hub-managed-cluster-deployment.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/siteconfig-intro#siteconfig-intro[SiteConfig]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/apis/apis#rhacm-docs_apis_clusterinstance_jsonclusterinstance[ClusterInstance]
* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-creating-the-site-secrets_ztp-deploying-far-edge-sites[Creating the managed bare-metal host secrets]
include::modules/telco-hub-managed-cluster-updates-and-upgrades.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/governance/governance#configuration-policy-yaml[Configuration policy YAML structure]
* xref:../edge_computing/cnf-talm-for-cluster-upgrades.adoc#talo-about-cgu-crs_cnf-topology-aware-lifecycle-manager[About the ClusterGroupUpgrade CR]
* xref:../edge_computing/image_based_upgrade/cnf-understanding-image-based-upgrade.adoc#cnf-understanding-image-based-upgrade[Understanding the image-based upgrade for {sno} clusters]
* xref:../edge_computing/image_based_upgrade/ztp-image-based-upgrade.adoc#ztp-image-based-upgrade[Performing an image-based upgrade for {sno} clusters using {ztp}]
include::modules/telco-hub-hub-disaster-recovery.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.13/html-single/business_continuity/index[Business continuity]
include::modules/telco-hub-hub-components.adoc[leveloffset=+1]
include::modules/telco-hub-red-hat-advanced-cluster-management-rhacm.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html-single/clusters/index#cluster_mce_overview[Multicluster engine]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html-single/governance/index[Governance]
* xref:../edge_computing/cnf-talm-for-cluster-upgrades.adoc#cnf-talm-for-cluster-updates[{cgu-operator-full}]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/observability/index[MultiClusterHub Observability]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html-single/business_continuity/index#business-cont-overview[Business continuity]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/install/installing#performance-and-scalability[Performance and scalability]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/clusters/index#mce-network-configuration[Network configuration]
include::modules/telco-hub-topology-aware-lifecycle-manager-talm.adoc[leveloffset=+2]
include::modules/telco-hub-gitops-operator-and-ztp-plugins.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/siteconfig-intro[ClusterInstance CR]
* xref:../edge_computing/policygentemplate_for_ztp/ztp-configuring-managed-clusters-policies.adoc#ztp-configuring-managed-clusters-policies[PolicyGenTemplate CRs]
* xref:../edge_computing/ztp-preparing-the-hub-cluster.adoc#ztp-preparing-the-ztp-git-repository-ver-ind_ztp-preparing-the-hub-cluster[{ztp} version independence]
include::modules/telco-hub-local-storage-operator.adoc[leveloffset=+2]
include::modules/telco-hub-openshift-data-foundation.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html-single/4.13_release_notes/index#support_openshift_dual_stack_with_odf_using_ipv4[Support OpenShift dual stack with {rh-storage} using IPv4]
* link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html-single/planning_your_deployment/index#infrastructure-requirements_rhodf[Infrastructure requirements]
* link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html-single/planning_your_deployment/index#network-requirements_rhodf[Network requirements]
* link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/planning_your_deployment/index[Storage cluster deployment approaches]
include::modules/telco-hub-logging.adoc[leveloffset=+2]
include::modules/telco-hub-oadp-operator.adoc[leveloffset=+2]
[id="telco-yaml-reference_{context}"]
== Hub cluster reference configuration CRs
The following is the complete reference of all the custom resources (CRs) and configuration files for the telco management hub reference configuration in {product-title} 4.19.
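
The CRs are organized into `required` and `optional` directories in the link:https://github.com/openshift-kni/telco-reference[telco-reference] repository.
As an illustration only, the following Kustomize sketch shows one way to aggregate a subset of the required {rh-rhacm} CRs after copying the `reference-crs` directory into your own Git repository.
The directory layout in the example is an assumption, not a prescribed structure.

[source,yaml]
----
# kustomization.yaml - illustrative sketch only.
# Assumes reference-crs/required/acm has been copied from the
# telco-reference repository into the same Git repository.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- reference-crs/required/acm/acmNS.yaml
- reference-crs/required/acm/acmOperGroup.yaml
- reference-crs/required/acm/acmSubscription.yaml
- reference-crs/required/acm/acmMCH.yaml
----

You can apply such an overlay with `oc apply -k <directory>`; the `kustomization.yaml` file shipped in the repository remains the authoritative starting point.
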
[id="telco-hub-rhacm-ref-crs_{context}"]
=== {rh-rhacm} reference YAML
[id="telco-hub-acmAgentServiceConfig-yaml_{context}"]
.acmAgentServiceConfig.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmAgentServiceConfig.yaml[]
----
[id="telco-hub-acmMCH-yaml_{context}"]
.acmMCH.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmMCH.yaml[]
----
[id="telco-hub-acmMirrorRegistryCM-yaml_{context}"]
.acmMirrorRegistryCM.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmMirrorRegistryCM.yaml[]
----
[id="telco-hub-acmNS-yaml_{context}"]
.acmNS.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmNS.yaml[]
----
[id="telco-hub-acmOperGroup-yaml_{context}"]
.acmOperGroup.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmOperGroup.yaml[]
----
[id="telco-hub-acmPerfSearch-yaml_{context}"]
.acmPerfSearch.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmPerfSearch.yaml[]
----
[id="telco-hub-acmProvisioning-yaml_{context}"]
.acmProvisioning.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmProvisioning.yaml[]
----
[id="telco-hub-acmSubscription-yaml_{context}"]
.acmSubscription.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/acmSubscription.yaml[]
----
[id="telco-hub-observabilityMCO-yaml_{context}"]
.observabilityMCO.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/observabilityMCO.yaml[]
----
[id="telco-hub-observabilityNS-yaml_{context}"]
.observabilityNS.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/observabilityNS.yaml[]
----
[id="telco-hub-observabilityOBC-yaml_{context}"]
.observabilityOBC.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/observabilityOBC.yaml[]
----
[id="telco-hub-observabilitySecret-yaml_{context}"]
.observabilitySecret.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/observabilitySecret.yaml[]
----
[id="telco-hub-thanosSecret-yaml_{context}"]
.thanosSecret.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/acm/thanosSecret.yaml[]
----
[id="telco-hub-talmSubscription-yaml_{context}"]
.talmSubscription.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/talm/talmSubscription.yaml[]
----
[id="telco-hub-storage-ref-crs_{context}"]
=== Storage reference YAML
[id="telco-hub-lsoLocalVolume-yaml"]
.lsoLocalVolume.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/lso/lsoLocalVolume.yaml[]
----
[id="telco-hub-lsoNS-yaml_{context}"]
.lsoNS.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/lso/lsoNS.yaml[]
----
[id="telco-hub-lsoOperatorgroup-yaml_{context}"]
.lsoOperatorgroup.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/lso/lsoOperatorGroup.yaml[]
----
[id="telco-hub-lsoSubscription-yaml_{context}"]
.lsoSubscription.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/lso/lsoSubscription.yaml[]
----
[id="telco-hub-odfNS-yaml_{context}"]
.odfNS.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/odf-internal/odfNS.yaml[]
----
[id="telco-hub-odfOperatorGroup-yaml_{context}"]
.odfOperatorGroup.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/odf-internal/odfOperatorGroup.yaml[]
----
[id="telco-hub-odfSubscription-yaml_{context}"]
.odfSubscription.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/odf-internal/odfSubscription.yaml[]
----
[id="telco-hub-storageCluster-yaml_{context}"]
.storageCluster.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/odf-internal/storageCluster.yaml[]
----
[id="telco-hub-gitopsztp-ref-crs_{context}"]
=== GitOps Operator and {ztp} reference YAML
[id="telco-hub-argocd-ssh-known-hosts-cm-yaml"]
.argocd-ssh-known-hosts-cm.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/argocd-ssh-known-hosts-cm.yaml[]
----
[id="telco-hub-gitopsNS-yaml_{context}"]
.gitopsNS.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/gitopsNS.yaml[]
----
[id="telco-hub-gitopsOperatorGroup-yaml_{context}"]
.gitopsOperatorGroup.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/gitopsOperatorGroup.yaml[]
----
[id="telco-hub-gitopsSubscription-yaml_{context}"]
.gitopsSubscription.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/gitopsSubscription.yaml[]
----
[id="telco-hub-ztp-repo-yaml_{context}"]
.ztp-repo.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-repo.yaml[]
----
[id="telco-hub-app-project-yaml_{context}"]
.app-project.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/app-project.yaml[]
----
[id="telco-hub-argocd-openshift-gitops-patch-yaml_{context}"]
.argocd-openshift-gitops-patch.json
[source,json]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/argocd-openshift-gitops-patch.json[]
----
[id="telco-hub-clusters-app-yaml_{context}"]
.clusters-app.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/clusters-app.yaml[]
----
[id="telco-hub-gitops-cluster-rolebinding-yaml_{context}"]
.gitops-cluster-rolebinding.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/gitops-cluster-rolebinding.yaml[]
----
[id="telco-hub-gitops-policy-rolebinding-yaml_{context}"]
.gitops-policy-rolebinding.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/gitops-policy-rolebinding.yaml[]
----
[id="telco-hub-kustomization-yaml_{context}"]
.kustomization.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/kustomization.yaml[]
----
[id="telco-hub-policies-app-project-yaml_{context}"]
.policies-app-project.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/policies-app-project.yaml[]
----
[id="telco-hub-policies-app-yaml_{context}"]
.policies-app.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/required/gitops/ztp-installation/policies-app.yaml[]
----
[id="telco-hub-logging-ref-crs_{context}"]
=== Logging reference YAML
[id="telco-hub-clusterLogNS-yaml"]
.clusterLogNS.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/logging/clusterLogNS.yaml[]
----
[id="telco-hub-clusterLogOperGroup-yaml_{context}"]
.clusterLogOperGroup.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/logging/clusterLogOperGroup.yaml[]
----
[id="telco-hub-clusterLogSubscription-yaml_{context}"]
.clusterLogSubscription.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/configuration/reference-crs/optional/logging/clusterLogSubscription.yaml[]
----
[id="telco-hub-ztp-ref-crs_{context}"]
=== Installation reference YAML
[id="telco-hub-agent-config-yaml"]
.agent-config.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/install/openshift/agent-config.yaml[]
----
[id="telco-hub-install-config-yaml_{context}"]
.install-config.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/install/openshift/install-config.yaml[]
----
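
The `agent-config.yaml` and `install-config.yaml` files are inputs to the Agent-based Installer.
As a sketch only, assuming both files are placed in an assets directory named `hub-install` (an illustrative name), you can generate a bootable installation image as follows:

[source,terminal]
----
$ openshift-install agent create image --dir hub-install
----

The installer consumes the configuration files in the assets directory, so keep the originals under version control.
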
[id="telco-hub-mirroring-ref-crs_{context}"]
=== Image mirroring reference YAML
[id="telco-hub-imageset-config-yaml"]
.imageset-config.yaml
[source,yaml]
----
include::https://raw.githubusercontent.com/openshift-kni/telco-reference/release-4.19/telco-hub/install/mirror-registry/imageset-config.yaml[]
----
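
As an illustrative example only, you can pass the ImageSet configuration to the oc-mirror plugin to populate a disconnected mirror registry; the registry host name here is a placeholder:

[source,terminal]
----
$ oc mirror --config=imageset-config.yaml docker://registry.example.com:5000
----
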
:!telco-hub: