
TELCODOCS-264 - New and changed files for ZTP

This commit is contained in:
Stephen Smith
2021-09-19 08:33:21 -04:00
committed by openshift-cherrypick-robot
parent b7a115d343
commit 4399db0724
22 changed files with 695 additions and 93 deletions


@@ -2060,7 +2060,7 @@ Topics:
Distros: openshift-webscale
- Name: Deploying distributed units at scale in a disconnected environment
File: ztp-deploying-disconnected
Distros: openshift-webscale
Distros: openshift-origin,openshift-enterprise
---
Name: Backup and restore
Dir: backup_and_restore

Binary image file added (not shown). Size: 90 KiB


@@ -5,7 +5,7 @@
[id="about-ztp-and-distributed-units-on-single-node-clusters_{context}"]
= About ZTP and distributed units on single nodes
You can install a distributed unit (DU) on a single node at scale with Red Hat Advanced Cluster Management (ACM) using the assisted installer (AI) and the policy generator with core-reduction technology enabled. The DU installation is done using zero touch provisioning (ZTP) in a disconnected environment.
You can install a distributed unit (DU) on a single node at scale with {rh-rhacm-first} (ACM) using the assisted installer (AI) and the policy generator with core-reduction technology enabled. The DU installation is done using zero touch provisioning (ZTP) in a disconnected environment.
ACM manages clusters in a hub and spoke architecture, where a single hub cluster manages many spoke clusters. ACM applies radio access network (RAN) policies from predefined custom resources (CRs). Hub clusters running ACM provision and deploy the spoke clusters using ZTP and AI. DU installation follows the AI installation of {product-title} on a single node.
@@ -20,9 +20,10 @@ With ZTP and AI, you can provision {product-title} single nodes to run your DUs
* You install the DU bare metal host machines on site, and make the hosts ready for provisioning. To be ready for provisioning, the following is required for each bare metal host:
** Network connectivity - including DNS and DHCP for your network. Hosts should be reachable through the hub and managed spoke clusters.
** Network connectivity - including DNS for your network. Hosts should be reachable through the hub and managed spoke clusters. Ensure there is layer 3 connectivity between the hub and the host where you want to install your spoke cluster.
** BMC details for each host - BMC details are used to connect to the host and run the installation media. Create spoke cluster definition CRs. These define the relevant elements for the managed clusters. Required
** Baseboard Management Controller (BMC) details for each host - ZTP uses the BMC URL and credentials to connect to the host and run the installation media.
Create spoke cluster definition CRs. These define the relevant elements for the managed clusters. Required
CRs are as follows:
+
[cols="1,1"]


@@ -16,6 +16,7 @@ You use {rh-rhacm-first} on a hub cluster in the disconnected environment to man
[NOTE]
====
If you want to deploy Operators to the spoke clusters, you must also add them to this registry.
See link:https://docs.openshift.com/container-platform/4.8/operators/admin/olm-restricted-networks.html#olm-mirror-catalog_olm-restricted-networks[Mirroring an Operator catalog] for more information.
====
.Procedure


@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-checking-the-installation-status_{context}"]
= Checking the installation status
The ArgoCD pipeline detects the `SiteConfig` and `PolicyGenTemplate` custom resources (CRs) in the Git repository and syncs them to the hub cluster. In the process, it generates installation and policy CRs and applies them to the hub cluster. You can monitor the progress of this synchronization in the ArgoCD dashboard.
.Procedure
. Monitor the progress of cluster installation using the following commands:
+
[source,terminal]
----
$ export CLUSTER=<clusterName>
----
+
[source,terminal]
----
$ oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq
----
+
[source,terminal]
----
$ curl -sk $(oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'
----
. Use the {rh-rhacm-first} (ACM) dashboard to monitor the progress of policy reconciliation.
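+
If you prefer the CLI, a minimal sketch for checking policy compliance (assuming the `CLUSTER` variable exported above and that ACM has copied the policies into the cluster namespace) is:
+
[source,terminal]
----
$ oc get policy -n $CLUSTER
----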


@@ -3,9 +3,10 @@
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-creating-siteconfig-custom-resources_{context}"]
= Creating ZTP custom resources for managed clusters
= Creating custom resources to install a single managed cluster
Create the zero touch provisioning (ZTP) custom resources that contain the site-specific data required to install and configure a cluster for RAN applications.
This procedure tells you how to manually create and deploy a single managed cluster. If you are creating multiple clusters, perhaps hundreds, use the `SiteConfig` method described in
“Creating ZTP custom resources for multiple managed clusters”.
.Prerequisites
@@ -52,7 +53,7 @@ provisioner_cluster_registry }}/ocp4:{{ mirror_version_spoke_release }}
* You mirrored the ISO and `rootfs` used to generate the spoke cluster ISO to an HTTP server and configured the settings to pull images from there.
+
The images must match the version of the ClusterImageSet. To deploy a 4.8.0 version, the `rootfs` and ISO need to be 4.8.0 as well.
The images must match the version of the `ClusterImageSet`. To deploy version 4.9.0, the `rootfs` and ISO must also be version 4.9.0.
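+
If you want to confirm which version a release image provides before mirroring, one option (a sketch using the release image from the example that follows) is:
+
[source,terminal]
----
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64
----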
.Procedure
@@ -64,9 +65,9 @@ The images must match the version of the ClusterImageSet. To deploy a 4.8.0 vers
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
name: openshift-4.8.0-rc.0 <1>
name: openshift-4.9.0-rc.0 <1>
spec:
releaseImage: quay.io/openshift-release-dev/ocp-release:4.8.0-x86_64 <2>
releaseImage: quay.io/openshift-release-dev/ocp-release:4.9.0-x86_64 <2>
----
<1> `name` is the descriptive version that you want to deploy.
<2> `releaseImage` needs to point to the specific release image to deploy.
@@ -204,13 +205,13 @@ spec:
applicationManager:
enabled: true
certPolicyController:
enabled: true <1>
enabled: false
iamPolicyController:
enabled: true
enabled: false
policyController:
enabled: true
searchCollector:
enabled: false
enabled: false <1>
----
+
<1> Set `enabled:` to `true` to enable a `KlusterletAddonConfig` component or to `false` to disable it. Keep `searchCollector` disabled.


@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-creating-the-policygentemplates_{context}"]
= Creating the PolicyGenTemplates
Use the following procedure to create, in your Git repository, the `PolicyGenTemplates` that you need to generate policies for the hub cluster.
.Procedure
. Create the `PolicyGenTemplates` and save them to the zero touch provisioning (ZTP) Git repository accessible from the hub cluster and defined as a source repository of the ArgoCD application, as shown in the sketch after this procedure.
. ArgoCD detects that the application is out of sync. Upon sync, either automatic or manual, ArgoCD applies the new `PolicyGenTemplate` to the hub cluster and launches the associated resource hooks. These hooks generate the policy-wrapped configuration CRs that apply to the spoke cluster and perform the following actions:
.. Create the {rh-rhacm-first} (ACM) policies according to the basic distributed unit (DU) profile and required customizations.
.. Apply the generated policies to the hub cluster.
The ZTP process creates policies that direct ACM to apply the desired configuration to the cluster nodes.
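For illustration, a sketch of adding a new `PolicyGenTemplate` to the repository. The directory, file name, and branch are assumptions; the branch must match the `targetRevision` of the ArgoCD *policies* application:
[source,terminal]
----
$ git add policygentemplates/group-du-sno-ranGen.yaml
$ git commit -m "Add group-du-sno PolicyGenTemplate"
$ git push origin master
----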


@@ -0,0 +1,45 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-creating-the-site-secrets_{context}"]
= Creating the site secrets
Add the required secrets for the site to the hub cluster. These resources must be in a namespace with a name that matches the cluster name.
.Procedure
. Create a secret for authenticating to the site Baseboard Management Controller (BMC). Ensure the secret name matches the name used in the `SiteConfig`.
In this example, the secret name is `test-sno-bmh-secret`:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: test-sno-bmh-secret
  namespace: test-sno
data:
  password: dGVtcA==
  username: cm9vdA==
type: Opaque
----
. Create the pull secret for the site. The pull secret must contain all credentials necessary for installing {product-title} and all add-on Operators (see the encoding sketch after this procedure). In this example, the secret name is `assisted-deployment-pull-secret`:
+
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: assisted-deployment-pull-secret
  namespace: test-sno
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <Your pull secret base64 encoded>
----
[NOTE]
====
The secrets are referenced from the `SiteConfig` custom resource (CR) by name. The namespace must match the `SiteConfig` namespace.
====
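A minimal sketch for producing the base64-encoded value used in `.dockerconfigjson`, assuming your pull secret is saved locally as `pull-secret.json` (a hypothetical file name):
[source,terminal]
----
$ base64 -w0 pull-secret.json
----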


@@ -0,0 +1,100 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-creating-the-siteconfig-custom-resources_{context}"]
= Creating the SiteConfig custom resources
ArgoCD acts as the engine for the GitOps method of site deployment. After completing a site plan that contains the required custom resources for the site installation, a policy generator creates the manifests and applies them to the hub cluster.
.Procedure
. Create one or more `SiteConfig` custom resources, `site-config.yaml` files, that contain the site-plan data for the clusters. For example:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "test-sno"
  namespace: "test-sno"
spec:
  baseDomain: "clus2.t5g.lab.eng.bos.redhat.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.8"
  sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDB3dwhI5X0ZxGBb9VK7wclcPHLc8n7WAyKjTNInFjYNP9J+Zoc/ii+l3YbGUTuqilDwZN5rVIwBux2nUyVXDfaM5kPd9kACmxWtfEWTyVRootbrNWwRfKuC2h6cOd1IlcRBM1q6IzJ4d7+JVoltAxsabqLoCbK3svxaZoKAaK7jdGG030yvJzZaNM4PiTy39VQXXkCiMDmicxEBwZx1UsA8yWQsiOQ5brod9KQRXWAAST779gbvtgXR2L+MnVNROEHf1nEjZJwjwaHxoDQYHYKERxKRHlWFtmy5dNT6BbvOpJ2e5osDFPMEd41d2mUJTfxXiC1nvyjk9Irf8YJYnqJgBIxi0IxEllUKH7mTdKykHiPrDH5D2pRlp+Donl4n+sw6qoDc/3571O93+RQ6kUSAgAsvWiXrEfB/7kGgAa/BD5FeipkFrbSEpKPVu+gue1AQeJcz9BuLqdyPUQj2VUySkSg0FuGbG7fxkKeF1h3Sga7nuDOzRxck4I/8Z7FxMF/e8DmaBpgHAUIfxXnRqAImY9TyAZUEMT5ZPSvBRZNNmLbfex1n3NLcov/GEpQOqEYcjG5y57gJ60/av4oqjcVmgtaSOOAS0kZ3y9YDhjsaOcpmRYYijJn8URAH7NrW8EZsvAoF6GUt6xHq5T258c6xSYUm5L0iKvBqrOW9EjbLw== root@cnfdc2.clus2.t5g.lab.eng.bos.redhat.com"
  clusters:
  - clusterName: "test-sno"
    clusterType: "sno"
    clusterProfile: "du"
    clusterLabels:
      group-du-sno: ""
      common: true
      sites: "test-sno"
    clusterNetwork:
    - cidr: 1001:db9::/48
      hostPrefix: 64
    machineNetwork:
    - cidr: 2620:52:0:10e7::/64
    serviceNetwork:
    - 1001:db7::/112
    additionalNTPSources:
    - 2620:52:0:1310::1f6
    nodes:
    - hostName: "test-sno.clus2.t5g.lab.eng.bos.redhat.com"
      bmcAddress: "idrac-virtualmedia+https://[2620:52::10e7:f602:70ff:fee4:f4e2]/redfish/v1/Systems/System.Embedded.1"
      bmcCredentialsName:
        name: "test-sno-bmh-secret"
      bootMACAddress: "0C:42:A1:8A:74:EC"
      bootMode: "UEFI"
      rootDeviceHints:
        hctl: '0:1:0'
      cpuset: "0-1,52-53"
      nodeNetwork:
        interfaces:
        - name: eno1
          macAddress: "0C:42:A1:8A:74:EC"
        config:
          interfaces:
          - name: eno1
            type: ethernet
            state: up
            macAddress: "0C:42:A1:8A:74:EC"
            ipv4:
              enabled: false
            ipv6:
              enabled: true
              address:
              - ip: 2620:52::10e7:e42:a1ff:fe8a:900
                prefix-length: 64
          dns-resolver:
            config:
              search:
              - clus2.t5g.lab.eng.bos.redhat.com
              server:
              - 2620:52:0:1310::1f6
          routes:
            config:
            - destination: ::/0
              next-hop-interface: eno1
              next-hop-address: 2620:52:0:10e7::fc
              table-id: 254
----
. Save the files and push them to the zero touch provisioning (ZTP) Git repository accessible from the hub cluster and defined as a source repository of the ArgoCD application.
ArgoCD detects that the application is out of sync. Upon sync, either automatic or manual, ArgoCD synchronizes the `SiteConfig` CR to the hub cluster and launches the associated resource hooks. These hooks convert the site definitions into installation custom resources and apply them to the hub cluster:
* `Namespace` - Unique per site
* `AgentClusterInstall`
* `BareMetalHost`
* `ClusterDeployment`
* `InfraEnv`
* `NMStateConfig`
* `ExtraManifestsConfigMap` - Extra manifests. The additional manifests include workload partitioning, chronyd, mount point hiding, SCTP enablement, and more.
* `ManagedCluster`
* `KlusterletAddonConfig`
{rh-rhacm-first} (ACM) deploys the spoke cluster.
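As a quick sanity check (a sketch, not part of the upstream procedure), you can confirm that the generated installation CRs exist in the site namespace from the example above:
[source,terminal]
----
$ oc get bmh,infraenv,agentclusterinstall,clusterdeployment,nmstateconfig -n test-sno
----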


@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-creating-ztp-custom-resources-for-multiple-managed-clusters_{context}"]
= Creating ZTP custom resources for multiple managed clusters
If you are installing multiple managed clusters, zero touch provisioning (ZTP) uses ArgoCD and `SiteConfig` to manage the processes that create the custom resources (CR) and generate and apply the policies for multiple clusters, in batches of no more than 100, using the GitOps approach.
Installing and deploying the clusters is a two-stage process, as shown here:
image::183_OpenShift_ZTP_0921.png[GitOps approach for Installing and deploying the clusters]


@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-installing-the-gitops-ztp-pipeline_{context}"]
= Installing the GitOps ZTP pipeline
The procedures in this section tell you how to complete the following tasks:
* Prepare the Git repository you need to host site configuration data.
* Configure the hub cluster for generating the required installation and policy custom resources (CR).
* Deploy the managed clusters using zero touch provisioning (ZTP).


@@ -0,0 +1,101 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-preparing-the-hub-cluster-for-ztp_{context}"]
= Preparing the hub cluster for ZTP
You can configure your hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CR) for each site based on a zero touch provisioning (ZTP) GitOps flow.
.Procedure
. Install the Red Hat OpenShift GitOps Operator on your hub cluster.
. Extract the administrator password for ArgoCD:
+
[source,terminal]
----
$ oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d
----
. Prepare the ArgoCD pipeline configuration:
.. Clone the Git repository.
.. Modify the source values of the two ArgoCD applications, `deployment/clusters-app.yaml` and `deployment/policies-app.yaml` with appropriate URL, `targetRevision` branch, and path values. The path values must match those used in your Git repository.
+
Modify `deployment/clusters-app.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: clusters-sub
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: clusters
  namespace: openshift-gitops
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: clusters-sub
  project: default
  source:
    path: ztp/gitops-subscriptions/argocd/resource-hook-example/siteconfig <1>
    repoURL: https://github.com/openshift-kni/cnf-features-deploy <2>
    targetRevision: master <3>
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
----
<1> `path` is the path in the Git repository that contains the `siteconfig` CRs for the clusters.
<2> `repoURL` is the URL of the Git repository that contains the `siteconfig` custom resources that define site configuration for installing clusters.
<3> `targetRevision` is the branch on the Git repository that contains the relevant site configuration data.
.. Modify `deployment/policies-app.yaml`:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: policies-sub
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: policies
  namespace: openshift-gitops
spec:
  destination:
    server: https://kubernetes.default.svc
    namespace: policies-sub
  project: default
  source:
    directory:
      recurse: true
    path: ztp/gitops-subscriptions/argocd/resource-hook-example/policygentemplates <1>
    repoURL: https://github.com/openshift-kni/cnf-features-deploy <2>
    targetRevision: master <3>
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
----
<1> `path` is the path in the Git repository that contains the `policygentemplates` CRs for the clusters.
<2> `repoURL` is the URL of the Git repository that contains the `policygentemplates` custom resources that specify configuration data for the site.
<3> `targetRevision` is the branch on the Git repository that contains the relevant configuration data.
. To apply the pipeline configuration to your hub cluster, enter this command:
+
[source,terminal]
----
$ oc apply -k ./deployment
----
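+
To confirm that the two pipeline applications were created (a sketch; the application names come from the files above):
+
[source,terminal]
----
$ oc get applications.argoproj.io -n openshift-gitops
----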


@@ -0,0 +1,23 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-preparing-the-ztp-git-repository_{context}"]
= Preparing the ZTP Git repository
Create a Git repository for hosting site configuration data. The zero touch provisioning (ZTP) pipeline requires read access to this repository.
.Procedure
. Create a directory structure with separate paths for the `SiteConfig` and `PolicyGenTemplate` custom resources (CRs), as shown in the sketch after this procedure.
. Add `pre-sync.yaml` and `post-sync.yaml` from `resource-hook-example/<policygentemplates>/` to the path for the `PolicyGenTemplate` CRs.
. Add `pre-sync.yaml` and `post-sync.yaml` from `resource-hook-example/<siteconfig>/` to the path for the `SiteConfig` CRs.
+
[NOTE]
====
If your hub cluster operates in a disconnected environment, you must update the `image` for all four pre and post sync hook CRs.
====
. Apply the `policygentemplates.ran.openshift.io` and `siteconfigs.ran.openshift.io` CR definitions.
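For illustration only, a sketch of the first three steps as shell commands run from the root of your ZTP Git repository. The `siteconfig` and `policygentemplates` directory names are assumptions that must match the `path` values of the ArgoCD applications, and `resource-hook-example` is assumed to be available locally from the `cnf-features-deploy` repository:
[source,terminal]
----
$ mkdir -p siteconfig policygentemplates
$ cp resource-hook-example/siteconfig/{pre-sync.yaml,post-sync.yaml} siteconfig/
$ cp resource-hook-example/policygentemplates/{pre-sync.yaml,post-sync.yaml} policygentemplates/
----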


@@ -0,0 +1,28 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-prerequisites-for-deploying-the-ztp-pipeline_{context}"]
= Prerequisites for deploying the ZTP pipeline
* {product-title} cluster version 4.8 or higher and the Red Hat OpenShift GitOps Operator are installed.
* {rh-rhacm-first} version 2.3 or above is installed.
* For disconnected environments, make sure your source data Git repository and `ztp-site-generator` container image are accessible from the hub cluster.
* If you want additional custom content, such as extra install manifests or custom resources (CR) for policies, add them to the `/usr/src/hook/ztp/source-crs/extra-manifest/` directory. Similarly, you can add additional configuration CRs, as referenced from a `PolicyGenTemplate`, to the `/usr/src/hook/ztp/source-crs/` directory.
** Create a `Containerfile` that adds your additional manifests to the Red Hat provided image, for example:
+
[source,yaml]
----
FROM <registry fqdn>/ztp-site-generator:latest <1>
COPY myInstallManifest.yaml /usr/src/hook/ztp/source-crs/extra-manifest/
COPY mySourceCR.yaml /usr/src/hook/ztp/source-crs/
----
+
<1> <registry fqdn> must point to a registry containing the `ztp-site-generator` container image provided by Red Hat.
** Build a new container image that includes these additional files:
+
[source,terminal]
----
$ podman build -f Containerfile.example .
----


@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-removing-the-argocd-pipeline_{context}"]
= Removing the ArgoCD pipeline
Use the following procedure if you want to remove the ArgoCD pipeline and all generated artifacts.
.Procedure
. Detach all clusters from ACM.
. Delete all `SiteConfig` and `PolicyGenTemplate` custom resources (CRs) from your Git repository.
. Delete the following namespaces (see the example command after this procedure):
+
* All policy namespaces:
+
[source,terminal]
----
$ oc get policy -A
----
+
* `clusters-sub`
* `policies-sub`
. Delete the ArgoCD pipeline configuration by processing the deployment directory with the Kustomize tool:
+
[source,terminal]
----
$ oc delete -k cnf-features-deploy/ztp/gitops-subscriptions/argocd/deployment
----
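A sketch of removing the pipeline namespaces named above; delete any additional policy namespaces reported by `oc get policy -A` in the same way:
[source,terminal]
----
$ oc delete namespace clusters-sub policies-sub
----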


@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-site-cleanup_{context}"]
= Site cleanup
To remove a site and the associated installation and policy custom resources (CRs), remove the `SiteConfig` and site-specific `PolicyGenTemplate` CRs from the Git repository. The pipeline hooks remove the generated CRs.
[NOTE]
====
Before removing a `SiteConfig` CR, you must detach the cluster from ACM.
====
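For illustration, a sketch of removing a site from the Git repository. The file names and paths are hypothetical examples:
[source,terminal]
----
$ git rm siteconfig/site-plan-test-sno.yaml policygentemplates/site-test-sno-ranGen.yaml
$ git commit -m "Remove site test-sno"
$ git push origin master
----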


@@ -5,70 +5,39 @@
[id="ztp-the-policygentemplate_{context}"]
= The PolicyGenTemplate
The `PolicyGenTemplate.yaml` file is a Custom Resource Definition (CRD) that tells PolicyGen where to locate the generated policies on the cluster and the items that need to be defined. The following example shows the `PolicyGenTemplate.yaml` file:
The `PolicyGenTemplate.yaml` file is a Custom Resource Definition (CRD) that tells PolicyGen where to categorize the generated policies and which items need to be overlaid.
The following example shows the `PolicyGenTemplate.yaml` file:
[source,yaml]
----
apiVersion: policyGenerator/v1
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
# The name will be used to generate the placementBinding and placementRule names as ex: policyGenTemp-placementBinding and policyGenTemp-placementRule
name: "policyGenTemp"
namespace: "policy-templates"
labels:
# Set common to true if the generated policies will be applied for all clusters.
common: false
# Set groupName value (ex:group-du) if the generated policies will be applied for a group of clusters.
groupName: "N/A"
siteName: "N/A"
# Set siteName value (ex:prod-cluster) if the generated policies will be applied for a specific cluster.
mcp: "N/A"
sourceFiles:
# (mandatory) The fileName values must be same as file name in the sourcePolicies dir without .yaml extension ex: SriovNetwork
- fileName: "N/A"
# (mandatory) The policyName will be used with common|{metadata.labels.groupName}|{metadata.labels.siteName}
# to set the generated policy name. ex: group1-ptp-policy or common-sriov-sub-policy
# When a policy is propagated to a managed cluster, the replicated policy is named namespaceName.policyName.
# When you create a policy, make sure that the length of the namespaceName.policyName must not exceed 63 characters
# due to the Kubernetes limit for object names.
# The namespace.names that policy generator use are: common-sub, groups-sub and sites-sub
policyName: "N/A"
# (optional) The name will be used to set the generated custom resource metadata.name
# if name is defined as N/A, "" or not set the name value exist in the sourcePolicies/{fileName} will be used.
name: "N/A"
# (optional) spec must contain the values that is needed to be set in the source policy following the exact same path ex:
# sriovnetwork.spec as follow
# spec:
# resourceName: du_fh
# vlan: 140
#
# If the spec is not defined, the defined spec in the sourcePolicies/{fileName} will be carried out to
# the generated custom resource.
# If Any of the spec items defined in the source file as variable start with $, it will be deleted
# from the generated spec items if it is not setted.
spec: "N/A"
# (optional) data must contain the values that is needed to be set in the source policy following the exact same path ex:
# configMap.data as follow
# data:
# rules1.properties: |
# name: cnf*
# labels:
# - node-role.kubernetes.io/worker-du
#
# If the data is not defined, the defined data in the sourcePolicies/{fileName} will be carried out to
# the generated custom resource.
# If Any of the data items defined in the source file as variable start with $, it will be deleted
# from the generated data items if it is not setted.
data: "N/A"
name: "group-du-sno"
namespace: "group-du-sno"
spec:
bindingRules:
group-du-sno: ""
mcp: "master"
sourceFiles:
- fileName: ConsoleOperatorDisable.yaml
policyName: "console-policy"
- fileName: ClusterLogging.yaml
policyName: "cluster-log-policy"
spec:
curation:
curator:
schedule: "30 3 * * *"
collection:
logs:
type: "fluentd"
fluentd: {}
----
The `group-du-ranGen.yaml` file defines a group of policies under a group named `group-du`. It defines a MachineConfigPool `worker-du` that is used as the node selector for any other policy defined in `sourceFiles`. An ACM policy is generated for every source file that exists in `sourceFiles`. And, a single placement binding and placement rule is generated to apply the cluster selection rule for group-du policies.
The `group-du-ranGen.yaml` file defines a group of policies under a group named `group-du`. This file defines a `MachineConfigPool` `worker-du` that is used as the node selector for any other policy defined in `sourceFiles`. An ACM policy is generated for every source file that exists in `sourceFiles`. A single placement binding and placement rule are also generated to apply the cluster selection rule for the `group-du` policies.
Using the source file `PtpConfigSlave.yaml` as an example, the PtpConfigSlave has a definition of a PtpConfig custom resource. The generated policy for the PtpConfigSlave example is named `group-du-ptp-config-policy`. The PtpConfig custom resource defined in the generated `group-du-ptp-config-policy` is named `du-ptp-slave`. The `spec` defined in `PtpConfigSlave.yaml` is placed under `du-ptp-slave` along with the other `spec` items defined under the source file.
Using the source file `PtpConfigSlave.yaml` as an example, the `PtpConfigSlave` has a definition of a `PtpConfig` custom resource (CR). The generated policy for the `PtpConfigSlave` example is named `group-du-ptp-config-policy`. The `PtpConfig` CR defined in the generated `group-du-ptp-config-policy` is named `du-ptp-slave`. The `spec` defined in `PtpConfigSlave.yaml` is placed under `du-ptp-slave` along with the other `spec` items defined under the source file.
The following example shows the `group-du-ptp-config-policy`:


@@ -0,0 +1,8 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-troubleshooting-gitops-ztp_{context}"]
= Troubleshooting GitOps ZTP
As noted, the ArgoCD pipeline synchronizes the `SiteConfig` and `PolicyGenTemplate` custom resources (CR) from the Git repository to the hub cluster. During this process, post-sync hooks create the installation and policy CRs that are also applied to the hub cluster. Use the following procedures to troubleshoot issues that might occur in this process.


@@ -0,0 +1,70 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-validating-the-generation-of-installation-crs_{context}"]
= Validating the generation of installation CRs
`SiteConfig` applies installation custom resources (CRs) to the hub cluster in a namespace with the name matching the site name. To check the status, enter the following command:
[source,terminal]
----
$ oc get AgentClusterInstall -n <clusterName>
----
If no object is returned, use the following procedure to troubleshoot the ArgoCD pipeline flow from `SiteConfig` to the installation CRs.
.Procedure
. Check the synchronization of the `SiteConfig` to the hub cluster using either of the following commands:
+
[source,terminal]
----
$ oc get siteconfig -A
----
+
or
+
[source,terminal]
----
$ oc get siteconfig -n clusters-sub
----
+
If the `SiteConfig` is missing, one of the following situations has occurred:
* The *clusters* application failed to synchronize the CR from the Git repository to the hub. Use the following command to verify this:
+
[source,terminal]
----
$ oc describe -n openshift-gitops application clusters
----
+
Check for `Status: Synced` and that the `Revision:` is the SHA of the commit you pushed to the subscribed repository.
+
* The pre-sync hook failed, possibly due to a failure to pull the container image. Check the ArgoCD dashboard for the status of the pre-sync job in the *clusters* application.
. Verify the post hook job ran:
+
[source,terminal]
----
$ oc describe job -n clusters-sub siteconfig-post
----
+
* If successful, the returned output indicates `succeeded: 1`.
* If the job fails, ArgoCD retries it. In some cases, the first pass will fail and the second pass will indicate that the job passed.
. Check for errors in the post hook job:
+
[source,terminal]
----
$ oc get pod -n clusters-sub
----
+
Note the name of the `siteconfig-post-xxxxx` pod:
+
[source,terminal]
----
$ oc logs -n clusters-sub siteconfig-post-xxxxx
----
+
If the logs indicate errors, correct the conditions and push the corrected `SiteConfig` or `PolicyGenTemplate` to the Git repository.


@@ -0,0 +1,111 @@
// Module included in the following assemblies:
//
// *scalability_and_performance/ztp-zero-touch-provisioning.adoc
[id="ztp-validating-the-generation-of-policy-crs_{context}"]
= Validating the generation of policy CRs
ArgoCD generates the policy custom resources (CRs) in the same namespace as the `PolicyGenTemplate` from which they were created. The same troubleshooting flow applies to all policy CRs generated from `PolicyGenTemplates` regardless of whether they are common, group, or site based.
To check the status of the policy CRs, enter the following commands:
[source,terminal]
----
$ export NS=<namespace>
----
[source,terminal]
----
$ oc get policy -n $NS
----
The returned output displays the expected set of policy-wrapped CRs. If no object is returned, use the following procedure to troubleshoot the ArgoCD pipeline flow from `PolicyGenTemplate` to the policy CRs.
.Procedure
. Check the synchronization of the `PolicyGenTemplate` to the hub cluster:
+
[source,terminal]
----
$ oc get policygentemplate -A
----
or
+
[source,terminal]
----
$ oc get policygentemplate -n $NS
----
+
If the `PolicyGenTemplate` is not synchronized, one of the following situations has occurred:
+
* The *policies* application failed to synchronize the CR from the Git repository to the hub. Use the following command to verify this:
+
[source,terminal]
----
$ oc describe -n openshift-gitops application policies
----
+
Check for `Status: Synced` and that the `Revision:` is the SHA of the commit you pushed to the subscribed repository.
+
* The pre-sync hook failed, possibly due to a failure to pull the container image. Check the ArgoCD dashboard for the status of the pre-sync job in the *policies* application.
. Ensure the policies were copied to the cluster namespace. When ACM recognizes that policies apply to a `ManagedCluster`, ACM applies the policy CR objects to the cluster namespace:
+
[source,terminal]
----
$ oc get policy -n <clusterName>
----
ACM copies all applicable common, group, and site policies here. The policy names are `<policyNamespace>.<policyName>`.
. Check the placement rule for any policies not copied to the cluster namespace. The `matchSelector` in the `PlacementRule` for those policies should match the labels on the `ManagedCluster`:
+
[source,terminal]
----
$ oc get placementrule -n $NS
----
. Make a note of the `PlacementRule` name for the missing common, group, or site policy:
+
[source,terminal]
----
$ oc get placementrule -n $NS <placementRuleName> -o yaml
----
+
* The `status.decisions` value should include your cluster name.
* The key and value of the `matchSelector` in the spec must match the labels on your managed cluster. Check the labels on the `ManagedCluster`:
+
[source,terminal]
----
$ oc get ManagedCluster $CLUSTER -o jsonpath='{.metadata.labels}' | jq
----
+
.Example
[source,yaml]
----
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: group-test1-policies-placementrules
  namespace: group-test1-policies
spec:
  clusterSelector:
    matchExpressions:
    - key: group-test1
      operator: In
      values:
      - ""
status:
  decisions:
  - clusterName: <myClusterName>
    clusterNamespace: <myClusterName>
----
. Ensure all policies are compliant:
+
[source,terminal]
----
$ oc get policy -n $CLUSTER
----
+
If the `Namespace`, `OperatorGroup`, and `Subscription` policies are compliant but the Operator configuration policies are not, it is likely that the Operators did not install.
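+
One way to investigate further (a sketch, run against the spoke cluster) is to check the status of the Operator subscriptions and installed CSVs:
+
[source,terminal]
----
$ oc get subscriptions.operators.coreos.com -A
$ oc get csv -A
----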


@@ -5,43 +5,44 @@
[id="ztp-ztp-custom-resources_{context}"]
= ZTP custom resources
Zero touch provisioning (ZTP) uses custom resource objects to extend the Kubernetes API or introduce your own API into a project or a cluster. These custom resources contain the site-specific data required to install and configure a cluster for RAN applications.
Zero touch provisioning (ZTP) uses custom resource (CR) objects to extend the Kubernetes API or introduce your own API into a project or a cluster. These CRs contain the site-specific data required to install and configure a
cluster for RAN applications.
A custom resource definition (CRD) file defines your own object kinds. Deploying a CRD into the managed cluster causes the Kubernetes API server to begin serving the specified custom resource for the entire lifecycle.
A custom resource definition (CRD) file defines your own object kinds. Deploying a CRD into the managed cluster causes the Kubernetes API server to begin serving the specified CR for the entire lifecycle.
For each custom resource in the `<site>.yaml` file on the managed cluster, the data is used to create installation custom resources in a directory named
for the cluster.
For each CR in the `<site>.yaml` file on the managed cluster, ZTP uses the data to create installation CRs in a directory named for the cluster.
On the cluster site, an automated Discovery image ISO file creates a directory with the site name and a file with the cluster name. The cluster file contains the custom resources shown in the following table. Every cluster has its own namespace, and all of the custom resources are under that namespace. The namespace and the custom resource names match the cluster name.
ZTP provides two ways for defining and installing CRs on managed clusters: a manual approach when you are provisioning a single cluster and an automated approach when provisioning multiple clusters.
The following table describes the ZTP custom resources, the generated filenames, and usage.
Manual CR creation for single clusters::
Use this method when you are creating CRs for a single cluster. This is a good way to test your CRs before deploying on a larger scale.
Automated CR creation for multiple managed clusters::
Use the automated SiteConfig method when you are installing multiple managed clusters, for example, in batches of up to 100 clusters. SiteConfig uses ArgoCD as the engine for the GitOps method of site deployment. After completing a site plan that contains all of the required parameters for deployment, a policy generator creates the manifests and applies them to the hub cluster.
Both methods create the CRs shown in the following table. On the cluster site, an automated Discovery image ISO file creates a directory with the site name and a file with the cluster name. Every cluster has its own namespace, and all of the CRs are under that namespace. The namespace and the CR names match the cluster name.
[cols="1,1,1"]
|===
| Resource (file name) | Description | Usage
| Resource | Description | Usage
|`BareMetalHost` +
(`BareMetalHost.yaml`)
|`BareMetalHost`
|Contains the connection information for the Baseboard Management Controller (BMC) of the target bare metal machine.
|Provides access to the BMC in order to load and boot the Discovery image ISO on the target machine by using the Redfish protocol.
|`InfraEnv` +
(`InfraEnv.yaml`)
|`InfraEnv`
|Contains information for pulling {product-title} onto the target bare metal machine.
|Used with ClusterDeployment to generate the Discovery ISO for the managed cluster.
|`AgentClusterInstall` +
(`AgentClusterInstall.yaml`)
|`AgentClusterInstall`
|Specifies the managed clusters configuration such as networking and the number of supervisor (control plane) nodes. Shows the `kubeconfig` and credentials when the installation is complete.
|Specifies the managed cluster configuration information and provides status during the installation of the cluster.
|`ClusterDeployment` +
(`ClusterDeployment.yaml`)
|`ClusterDeployment`
|References the `AgentClusterInstall` to use.
|Used with `InfraEnv` to generate the Discovery ISO for the managed cluster.
|`NMStateConfig` +
(`NMStateConfig.yaml`)
|`NMStateConfig`
|Provides network configuration information such as `MAC` to `IP` mapping, DNS server, default route, and other network settings. This is not needed if DHCP is used.
|Sets up a static IP address for the managed cluster's Kube API server.
@@ -49,24 +50,19 @@ The following table describes the ZTP custom resources, the generated filenames,
|Contains hardware information about the target bare metal machine.
|Created automatically on the hub when the target machine's Discovery image ISO boots.
|`ManagedCluster` +
(`ManagedCluster.yaml`)
|`ManagedCluster`
|When a cluster is managed by the hub, it must be imported and known. This Kubernetes object provides that interface.
|The hub uses this resource to manage and show the status of managed clusters.
|`KlusterletAddonConfig` +
(`KlusterletAddonConfig.yaml`)
|`KlusterletAddonConfig`
|Contains the list of services provided by the hub to be deployed to a `ManagedCluster`.
|Tells the hub which addon services to deploy to a `ManagedCluster`.
|`Namespace` +
(`namespace.yaml`)
|`Namespace`
|Logical space for `ManagedCluster` resources existing on the hub. Unique per site.
|Propagates resources to the `ManagedCluster`.
| `Secret` +
(`Secret.yaml`) +
(`PullSecret.yaml`)
|Two custom resources are created: `BMC Secret` and `Image Pull Secret`.
a| * `BMC Secret` authenticates into the target bare metal machine using its username and password.
* `Image Pull Secret` contains authentication information for the {product-title} image installed on the target bare metal machine.


@@ -108,3 +108,34 @@ include::modules/ztp-performance-addon-operator.adoc[leveloffset=+2]
include::modules/ztp-sriov-operator.adoc[leveloffset=+2]
include::modules/ztp-precision-time-protocol-operator.adoc[leveloffset=+2]
// New files for Creating ZTP custom resources for multiple managed clusters for 4.9
include::modules/ztp-creating-ztp-custom-resources-for-multiple-managed-clusters.adoc[leveloffset=+1]
include::modules/ztp-prerequisites-for-deploying-the-ztp-pipeline.adoc[leveloffset=+2]
include::modules/ztp-installing-the-gitops-ztp-pipeline.adoc[leveloffset=+2]
include::modules/ztp-preparing-the-ztp-git-repository.adoc[leveloffset=+3]
include::modules/ztp-preparing-the-hub-cluster-for-ztp.adoc[leveloffset=+3]
include::modules/ztp-creating-the-site-secrets.adoc[leveloffset=+2]
include::modules/ztp-creating-the-siteconfig-custom-resources.adoc[leveloffset=+2]
include::modules/ztp-creating-the-policygentemplates.adoc[leveloffset=+2]
include::modules/ztp-checking-the-installation-status.adoc[leveloffset=+2]
include::modules/ztp-site-cleanup.adoc[leveloffset=+2]
include::modules/ztp-removing-the-argocd-pipeline.adoc[leveloffset=+3]
include::modules/ztp-troubleshooting-gitops-ztp.adoc[leveloffset=+1]
include::modules/ztp-validating-the-generation-of-installation-crs.adoc[leveloffset=+2]
include::modules/ztp-validating-the-generation-of-policy-crs.adoc[leveloffset=+2]