// Module included in the following assemblies:
//
// * scalability_and_performance/ztp_far_edge/ztp-talm-updating-managed-policies.adoc
:_mod-docs-content-type: PROCEDURE
[id="talm-prechache-user-specified-images-preparing-crs_{context}"]
= Creating the custom resources for pre-caching

You must create the `PreCachingConfig` CR before or concurrently with the `ClusterGroupUpgrade` CR.

.Procedure

. Create the `PreCachingConfig` CR with the list of additional images you want to pre-cache.
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: PreCachingConfig
metadata:
  name: exampleconfig
  namespace: default <1>
spec:
  [...]
  spaceRequired: 30Gi <2>
  additionalImages:
  - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef
  - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef
  - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09
----
<1> The `namespace` must be accessible to the hub cluster.
<2> Setting the minimum required disk space is recommended to ensure that there is sufficient storage space for the pre-cached images.
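. Apply the `PreCachingConfig` CR. As a minimal sketch, assuming that you saved the CR in a file named `precachingconfig.yaml`, run the following command:
+
[source,terminal]
----
$ oc apply -f precachingconfig.yaml
----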
. Create a `ClusterGroupUpgrade` CR with the `preCaching` field set to `true`, and specify the `PreCachingConfig` CR that you created earlier:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: cgu
  namespace: default
spec:
  clusters:
  - sno1
  - sno2
  preCaching: true
  preCachingConfigRef:
    name: exampleconfig
    namespace: default
  managedPolicies:
  - du-upgrade-platform-upgrade
  - du-upgrade-operator-catsrc-policy
  - common-subscriptions-policy
  remediationStrategy:
    timeout: 240
----
+
[WARNING]
====
After you install the images on the cluster, you cannot change or delete them.
====
. To start pre-caching the images, apply the `ClusterGroupUpgrade` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu.yaml
----
+
{cgu-operator} verifies the `ClusterGroupUpgrade` CR.
From this point, you can continue with the {cgu-operator} pre-caching workflow.
+
[NOTE]
====
All sites are pre-cached concurrently.
====
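+
To follow the per-cluster pre-caching state without reading the full CR, you can extract the `status.precaching.status` map with a JSONPath query. This is a minimal sketch that assumes the example `cgu` name and `default` namespace from the previous steps:
+
[source,terminal]
----
$ oc get cgu cgu -n default -o jsonpath='{.status.precaching.status}{"\n"}'
----
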
.Verification
. Check the pre-caching status on the hub cluster where the `ClusterGroupUpgrade` CR is applied by running the following command:
+
[source,terminal]
----
$ oc get cgu <cgu_name> -n <cgu_namespace> -o yaml
----
+
.Example output
[source,yaml]
----
precaching:
  spec:
    platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef
    operatorsIndexes:
    - registry.example.com:5000/custom-redhat-operators:1.0.0
    operatorsPackagesAndChannels:
    - local-storage-operator: stable
    - ptp-operator: stable
    - sriov-network-operator: stable
    excludePrecachePatterns:
    - aws
    - vsphere
    additionalImages:
    - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef
    - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef
    - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09
    spaceRequired: "30"
  status:
    sno1: Starting
    sno2: Starting
----
+
The pre-caching configurations are validated by checking that the managed policies exist.
Valid configurations of the `ClusterGroupUpgrade` and the `PreCachingConfig` CRs result in the following statuses:
+
.Example output of valid CRs
[source,yaml]
----
- lastTransitionTime: "2023-01-01T00:00:01Z"
  message: All selected clusters are valid
  reason: ClusterSelectionCompleted
  status: "True"
  type: ClusterSelected
- lastTransitionTime: "2023-01-01T00:00:02Z"
  message: Completed validation
  reason: ValidationCompleted
  status: "True"
  type: Validated
- lastTransitionTime: "2023-01-01T00:00:03Z"
  message: Precaching spec is valid and consistent
  reason: PrecacheSpecIsWellFormed
  status: "True"
  type: PrecacheSpecValid
- lastTransitionTime: "2023-01-01T00:00:04Z"
  message: Precaching in progress for 1 clusters
  reason: InProgress
  status: "False"
  type: PrecachingSucceeded
----
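+
Rather than polling the CR, you can block until pre-caching finishes. This is a minimal sketch that assumes the example `cgu` name and `default` namespace; it waits on the `PrecachingSucceeded` condition shown above:
+
[source,terminal]
----
$ oc wait cgu/cgu -n default --for=condition=PrecachingSucceeded --timeout=60m
----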
+
.Example of an invalid PreCachingConfig CR
[source,yaml]
----
Type: "PrecacheSpecValid"
Status: False,
Reason: "PrecacheSpecIncomplete"
Message: "Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io "<pre-caching_cr_name>" not found"
----
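+
To inspect all of the status conditions at once, including the failing one, you can use a JSONPath query. This is a minimal sketch that assumes the example `cgu` name and `default` namespace:
+
[source,terminal]
----
$ oc get cgu cgu -n default -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'
----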
. You can find the pre-caching job by running the following command on the managed cluster:
+
[source,terminal]
----
$ oc get jobs -n openshift-talo-pre-cache
----
+
.Example of pre-caching job in progress
[source,terminal]
----
NAME        COMPLETIONS   DURATION   AGE
pre-cache   0/1           1s         1s
----
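+
If you prefer to block until the job finishes instead of polling, a minimal sketch that uses the standard `complete` condition on Kubernetes jobs:
+
[source,terminal]
----
$ oc wait job/pre-cache -n openshift-talo-pre-cache --for=condition=complete --timeout=60m
----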
. You can check the status of the pod created for the pre-caching job by running the following command:
+
[source,terminal]
----
$ oc describe pod pre-cache -n openshift-talo-pre-cache
----
+
.Example of pre-caching job in progress
[source,terminal]
----
Type     Reason             Age   From             Message
Normal   SuccessfulCreate   19s   job-controller   Created pod: pre-cache-abcd1
. You can get live updates on the status of the job by running the following command:
+
[source,terminal]
----
$ oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache
----
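+
To confirm that a specific image from the `additionalImages` list was requested, you can filter the job logs. This is a minimal sketch; `application1` stands in for part of an image reference that you configured:
+
[source,terminal]
----
$ oc logs job/pre-cache -n openshift-talo-pre-cache | grep application1
----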
. To verify the pre-cache job is successfully completed, run the following command:
+
[source,terminal]
----
$ oc describe pod pre-cache -n openshift-talo-pre-cache
----
+
.Example of completed pre-cache job
[source,terminal]
----
Type     Reason             Age     From             Message
Normal   SuccessfulCreate   5m19s   job-controller   Created pod: pre-cache-abcd1
Normal   Completed          19s     job-controller   Job completed
. To verify that the images are successfully pre-cached on the {sno}, do the following:
.. Open a debug session on the node:
+
[source,terminal]
----
$ oc debug node/cnfdf00.example.lab
----
.. Set `/host` as the root directory within the debug shell:
+
[source,terminal]
----
$ chroot /host/
----
.. Search for the desired images:
+
[source,terminal]
----
$ sudo podman images | grep <operator_name>
----
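+
Because the entries in `additionalImages` are referenced by digest, you can also match on the digest itself. This is a minimal sketch, where `<image_digest>` stands for the `sha256` value from your `PreCachingConfig` CR:
+
[source,terminal]
----
$ sudo podman images --digests | grep <image_digest>
----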