
TELCODOCS-1707: Add Preparing/ConfigMap objects nonGitOps

This commit is contained in:
Alexandra Molnar
2024-06-05 10:04:06 +01:00
parent f975a6f11f
commit 74bfe43cdb
11 changed files with 405 additions and 3 deletions

View File

@@ -3103,8 +3103,8 @@ Topics:
  File: cnf-image-based-upgrade-install-operators
- Name: Generating a seed image for the image-based upgrade with Lifecycle Agent
  File: cnf-image-based-upgrade-generate-seed
# - Name: Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent
#   File: cnf-image-based-upgrade-prep-resources
- Name: Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent
  File: cnf-image-based-upgrade-prep-resources
# - Name: Creating ConfigMap objects for the image-based upgrade with Lifecycle Agent using GitOps ZTP
#   File: ztp-image-based-upgrade-prep-resources
# - Name: Performing an image-based upgrade for single-node OpenShift clusters

View File

@@ -127,4 +127,4 @@ include::modules/ztp-image-based-upgrade-extra-manifests-guide.adoc[leveloffset=
* xref:../../edge_computing/image_based_upgrade/preparing_for_image_based_upgrade/ztp-image-based-upgrade-prep-resources.adoc#ztp-image-based-upgrade-creating-backup-resources-with-ztp_ztp-gitops[Creating ConfigMap objects for the image-based upgrade with GitOps ZTP]
* xref:../../backup_and_restore/application_backup_and_restore/installing/about-installing-oadp.adoc#about-installing-oadp[About installing OADP]
////
////

View File

@@ -0,0 +1,28 @@
:_mod-docs-content-type: ASSEMBLY
[id="cnf-image-based-upgrade-prep-resources"]
= Creating ConfigMap objects for the image-based upgrade with {lcao}
include::_attributes/common-attributes.adoc[]
:context: nongitops
toc::[]
The {lcao} requires all your OADP resources, extra manifests, and custom catalog sources to be wrapped in `ConfigMap` objects so that it can process them for the image-based upgrade.
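The procedures in this assembly add references to these `ConfigMap` objects in the `spec.oadpContent` and `spec.extraManifests` fields of the `ImageBasedUpgrade` CR. The following is a minimal sketch of that outcome, using the example `ConfigMap` names from the procedures; the API version shown is an assumption and can differ between {lcao} releases:

[source,yaml]
----
apiVersion: lca.openshift.io/v1 # assumed version; the API group matches the oc patch commands in the modules below
kind: ImageBasedUpgrade
metadata:
  name: upgrade
spec:
  oadpContent: # ConfigMap objects that wrap the OADP Backup and Restore CRs
  - name: oadp-cm-example
    namespace: openshift-adp
  extraManifests: # ConfigMap objects that wrap extra manifests and custom catalog sources
  - name: example-extra-manifests-cm
    namespace: openshift-lifecycle-agent
  - name: example-catalogsources-cm
    namespace: openshift-lifecycle-agent
----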
include::modules/cnf-image-based-upgrade-prep-oadp.adoc[leveloffset=+1]
include::modules/cnf-image-based-upgrade-prep-extramanifests.adoc[leveloffset=+1]
include::modules/cnf-image-based-upgrade-prep-catalogsource.adoc[leveloffset=+1]
////
[role="_additional-resources"]
.Additional resources
* xref:../../../edge_computing/image_based_upgrade/preparing_for_image_based_upgrade/cnf-image-based-upgrade-shared-container-image.adoc#cnf-image-based-upgrade-shared-varlibcontainers[Configuring a shared container directory for the image-based upgrade]
* xref:../../../backup_and_restore/application_backup_and_restore/installing/about-installing-oadp.adoc[About installing OADP]
* xref:../../../edge_computing/image_based_upgrade/cnf-image-based-upgrade-base.adoc#cnf-image-based-upgrade-for-sno[Performing an image-based upgrade with Lifecycle Agent]
* xref:../../../operators/understanding/olm/olm-understanding-olm.adoc#olm-catalogsource_olm-understanding-olm[Catalog source]
////

View File

@@ -0,0 +1,44 @@
// Module included in the following assemblies:
// * edge_computing/image-based-upgrade/cnf-preparing-for-image-based-upgrade.adoc
:_mod-docs-content-type: PROCEDURE
[id="cnf-image-based-upgrade-creating-backup-custom-catalog-sources_{context}"]
= Creating ConfigMap objects of custom catalog sources for the image-based upgrade with {lcao}
You can keep your custom catalog sources after the upgrade by generating a `ConfigMap` object for your catalog sources and adding it to the `spec.extraManifests` field in the `ImageBasedUpgrade` CR.
For more information about catalog sources, see "Catalog source".
.Procedure
. Create a YAML file that contains the `CatalogSource` CR:
+
--
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalogsources
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  displayName: disconnected-redhat-operators
  image: quay.io/example-org/example-catalog:v1
----
--
. Create the `ConfigMap` object by running the following command:
+
[source,terminal]
----
$ oc create configmap example-catalogsources-cm --from-file=example-catalogsources.yaml=<path_to_catalogsource_cr> -n openshift-lifecycle-agent
----
. Patch the `ImageBasedUpgrade` CR by running the following command:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io upgrade \
-p='{"spec": {"extraManifests": [{"name": "example-catalogsources-cm", "namespace": "openshift-lifecycle-agent"}]}}' \
--type=merge -n openshift-lifecycle-agent
----
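
As an optional check, you can confirm that the `ConfigMap` object is referenced in the `ImageBasedUpgrade` CR, for example by inspecting the `spec.extraManifests` field:

[source,terminal]
----
$ oc get imagebasedupgrades.lca.openshift.io upgrade -n openshift-lifecycle-agent -o jsonpath='{.spec.extraManifests}'
----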

View File

@@ -0,0 +1,64 @@
// Module included in the following assemblies:
// * edge_computing/image-based-upgrade/cnf-preparing-for-image-based-upgrade.adoc
:_mod-docs-content-type: PROCEDURE
[id="cnf-image-based-upgrade-creating-backup-extra-manifests_{context}"]
= Creating ConfigMap objects of extra manifests for the image-based upgrade with {lcao}
You can create `ConfigMap` objects of the additional manifests that you want to apply to the target cluster.
.Procedure
. Create a YAML file that contains your extra manifests:
+
[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: "pci-sriov-net-e5l"
  namespace: openshift-sriov-network-operator
spec:
  deviceType: vfio-pci
  isRdma: false
  nicSelector:
    pfNames: [ens1f0]
  nodeSelector:
    node-role.kubernetes.io/master: ""
  mtu: 1500
  numVfs: 8
  priority: 99
  resourceName: pci_sriov_net_e5l
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: "networking-e5l"
  namespace: openshift-sriov-network-operator
spec:
  ipam: |-
    {
    }
  linkState: auto
  networkNamespace: sriov-namespace
  resourceName: pci_sriov_net_e5l
  spoofChk: "on"
  trust: "off"
----
. Create the `ConfigMap` object by running the following command:
+
[source,terminal]
----
$ oc create configmap example-extra-manifests-cm --from-file=example-extra-manifests.yaml=<path_to_extramanifest> -n openshift-lifecycle-agent
----
. Patch the `ImageBasedUpgrade` CR by running the following command:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io upgrade \
-p='{"spec": {"extraManifests": [{"name": "example-extra-manifests-cm", "namespace": "openshift-lifecycle-agent"}]}}' \
--type=merge -n openshift-lifecycle-agent
----
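
If you keep your extra manifests in several files, you can wrap them all in a single `ConfigMap` object by repeating the `--from-file` argument. The following is a sketch with placeholder key and file names:

[source,terminal]
----
$ oc create configmap example-extra-manifests-cm \
  --from-file=sriov-network-node-policy.yaml=<path_to_first_extramanifest> \
  --from-file=sriov-network.yaml=<path_to_second_extramanifest> \
  -n openshift-lifecycle-agent
----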

View File

@@ -0,0 +1,71 @@
// Module included in the following assemblies:
// * edge_computing/image-based-upgrade/cnf-preparing-for-image-based-upgrade.adoc
:_mod-docs-content-type: PROCEDURE
[id="cnf-image-based-upgrade-creating-backup-oadp-resources_{context}"]
= Creating OADP ConfigMap objects for the image-based upgrade with {lcao}
Create the OADP resources that are used to back up and restore your artifacts during the upgrade, and wrap them in a `ConfigMap` object for the {lcao}.
.Prerequisites
* Generate a seed image from a compatible seed cluster.
* Create OADP backup and restore resources.
* Create a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container directory for the image-based upgrade".
* Deploy a version of {lcao} that is compatible with the version used with the seed image.
* Install the OADP Operator, the `DataProtectionApplication` CR, and its secret on the target cluster.
* Create an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see "About installing OADP".
.Procedure
. Create the OADP `Backup` and `Restore` CRs for platform artifacts in the same namespace where the OADP Operator is installed, which is `openshift-adp`.
.. If the target cluster is managed by {rh-rhacm}, add the following YAML file for backing up and restoring {rh-rhacm} artifacts:
+
--
.PlatformBackupRestore.yaml for {rh-rhacm}
include::snippets/ibu-PlatformBackupRestore.adoc[]
--
.. If you created persistent volumes on your cluster through {lvms}, add the following YAML file for {lvms} artifacts:
+
.PlatformBackupRestoreLvms.yaml for {lvms}
include::snippets/ibu-PlatformBackupRestoreLvms.adoc[]
. Optional: If you need to restore applications after the upgrade, create the OADP `Backup` and `Restore` CRs for your application in the `openshift-adp` namespace.
.. Create the OADP CRs for cluster-scoped application artifacts in the `openshift-adp` namespace.
+
.Example OADP CRs for cluster-scoped application artifacts for LSO and {lvms}
include::snippets/ibu-ApplicationClusterScopedBackupRestore.adoc[]
.. Create the OADP CRs for your namespace-scoped application artifacts.
+
--
.Example OADP CRs for namespace-scoped application artifacts when LSO is used
include::snippets/ibu-ApplicationBackupRestoreLso.adoc[]
.Example OADP CRs for namespace-scoped application artifacts when {lvms} is used
include::snippets/ibu-ApplicationBackupRestoreLvms.adoc[]
[IMPORTANT]
====
The same version of the applications must function on both the current and the target release of {product-title}.
====
--
. Create the `ConfigMap` object for your OADP CRs by running the following command:
+
[source,terminal]
----
$ oc create configmap oadp-cm-example --from-file=example-oadp-resources.yaml=<path_to_oadp_crs> -n openshift-adp
----
. Patch the `ImageBasedUpgrade` CR by running the following command:
+
[source,terminal]
----
$ oc patch imagebasedupgrades.lca.openshift.io upgrade \
-p='{"spec": {"oadpContent": [{"name": "oadp-cm-example", "namespace": "openshift-adp"}]}}' \
--type=merge -n openshift-lifecycle-agent
----
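
As an optional check, you can confirm that the `ConfigMap` object is referenced in the `oadpContent` field of the `ImageBasedUpgrade` CR, for example by running the following command:

[source,terminal]
----
$ oc get imagebasedupgrades.lca.openshift.io upgrade -n openshift-lifecycle-agent -o jsonpath='{.spec.oadpContent}'
----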

View File

@@ -0,0 +1,40 @@
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  labels:
    velero.io/storage-location: default
  name: backup-app
  namespace: openshift-adp
spec:
  includedNamespaces:
  - test
  includedNamespaceScopedResources:
  - secrets
  - persistentvolumeclaims
  - deployments
  - statefulsets
  - configmaps
  - cronjobs
  - services
  - job
  - poddisruptionbudgets
  - <application_custom_resources> <1>
  excludedClusterScopedResources:
  - persistentVolumes
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-app
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "4"
spec:
  backupName:
    backup-app
----
<1> Define custom resources for your application.

View File

@@ -0,0 +1,50 @@
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  labels:
    velero.io/storage-location: default
  name: backup-app
  namespace: openshift-adp
spec:
  includedNamespaces:
  - test
  includedNamespaceScopedResources:
  - secrets
  - persistentvolumeclaims
  - deployments
  - statefulsets
  - configmaps
  - cronjobs
  - services
  - job
  - poddisruptionbudgets
  - <application_custom_resources> <1>
  includedClusterScopedResources:
  - persistentVolumes <2>
  - logicalvolumes.topolvm.io <3>
  - volumesnapshotcontents <4>
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-app
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "4"
spec:
  backupName:
    backup-app
  restorePVs: true
  restoreStatus:
    includedResources:
    - logicalvolumes <5>
----
<1> Define custom resources for your application.
<2> Required field.
<3> Required field.
<4> Optional field if you use {lvms} volume snapshots.
<5> Required field.

View File

@@ -0,0 +1,35 @@
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  annotations:
    lca.openshift.io/apply-label: "apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test" <1>
  name: backup-app-cluster-resources
  labels:
    velero.io/storage-location: default
  namespace: openshift-adp
spec:
  includedClusterScopedResources:
  - customresourcedefinitions
  - securitycontextconstraints
  - clusterrolebindings
  - clusterroles
  excludedClusterScopedResources:
  - Namespace
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-app-cluster-resources
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "3" <2>
spec:
  backupName:
    backup-app-cluster-resources
----
<1> Replace the example resource names with your actual resources.
<2> The `lca.openshift.io/apply-wave` value must be higher than the value in the platform `Restore` CRs and lower than the value in the application namespace-scoped `Restore` CR. In these examples, the platform `Restore` CRs use waves `"1"` and `"2"`, this cluster-scoped application `Restore` CR uses wave `"3"`, and the namespace-scoped application `Restore` CRs use wave `"4"`.

View File

@@ -0,0 +1,39 @@
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: acm-klusterlet
  annotations:
    lca.openshift.io/apply-label: "apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials" <1>
  labels:
    velero.io/storage-location: default
  namespace: openshift-adp
spec:
  includedNamespaces:
  - open-cluster-management-agent
  includedClusterScopedResources:
  - klusterlets.operator.open-cluster-management.io
  - clusterroles.rbac.authorization.k8s.io
  - clusterrolebindings.rbac.authorization.k8s.io
  - priorityclasses.scheduling.k8s.io
  includedNamespaceScopedResources:
  - deployments
  - serviceaccounts
  - secrets
  excludedNamespaceScopedResources: []
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: acm-klusterlet
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "1"
spec:
  backupName:
    acm-klusterlet
----
<1> If your `multiclusterHub` CR does not have `.spec.imagePullSecret` defined and the secret does not exist in the `open-cluster-management-agent` namespace on your hub cluster, remove `v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials`.

View File

@@ -0,0 +1,31 @@
[source,yaml]
----
apiVersion: velero.io/v1
kind: Backup
metadata:
  labels:
    velero.io/storage-location: default
  name: lvmcluster
  namespace: openshift-adp
spec:
  includedNamespaces:
  - openshift-storage
  includedNamespaceScopedResources:
  - lvmclusters
  - lvmvolumegroups
  - lvmvolumegroupnodestatuses
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: lvmcluster
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "2" <1>
spec:
  backupName:
    lvmcluster
----
<1> The `lca.openshift.io/apply-wave` value must be lower than the values specified in the application `Restore` CRs.