mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Add OLM 1.0 arch/components

This commit is contained in:
Alex Dellapenta
2023-10-09 11:19:03 -06:00
parent 1b224b7d61
commit 7cb3347ce2
35 changed files with 687 additions and 387 deletions

View File

@@ -193,7 +193,7 @@ endif::[]
:osdk_ver: 1.31.0
//Operator SDK version that shipped with the previous OCP 4.x release
:osdk_ver_n1: 1.28.0
//Next-gen (OCP 4.13+) Operator Lifecycle Manager, aka "v1"
//Next-gen (OCP 4.14+) Operator Lifecycle Manager, aka "v1"
:olmv1: OLM 1.0
:olmv1-first: Operator Lifecycle Manager (OLM) 1.0
:ztp-first: GitOps Zero Touch Provisioning (ZTP)

View File

@@ -1857,18 +1857,29 @@ Topics:
Distros: openshift-origin
- Name: Cluster Operators reference
File: operator-reference
- Name: OLM v1 (Technology Preview)
- Name: OLM 1.0 (Technology Preview)
Dir: olm_v1
Distros: openshift-origin,openshift-enterprise
Topics:
- Name: About OLM v1
- Name: About OLM 1.0
File: index
- Name: Packaging format
File: olmv1-packaging-format
- Name: Managing catalogs
File: olmv1-managing-catalogs
- Name: Components and architecture
Dir: arch
Topics:
- Name: Components overview
File: olmv1-components
- Name: Operator Controller
File: olmv1-operator-controller
- Name: RukPak
File: olmv1-rukpak
- Name: Dependency resolution
File: olmv1-dependency
- Name: Catalogd
File: olmv1-catalogd
- Name: Installing an Operator from a catalog
File: olmv1-installing-an-operator-from-a-catalog
- Name: Managing plain bundles
File: olmv1-managing-plain-bundles
---
Name: CI/CD
Dir: cicd

View File

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * operators/understanding/olm-packaging-format.adoc
// * operators/olm_v1/olmv1_rukpak.adoc
:_content-type: CONCEPT
[id="olm-rukpak-about_{context}"]
@@ -11,13 +12,28 @@ ifeval::["{context}" == "olm-packaging-format"]
include::snippets/technology-preview.adoc[]
{product-title} 4.12 introduces the _platform Operator_ type as a Technology Preview feature. The platform Operator mechanism relies on the RukPak component, also introduced in {product-title} 4.12, and its resources to manage content.
{product-title} 4.14 introduces {olmv1-first} as a Technology Preview feature, which also relies on the RukPak component.
endif::[]
ifeval::["{context}" == "olmv1-packaging-format"]
= RukPak
ifeval::["{context}" == "olmv1-rukpak"]
= About RukPak
endif::[]
RukPak consists of a series of controllers, known as _provisioners_, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: `Bundle` and `BundleDeployment`. These components work together to bring content onto the cluster and install it, generating resources within the cluster.
RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
A provisioner places a watch on both `Bundle` and `BundleDeployment` resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the `Bundle` resource onto the cluster. Then, given a `BundleDeployment` resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
Two provisioners are currently implemented and bundled with RukPak: the _plain provisioner_ that sources and unpacks `plain+v0` bundles, and the _registry provisioner_ that sources and unpacks Operator Lifecycle Manager (OLM) `registry+v1` bundles.
At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs.
.Common terminology
Bundle::
A collection of Kubernetes manifests that define content to be deployed to a cluster
Bundle image::
A container image that contains a bundle within its filesystem
Bundle Git repository::
A Git repository that contains a bundle within a directory
Provisioner::
Controllers that install and manage content on a Kubernetes cluster
Bundle deployment::
Generates deployed instances of a bundle
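For illustration, a minimal `Bundle` object might look like the following sketch. The field names assume the upstream RukPak `core.rukpak.io/v1alpha1` API group and the plain provisioner ID; the object name and image reference are placeholders:

.Example `Bundle` object (illustrative sketch)
[source,yaml]
----
apiVersion: core.rukpak.io/v1alpha1
kind: Bundle
metadata:
  name: my-bundle <1>
spec:
  source:
    type: image
    image:
      ref: quay.io/<organization_name>/<bundle_image>:<tag>
  provisionerClassName: core-rukpak-io-plain <2>
----
<1> Placeholder name for the bundle object.
<2> ID of the provisioner that unpacks this bundle; the exact value depends on the RukPak version in use.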

View File

@@ -20,13 +20,13 @@ For example, the following shows the file tree in a `plain+v0` bundle. It must h
.Example `plain+v0` bundle file tree
[source,terminal]
----
$ tree manifests
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── role.yaml
├── serviceaccount.yaml
├── cluster_role_binding.yaml
├── role_binding.yaml
└── deployment.yaml
----

View File

@@ -1,11 +1,16 @@
// Module included in the following assemblies:
//
// * operators/understanding/olm-packaging-format.adoc
// * operators/olm_v1/olmv1_rukpak.adoc
:_content-type: CONCEPT
[id="olm-rukpak-provisioner_{context}"]
= Provisioner
= About provisioners
A RukPak provisioner is a controller that understands the `BundleDeployment` and `Bundle` APIs and can take action. Each provisioner is assigned a unique ID and is responsible for reconciling `Bundle` and `BundleDeployment` objects with a `spec.provisionerClassName` field that matches that particular ID.
RukPak consists of a series of controllers, known as _provisioners_, that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: `Bundle` and `BundleDeployment`. These components work together to bring content onto the cluster and install it, generating resources within the cluster.
For example, the plain provisioner is able to unpack a given `plain+v0` bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.
Two provisioners are currently implemented and bundled with RukPak: the _plain provisioner_ that sources and unpacks `plain+v0` bundles, and the _registry provisioner_ that sources and unpacks Operator Lifecycle Manager (OLM) `registry+v1` bundles.
Each provisioner is assigned a unique ID and is responsible for reconciling `Bundle` and `BundleDeployment` objects with a `spec.provisionerClassName` field that matches that particular ID. For example, the plain provisioner is able to unpack a given `plain+v0` bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster.
A provisioner places a watch on both `Bundle` and `BundleDeployment` resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the `Bundle` resource onto the cluster. Then, given a `BundleDeployment` resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources.
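The watch relationship described above can be sketched with a `BundleDeployment` object that embeds a bundle template and names the plain provisioner. This is an illustrative example that assumes the upstream RukPak `core.rukpak.io/v1alpha1` API group; the object name and image reference are placeholders:

.Example `BundleDeployment` object (illustrative sketch)
[source,yaml]
----
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain <1>
  template:
    metadata: {}
    spec:
      provisionerClassName: core-rukpak-io-plain
      source:
        type: image
        image:
          ref: quay.io/<organization_name>/<bundle_image>:<tag>
----
<1> The provisioner reconciles only objects whose `spec.provisionerClassName` field matches its ID.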

View File

@@ -1,10 +1,11 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc
// * operators/olm_v1/arch/olmv1-catalogd.adoc
:_content-type: CONCEPT
[id="olmv1-about-catalogs_{context}"]
= About catalogs in OLM 1.0
= About catalogs in {olmv1}
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the OLM 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the {olmv1-first} suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
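For example, you can make a catalog image available to on-cluster clients by creating a `Catalog` object similar to the following sketch; the index image reference is a placeholder that depends on your cluster version:

.Example `Catalog` object (illustrative sketch)
[source,yaml,subs="attributes+"]
----
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v{product-version} <1>
----
<1> Specify the catalog image for catalogd to unpack.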

View File

@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/index.adoc
:_content-type: CONCEPT
[id="olmv1-about-purpose_{context}"]
= Purpose
The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster and platform-as-a-service (PaaS) administrators throughout the lifecycle of the underlying cluster.
The initial version of OLM, which launched with {product-title} 4 and is included by default, focused on providing unique support for these specific needs for a particular type of cluster extension, known as Operators. Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions, as `CustomResourceDefinition` (CRD) objects, to provide additional functionality to the cluster.
After running in production clusters for many releases, the next generation of OLM aims to encompass lifecycles for cluster extensions beyond Operators alone.

View File

@@ -1,15 +1,16 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc
// * operators/olm_v1/arch/olmv1-operator-controller.adoc
:_content-type: CONCEPT
[id="olmv1-about-operator-updates_{context}"]
= About target versions in OLM 1.0
= About target versions in {olmv1}
In Operator Lifecycle Manager 1.0, cluster administrators set the target version of an Operator declaratively in the Operator's custom resource (CR).
In {olmv1-first}, cluster administrators set the target version of an Operator declaratively in the Operator's custom resource (CR).
If you specify a channel in the Operator's CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.
If you specify a channel in the Operator's CR, {olmv1} installs the latest release from the specified channel. When updates are published to the specified channel, {olmv1} automatically updates to the latest release from the channel.
.Example CR with a specified channel
[source,yaml]
@@ -24,7 +25,7 @@ spec:
----
<1> Installs the latest release published to the specified channel. Updates to the channel are automatically installed.
If you specify the Operator's target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator's CR, OLM 1.0 does not change the target version when updates are published to the catalog.
If you specify the Operator's target version in the CR, {olmv1} installs the specified version. When the target version is specified in the Operator's CR, {olmv1} does not change the target version when updates are published to the catalog.
If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator's CR. Specifying an Operator's target version pins the Operator's version to the specified release.
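For example, the following sketch pins a placeholder package to a specific release. Updates published to the catalog do not change the installed version until you edit the CR:

.Example CR with a pinned target version (illustrative sketch)
[source,yaml]
----
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: <operator_name>
spec:
  packageName: <package_name>
  version: 1.2.3 <1>
----
<1> Installs the specified version. The version is not updated automatically when newer releases are published to the catalog.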
@@ -45,7 +46,7 @@ If you want to change the installed version of an Operator, edit the Operator's
[WARNING]
====
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, {olmv1} does not enforce upgrade edge definitions. You can specify any version of an Operator, and {olmv1} attempts to apply the update.
====
You can inspect an Operator's catalog contents, including available versions and channels, by running the following command:
@@ -56,7 +57,7 @@ You can inspect an Operator's catalog contents, including available versions and
$ oc get package <catalog_name>-<package_name> -o yaml
----
After a you create or update a CR, create or configure the Operator by running the following command:
After you create or update a CR, create or configure the Operator by running the following command:
.Command syntax
[source,terminal]
@@ -70,7 +71,7 @@ $ oc apply -f <extension_name>.yaml
+
[source,terminal]
----
$ oc get operators.operators.operatorframework.io <operator_name> -o yaml
$ oc get operator.operators.operatorframework.io <operator_name> -o yaml
----
+
.Example output

View File

@@ -0,0 +1,121 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-plain-bundles.adoc
:_content-type: PROCEDURE
[id="olmv1-adding-plain-bundle-to-fbc_{context}"]
= Adding a plain bundle to a file-based catalog
The `opm render` command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.
.Procedure
. Verify that the `index.json` or `index.yaml` file for your catalog is similar to the following example:
+
.Example `<catalog_dir>/index.json` file
[source,json]
----
{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}
----
. To create an `olm.bundle` blob, edit your `index.json` or `index.yaml` file, similar to the following example:
+
.Example `<catalog_dir>/index.json` file with `olm.bundle` blob
[source,json]
----
{
"schema": "olm.bundle",
"name": "<extension_name>.v<version>",
"package": "<extension_name>",
"image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
"properties": [
{
"type": "olm.package",
"value": {
"packageName": "<extension_name>",
"version": "<bundle_version>"
}
},
{
"type": "olm.bundle.mediatype",
"value": "plain+v0"
}
]
}
----
. To create an `olm.channel` blob, edit your `index.json` or `index.yaml` file, similar to the following example:
+
.Example `<catalog_dir>/index.json` file with `olm.channel` blob
[source,json]
----
{
"schema": "olm.channel",
"name": "<desired_channel_name>",
"package": "<extension_name>",
"entries": [
{
"name": "<extension_name>.v<version>"
}
]
}
----
// Please refer to [channel naming conventions](https://olm.operatorframework.io/docs/best-practices/channel-naming/) for choosing the <desired_channel_name>. An example of the <desired_channel_name> is `candidate-v0`.
.Verification
. Open your `index.json` or `index.yaml` file and ensure it is similar to the following example:
+
.Example `<catalog_dir>/index.json` file
[source,json]
----
{
"schema": "olm.package",
"name": "example-extension",
"defaultChannel": "preview"
}
{
"schema": "olm.bundle",
"name": "example-extension.v0.0.1",
"package": "example-extension",
"image": "quay.io/example-org/example-extension-bundle:v0.0.1",
"properties": [
{
"type": "olm.package",
"value": {
"packageName": "example-extension",
"version": "0.0.1"
}
},
{
"type": "olm.bundle.mediatype",
"value": "plain+v0"
}
]
}
{
"schema": "olm.channel",
"name": "preview",
"package": "example-extension",
"entries": [
{
"name": "example-extension.v0.0.1"
}
]
}
----
. Validate your catalog by running the following command:
+
[source,terminal]
----
$ opm validate <catalog_dir>
----

View File

@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-plain-bundles.adoc
:_content-type: PROCEDURE
[id="olmv1-building-plain-bundle-image-source_{context}"]
= Building a plain bundle image from an image source
The Operator Controller currently supports installing plain bundles created only from a _plain bundle image_.
.Procedure
. At the root of your project, create a Dockerfile that can build a bundle image:
+
.Example `plainbundle.Dockerfile`
[source,docker]
----
FROM scratch <1>
ADD manifests /manifests
----
<1> Use the `FROM scratch` directive to make the size of the image smaller. No other files or directories are required in the bundle image.
. Build an Open Container Initiative (OCI)-compliant image by using your preferred build tool, similar to the following example:
+
[source,terminal]
----
$ podman build -f plainbundle.Dockerfile -t \
quay.io/<organization_name>/<repository_name>:<image_tag> . <1>
----
<1> Use an image tag that references a repository where you have push access privileges.
. Push the image to your remote registry by running the following command:
+
[source,terminal]
----
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
----

View File

@@ -1,244 +0,0 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-managing-catalogs.adoc
ifdef::openshift-origin[]
:registry-image: quay.io/operator-framework/opm:latest
endif::[]
ifndef::openshift-origin[]
:registry-image: registry.redhat.io/openshift4/ose-operator-registry:v{product-version}
endif::[]
:_content-type: PROCEDURE
[id="olmv1-building-plain-bundle-image-source_{context}"]
= Building a plain bundle image from an image source
The Operator Controller currently supports installing plain bundles created only from a _plain bundle image_.
.Prerequisites
* You have Kubernetes manifests for your bundle in a flat directory at the root of your project similar to the following structure:
+
.Example directory structure
[source,terminal]
----
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml
----
.Procedure
. At the root of your project, create a Dockerfile that can build a bundle image:
+
.Example `plainbundle.Dockerfile`
[source,docker]
----
FROM scratch <1>
ADD manifests /manifests
----
<1> Use the `FROM scratch` directive to make the size of the image smaller. No other files or directories are required in the bundle image.
. Build an OCI-compliant image using your preferred build tool, similar to the following example:
+
[source,terminal]
----
$ podman build -f plainbundle.Dockerfile -t \
quay.io/<organization_name>/<repository_name>:<image_tag> . <1>
----
<1> Use an image tag that references a repository where you have push access privileges.
. Push the image to your remote registry:
+
[source,terminal]
----
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
----
[id="olmv1-creating-fbc_{context}"]
= Creating a file-based catalog
If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.
.Procedure
. Create a directory for the catalog by running the following command:
+
[source,terminal]
----
$ mkdir <catalog_dir>
----
. Generate a Dockerfile that can build a catalog image by running the `opm generate dockerfile` command in the same directory level as the previous step:
+
[source,terminal,subs="attributes+"]
----
ifdef::openshift-origin[]
$ opm generate dockerfile <catalog_dir>
endif::[]
ifndef::openshift-origin[]
$ opm generate dockerfile <catalog_dir> \
-i {registry-image} <1>
endif::[]
----
ifndef::openshift-origin[]
<1> Specify the official Red Hat base image by using the `-i` flag; otherwise, the Dockerfile uses the default upstream image.
endif::[]
+
[NOTE]
====
The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
.Example directory structure
[source,terminal]
----
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile
----
====
. Populate the catalog with the package definition for your extension by running the `opm init` command:
+
[source,terminal]
----
$ opm init <extension_name> \
--output json \
> <catalog_dir>/index.json
----
+
This command generates an `olm.package` declarative config blob in the specified catalog configuration file.
[id="olmv1-adding-plain-bundle-to-fbc_{context}"]
= Adding a plain bundle to a file-based catalog
Currently, the `opm render` command does not support adding plain bundles to catalogs. You must manually add plain bundles to your file-based catalog, as shown in the following procedure.
.Procedure
. Verify that your catalog's `index.json` or `index.yaml` file is similar to the following example:
+
.Example `<catalog_dir>/index.json` file
[source,json]
----
{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}
----
. To create an `olm.bundle` blob, edit your `index.json` or `index.yaml` file, similar to the following example:
+
.Example `<catalog_dir>/index.json` file with `olm.bundle` blob
[source,json]
----
{
"schema": "olm.bundle",
"name": "<extension_name>.v<version>",
"package": "<extension_name>",
"image": "quay.io/<organization_name>/<repository_name>:<image_tag>",
"properties": [
{
"type": "olm.package",
"value": {
"packageName": "<extension_name>",
"version": "<bundle_version>"
}
},
{
"type": "olm.bundle.mediatype",
"value": "plain+v0"
}
]
}
----
. To create an `olm.channel` blob, edit your `index.json` or `index.yaml` file, similar to the following example:
+
.Example `<catalog_dir>/index.json` file with `olm.channel` blob
[source,json]
----
{
"schema": "olm.channel",
"name": "<desired_channel_name>",
"package": "<extension_name>",
"entries": [
{
"name": "<extension_name>.v<version>"
}
]
}
----
// Please refer to [channel naming conventions](https://olm.operatorframework.io/docs/best-practices/channel-naming/) for choosing the <desired_channel_name>. An example of the <desired_channel_name> is `candidate-v0`.
.Verification
* Open your `index.json` or `index.yaml` file and ensure it is similar to the following example:
+
.Example `<catalog_dir>/index.json` file
[source,json]
----
{
"schema": "olm.package",
"name": "example-extension"
}
{
"schema": "olm.bundle",
"name": "example-extension.v0.0.1",
"package": "example-extension",
"image": "quay.io/rashmigottipati/example-extension-bundle:v0.0.1",
"properties": [
{
"type": "olm.package",
"value": {
"packageName": "example-extension",
"version": "v0.0.1"
}
},
{
"type": "olm.bundle.mediatype",
"value": "plain+v0"
}
]
}
{
"schema": "olm.channel",
"name": "preview",
"package": "example-extension",
"entries": [
{
"name": "example-extension.v0.0.1"
}
]
}
----
[id="olmv1-publishing-fbc_{context}"]
= Building and publishing a file-based catalog
.Procedure
. Build your file-based catalog as an image by running the following command:
+
[source,terminal]
----
$ podman build -f <catalog_dir>.Dockerfile -t \
quay.io/<organization_name>/<repository_name>:<image_tag> .
----
. Push your catalog image by running the following command:
+
[source,terminal]
----
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
----
:!registry-image:

View File

@@ -0,0 +1,66 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-plain-bundles.adoc
ifdef::openshift-origin[]
:registry-image: quay.io/operator-framework/opm:latest
endif::[]
ifndef::openshift-origin[]
:registry-image: registry.redhat.io/openshift4/ose-operator-registry:v{product-version}
endif::[]
:_content-type: PROCEDURE
[id="olmv1-creating-fbc_{context}"]
= Creating a file-based catalog
If you do not have a file-based catalog, you must perform the following steps to initialize the catalog.
.Procedure
. Create a directory for the catalog by running the following command:
+
[source,terminal]
----
$ mkdir <catalog_dir>
----
. Generate a Dockerfile that can build a catalog image by running the `opm generate dockerfile` command in the same directory level as the previous step:
+
[source,terminal,subs="attributes+"]
----
ifdef::openshift-origin[]
$ opm generate dockerfile <catalog_dir>
endif::[]
ifndef::openshift-origin[]
$ opm generate dockerfile <catalog_dir> \
-i {registry-image} <1>
endif::[]
----
ifndef::openshift-origin[]
<1> Specify the official Red Hat base image by using the `-i` flag; otherwise, the Dockerfile uses the default upstream image.
endif::[]
+
[NOTE]
====
The generated Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:
.Example directory structure
[source,terminal]
----
.
├── <catalog_dir>
└── <catalog_dir>.Dockerfile
----
====
. Populate the catalog with the package definition for your extension by running the `opm init` command:
+
[source,terminal]
----
$ opm init <extension_name> \
--output json \
> <catalog_dir>/index.json
----
+
This command generates an `olm.package` declarative config blob in the specified catalog configuration file.
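The generated blob in `<catalog_dir>/index.json` is similar to the following, with an empty `defaultChannel` value that you can set later:

.Example generated `olm.package` blob
[source,json]
----
{
    "schema": "olm.package",
    "name": "<extension_name>",
    "defaultChannel": ""
}
----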

View File

@@ -0,0 +1,73 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-dependency.adoc
:_content-type: CONCEPT
[id="olmv1-dependency-concepts_{context}"]
= Concepts
Users expect that the package manager never does any of the following:
* Install a package whose dependencies cannot be fulfilled or whose dependencies conflict with those of another package
* Install a package whose constraints cannot be met by the current set of installable packages
* Update a package in a way that breaks another that depends on it
[id="olmv1-dependency-example-successful_{context}"]
== Example: Successful resolution
A user wants to install packages A and B that have the following dependencies:
|===
|Package A `v0.1.0` |Package B `latest`
|↓ (depends on) |↓ (depends on)
|Package C `v0.1.0` |Package D `latest`
|===
Additionally, the user wants to pin the version of A to `v0.1.0`.
*Packages and constraints passed to {olmv1}*
.Packages
* A
* B
.Constraints
* A `v0.1.0` depends on C `v0.1.0`
* A pinned to `v0.1.0`
* B depends on D
.Output
* Resolution set:
** A `v0.1.0`
** B `latest`
** C `v0.1.0`
** D `latest`
[id="olmv1-dependency-example-unsuccessful_{context}"]
== Example: Unsuccessful resolution
A user wants to install packages A and B that have the following dependencies:
|===
|Package A `v0.1.0` |Package B `latest`
|↓ (depends on) |↓ (depends on)
|Package C `v0.1.0` |Package C `v0.2.0`
|===
Additionally, the user wants to pin the version of A to `v0.1.0`.
*Packages and constraints passed to {olmv1}*
.Packages
* A
* B
.Constraints
* A `v0.1.0` depends on C `v0.1.0`
* A pinned to `v0.1.0`
* B `latest` depends on C `v0.2.0`
.Output
* Resolution set:
** Unable to resolve because A `v0.1.0` requires C `v0.1.0`, which conflicts with B `latest` requiring C `v0.2.0`
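The two scenarios above can be reproduced with a toy resolver. The following Python sketch is illustrative only and is not the actual {olmv1} resolver; the package names and dependency tables are hypothetical:

```python
# Illustrative constraint check, not the actual OLM 1.0 resolver.
# `deps` maps a (package, version) pair to the exact versions of
# other packages that it requires.

def resolve(requested, pins, deps):
    """Pick one version per package so that all dependency
    constraints agree; return None on a version conflict."""
    selection = {}
    queue = [(pkg, pins.get(pkg, "latest")) for pkg in requested]
    while queue:
        pkg, version = queue.pop()
        if pkg in selection:
            if selection[pkg] != version:
                return None  # conflicting version requirements
            continue
        selection[pkg] = version
        queue.extend(deps.get((pkg, version), []))
    return selection

# Successful resolution: A and B depend on different packages.
deps = {
    ("A", "v0.1.0"): [("C", "v0.1.0")],
    ("B", "latest"): [("D", "latest")],
}
print(resolve(["A", "B"], {"A": "v0.1.0"}, deps))

# Unsuccessful resolution: A needs C v0.1.0, B needs C v0.2.0.
deps_conflict = {
    ("A", "v0.1.0"): [("C", "v0.1.0")],
    ("B", "latest"): [("C", "v0.2.0")],
}
print(resolve(["A", "B"], {"A": "v0.1.0"}, deps_conflict))
```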

View File

@@ -23,6 +23,8 @@ $ oc get packages
----
+
.Example output
[%collapsible]
====
[source,text]
----
NAME AGE
@@ -39,6 +41,7 @@ redhat-operators-aws-efs-csi-driver-operator 5m27s
redhat-operators-aws-load-balancer-operator 5m27s
...
----
====
. Inspect the contents of an Operator or extension's custom resource (CR) by running the following command:
+
@@ -54,6 +57,8 @@ $ oc get package redhat-operators-quay-operator -o yaml
----
+
.Example output
[%collapsible]
====
[source,text]
----
apiVersion: catalogd.operatorframework.io/v1alpha1
@@ -200,3 +205,4 @@ spec:
packageName: quay-operator
status: {}
----
====

View File

@@ -0,0 +1,32 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/arch/olmv1-operator-controller.adoc
:_content-type: CONCEPT
[id="olmv1-operator-api_{context}"]
= Operator API
Operator Controller provides a new `Operator` API object, which is a single resource that represents an instance of an installed Operator. This `operator.operators.operatorframework.io` API streamlines management of installed Operators by consolidating user-facing APIs into a single object.
[IMPORTANT]
====
In {olmv1}, `Operator` objects are cluster-scoped. This differs from earlier OLM versions where Operators could be either namespace-scoped or cluster-scoped, depending on the configuration of their related `Subscription` and `OperatorGroup` objects.
For more information about the earlier behavior, see _Multitenancy and Operator colocation_.
====
.Example `Operator` object
[source,yaml]
----
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
name: <operator_name>
spec:
packageName: <package_name>
channel: <channel_name>
version: <version_number>
----
include::snippets/olmv1-operator-api-group.adoc[]

View File

@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-plain-bundles.adoc
:_content-type: PROCEDURE
[id="olmv1-publishing-fbc_{context}"]
= Building and publishing a file-based catalog
.Procedure
. Build your file-based catalog as an image by running the following command:
+
[source,terminal]
----
$ podman build -f <catalog_dir>.Dockerfile -t \
quay.io/<organization_name>/<repository_name>:<image_tag> .
----
. Push your catalog image by running the following command:
+
[source,terminal]
----
$ podman push quay.io/<organization_name>/<repository_name>:<image_tag>
----

View File

@@ -1,13 +1,14 @@
// Module included in the following assemblies:
//
// * operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc
// * operators/olm_v1/arch/olmv1-catalogd.adoc
:_content-type: REFERENCE
[id="olmv1-red-hat-catalogs_{context}"]
= Red Hat-provided Operator catalogs in OLM 1.0
= Red Hat-provided Operator catalogs in {olmv1}
Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create a catalog resources for OLM 1.0.
{olmv1-first} does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for {olmv1}.
.Example Red Hat Operators catalog
[source,yaml,subs="attributes+"]

View File

@@ -0,0 +1 @@
../../_attributes/

View File

@@ -0,0 +1 @@
../../images/

View File

@@ -0,0 +1 @@
../../modules/

View File

@@ -0,0 +1,23 @@
:_content-type: ASSEMBLY
[id="olmv1-catalogd"]
= Catalogd (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-catalogd
toc::[]
{olmv1-first} uses the catalogd component and its resources to manage Operator and extension catalogs.
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
include::modules/olmv1-about-catalogs.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/understanding/olm-packaging-format.adoc#olm-file-based-catalogs_olm-packaging-format[File-based catalogs]
include::modules/olmv1-red-hat-catalogs.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-adding-a-catalog-to-a-cluster_olmv1-installing-operator[Adding a catalog to a cluster]
* xref:../../../operators/understanding/olm-rh-catalogs.adoc#olm-rh-catalogs_olm-rh-catalogs[About Red Hat-provided Operator catalogs]

View File

@@ -0,0 +1,20 @@
:_content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="olmv1-components"]
= {olmv1} components overview (Technology Preview)
:context: olmv1-components
toc::[]
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
{olmv1-first} comprises the following component projects:
Operator Controller:: xref:../../../operators/olm_v1/arch/olmv1-operator-controller.adoc#olmv1-operator-controller[Operator Controller] is the central component of {olmv1} that extends Kubernetes with an API through which users can install and manage the lifecycle of Operators and extensions. It consumes information from each of the following components.
RukPak:: xref:../../../operators/olm_v1/arch/olmv1-rukpak.adoc#olmv1-rukpak[RukPak] is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy.
+
RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions.
Catalogd:: xref:../../../operators/olm_v1/arch/olmv1-catalogd.adoc#olmv1-catalogd[Catalogd] is a Kubernetes extension that unpacks file-based catalog (FBC) content packaged and shipped in container images for consumption by on-cluster clients. As a component of the {olmv1} microservices architecture, catalogd hosts metadata for Kubernetes extensions packaged by the authors of the extensions, and as a result helps users discover installable content.
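For illustration, adding a catalog with catalogd amounts to creating a single `Catalog` object that references an index image. The following sketch assumes the Technology Preview `v1alpha1` API; the image reference is an example:

[source,yaml]
----
apiVersion: catalogd.operatorframework.io/v1alpha1
kind: Catalog
metadata:
  name: redhat-operators
spec:
  source:
    type: image
    image:
      ref: registry.redhat.io/redhat/redhat-operator-index:v4.14
----

After the object is created, catalogd unpacks the referenced file-based catalog content so that on-cluster clients can discover the packaged extensions.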


@@ -0,0 +1,14 @@
:_content-type: ASSEMBLY
[id="olmv1-dependency"]
= Dependency resolution in {olmv1} (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-dependency
toc::[]
{olmv1-first} uses a dependency manager for resolving constraints over catalogs of RukPak bundles.
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
include::modules/olmv1-dependency-concepts.adoc[leveloffset=+1]


@@ -0,0 +1,20 @@
:_content-type: ASSEMBLY
[id="olmv1-operator-controller"]
= Operator Controller (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-operator-controller
toc::[]
Operator Controller is the central component of {olmv1-first} and consumes the other {olmv1} components, RukPak and catalogd. It extends Kubernetes with an API through which users can install Operators and extensions.
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
include::modules/olmv1-operator-api.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/understanding/olm/olm-colocation.adoc#olm-colocation[Operator Lifecycle Manager (OLM) -> Multitenancy and Operator colocation]
include::modules/olmv1-about-target-versions.adoc[leveloffset=+2]


@@ -0,0 +1,26 @@
:_content-type: ASSEMBLY
[id="olmv1-rukpak"]
= RukPak (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-rukpak
toc::[]
{olmv1-first} uses the RukPak component and its resources to manage cloud-native content.
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
include::modules/olm-rukpak-about.adoc[leveloffset=+1]
include::modules/olm-rukpak-provisioner.adoc[leveloffset=+1]
include::modules/olm-rukpak-bundle.adoc[leveloffset=+1]
include::modules/olm-rukpak-bundle-immutability.adoc[leveloffset=+2]
include::modules/olm-rukpak-plain-bundle.adoc[leveloffset=+2]
include::modules/olm-rukpak-registry-bundle.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
* xref:../../../operators/understanding/olm-packaging-format.adoc#olm-bundle-format_olm-packaging-format[Legacy OLM bundle format]
include::modules/olm-rukpak-bd.adoc[leveloffset=+1]
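As an illustration of the RukPak resources described in the modules above, a plain bundle can be installed by creating a `BundleDeployment` object that embeds a `Bundle` template. The following is a sketch that assumes the `v1alpha1` RukPak API; the object name and image reference are hypothetical:

[source,yaml]
----
apiVersion: core.rukpak.io/v1alpha1
kind: BundleDeployment
metadata:
  name: my-bundle-deployment
spec:
  provisionerClassName: core-rukpak-io-plain
  template:
    metadata:
      labels:
        app: my-bundle
    spec:
      provisionerClassName: core-rukpak-io-plain
      source:
        type: image
        image:
          ref: quay.io/example/my-plain-bundle:latest
----

The provisioner class name selects how the content is unpacked and applied; in this sketch, the plain provisioner handles both the bundle and its deployment, while a registry bundle would instead use a registry provisioner class.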


@@ -0,0 +1 @@
../../snippets/


@@ -1,32 +1,39 @@
:_content-type: ASSEMBLY
[id="olmv1-about"]
= About Operator Lifecycle Manager v1 (Technology Preview)
= About Operator Lifecycle Manager 1.0 (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-about
toc::[]
{product-title} 4.14 introduces components for a next-generation iteration of Operator Lifecycle Manager (OLM) as a Technology Preview feature. Known during this phase as OLM v1, the updated framework evolves many of the concepts that have been part of the version of OLM included since the release of {product-title} 4.
Operator Lifecycle Manager (OLM) has been included with {product-title} 4 since its initial release. {product-title} 4.14 introduces components for a next-generation iteration of OLM as a Technology Preview feature, known during this phase as _{olmv1}_. This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities.
:FeatureName: OLM v1
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
During this Technology Preview phase of OLM v1 in {product-title} 4.14, administrators can use file-based catalogs to install and manage the following:
During this Technology Preview phase of {olmv1} in {product-title} 4.14, administrators can explore the following features:
Fully declarative model that supports GitOps workflows::
{olmv1} simplifies Operator management through two key APIs:
+
--
* A new `Operator` API, provided as `operator.operators.operatorframework.io` by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
* The `Catalog` API, provided by the new catalogd component, serves as the foundation for {olmv1}, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.
--
+
For more information, see xref:../../operators/olm_v1/arch/olmv1-operator-controller.adoc#olmv1-operator-controller[Operator Controller] and xref:../../operators/olm_v1/arch/olmv1-catalogd.adoc#olmv1-catalogd[Catalogd].
Improved control over Operator updates::
With improved insight into catalog content, administrators can specify a target version for Operator installation and updates, which gives them finer control over the version of an Operator that runs on the cluster. For more information, see xref:../../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-updating-an-operator_olmv1-installing-operator[Updating an Operator].
Flexible Operator packaging format::
Administrators can use file-based catalogs to install and manage the following types of content:
+
--
* OLM-based Operators, similar to the existing OLM experience
* Plain bundles, which are static collections of arbitrary Kubernetes manifests
* _Plain bundles_, which are static collections of arbitrary Kubernetes manifests
--
+
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see xref:../../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-installing-an-operator-from-a-catalog[Installing an Operator from a catalog] and xref:../../operators/olm_v1/olmv1-managing-plain-bundles.adoc#olmv1-managing-plain-bundles[Managing plain bundles].
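With the fully declarative model, installing an Operator reduces to applying a single custom resource. The following sketch assumes the Technology Preview `v1alpha1` API; the object name, package name, and version are examples:

[source,yaml]
----
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  name: quay-example
spec:
  packageName: quay-operator
  version: 3.8.12
----

Because the desired state is captured entirely in the object, the same manifest can be stored in Git and applied through a GitOps pipeline.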
[id="olmv1-about-purpose"]
== Purpose
The mission of Operator Lifecycle Manager (OLM) has been to manage the lifecycle of cluster extensions centrally and declaratively on Kubernetes clusters. Its purpose has always been to make installing, running, and updating functional extensions to the cluster easy, safe, and reproducible for cluster administrators and platform-as-a-service (PaaS) administrators, throughout the lifecycle of the underlying cluster.
The existing version of OLM, which launched with {product-title} 4 and is included by default, was focused on providing unique support for these specific needs for a particular type of cluster extension, which have been referred to as Operators. Operators are classified as one or more Kubernetes controllers, shipping with one or more API extensions (`CustomResourceDefinition` objects) to provide additional functionality to the cluster.
After running OLM in production clusters for many releases, it became apparent that there was a desire to move beyond this coupling of CRDs and controllers and to encompass lifecycle management for extensions that are not just Operators.
Some of the goals of OLM v1 over the upcoming releases include improving Operator and extension lifecycle management in the following areas:
* Tenant isolation
* Dependencies and constraints
* Simplified packaging models
include::modules/olmv1-about-purpose.adoc[leveloffset=+1]


@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="olmv1-installing-an-operator-from-a-catalog"]
= Installing an Operator from a catalog in OLM 1.0 (Technology Preview)
= Installing an Operator from a catalog in {olmv1} (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-installing-operator
@@ -8,22 +8,26 @@ toc::[]
Cluster administrators can add _catalogs_, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.
In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
In the current Technology Preview release of {olmv1-first}, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).
:FeatureName: OLM 1.0
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
[id="prerequisites_olmv1-installing-an-operator-from-a-catalog"]
== Prerequisites
* Access to an {product-title} cluster using an account with `cluster-admin` permissions
* The `oc` command installed on your workstation
+
--
include::snippets/olmv1-cli-only.adoc[]
--
* The `TechPreviewNoUpgrade` feature set enabled on the cluster
+
[WARNING]
====
Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
====
* The OpenShift CLI (`oc`) installed on your workstation
[role="_additional-resources"]
.Additional resources


@@ -1,51 +0,0 @@
:_content-type: ASSEMBLY
[id="olmv1-managing-catalogs"]
= Managing catalogs for OLM v1 (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-managing-catalogs
toc::[]
In OLM v1, a _plain bundle_ is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental `olm.bundle.mediatype` property of the `olm.bundle` schema object differentiates a plain bundle (`plain+v0`) from a regular (`registry+v1`) bundle.
:FeatureName: OLM v1
include::snippets/technology-preview.adoc[]
// For more information, see the [Plain Bundle Specification](https://github.com/operator-framework/rukpak/blob/main/docs/bundles/plain.md) in the RukPak repository.
As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:
. Build a plain bundle image.
. Create a file-based catalog.
. Add the plain bundle image to your file-based catalog.
. Build your catalog as an image.
. Publish your catalog image.
[role="_additional-resources"]
.Additional resources
* xref:../../operators/olm_v1/olmv1-packaging-format.adoc#olmv1-packaging-format[RukPak component and packaging format]
[id="prerequisites_olmv1-plain-bundles"]
== Prerequisites
- Access to an {product-title} cluster using an account with `cluster-admin` permissions
- The `TechPreviewNoUpgrade` feature set enabled on the cluster
+
[WARNING]
====
Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
====
- The `oc` command installed on your workstation
- `opm` CLI tool
- Docker or Podman
- Push access to a container registry, such as link:https://quay.io[Quay]
[role="_additional-resources"]
.Additional resources
* xref:../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates]
// - Only the `redhat-operators` catalog source enabled on the cluster. This is a restriction during the Technology Preview release.
include::modules/olmv1-catalog-plain.adoc[leveloffset=+1]


@@ -0,0 +1,72 @@
:_content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="olmv1-managing-plain-bundles"]
= Managing plain bundles in {olmv1} (Technology Preview)
:context: olmv1-managing-catalogs
toc::[]
In {olmv1-first}, a _plain bundle_ is a static collection of arbitrary Kubernetes manifests in YAML format. The experimental `olm.bundle.mediatype` property of the `olm.bundle` schema object differentiates a plain bundle (`plain+v0`) from a regular (`registry+v1`) bundle.
:FeatureName: {olmv1}
include::snippets/technology-preview.adoc[]
// For more information, see the [Plain Bundle Specification](https://github.com/operator-framework/rukpak/blob/main/docs/bundles/plain.md) in the RukPak repository.
As a cluster administrator, you can build and publish a file-based catalog that includes a plain bundle image by completing the following procedures:
. Build a plain bundle image.
. Create a file-based catalog.
. Add the plain bundle image to your file-based catalog.
. Build your catalog as an image.
. Publish your catalog image.
[role="_additional-resources"]
.Additional resources
* xref:../../operators/olm_v1/arch/olmv1-rukpak.adoc#olmv1-rukpak[RukPak component and packaging format]
[id="prerequisites_olmv1-plain-bundles"]
== Prerequisites
* Access to an {product-title} cluster using an account with `cluster-admin` permissions
+
--
include::snippets/olmv1-cli-only.adoc[]
--
* The `TechPreviewNoUpgrade` feature set enabled on the cluster
+
[WARNING]
====
Enabling the `TechPreviewNoUpgrade` feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters.
====
* The OpenShift CLI (`oc`) installed on your workstation
* The `opm` CLI installed on your workstation
* Docker or Podman installed on your workstation
* Push access to a container registry, such as link:https://quay.io[Quay]
* Kubernetes manifests for your bundle in a flat directory at the root of your project similar to the following structure:
+
.Example directory structure
[source,terminal]
----
manifests
├── namespace.yaml
├── service_account.yaml
├── cluster_role.yaml
├── cluster_role_binding.yaml
└── deployment.yaml
----
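A plain bundle image containing such a directory can be built with a minimal Dockerfile. The following is a sketch; because a plain bundle holds only static manifests, the image can be built from `scratch`:

[source,dockerfile]
----
FROM scratch
COPY manifests /manifests
----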
[role="_additional-resources"]
.Additional resources
* xref:../../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates]
// - Only the `redhat-operators` catalog source enabled on the cluster. This is a restriction during the Technology Preview release.
include::modules/olmv1-building-plain-image.adoc[leveloffset=+1]
include::modules/olmv1-creating-fbc.adoc[leveloffset=+1]
include::modules/olmv1-adding-plain-to-fbc.adoc[leveloffset=+1]
include::modules/olmv1-publishing-fbc.adoc[leveloffset=+1]


@@ -1,31 +0,0 @@
:_content-type: ASSEMBLY
[id="olmv1-packaging-format"]
= Packaging format for OLM v1 (Technology Preview)
include::_attributes/common-attributes.adoc[]
:context: olmv1-packaging-format
toc::[]
OLM v1 uses the RukPak component and its resources to manage content.
:FeatureName: OLM v1
include::snippets/technology-preview.adoc[]
include::modules/olm-rukpak-about.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/admin/olm-managing-po.adoc#olm-managing-po[Managing platform Operators]
* xref:../../operators/admin/olm-managing-po.adoc#olm-po-techpreview_olm-managing-po[Technology Preview restrictions for platform Operators]
include::modules/olm-rukpak-bundle.adoc[leveloffset=+2]
include::modules/olm-rukpak-bundle-immutability.adoc[leveloffset=+3]
include::modules/olm-rukpak-plain-bundle.adoc[leveloffset=+3]
include::modules/olm-rukpak-registry-bundle.adoc[leveloffset=+3]
[role="_additional-resources"]
.Additional resources
* xref:../../operators/understanding/olm-packaging-format.adoc#olm-bundle-format_olm-packaging-format[Legacy OLM bundle format]
include::modules/olm-rukpak-bd.adoc[leveloffset=+2]
include::modules/olm-rukpak-provisioner.adoc[leveloffset=+2]


@@ -71,6 +71,7 @@ include::modules/olm-rukpak-about.adoc[leveloffset=+1]
* xref:../../operators/admin/olm-managing-po.adoc#olm-managing-po[Managing platform Operators]
* xref:../../operators/admin/olm-managing-po.adoc#olm-po-techpreview_olm-managing-po[Technology Preview restrictions for platform Operators]
* xref:../../operators/olm_v1/index.adoc#olmv1-about[About Operator Lifecycle Manager 1.0 (Technology Preview)]
include::modules/olm-rukpak-bundle.adoc[leveloffset=+2]
include::modules/olm-rukpak-bundle-immutability.adoc[leveloffset=+3]


@@ -970,13 +970,13 @@ Fully declarative model that supports GitOps workflows::
+
--
* A new `Operator` API, provided as `operator.operators.operatorframework.io` by the new Operator Controller component, streamlines management of installed Operators by consolidating user-facing APIs into a single object. This empowers administrators and SREs to automate processes and define desired states by using GitOps principles.
* The `Catalog` API, provided by the new Catalogd component, serves as the foundation for {olmv1}, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.
* The `Catalog` API, provided by the new catalogd component, serves as the foundation for {olmv1}, unpacking catalogs for on-cluster clients so that users can discover installable content, such as Operators and Kubernetes extensions. This provides increased visibility into all available Operator bundle versions, including their details, channels, and update edges.
--
+
For more information, see _Operator Controller_ and _Catalogd_.
For more information, see xref:../operators/olm_v1/arch/olmv1-operator-controller.adoc#olmv1-operator-controller[Operator Controller] and xref:../operators/olm_v1/arch/olmv1-catalogd.adoc#olmv1-catalogd[Catalogd].
Improved control over Operator updates::
With improved insight into catalog content, administrators can specify target versions for installation and updates. This grants administrators more control over the target version of Operator updates. For more information, see _Installing an Operator from a catalog_.
With improved insight into catalog content, administrators can specify a target version for Operator installation and updates, which gives them finer control over the version of an Operator that runs on the cluster. For more information, see xref:../operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc#olmv1-installing-an-operator-from-a-catalog[Installing an Operator from a catalog].
Flexible Operator packaging format::
Administrators can use file-based catalogs to install and manage the following types of content:
@@ -986,14 +986,11 @@ Administrators can use file-based catalogs to install and manage the following t
* _Plain bundles_, which are static collections of arbitrary Kubernetes manifests
--
+
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see _Managing catalogs in {olmv1}_.
In addition, bundle size is no longer constrained by the etcd value size limit. For more information, see xref:../operators/olm_v1/olmv1-managing-plain-bundles.adoc#olmv1-managing-plain-bundles[Managing plain bundles in {olmv1}].
[NOTE]
====
For {product-title} {product-version}, documented procedures for {olmv1} are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the *Import YAML* and *Search* pages. However, the existing *OperatorHub* and *Installed Operators* pages do not yet observe {olmv1} components.
====
include::snippets/olmv1-cli-only.adoc[]
For more information, see _{olmv1-first}_.
For more information, see xref:../operators/olm_v1/index.adoc#olmv1-about[About Operator Lifecycle Manager 1.0].
[id="ocp-4-14-osdk"]
=== Operator development


@@ -0,0 +1,11 @@
// Text snippet included in the following modules:
//
// * operators/olm_v1/olmv1-installing-an-operator-from-a-catalog.adoc
// * operators/olm_v1/olmv1-managing-plain-bundles.adoc
:_content-type: SNIPPET
[NOTE]
====
For {product-title} 4.14, documented procedures for {olmv1} are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the *Import YAML* and *Search* pages. However, the existing *OperatorHub* and *Installed Operators* pages do not yet display {olmv1} components.
====


@@ -0,0 +1,17 @@
// Text snippet included in the following modules:
//
// * modules/olmv1-operator-api.adoc
:_content-type: SNIPPET
[NOTE]
====
When using the OpenShift CLI (`oc`), the `Operator` resource provided with {olmv1} during this Technology Preview phase requires specifying the full `<resource>.<group>` format: `operator.operators.operatorframework.io`. For example:
[source,terminal]
----
$ oc get operator.operators.operatorframework.io
----
If you specify only the `Operator` resource without the API group, the CLI returns results for an earlier API (`operator.operators.coreos.com`) that is unrelated to {olmv1}.
====