
RHDEVDOCS-1952: Add Kafka source using ODC

abrennan authored on 2020-11-02 13:03:36 -06:00; committed by openshift-cherrypick-robot
parent a2ab11da3e
commit 2f6a13ce32
12 changed files with 158 additions and 91 deletions

View File

@@ -2542,9 +2542,6 @@ Topics:
# Channels
- Name: Event delivery workflows using channels
File: serverless-channels
# Sinkbinding
- Name: Using SinkBinding
File: serverless-sinkbinding
# Event sources
- Name: Event sources
Dir: event_sources
@@ -2557,9 +2554,13 @@ Topics:
File: serverless-apiserversource
- Name: Using PingSource
File: serverless-pingsource
- Name: Using SinkBinding
File: serverless-sinkbinding
- Name: Using a Kafka source
File: serverless-kafka-source
# Knative Kafka
# - Name: Using Apache Kafka with OpenShift Serverless
# File: serverless-kafka
- Name: Using Apache Kafka with OpenShift Serverless
File: serverless-kafka
# Networking
- Name: Networking
Dir: networking

View File

@@ -28,7 +28,7 @@ Key features of `kn` include:
====
Knative Eventing is currently available as a Technology Preview feature of {ServerlessProductName}.
====
* Create xref:../serverless/event_workflows/serverless-sinkbinding.adoc#serverless-sinkbinding[sink binding] to connect existing Kubernetes applications and Knative services.
* Create xref:../serverless/event_sources/serverless-sinkbinding.adoc#serverless-sinkbinding[sink binding] to connect existing Kubernetes applications and Knative services.
* Extend `kn` with flexible plugin architecture, similar to `kubectl`.
// * Easily integrate {ServerlessProductName} with OpenShift Pipelines by using `kn` in an OpenShift Pipelines task
// TODO: Add integrations later when we have docs about this.

BIN images/verify-kafka-ODC.png (new file, 108 KiB; binary file not shown)

View File

@@ -9,7 +9,7 @@ Cluster administrators can enable the use of Apache Kafka functionality in an {S
.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed.
* You have installed {ServerlessProductName}, including Knative Eventing, in your {product-title} cluster.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.
* You are logged in to the web console.

View File

@@ -1,77 +0,0 @@
// Module is included in the following assemblies:
//
// serverless/serverless-kafka.adoc
[id="serverless-install-kafka-yaml_{context}"]
= Installing Apache Kafka components using YAML
Cluster administrators can enable the use of Apache Kafka functionality in an {ServerlessProductName} deployment by instantiating the `KnativeKafka` custom resource, which is provided by the *Knative Kafka* API of the {ServerlessOperatorName}.
.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.
.Procedure
. Create a YAML file that contains the following:
+
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_server> <2>
  source:
    enabled: true <3>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.
. Apply the YAML file:
+
[source,terminal]
----
$ oc apply -f <filename>
----
.Verification steps
. Verify that the Kafka installation completed successfully by checking the installation status conditions. For example:
+
[source,terminal]
----
$ oc get knativekafka.operator.serverless.openshift.io/knative-kafka \
-n knative-eventing \
--template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
----
+
.Example output
[source,terminal]
----
DeploymentsAvailable=True
InstallSucceeded=True
Ready=True
----
+
If the conditions have a status of `Unknown` or `False`, wait a few moments and then try again.
. Check that the Knative Kafka resources have been created:
+
[source,terminal]
----
$ oc get pods -n knative-eventing
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
kafka-ch-controller-5d85f5f779-kqvs4 1/1 Running 0 126m
kafka-webhook-66bd8688d6-2grvf 1/1 Running 0 126m
----

View File

@@ -0,0 +1,37 @@
// Module included in the following assemblies:
//
// * serverless/event_sources/serverless-kafka-source.adoc
[id="serverless-kafka-source-odc_{context}"]
= Creating a Kafka event source by using the web console
You can create and verify a Kafka event source from the {product-title} web console.
.Prerequisites
* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your cluster.
* You have logged in to the web console.
* You are in the *Developer* perspective.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure
. Navigate to the *Add* page and select *Event Source*.
. On the *Event Sources* page, select *Kafka Source* in the *Type* section.
. Configure the *Kafka Source* settings:
.. Add a comma-separated list of *Bootstrap Servers*.
.. Add a comma-separated list of *Topics*.
.. Add a *Consumer Group*.
.. Select the *Service Account Name* for the service account that you created.
.. Select the *Sink* for the event source. A *Sink* can be either a *Resource*, such as a channel, broker, or service, or a *URI*.
.. Enter a *Name* for the Kafka event source.
. Click *Create*.
.Verification steps
You can verify that the Kafka event source was created and is connected to the sink by viewing the *Topology* page.
. In the *Developer* perspective, navigate to *Topology*.
. View the Kafka event source and sink.
+
image::verify-kafka-ODC.png[View the Kafka source and service in the Topology view]
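. Optional: Confirm the source from the command line by listing sources with the `kn` CLI. This is a minimal sketch; the output columns are illustrative and vary by `kn` version, and the name shown assumes you used `kafka-source` as the *Name*:
+
[source,terminal]
----
$ kn source list
----
+
.Example output
[source,terminal]
----
NAME           TYPE          RESOURCE                           SINK                 READY
kafka-source   KafkaSource   kafkasources.sources.knative.dev   ksvc:event-display   True
----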

View File

@@ -0,0 +1,61 @@
// Module included in the following assemblies:
//
// * serverless/event_sources/serverless-kafka-source.adoc
[id="serverless-kafka-source-yaml_{context}"]
= Creating a Kafka event source by using YAML
You can create a Kafka event source by using YAML.
.Prerequisites
* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
.Procedure
. Create a YAML file containing the following:
+
[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: <source-name>
spec:
  consumerGroup: <group-name> <1>
  bootstrapServers:
  - <list-of-bootstrap-servers>
  topics:
  - <list-of-topics> <2>
  sink:
----
<1> A consumer group is a group of consumers that use the same group ID and consume data from a topic.
<2> A topic provides a destination for the storage of data. Each topic is split into one or more partitions.
+
.Example `KafkaSource` object
[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group
  bootstrapServers:
  - my-cluster-kafka-bootstrap.kafka:9092
  topics:
  - knative-demo-topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
----
. Apply the YAML file:
+
[source,terminal]
----
$ oc apply -f <filename>
----
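.Verification steps
* Optional: Check that the source became ready by inspecting its status conditions. This minimal sketch mirrors the conditions check used for the Kafka installation elsewhere in these docs; the exact condition names can vary by release:
+
[source,terminal]
----
$ oc get kafkasource <source-name> \
--template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
----
+
.Example output
[source,terminal]
----
Deployed=True
Ready=True
SinkProvided=True
----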

View File

@@ -7,22 +7,24 @@ toc::[]
An _event source_ is an object that links an event producer with an event _sink_, or consumer. A sink can be a Knative service, channel, or broker that receives events from an event source.
[id="knative-event-sources-creating"]
== Creating event sources
Currently, {ServerlessProductName} supports the following event source types:
ApiServerSource:: Connects a sink to the Kubernetes API server.
PingSource:: Periodically sends ping events with a constant payload. It can be used as a timer.
SinkBinding:: Allows you to connect core Kubernetes resource objects such as a `Deployment`, `Job`, or `StatefulSet` with a sink.
KafkaSource:: Connects a Kafka cluster to a sink as an event source.
:FeatureName: Apache Kafka on {ServerlessProductName}
include::modules/technology-preview.adoc[leveloffset=+2]
You can create and manage Knative event sources using the *Developer* perspective in the {product-title} web console, the `kn` CLI, or by applying YAML files, as shown in the `kn` sketch after the following list.
* Create an xref:../../serverless/event_sources/serverless-apiserversource.adoc#serverless-apiserversource[ApiServerSource].
* Create an xref:../../serverless/event_sources/serverless-pingsource.adoc#serverless-pingsource[PingSource].
* Create a xref:../../serverless/event_workflows/serverless-sinkbinding.adoc#serverless-sinkbinding[SinkBinding].
// Add Kafka once docs available
* Create a xref:../../serverless/event_sources/serverless-sinkbinding.adoc#serverless-sinkbinding[SinkBinding].
* Create a xref:../../serverless/event_sources/serverless-kafka-source.adoc#serverless-kafka-source[KafkaSource].
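For example, you can list the source types available on your cluster with the `kn` CLI. This is a minimal sketch; the types and descriptions returned depend on what is installed in your cluster:

[source,terminal]
----
$ kn source list-types
----

.Example output
[source,terminal]
----
TYPE              NAME                                   DESCRIPTION
ApiServerSource   apiserversources.sources.knative.dev   Watch and send Kubernetes API events to a sink
KafkaSource       kafkasources.sources.knative.dev       Route events from an Apache Kafka cluster to a sink
PingSource        pingsources.sources.knative.dev        Periodically send ping events to a sink
SinkBinding       sinkbindings.sources.knative.dev       Binding for connecting a PodSpecable to a sink
----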
[id="knative-event-sources-additional-resources"]
== Additional resources
* For more information about eventing workflows using {ServerlessProductName}, see xref:../../serverless/architecture/serverless-event-architecture.adoc#serverless-event-architecture[Knative Eventing architecture].
* For more information about using Kafka event sources, see xref:../../serverless/serverless-kafka.adoc#serverless-kafka[Using Apache Kafka with {ServerlessProductName}].

View File

@@ -0,0 +1,19 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-kafka-source"]
= Using a Kafka source
include::modules/common-attributes.adoc[]
:context: serverless-kafka-source
toc::[]
:FeatureName: Apache Kafka on {ServerlessProductName}
include::modules/technology-preview.adoc[leveloffset=+2]
The Apache Kafka event source brings messages into Knative. It reads events from an Apache Kafka cluster and passes these events to an event sink so that they can be consumed. You can use the `KafkaSource` event source with {ServerlessProductName}.
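A common sink for trying out the Kafka source is an event-display service that logs the events it receives. The following is a minimal sketch; the container image shown is the upstream Knative example image, assumed here for illustration rather than a Red Hat-supported image:

[source,yaml]
----
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display <1>
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display <2>
----
<1> This name matches the `sink.ref` used in the example `KafkaSource` object later in this document.
<2> Upstream example image that prints received CloudEvents to its log; assumed for illustration.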
include::modules/serverless-kafka-source-odc.adoc[leveloffset=+1]
include::modules/serverless-kafka-source-yaml.adoc[leveloffset=+1]
[id="serverless-kafka-source-additional-resources"]
== Additional resources
* See the link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/kafka-concepts_str#kafka-concepts-key_str[Red Hat AMQ Streams] documentation for more information about Kafka concepts.

View File

@@ -11,7 +11,6 @@ A PingSource is used to periodically send ping events with a constant payload to
A PingSource can be used to schedule sending events, similar to a timer.
.Example PingSource YAML
[source,yaml]
----
apiVersion: sources.knative.dev/v1alpha2

View File

@@ -12,6 +12,31 @@ include::modules/technology-preview.adoc[leveloffset=+2]
You can use the `KafkaChannel` channel type and `KafkaSource` event source with {ServerlessProductName}.
To do this, you must install the Knative Kafka components and configure the integration between {ServerlessProductName} and a supported link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/index[Red Hat AMQ Streams] cluster.
The {ServerlessOperatorName} provides the Knative Kafka API that can be used to create a `KnativeKafka` custom resource:
.Example `KnativeKafka` custom resource
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_server> <2>
  source:
    enabled: true <3>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.
// Install Kafka
include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]
include::modules/serverless-install-kafka-yaml.adoc[leveloffset=+1]
[id="serverless-kafka-next-steps"]
== Next steps
* Create a xref:../serverless/event_sources/serverless-kafka-source.adoc#serverless-kafka-source[KafkaSource].