
Added Kafka docs

This commit is contained in:
abrennan
2020-10-14 10:37:19 -05:00
committed by openshift-cherrypick-robot
parent 6f6ffe0b15
commit 46440f99e1
5 changed files with 147 additions and 1 deletion

_topic_map.yml

@@ -1629,7 +1629,7 @@ Topics:
- Name: Creating a multicomponent application with odo
File: creating-a-multicomponent-application-with-odo
- Name: Creating an application with a database
File: creating-an-application-with-a-database
- Name: Using devfiles in odo
File: using-devfiles-in-odo
- Name: Working with storage
@@ -2545,6 +2545,9 @@ Topics:
File: serverless-apiserversource
- Name: Using PingSource
File: serverless-pingsource
# Knative Kafka
# - Name: Using Apache Kafka with OpenShift Serverless
# File: serverless-kafka
# Networking
- Name: Networking
Dir: networking

images/knative-kafka-overview.png

Binary file not shown. (New image; size: 228 KiB)

modules/serverless-install-kafka-odc.adoc

@@ -0,0 +1,49 @@
// Module is included in the following assemblies:
//
// serverless/serverless-kafka.adoc
[id="serverless-install-kafka-odc_{context}"]
= Installing Apache Kafka components using the web console
Cluster administrators can enable Apache Kafka functionality in an {ServerlessProductName} deployment by creating an instance of the `KnativeKafka` custom resource, which is provided by the *Knative Kafka* API of the {ServerlessOperatorName}.

.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.
* You are logged in to the web console.

.Procedure
. In the *Administrator* perspective, navigate to *Operators* → *Installed Operators*.
. Check that the *Project* dropdown at the top of the page is set to *Project: knative-eventing*.
. Click *Knative Kafka* in the list of *Provided APIs* for the {ServerlessOperatorName} to go to the *Knative Kafka* tab.
. Click *Create Knative Kafka*.
. Optional: Configure the *KnativeKafka* object in the *Create Knative Kafka* page. To do so, use either the default form provided or edit the YAML.
.. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
.. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page; a sketch of this YAML follows the procedure.
. Click *Create* after you have completed any of the optional configurations for Kafka. You are automatically directed to the *Knative Kafka* tab where *knative-kafka* is in the list of resources.
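
For reference, a `KnativeKafka` object created through the YAML editor resembles the following sketch, which mirrors the example in the YAML installation module. The `<bootstrap_server>` value is a placeholder for the bootstrap servers of your AMQ Streams cluster:

[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true
    bootstrapServers: <bootstrap_server>
  source:
    enabled: true
----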

.Verification steps
. Click the *knative-kafka* resource in the *Knative Kafka* tab. You are automatically directed to the *Knative Kafka Overview* page.
. View the list of *Conditions* for the resource and confirm that they have a status of *True*.
+
image::knative-kafka-overview.png[Kafka Knative Overview page showing Conditions]
+
If the conditions have a status of *Unknown* or *False*, wait a few moments and then refresh the page.
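+
You can also check the same conditions from the CLI by using the command shown in the YAML installation module:
+
[source,terminal]
----
$ oc get knativekafka.operator.serverless.openshift.io/knative-kafka \
  -n knative-eventing \
  --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
----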
. Check that the Knative Kafka resources have been created:
+
[source,terminal]
----
$ oc get pods -n knative-eventing
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
kafka-ch-controller-5d85f5f779-kqvs4 1/1 Running 0 126m
kafka-webhook-66bd8688d6-2grvf 1/1 Running 0 126m
----

modules/serverless-install-kafka-yaml.adoc

@@ -0,0 +1,77 @@
// Module is included in the following assemblies:
//
// serverless/serverless-kafka.adoc
[id="serverless-install-kafka-yaml_{context}"]
= Installing Apache Kafka components using YAML
Cluster administrators can enable Apache Kafka functionality in an {ServerlessProductName} deployment by creating an instance of the `KnativeKafka` custom resource, which is provided by the *Knative Kafka* API of the {ServerlessOperatorName}.

.Prerequisites
* The {ServerlessOperatorName} and Knative Eventing are installed.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.

.Procedure
. Create a YAML file that contains the following:
+
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_server> <2>
  source:
    enabled: true <3>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.
. Apply the YAML file:
+
[source,terminal]
----
$ oc apply -f <filename>
----

.Verification steps
. Verify that the Kafka installation completed successfully by checking the installation status conditions. For example:
+
[source,terminal]
----
$ oc get knativekafka.operator.serverless.openshift.io/knative-kafka \
-n knative-eventing \
--template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
----
+
.Example output
[source,terminal]
----
DeploymentsAvailable=True
InstallSucceeded=True
Ready=True
----
+
If the conditions have a status of `Unknown` or `False`, wait a few moments and then try again.
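+
Alternatively, you can block until the resource reports `Ready` instead of polling. This is a sketch that assumes the standard `oc wait` behavior for custom resources; the timeout value is an arbitrary choice:
+
[source,terminal]
----
$ oc wait knativekafka/knative-kafka -n knative-eventing \
  --for=condition=Ready \
  --timeout=300s
----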
. Check that the Knative Kafka resources have been created:
+
[source,terminal]
----
$ oc get pods -n knative-eventing
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
kafka-ch-controller-5d85f5f779-kqvs4 1/1 Running 0 126m
kafka-webhook-66bd8688d6-2grvf 1/1 Running 0 126m
----

serverless/serverless-kafka.adoc

@@ -0,0 +1,17 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-kafka"]
= Using Apache Kafka with {ServerlessProductName}
include::modules/common-attributes.adoc[]
:context: serverless-kafka

toc::[]

:FeatureName: Apache Kafka on {ServerlessProductName}
include::modules/technology-preview.adoc[leveloffset=+2]

You can use the `KafkaChannel` channel type and `KafkaSource` event source with {ServerlessProductName}.
To do this, you must install the Knative Kafka components and configure the integration between {ServerlessProductName} and a supported link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/index[Red Hat AMQ Streams] cluster.
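
After the Knative Kafka components are installed, developers can create Kafka-backed resources. The following is a minimal sketch, not a definitive example: the exact `apiVersion` values depend on the installed Knative Kafka release, and the resource names, topic, and `event-display` sink service are hypothetical.

[source,yaml]
----
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: example-channel # hypothetical name
  namespace: default
spec:
  numPartitions: 3 # Kafka partitions backing the channel
  replicationFactor: 1
---
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: example-source # hypothetical name
  namespace: default
spec:
  bootstrapServers:
    - <bootstrap_server> # your AMQ Streams bootstrap server
  topics:
    - example-topic # hypothetical Kafka topic to consume from
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # any addressable sink
----
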
// Install Kafka
include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]
include::modules/serverless-install-kafka-yaml.adoc[leveloffset=+1]