:_content-type: ASSEMBLY
[id="serverless-kafka-developer"]
= Knative Kafka
include::modules/serverless-document-attributes.adoc[]
include::modules/common-attributes.adoc[]
:context: serverless-kafka-developer

toc::[]

Knative Kafka functionality is available in an {ServerlessProductName} installation xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-install-kafka-odc_serverless-kafka-admin[if a cluster administrator has installed the `KnativeKafka` custom resource].

[NOTE]
====
Knative Kafka is not currently supported for IBM Z and IBM Power Systems.
====

Knative Kafka provides additional options, such as:

* Kafka source
* Kafka channel
* Kafka broker
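
A cluster administrator enables these options in the `KnativeKafka` custom resource. The following minimal sketch enables the Kafka source and channel options; broker configuration is covered in the cluster administrator documentation, and the `bootstrapServers` address is a placeholder for your own Kafka cluster:

[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  source:
    # Enables the KafkaSource event source type
    enabled: true
  channel:
    # Enables the KafkaChannel messaging layer
    enabled: true
    # Placeholder address of your Kafka cluster
    bootstrapServers: my-cluster-kafka-bootstrap.kafka.svc:9092
----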
:FeatureName: Kafka broker
include::modules/technology-preview.adoc[leveloffset=+1]

include::modules/serverless-kafka-event-delivery.adoc[leveloffset=+1]

See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-event-delivery[Event delivery] documentation for more information about delivery guarantees.
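
For example, delivery parameters such as retries and a dead letter sink can be set in the `delivery` spec of a subscribing resource. The following trigger is a minimal sketch; the broker, sink, and subscriber names are placeholders:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: example-trigger
spec:
  broker: example-kafka-broker
  delivery:
    # Retry a failed delivery up to three times, backing off exponentially
    retry: 3
    backoffPolicy: exponential
    backoffDelay: PT0.5S
    # Events that still fail are sent to this dead letter sink
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: example-dead-letter-sink
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: example-subscriber
----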
[id="serverless-kafka-developer-source"]
== Kafka source
You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink.
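
For example, a minimal `KafkaSource` object might look like the following sketch, in which the bootstrap server address, consumer group, topic, and sink names are placeholders:

[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: example-kafka-source
spec:
  # Consumer group used to read from the topics
  consumerGroup: example-group
  # Placeholder address of your Kafka cluster
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092
  # Topics to read events from
  topics:
    - example-topic
  # Sink that receives the events, in this case a Knative service
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: example-service
----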
// dev console
include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2]

// kn commands
include::modules/serverless-kafka-source-kn.adoc[leveloffset=+2]
include::modules/specifying-sink-flag-kn.adoc[leveloffset=+3]

// YAML
include::modules/serverless-kafka-source-yaml.adoc[leveloffset=+2]
[id="serverless-kafka-developer-broker"]
== Kafka broker
:FeatureName: Kafka broker
include::modules/technology-preview.adoc[leveloffset=+2]
If a cluster administrator has configured your {ServerlessProductName} deployment to use Kafka broker as the default broker type, xref:../../serverless/develop/serverless-using-brokers.adoc#serverless-using-brokers-creating-brokers[creating a broker by using the default settings] creates a Kafka-based `Broker` object. If your {ServerlessProductName} deployment is not configured to use Kafka broker as the default broker type, you can still use the following procedure to create a Kafka-based broker.
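
A Kafka-based broker is a `Broker` object whose broker class annotation is set to `Kafka`. A minimal sketch, with a placeholder broker name and the default broker config map, might look as follows:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    # Selects the Kafka broker implementation instead of the default broker class
    eventing.knative.dev/broker.class: Kafka
  name: example-kafka-broker
spec:
  config:
    # Config map that holds Kafka-specific settings for this broker
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config
    namespace: knative-eventing
----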
include::modules/serverless-kafka-broker.adoc[leveloffset=+2]

// Kafka channels
include::modules/serverless-create-kafka-channel-yaml.adoc[leveloffset=+1]
[id="additional-resources_serverless-kafka-developer"]
== Additional resources
* See the link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/kafka-concepts_str#kafka-concepts-key_str[Red Hat AMQ Streams] documentation for more information about Kafka concepts.
* See the Red Hat AMQ Streams link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html-single/using_amq_streams_on_rhel/index#assembly-kafka-encryption-and-authentication-str[TLS and SASL on Kafka] documentation.
* See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-event-delivery[Event delivery] documentation for information about configuring event delivery parameters.
* See the xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-kafka-admin[Knative Kafka cluster administrator documentation] for information about installing Kafka components and setting default configurations if you have cluster administrator permissions.