Mirror of https://github.com/openshift/openshift-docs.git

Merge pull request #40627 from openshift-cherrypick-robot/cherry-pick-40560-to-enterprise-4.10

[enterprise-4.10] SRVKE-747: Split knative kafka docs into different personas
@@ -3173,26 +3173,25 @@ Topics:
File: serverless-apiserversource
- Name: Using a ping source
File: serverless-pingsource
- Name: Using a Kafka source
File: serverless-kafka-source
- File: serverless-custom-event-sources
Name: Custom event sources
- Name: Creating and deleting channels
File: serverless-creating-channels
- Name: Subscriptions
File: serverless-subs
- Name: Knative Kafka
File: serverless-kafka-developer
# Admin guide
- Name: Administer
Dir: admin_guide
Topics:
- Name: Configuring OpenShift Serverless
File: serverless-configuration
# Eventing
- Name: Configuring channel defaults
File: serverless-configuring-channels
# Ingress options
- Name: Integrating Service Mesh with OpenShift Serverless
File: serverless-ossm-setup
# Eventing
- Name: Knative Kafka
File: serverless-kafka-admin
- Name: Creating Knative Eventing components in the Administrator perspective
File: serverless-cluster-admin-eventing
# - Name: Configuring the Knative Eventing custom resource
@@ -3203,6 +3202,9 @@ Topics:
File: serverless-cluster-admin-serving
- Name: Configuring the Knative Serving custom resource
File: knative-serving-CR-config
# Ingress options
- Name: Integrating Service Mesh with OpenShift Serverless
File: serverless-ossm-setup
# Monitoring
- Name: Monitoring serverless components
File: serverless-admin-monitoring
@@ -3242,6 +3244,8 @@ Topics:
File: serverless-custom-domains
- Name: Using a custom TLS certificate for domain mapping
File: serverless-custom-tls-cert-domain-mapping
- Name: Security configuration for Knative Kafka
File: serverless-kafka-security
# Knative Eventing
- Name: Knative Eventing
Dir: knative_eventing
@@ -3255,9 +3259,6 @@ Topics:
# Event delivery
- Name: Event delivery
File: serverless-event-delivery
# Knative Kafka
- Name: Knative Kafka
File: serverless-kafka
# Functions
- Name: Functions
Dir: functions
@@ -1,18 +1,37 @@
// Module is included in the following assemblies:
//
// serverless/knative_eventing/serverless-kafka.adoc
// serverless/admin_guide/serverless-kafka-admin.adoc

[id="serverless-install-kafka-odc_{context}"]
= Installing Knative Kafka components by using the web console
= Installing Knative Kafka

Cluster administrators can enable the use of Knative Kafka functionality in an {ServerlessProductName} deployment by instantiating the `KnativeKafka` custom resource definition provided by the *Knative Kafka* {ServerlessOperatorName} API.
The {ServerlessOperatorName} provides the Knative Kafka API that can be used to create a `KnativeKafka` custom resource:

.Example `KnativeKafka` custom resource
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_servers> <2>
  source:
    enabled: true <3>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.

.Prerequisites

* You have installed {ServerlessProductName}, including Knative Eventing, in your {product-title} cluster.
* You have access to a Red Hat AMQ Streams cluster.
* You have cluster administrator permissions on {product-title}.
* You are logged in to the web console.
* You are logged in to the {product-title} web console.

.Procedure
modules/serverless-kafka-event-delivery.adoc (new file, 10 lines)
@@ -0,0 +1,10 @@
// Module included in the following assemblies:
//
// * serverless/develop/serverless-kafka-developer.adoc

[id="serverless-kafka-delivery-retries_{context}"]
= Event delivery and retries

Using Kafka components in an event-driven architecture provides "at least once" event delivery. This means that operations are retried until a return code value is received. This makes applications more resilient to lost events; however, it might result in duplicate events being sent.

For the Kafka event source, there is a fixed number of retries for event delivery by default. For Kafka channels, retries are only performed if they are configured in the Kafka channel `Delivery` spec.
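The `Delivery` spec itself is not shown in this diff; in upstream Knative Eventing it is set on the `Subscription` that binds a channel to a subscriber. A minimal sketch, assuming a `KafkaChannel` named `example-channel` and a Knative service named `event-display` (both hypothetical names, not part of this PR):

[source,yaml]
----
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: example-subscription
  namespace: default
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    name: example-channel # hypothetical channel name
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # hypothetical sink service
  delivery:
    retry: 3 # number of redelivery attempts before the event is dropped
    backoffPolicy: exponential # "linear" or "exponential"
    backoffDelay: PT0.5S # base delay between attempts, as an ISO-8601 duration
----

Without a `delivery` block like this, events that fail to reach the subscriber are not retried by the channel.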
@@ -1,3 +1,7 @@
// Module included in the following assemblies:
//
// * serverless/develop/serverless-kafka-developer.adoc

[id="serverless-kafka-source-kn_{context}"]
= Creating a Kafka event source by using the Knative CLI
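The module body is unchanged by this hunk; for context, the Kafka plugin for the `kn` CLI creates a source along these lines, where every angle-bracketed value is a placeholder and the consumer group name is illustrative:

[source,terminal]
----
$ kn source kafka create <kafka_source_name> \
    --servers <cluster_kafka_bootstrap_address>:9092 \
    --topics <topic_name> \
    --consumergroup my-consumer-group \
    --sink <sink_name>
----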
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * serverless/event_sources/serverless-kafka-source.adoc
// * serverless/develop/serverless-kafka-developer.adoc

[id="serverless-kafka-source-odc_{context}"]
= Creating a Kafka event source by using the web console
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * serverless/event_sources/serverless-kafka-source.adoc
// * serverless/develop/serverless-kafka-developer.adoc

[id="serverless-kafka-source-yaml_{context}"]
= Creating a Kafka event source by using YAML
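This hunk only touches the module's comment header; for context, a `KafkaSource` manifest of the kind the module documents looks roughly like the following sketch. The bootstrap address, topic, and sink names are illustrative assumptions, not values from this PR:

[source,yaml]
----
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: knative-group # consumer group ID used when reading from the topics
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092 # hypothetical AMQ Streams bootstrap address
  topics:
    - knative-demo-topic # hypothetical topic name
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display # hypothetical sink service
----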
@@ -29,4 +29,3 @@ include::modules/serverless-creating-subscription-admin-web-console.adoc[levelof
* See xref:../../serverless/develop/serverless-subs.adoc#serverless-subs[Subscriptions].
* See xref:../../serverless/knative_eventing/serverless-triggers.adoc#serverless-triggers[Triggers].
* See xref:../../serverless/discover/serverless-channels.adoc#serverless-channels[Channels].
* See xref:../../serverless/knative_eventing/serverless-kafka.adoc#serverless-kafka[Knative Kafka].
serverless/admin_guide/serverless-kafka-admin.adoc (new file, 27 lines)
@@ -0,0 +1,27 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-kafka-admin"]
= Knative Kafka
include::modules/common-attributes.adoc[]
:context: serverless-kafka-admin

toc::[]

In addition to the Knative Eventing components that are provided as part of a core {ServerlessProductName} installation, cluster administrators can install the `KnativeKafka` custom resource (CR).

The `KnativeKafka` CR provides users with additional options, such as:

* Kafka event source
* Kafka channel
// * Kafka broker

[NOTE]
====
Knative Kafka is not currently supported for IBM Z and IBM Power Systems.
====

include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]

[id="additional-resources_serverless-kafka-admin"]
== Additional resources

* See the link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/kafka-concepts_str#kafka-concepts-key_str[Red Hat AMQ Streams] documentation for more information about Kafka concepts.
serverless/develop/serverless-kafka-developer.adoc (new file, 41 lines)
@@ -0,0 +1,41 @@
[id="serverless-kafka-developer"]
= Knative Kafka
include::modules/serverless-document-attributes.adoc[]
include::modules/common-attributes.adoc[]
:context: serverless-kafka-developer

toc::[]

Knative Kafka functionality is available in an {ServerlessProductName} installation xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-kafka-admin[if a cluster administrator has installed the `KnativeKafka` custom resource].

Knative Kafka provides additional options, such as:

* Kafka event source
* xref:../../serverless/develop/serverless-creating-channels.adoc#serverless-creating-channels[Kafka channel]
// * Kafka broker

[NOTE]
====
Knative Kafka is not currently supported for IBM Z and IBM Power Systems.
====

include::modules/serverless-kafka-event-delivery.adoc[leveloffset=+1]

[id="serverless-kafka-developer-event-source"]
== Using a Kafka event source

You can create a Knative Kafka event source that reads events from an Apache Kafka cluster and passes these events to a sink.

// dev console
include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2]
// kn commands
include::modules/serverless-kafka-source-kn.adoc[leveloffset=+2]
include::modules/specifying-sink-flag-kn.adoc[leveloffset=+3]
// YAML
include::modules/serverless-kafka-source-yaml.adoc[leveloffset=+2]

[id="additional-resources_serverless-kafka-developer"]
== Additional resources

* See the link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/kafka-concepts_str#kafka-concepts-key_str[Red Hat AMQ Streams] documentation for more information about Kafka concepts.
* See xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[Event sources].
@@ -1,29 +0,0 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-kafka-source"]
= Using a Kafka source
include::modules/common-attributes.adoc[]
:context: serverless-kafka-source

toc::[]

You can create a Knative Kafka event source that reads events from an Apache Kafka cluster and passes these events to a sink.

[id="prerequisites_serverless-kafka-source"]
== Prerequisites

You can use the `KafkaSource` event source with {ServerlessProductName} after you have xref:../../serverless/install/installing-knative-eventing.adoc#installing-knative-eventing[Knative Eventing] and xref:../../serverless/knative_eventing/serverless-kafka.adoc#serverless-kafka[Knative Kafka] installed on your cluster.

// dev console
include::modules/serverless-kafka-source-odc.adoc[leveloffset=+1]
// kn commands
include::modules/serverless-kafka-source-kn.adoc[leveloffset=+1]
include::modules/specifying-sink-flag-kn.adoc[leveloffset=+2]
// YAML
include::modules/serverless-kafka-source-yaml.adoc[leveloffset=+1]

[id="additional-resources_serverless-kafka-source"]
== Additional resources

* See xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[Event sources].
* See xref:../../serverless/knative_eventing/serverless-kafka.adoc#serverless-kafka[Knative Kafka].
* See the link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/kafka-concepts_str#kafka-concepts-key_str[Red Hat AMQ Streams] documentation for more information about Kafka concepts.
@@ -16,6 +16,6 @@ xref:../../serverless/develop/serverless-apiserversource.adoc#serverless-apiserv

xref:../../serverless/develop/serverless-pingsource.adoc#serverless-pingsource[Ping source]:: Produces events with a fixed payload on a specified cron schedule.

xref:../../serverless/develop/serverless-kafka-source.adoc#serverless-kafka-source[Kafka source]:: Connects a Kafka cluster to a sink as an event source.
xref:../../serverless/develop/serverless-kafka-developer.adoc#serverless-kafka-developer-event-source[Kafka event source]:: Connects a Kafka cluster to a sink as an event source.

You can also create a xref:../../serverless/develop/serverless-custom-event-sources.adoc#serverless-custom-event-sources[custom event source].
@@ -1,73 +0,0 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-kafka"]
= Knative Kafka
include::modules/common-attributes.adoc[]
:context: serverless-kafka

toc::[]

You can use the `KafkaChannel` channel type and `KafkaSource` event source with {ServerlessProductName}. To do this, you must install the Knative Kafka components, and configure the integration between {ServerlessProductName} and a supported link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/index[Red Hat AMQ Streams] cluster.

[NOTE]
====
Knative Kafka is not currently supported for IBM Z and IBM Power Systems.
====

[id="serverless-kafka-delivery-retries"]
== Event delivery and retries

Using Kafka components in your event-driven architecture provides "at least once" guarantees for event delivery. This means that operations are retried until a return code value is received. However, while this makes your application more resilient to lost events, it might result in duplicate events being sent.

For the Kafka event source, there is a fixed number of retries for event delivery by default. For Kafka channels, retries are only performed if they are configured in the Kafka channel `Delivery` spec.

[id="install-serverless-kafka"]
== Installing Knative Kafka

The {ServerlessOperatorName} provides the Knative Kafka API that can be used to create a `KnativeKafka` custom resource:

.Example `KnativeKafka` custom resource
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  channel:
    enabled: true <1>
    bootstrapServers: <bootstrap_servers> <2>
  source:
    enabled: true <3>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.

// Install Kafka
include::modules/serverless-install-kafka-odc.adoc[leveloffset=+2]

[id="serverless-kafka-channel-link"]
== Using Kafka channel

Create a xref:../../serverless/develop/serverless-creating-channels.adoc#serverless-creating-channels[Kafka channel].
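The xref target carries the full procedure; for context, a `KafkaChannel` object itself is a short manifest. A minimal sketch, in which the name, namespace, and sizing values are illustrative assumptions:

[source,yaml]
----
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: example-channel # hypothetical channel name
  namespace: default
spec:
  numPartitions: 3 # partitions for the Kafka topic backing the channel
  replicationFactor: 1 # replication factor for the backing topic
----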
[id="serverless-kafka-source-link"]
|
||||
== Using Kafka source
|
||||
|
||||
Create a xref:../../serverless/develop/serverless-kafka-source.adoc#serverless-kafka-source[Kafka event source].
|
||||
|
||||
// Configure TLS and SASL for Kafka
|
||||
[id="serverless-kafka-authentication"]
|
||||
== Configuring authentication for Kafka
|
||||
|
||||
In production, Kafka clusters are often secured using the TLS or SASL authentication methods. This section shows how to configure the Kafka channel to work against a protected Red Hat AMQ Streams (Kafka) cluster using TLS or SASL.
|
||||
|
||||
[NOTE]
|
||||
====
|
||||
If you choose to enable SASL, Red Hat recommends to also enable TLS.
|
||||
====
|
||||
|
||||
include::modules/serverless-kafka-tls.adoc[leveloffset=+2]
|
||||
include::modules/serverless-kafka-sasl.adoc[leveloffset=+2]
|
||||
include::modules/serverless-kafka-sasl-public-certs.adoc[leveloffset=+2]
|
||||
serverless/security/serverless-kafka-security.adoc (new file, 18 lines)
@@ -0,0 +1,18 @@
[id="serverless-kafka-security"]
= Security configuration for Knative Kafka
include::modules/common-attributes.adoc[]
include::modules/serverless-document-attributes.adoc[]
:context: serverless-kafka-security

toc::[]

In production, Kafka clusters are often secured using the TLS or SASL authentication methods. This section shows how to configure a Kafka channel to work against a protected Red Hat AMQ Streams cluster using TLS or SASL.

[NOTE]
====
If you choose to enable SASL, Red Hat recommends that you also enable TLS.
====

include::modules/serverless-kafka-tls.adoc[leveloffset=+1]
include::modules/serverless-kafka-sasl.adoc[leveloffset=+1]
include::modules/serverless-kafka-sasl-public-certs.adoc[leveloffset=+1]
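The authoritative TLS and SASL steps live in the included modules, which this diff does not show. As a rough sketch of the TLS side, the certificates are typically supplied to the Knative Kafka components as a Kubernetes `Secret`; the secret name, namespace, and key names below are assumptions for illustration only, and the certificate bodies are elided:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret # hypothetical name, referenced from the Knative Kafka configuration
  namespace: knative-eventing
type: Opaque
stringData:
  ca.crt: | # PEM-encoded CA certificate for the Kafka cluster (hypothetical key name)
    -----BEGIN CERTIFICATE-----
    ...
  user.crt: | # PEM-encoded client certificate (hypothetical key name)
    -----BEGIN CERTIFICATE-----
    ...
  user.key: | # PEM-encoded client private key (hypothetical key name)
    -----BEGIN PRIVATE KEY-----
    ...
----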