mirror of
https://github.com/openshift/openshift-docs.git
synced 2026-02-05 12:46:18 +01:00
SRVKE-747: Broker docs clean up and add Kafka broker
This commit is contained in:
committed by openshift-cherrypick-robot
parent 7edf83f64a
commit fcbb3dce9c
@@ -3199,6 +3199,8 @@ Topics:
  File: serverless-configuring-routes
- Name: Event sinks
  File: serverless-event-sinks
- Name: Event delivery
  File: serverless-event-delivery
- Name: Using the API server source
  File: serverless-apiserversource
- Name: Using a ping source
@@ -3209,6 +3211,12 @@ Topics:
  File: serverless-creating-channels
- Name: Subscriptions
  File: serverless-subs
# Brokers
- Name: Brokers
  File: serverless-using-brokers
# Triggers
- Name: Triggers
  File: serverless-triggers
- Name: Knative Kafka
  File: serverless-kafka-developer
# Admin guide
@@ -3218,8 +3226,8 @@ Topics:
- Name: Configuring OpenShift Serverless
  File: serverless-configuration
# Eventing
- Name: Configuring channel defaults
  File: serverless-configuring-channels
- Name: Configuring Knative Eventing defaults
  File: serverless-configuring-eventing-defaults
- Name: Knative Kafka
  File: serverless-kafka-admin
- Name: Creating Knative Eventing components in the Administrator perspective
@@ -3278,19 +3286,6 @@ Topics:
  File: serverless-custom-tls-cert-domain-mapping
- Name: Security configuration for Knative Kafka
  File: serverless-kafka-security
# Knative Eventing
- Name: Knative Eventing
  Dir: knative_eventing
  Topics:
  # Brokers
  - Name: Brokers
    File: serverless-using-brokers
  # Triggers
  - Name: Triggers
    File: serverless-triggers
  # Event delivery
  - Name: Event delivery
    File: serverless-event-delivery
# Functions
- Name: Functions
  Dir: functions
modules/serverless-broker-types.adoc (new file, 19 lines)
@@ -0,0 +1,19 @@
[id="serverless-broker-types_{context}"]
= Broker types

There are multiple broker implementations available for use with {ServerlessProductName}, each of which has different event delivery guarantees and uses different underlying technologies. You can choose the broker implementation when creating a broker by specifying a broker class; otherwise, the default broker class is used. The default broker class can be configured by cluster administrators.

// TO DO: Need to add docs about setting default broker class.

[id="serverless-using-brokers-channel-based"]
== Channel-based broker

The channel-based broker implementation internally uses channels for event delivery. Channel-based brokers provide different event delivery guarantees based on the channel implementation that a broker instance uses, for example:

* A broker using the `InMemoryChannel` implementation is useful for development and testing purposes, but does not provide adequate event delivery guarantees for production environments.

* A broker using the `KafkaChannel` implementation provides the event delivery guarantees required for a production environment.

[id="serverless-using-brokers-kafka"]
== Kafka broker

The Kafka broker is a broker implementation that uses Kafka internally to provide at-least-once delivery guarantees. It supports multiple Kafka versions, and has a native integration with Kafka for storing and routing events.
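As an illustration of choosing a broker class, the class is set with an annotation on the `Broker` object. This is a minimal sketch; the object name is hypothetical, and `MTChannelBasedBroker` is assumed here as the upstream Knative class name for the channel-based broker:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-broker # hypothetical name
  annotations:
    # assumed upstream class name for the channel-based broker
    eventing.knative.dev/broker.class: MTChannelBasedBroker
----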
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * serverless/channels/serverless-channels.adoc
// * serverless/admin_guide/serverless-configuring-eventing-defaults.adoc

[id="serverless-channel-default_{context}"]
= Configuring the default channel implementation
@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * serverless/knative_eventing/serverless-creating-channels.adoc
// * serverless/knative_eventing/serverless-kafka.adoc
// * serverless/develop/serverless-creating-channels.adoc
// * serverless/develop/serverless-kafka-developer.adoc

[id="serverless-create-kafka-channel-yaml_{context}"]
= Creating a Kafka channel by using YAML
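For orientation, a `KafkaChannel` object created by using YAML might look like the following minimal sketch; the name and values are illustrative, not taken from the module body:

[source,yaml]
----
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: example-channel # hypothetical name
  namespace: default
spec:
  numPartitions: 3 # illustrative values
  replicationFactor: 1
----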
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// serverless/knative_eventing/serverless-event-delivery.adoc
// serverless/develop/serverless-event-delivery.adoc

[id="serverless-event-delivery-component-behaviors_{context}"]
= Event delivery behavior for Knative Eventing channels
@@ -21,10 +21,25 @@ spec:
    bootstrapServers: <bootstrap_servers> <2>
  source:
    enabled: true <3>
  broker:
    enabled: true <4>
    defaultConfig:
      bootstrapServers: <bootstrap_servers> <5>
      numPartitions: <num_partitions> <6>
      replicationFactor: <replication_factor> <7>
----
<1> Enables developers to use the `KafkaChannel` channel type in the cluster.
<2> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
<3> Enables developers to use the `KafkaSource` event source type in the cluster.
<4> Enables developers to use the Knative Kafka broker implementation in the cluster.
<5> A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster.
<6> Defines the number of partitions of the Kafka topics, backed by the `Broker` objects. The default is `10`.
<7> Defines the replication factor of the Kafka topics, backed by the `Broker` objects. The default is `3`.
+
[NOTE]
====
The `replicationFactor` value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster.
====

.Prerequisites
@@ -42,7 +57,7 @@ spec:
+
[IMPORTANT]
====
To use the Kafka channel or Kafka source on your cluster, you must toggle the *Enable* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel, you must specify the Bootstrap Servers.
To use the Kafka channel, source, or broker on your cluster, you must toggle the *enabled* switch for the options you want to use to *true*. These switches are set to *false* by default. Additionally, to use the Kafka channel or broker, you must specify the bootstrap servers.
====
.. Using the form is recommended for simpler configurations that do not require full control of *KnativeKafka* object creation.
.. Editing the YAML is recommended for more complex configurations that require full control of *KnativeKafka* object creation. You can access the YAML by clicking the *Edit YAML* link in the top right of the *Create Knative Kafka* page.
modules/serverless-kafka-broker-sasl-default-config.adoc (new file, 57 lines)
@@ -0,0 +1,57 @@
// Module is included in the following assemblies:
//
// * serverless/admin_guide/serverless-kafka-admin.adoc

[id="serverless-kafka-broker-sasl-default-config_{context}"]
= Configuring SASL authentication for Kafka brokers

As a cluster administrator, you can set up _Simple Authentication and Security Layer_ (SASL) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

.Prerequisites

* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` CR are installed on your {product-title} cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have a username and password for a Kafka cluster.
* You have chosen the SASL mechanism to use, for example `PLAIN`, `SCRAM-SHA-256`, or `SCRAM-SHA-512`.
* If TLS is enabled, you also need the `ca.crt` certificate file for the Kafka cluster.

[NOTE]
====
It is recommended to enable TLS in addition to SASL.
====

.Procedure

. Create the certificate files as a secret in the `knative-eventing` namespace:
+
[source,terminal]
----
$ oc create secret -n knative-eventing generic <secret_name> \
  --from-literal=protocol=SASL_SSL \
  --from-literal=sasl.mechanism=<sasl_mechanism> \
  --from-file=ca.crt=caroot.pem \
  --from-literal=password="SecretPassword" \
  --from-literal=user="my-sasl-user"
----
+
[IMPORTANT]
====
Use the key names `ca.crt`, `password`, and `sasl.mechanism`. Do not change them.
====

. Edit the `KnativeKafka` CR and add a reference to your secret in the `broker` spec:
+
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  broker:
    enabled: true
    defaultConfig:
      authSecretName: <secret_name>
...
----
modules/serverless-kafka-broker-tls-default-config.adoc (new file, 50 lines)
@@ -0,0 +1,50 @@
// Module is included in the following assemblies:
//
// * serverless/admin_guide/serverless-kafka-admin.adoc

[id="serverless-kafka-broker-tls-default-config_{context}"]
= Configuring TLS authentication for Kafka brokers

As a cluster administrator, you can set up _Transport Layer Security_ (TLS) authentication for Kafka brokers by modifying the `KnativeKafka` custom resource (CR).

.Prerequisites

* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` CR are installed on your {product-title} cluster.
* You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in {product-title}.
* You have a Kafka cluster CA certificate stored as a `.pem` file.
* You have a Kafka cluster client certificate and a key stored as `.pem` files.

.Procedure

. Create the certificate files as a secret in the `knative-eventing` namespace:
+
[source,terminal]
----
$ oc create secret -n knative-eventing generic <secret_name> \
  --from-literal=protocol=SSL \
  --from-file=ca.crt=caroot.pem \
  --from-file=user.crt=certificate.pem \
  --from-file=user.key=key.pem
----
+
[IMPORTANT]
====
Use the key names `ca.crt`, `user.crt`, and `user.key`. Do not change them.
====

. Edit the `KnativeKafka` CR and add a reference to your secret in the `broker` spec:
+
[source,yaml]
----
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  namespace: knative-eventing
  name: knative-kafka
spec:
  broker:
    enabled: true
    defaultConfig:
      authSecretName: <secret_name>
...
----
modules/serverless-kafka-broker.adoc (new file, 40 lines)
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * serverless/develop/serverless-kafka-developer.adoc
// * serverless/develop/serverless-using-brokers.adoc

[id="serverless-kafka-broker_{context}"]
= Creating a Kafka broker

.Prerequisites

* The {ServerlessOperatorName}, Knative Eventing, and the `KnativeKafka` custom resource are installed on your {product-title} cluster.

.Procedure

. Create a Kafka-based broker as a YAML file:
+
[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: Kafka <1>
  name: example-kafka-broker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config <2>
    namespace: knative-eventing
----
<1> The broker class. If not specified, brokers use the default class as configured by cluster administrators. To use the Kafka broker, this value must be `Kafka`.
<2> The default config map for Knative Kafka brokers. This config map is created when the Kafka broker functionality is enabled on the cluster by a cluster administrator.

. Apply the Kafka-based broker YAML file:
+
[source,terminal]
----
$ oc apply -f <filename>
----
@@ -3,7 +3,7 @@
// * serverless/develop/serverless-kafka-developer.adoc

[id="serverless-kafka-delivery-retries_{context}"]
= Event delivery and retries
= Kafka event delivery and retries

Using Kafka components in an event-driven architecture provides "at least once" event delivery. This means that operations are retried until a return code value is received. This makes applications more resilient to lost events; however, it might result in duplicate events being sent.
@@ -26,7 +26,7 @@ include::modules/serverless-creating-subscription-admin-web-console.adoc[levelof
[id="additional-resources_serverless-cluster-admin-eventing"]
== Additional resources

* See xref:../../serverless/knative_eventing/serverless-using-brokers.adoc#serverless-using-brokers[Brokers].
* See xref:../../serverless/develop/serverless-subs.adoc#serverless-subs[Subscriptions].
* See xref:../../serverless/knative_eventing/serverless-triggers.adoc#serverless-triggers[Triggers].
* See xref:../../serverless/discover/serverless-channels.adoc#serverless-channels[Channels].
* xref:../../serverless/develop/serverless-using-brokers.adoc#serverless-using-brokers[Brokers]
* xref:../../serverless/develop/serverless-subs.adoc#serverless-subs[Subscriptions]
* xref:../../serverless/develop/serverless-triggers.adoc#serverless-triggers[Triggers]
* xref:../../serverless/discover/serverless-channels.adoc#serverless-channels[Channels]
@@ -1,12 +0,0 @@
:_content-type: ASSEMBLY
include::modules/serverless-document-attributes.adoc[]
[id="serverless-configuring-channels"]
= Configuring channel defaults
include::modules/common-attributes.adoc[]
:context: serverless-configuring-channels

toc::[]

If you have cluster administrator permissions, you can set default options for channels, either for the whole cluster or for a specific namespace. These options are modified using config maps.

include::modules/serverless-channel-default.adoc[leveloffset=+1]
@@ -0,0 +1,11 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-configuring-eventing-defaults"]
= Configuring Knative Eventing defaults
include::modules/common-attributes.adoc[]
:context: serverless-configuring-eventing-defaults

toc::[]

If you have cluster administrator permissions, you can set default options for Knative Eventing components, either for the whole cluster or for a specific namespace.

include::modules/serverless-channel-default.adoc[leveloffset=+1]
@@ -9,19 +9,30 @@ toc::[]
In addition to the Knative Eventing components that are provided as part of a core {ServerlessProductName} installation, cluster administrators can install the `KnativeKafka` custom resource (CR).

The `KnativeKafka` CR provides users with additional options, such as:

* Kafka event source
* Kafka channel
// * Kafka broker

[NOTE]
====
Knative Kafka is not currently supported for IBM Z and IBM Power Systems.
====

The `KnativeKafka` CR provides users with additional options, such as:

* Kafka source
* Kafka channel
* Kafka broker

:FeatureName: Kafka broker
include::modules/technology-preview.adoc[leveloffset=+1]

include::modules/serverless-install-kafka-odc.adoc[leveloffset=+1]

[id="serverless-kafka-admin-default-configs"]
== Configuring default settings for Kafka components

If you have cluster administrator permissions, you can set default options for Knative Kafka components, either for the whole cluster or for a specific namespace.

include::modules/serverless-kafka-broker-tls-default-config.adoc[leveloffset=+2]
include::modules/serverless-kafka-broker-sasl-default-config.adoc[leveloffset=+2]

[id="additional-resources_serverless-kafka-admin"]
== Additional resources
@@ -7,8 +7,7 @@ include::modules/common-attributes.adoc[]
toc::[]

You can configure event delivery parameters for Knative Eventing that are applied in cases where an event fails to be delivered by a xref:../../serverless/develop/serverless-subs.adoc#serverless-subs[subscription]. Event delivery parameters are configured individually per subscription.
// TODO: Update docs to add triggers once this is implemented.
You can configure event delivery parameters for Knative Eventing that are applied in cases where an event fails to be delivered by a xref:../../serverless/develop/serverless-subs.adoc#serverless-subs[subscription] or xref:../../serverless/develop/serverless-triggers.adoc#serverless-triggers[trigger] to a subscriber. Event delivery parameters are configured individually per subscriber.

include::modules/serverless-event-delivery-component-behaviors.adoc[leveloffset=+1]
@@ -26,8 +25,4 @@ Back off delay:: You can set the `backoffDelay` delivery parameter to specify th
Back off policy:: The `backoffPolicy` delivery parameter can be used to specify the retry back off policy. The policy can be specified as either `linear` or `exponential`. When using the `linear` back off policy, the back off delay is the time interval specified between retries. When using the `exponential` back off policy, the back off delay is equal to `backoffDelay*2^<numberOfRetries>`.
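The retry schedule implied by these parameters can be sketched as follows. This is an illustration only; whether the exponent starts at the first or zeroth retry is an implementation detail, and this sketch assumes the delay before retry `n` is `backoffDelay * 2^n` with `n` starting at 1:

```python
def backoff_delays(backoff_delay: float, policy: str, retries: int) -> list:
    """Delay in seconds before each retry, per the backoffPolicy parameter."""
    if policy == "linear":
        # constant interval between retries
        return [backoff_delay] * retries
    if policy == "exponential":
        # backoffDelay * 2^n; first retry assumed to be n = 1
        return [backoff_delay * 2 ** n for n in range(1, retries + 1)]
    raise ValueError("policy must be 'linear' or 'exponential'")

print(backoff_delays(1.0, "linear", 3))       # [1.0, 1.0, 1.0]
print(backoff_delays(1.0, "exponential", 3))  # [2.0, 4.0, 8.0]
```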
include::modules/serverless-subscription-event-delivery-config.adoc[leveloffset=+1]
[id="additional-resources_serverless-event-delivery"]
== Additional resources

* See xref:../../serverless/develop/serverless-subs.adoc#serverless-subs-creating-subs[Creating subscriptions].
// add docs for configuration in triggers
@@ -7,25 +7,30 @@ include::modules/common-attributes.adoc[]
toc::[]

Knative Kafka functionality is available in an {ServerlessProductName} installation xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-kafka-admin[if a cluster administrator has installed the `KnativeKafka` custom resource].

Knative Kafka provides additional options, such as:

* Kafka event source
* xref:../../serverless/develop/serverless-creating-channels.adoc#serverless-creating-channels[Kafka channel]
// * Kafka broker
Knative Kafka functionality is available in an {ServerlessProductName} installation xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-install-kafka-odc_serverless-kafka-admin[if a cluster administrator has installed the `KnativeKafka` custom resource].

[NOTE]
====
Knative Kafka is not currently supported for IBM Z and IBM Power Systems.
====

Knative Kafka provides additional options, such as:

* Kafka source
* Kafka channel
* Kafka broker

:FeatureName: Kafka broker
include::modules/technology-preview.adoc[leveloffset=+1]

include::modules/serverless-kafka-event-delivery.adoc[leveloffset=+1]

[id="serverless-kafka-developer-event-source"]
== Using a Kafka event source
See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-event-delivery[Event delivery] documentation for more information about delivery guarantees.

You can create a Knative Kafka event source that reads events from an Apache Kafka cluster and passes these events to a sink.
[id="serverless-kafka-developer-source"]
== Kafka source

You can create a Kafka source that reads events from an Apache Kafka cluster and passes these events to a sink.

// dev console
include::modules/serverless-kafka-source-odc.adoc[leveloffset=+2]
@@ -35,8 +40,23 @@ include::modules/specifying-sink-flag-kn.adoc[leveloffset=+3]
// YAML
include::modules/serverless-kafka-source-yaml.adoc[leveloffset=+2]

[id="serverless-kafka-developer-broker"]
== Kafka broker

:FeatureName: Kafka broker
include::modules/technology-preview.adoc[leveloffset=+2]

If a cluster administrator has configured your {ServerlessProductName} deployment to use Kafka broker as the default broker type, xref:../../serverless/develop/serverless-using-brokers.adoc#serverless-using-brokers-creating-brokers[creating a broker by using the default settings] creates a Kafka-based `Broker` object. If your {ServerlessProductName} deployment is not configured to use Kafka broker as the default broker type, you can still use the following procedure to create a Kafka-based broker.

include::modules/serverless-kafka-broker.adoc[leveloffset=+2]

// Kafka channels
include::modules/serverless-create-kafka-channel-yaml.adoc[leveloffset=+1]

[id="additional-resources_serverless-kafka-developer"]
== Additional resources

* See the link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/amq_streams_on_openshift_overview/kafka-concepts_str#kafka-concepts-key_str[Red Hat AMQ Streams] documentation for more information about Kafka concepts.
* See xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[Event sources].
* See the Red Hat AMQ Streams link:https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html-single/using_amq_streams_on_rhel/index#assembly-kafka-encryption-and-authentication-str[TLS and SASL on Kafka] documentation.
* See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-event-delivery[Event delivery] documentation for information about configuring event delivery parameters.
* See the xref:../../serverless/admin_guide/serverless-kafka-admin.adoc#serverless-kafka-admin[Knative Kafka cluster administrator documentation] for information about installing Kafka components and setting default configurations if you have cluster administrator permissions.
@@ -11,7 +11,7 @@ After events have been sent to a channel from an event source or producer, these
image::serverless-event-channel-workflow.png[Channel workflow overview]

If a subscriber rejects an event, there are no re-delivery attempts by default. Developers can configure xref:../../serverless/knative_eventing/serverless-event-delivery.adoc#serverless-event-delivery[re-delivery attempts] by modifying the `delivery` spec in a `Subscription` object.
If a subscriber rejects an event, there are no re-delivery attempts by default. Developers can configure xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-event-delivery[re-delivery attempts] by modifying the `delivery` spec in a `Subscription` object.
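As a hedged sketch of such a `delivery` spec in a `Subscription` object (all names and values here are illustrative):

[source,yaml]
----
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: example-subscription # hypothetical name
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: example-channel # hypothetical name
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: example-subscriber # hypothetical name
  delivery:
    retry: 5 # number of re-delivery attempts
    backoffPolicy: exponential
    backoffDelay: PT0.5S # ISO-8601 duration
----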
[id="serverless-subs-creating-subs"]
== Creating subscriptions
@@ -7,16 +7,25 @@ include::modules/serverless-document-attributes.adoc[]
toc::[]

Brokers can be used in combination with xref:../../serverless/knative_eventing/serverless-triggers.adoc#serverless-triggers[triggers] to deliver events from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to an event sink.
Brokers can be used in combination with xref:../../serverless/develop/serverless-triggers.adoc#serverless-triggers[triggers] to deliver events from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to an event sink.

image::serverless-event-broker-workflow.png[Broker event delivery overview]

Events can be sent from an event source to a broker as an HTTP POST request.
Events can be sent from an event source to a broker as an HTTP `POST` request. After events have entered the broker, they can be filtered by https://github.com/cloudevents/spec/blob/v1.0/spec.md#context-attributes[CloudEvent attributes] using triggers, and sent as an HTTP `POST` request to an event sink.

After events have entered the broker, they can be filtered by https://github.com/cloudevents/spec/blob/v1.0/spec.md#context-attributes[CloudEvent attributes] using triggers, and sent as an HTTP POST request to an event sink.
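As a sketch of filtering by CloudEvent attributes with a trigger (the names and the attribute value are hypothetical):

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: example-trigger # hypothetical name
spec:
  broker: default
  filter:
    attributes:
      type: dev.example.event # hypothetical CloudEvent type attribute
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: example-sink # hypothetical event sink
----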
include::modules/serverless-broker-types.adoc[leveloffset=+1]

:FeatureName: Kafka broker
include::modules/technology-preview.adoc[leveloffset=+2]

.Additional resources

* See the xref:../../serverless/develop/serverless-event-delivery.adoc#serverless-event-delivery[Event delivery] documentation for more information about delivery guarantees.
* See the xref:../../serverless/develop/serverless-kafka-developer.adoc#serverless-kafka-developer-broker[Kafka broker] documentation for information about using Kafka brokers.

[id="serverless-using-brokers-creating-brokers"]
== Creating a broker
== Creating a broker that uses default settings

{ServerlessProductName} provides a `default` Knative broker that you can create by using the `kn` CLI. You can also create the `default` broker by adding the `eventing.knative.dev/injection: enabled` annotation to a trigger, or by adding the `eventing.knative.dev/injection=enabled` label to a namespace.
@@ -17,6 +17,6 @@ xref:../../serverless/develop/serverless-apiserversource.adoc#serverless-apiserv
xref:../../serverless/develop/serverless-pingsource.adoc#serverless-pingsource[Ping source]:: Produces events with a fixed payload on a specified cron schedule.

xref:../../serverless/develop/serverless-kafka-developer.adoc#serverless-kafka-developer-event-source[Kafka event source]:: Connects a Kafka cluster to a sink as an event source.
xref:../../serverless/develop/serverless-kafka-developer.adoc#serverless-kafka-developer-source[Kafka event source]:: Connects a Kafka cluster to a sink as an event source.

You can also create a xref:../../serverless/develop/serverless-custom-event-sources.adoc#serverless-custom-event-sources[custom event source].
@@ -24,5 +24,5 @@ The following are limitations of `InMemoryChannel` type channels:
[id="next-steps_serverless-channels"]
== Next steps

* If you are a cluster administrator, you can configure default settings for channels. See xref:../../serverless/admin_guide/serverless-configuring-channels.adoc#serverless-configuring-channels[Configuring channel defaults].
* If you are a cluster administrator, you can configure default settings for channels. See xref:../../serverless/admin_guide/serverless-configuring-eventing-defaults.adoc#serverless-channel-default_serverless-configuring-eventing-defaults[Configuring channel defaults].
* See xref:../../serverless/develop/serverless-creating-channels.adoc#serverless-creating-channels[Creating and deleting channels].
@@ -16,4 +16,4 @@ Callable resources:: Able to receive an event delivered over HTTP and transform
You can propagate an event from an xref:../../serverless/discover/knative-event-sources.adoc#knative-event-sources[event source] to multiple event sinks by using:

* xref:../../serverless/discover/serverless-channels.adoc#serverless-channels[channels] and subscriptions, or
* xref:../../serverless/knative_eventing/serverless-using-brokers.adoc#serverless-using-brokers[brokers] and xref:../../serverless/knative_eventing/serverless-triggers.adoc#serverless-triggers[triggers].
* xref:../../serverless/develop/serverless-using-brokers.adoc#serverless-using-brokers[brokers] and xref:../../serverless/develop/serverless-triggers.adoc#serverless-triggers[triggers].
@@ -1 +0,0 @@
../images
@@ -1 +0,0 @@
../modules