diff --git a/logging/cluster-logging-external.adoc b/logging/cluster-logging-external.adoc
index d693550758..5340d54880 100644
--- a/logging/cluster-logging-external.adoc
+++ b/logging/cluster-logging-external.adoc
@@ -47,7 +47,7 @@ include::modules/cluster-logging-collector-log-forward-cloudwatch.adoc[leveloffs
 [id="cluster-logging-collector-log-forward-sts-cloudwatch_{context}"]
 === Forwarding logs to Amazon CloudWatch from STS enabled clusters
 
-For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the 
+For clusters with AWS Security Token Service (STS) enabled, you can create an AWS service account manually or create a credentials request by using the
 ifdef::openshift-enterprise,openshift-origin[]
 xref:../authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.adoc[Cloud Credential Operator(CCO)]
 endif::[]
@@ -180,6 +180,8 @@ include::modules/cluster-logging-troubleshooting-loki-entry-out-of-order-errors.
 
 include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1]
 
+include::modules/logging-forward-splunk.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]
diff --git a/modules/logging-forward-splunk.adoc b/modules/logging-forward-splunk.adoc
new file mode 100644
index 0000000000..583fd4760b
--- /dev/null
+++ b/modules/logging-forward-splunk.adoc
@@ -0,0 +1,62 @@
+// Module included in the following assemblies:
+// cluster-logging-external.adoc
+//
+
+:_content-type: PROCEDURE
+[id="logging-forward-splunk_{context}"]
+= Forwarding logs to Splunk
+
+You can forward logs to the link:https://docs.splunk.com/Documentation/Splunk/9.0.0/Data/UsetheHTTPEventCollector[Splunk HTTP Event Collector (HEC)] in addition to, or instead of, the internal default {product-title} log store.
+
+[NOTE]
+====
+Using this feature with Fluentd is not supported.
+====
+
+.Prerequisites
+
+* Red Hat OpenShift Logging Operator 5.6 or later
+* A `ClusterLogging` instance with `vector` specified as the collector
+* A Base64-encoded Splunk HEC token
+
+.Procedure
+
+. Create a secret that contains your Base64-encoded Splunk HEC token:
++
+[source,terminal]
+----
+$ oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<hec_token>
+----
+
+. Create or edit the `ClusterLogForwarder` custom resource (CR) by using the following template:
++
+[source,yaml]
+----
+apiVersion: "logging.openshift.io/v1"
+kind: "ClusterLogForwarder"
+metadata:
+  name: "instance" <1>
+  namespace: "openshift-logging" <2>
+spec:
+  outputs:
+    - name: splunk-receiver <3>
+      secret:
+        name: vector-splunk-secret <4>
+      type: splunk <5>
+      url: <6>
+  pipelines: <7>
+    - inputRefs:
+        - application
+        - infrastructure
+      name: <8>
+      outputRefs:
+        - splunk-receiver <9>
+----
+<1> The name of the `ClusterLogForwarder` CR must be `instance`.
+<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
+<3> Specify a name for the output.
+<4> Specify the name of the secret that contains your HEC token.
+<5> Specify the output type as `splunk`.
+<6> Specify the URL, including the port, of your Splunk HEC.
+<7> Specify the log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+<8> Optional: Specify a name for the pipeline.
+<9> Specify the name of the output to use when forwarding logs with this pipeline.
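
The new module's prerequisites call for a Base64-encoded Splunk HEC token. As a minimal sketch of that preparation step (the token value `my-hec-token` is a placeholder, not a real credential), the token can be encoded with standard shell tools before it is supplied to the `oc create secret` command:

```shell
# Base64-encode a Splunk HEC token (placeholder value).
# printf is used instead of echo so that no trailing newline
# is included in the encoded output.
printf '%s' 'my-hec-token' | base64
# → bXktaGVjLXRva2Vu
```

The resulting string is what the secret's `hecToken` key is expected to carry; decoding it with `base64 -d` returns the original token.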