mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Fixing build and some other minor issues

Andrea Hoffer
2021-08-02 14:28:45 -04:00
committed by openshift-cherrypick-robot
parent b0a663f10b
commit 81e6f47d7f
20 changed files with 44 additions and 43 deletions


@@ -13,7 +13,7 @@ Triggers in {pipelines-title} support insecure HTTP and secure HTTPS connections
{pipelines-title} runs a `tekton-operator-proxy-webhook` pod that watches for the labels in the namespace. When you add the label to the namespace, the webhook sets the `service.beta.openshift.io/serving-cert-secret-name=<secret_name>` annotation on the `EventListener` object. This, in turn, creates secrets and the required certificates.
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
service.beta.openshift.io/serving-cert-secret-name=<secret_name>
----
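The label-to-annotation flow described above can be sketched end to end. This is a minimal sketch, not the module's own procedure: the namespace name `example-ns` is hypothetical, the label key and annotation are the documented values, and `<secret_name>` stays a placeholder.

```shell
# Sketch: the namespace label that the tekton-operator-proxy-webhook watches.
# "example-ns" is a hypothetical namespace name.
cat > namespace-label.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
  labels:
    operator.tekton.dev/enable-annotation: "enabled"
EOF

# The webhook reacts by setting this annotation on the EventListener object:
#   service.beta.openshift.io/serving-cert-secret-name=<secret_name>
grep -q 'operator.tekton.dev/enable-annotation' namespace-label.yaml && echo "label present"
```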


@@ -188,7 +188,7 @@ Replica shards for at least one primary shard are not allocated to nodes.
//
// .Troubleshooting
// TBD
-// Note for writer: This is a warning alert and we haven't documented troubleshooting steps for warning alerts yet. I guess you can skip this in currrent release.
+// Note for writer: This is a warning alert and we haven't documented troubleshooting steps for warning alerts yet. I guess you can skip this in current release.
[id="elasticsearch-node-disk-low-watermark-reached"]
== Elasticsearch Node Disk Low Watermark Reached


@@ -51,7 +51,7 @@ kind: BuildConfig
This example omits elements that are not related to image change triggers.
====
-.Prerequisite
+.Prerequisites
* You have configured multiple image change triggers. These triggers have triggered one or more builds.


@@ -3,7 +3,7 @@
// * scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc
[id="adjusting-nic-queues-with-the-performance-profile_{context}"]
-== Adjusting the NIC queues with the performance profile
+= Adjusting the NIC queues with the performance profile
The performance profile lets you adjust the queue count for each network device.


@@ -3,7 +3,7 @@
// * scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc
[id="logging-associated-with-adjusting-nic-queues_{context}"]
-== Logging associated with adjusting NIC queues
+= Logging associated with adjusting NIC queues
Log messages detailing the assigned devices are recorded in the respective Tuned daemon logs. The following messages might be recorded to the `/var/log/tuned/tuned.log` file:


@@ -3,7 +3,7 @@
// * scalability_and_performance/cnf-performance-addon-operator-for-low-latency-nodes.adoc
[id="verifying-queue-status_{context}"]
-== Verifying the queue status
+= Verifying the queue status
In this section, a number of examples illustrate different performance profiles and how to verify the changes are applied.


@@ -10,7 +10,7 @@
The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new
`kubelet-config-controller` added to the Machine Config Controller (MCC). This allows you to use a `KubeletConfig` custom resource (CR) to edit the kubelet parameters.
You should have one `KubeletConfig` CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all the pools, you only need one `KubeletConfig` CR for all the pools.
You should edit an existing `KubeletConfig` CR to modify existing settings or add new settings instead of creating a new CR for each change. It is recommended to create a new CR only to modify a different machine config pool, or for changes that are intended to be temporary so that you can revert the changes.
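The one-CR-per-pool guidance above can be sketched as a minimal `KubeletConfig` CR. This is an assumption-laden sketch, not the module's own example: the CR name and the pool selector label (`custom-kubelet: set-max-pods`) are hypothetical, while `maxPods: 500` mirrors the verification output shown later in this module.

```shell
# Hypothetical KubeletConfig CR scoped to one machine config pool via a
# label selector; only the pool(s) matching the label receive this config.
cat > set-max-pods.yaml <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods
  kubeletConfig:
    maxPods: 500
EOF
# Apply with: oc create -f set-max-pods.yaml
grep -q 'maxPods: 500' set-max-pods.yaml && echo "ok"
```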
@@ -220,7 +220,7 @@ Allocatable:
pods: 500 <1>
...
----
-<1> In this example, the `pods` parameter should report the value you set in the `KubletConfig` object.
+<1> In this example, the `pods` parameter should report the value you set in the `KubeletConfig` object.
. Verify the change in the `KubeletConfig` object:
+


@@ -5,7 +5,7 @@
[id="nw-router-configuring-dual-stack_{context}"]
= Configuring the {product-title} Ingress Controller for dual-stack networking
-If your {product-title} cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is is externally reachable by {product-title} routes.
+If your {product-title} cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by {product-title} routes.
The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services.


@@ -3,9 +3,9 @@
To create a route with the re-encrypted TLS termination, run:
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
+$ oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
----
Alternatively, you can create a re-encrypted TLS termination YAML file to create a secure route.
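As a hedged sketch of that YAML alternative (the route name is hypothetical, `<svc-name>` and `<hostname>` remain placeholders, and the certificate bodies are elided):

```shell
# Hypothetical YAML counterpart of the oc create route reencrypt command;
# key/certificate/caCertificate correspond to tls.key, tls.crt, and ca.crt.
cat > reencrypt-route.yaml <<'EOF'
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-reencrypt
spec:
  host: <hostname>
  to:
    kind: Service
    name: <svc-name>
  tls:
    termination: reencrypt
    key: |-
      # contents of tls.key
    certificate: |-
      # contents of tls.crt
    caCertificate: |-
      # contents of ca.crt
EOF
# Apply with: oc create -f reencrypt-route.yaml
grep -q 'termination: reencrypt' reencrypt-route.yaml && echo "ok"
```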


@@ -123,7 +123,7 @@ spec:
* Support for optional workspaces is added to the `start` command.
-* If the plugins are are not present in the `plugins` directory, they are searched in the current path.
+* If the plugins are not present in the `plugins` directory, they are searched in the current path.
* The `tkn start [task | clustertask | pipeline]` command starts interactively and asks for the `params` value, even when the default parameters are specified. To stop the interactive prompts, pass the `--use-param-defaults` flag when invoking the command. For example:
+


@@ -7,43 +7,43 @@ This section uses the link:https://github.com/openshift/pipelines-tutorial[pipel
. Create the `TriggerBinding` resource from the YAML file available in the pipelines-tutorial repository:
+
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml
+$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml
----
. Create the `TriggerTemplate` resource from the YAML file available in the pipelines-tutorial repository:
+
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml
+$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml
----
. Create the `Trigger` resource directly from the pipelines-tutorial repository:
+
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml
+$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml
----
. Create an `EventListener` resource using a secure HTTPS connection:
.. Add a label to enable the secure HTTPS connection to the `EventListener` resource:
+
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled
+$ oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled
----
.. Create the `EventListener` resource from the YAML file available in the pipelines-tutorial repository:
+
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml
+$ oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml
----
.. Create a route with the re-encrypted TLS termination:
+
-[source,terminal,subs="atrributes+"]
+[source,terminal,subs="attributes+"]
----
-oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
+$ oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>
----
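Once the route exists, the secure webhook URL can be smoke-tested. This is a hedged sketch only: `<hostname>` remains a placeholder, and the payload is an illustrative minimal push event, not a complete GitHub webhook payload.

```shell
# Hypothetical minimal payload for exercising the EventListener route.
cat > payload.json <<'EOF'
{"ref": "refs/heads/master", "repository": {"url": "https://github.com/openshift/pipelines-tutorial"}}
EOF

# POST it over HTTPS to the route created above (placeholder hostname):
#   curl -k -H 'Content-Type: application/json' -d @payload.json https://<hostname>
grep -q '"ref"' payload.json && echo "payload ready"
```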


@@ -24,13 +24,13 @@ To view a summary of metrics, select any node or edge in the graph to display it
[id="ossm-observability-topology_{context}"]
== Namespace graphs
The namespace graph is a map of the services, deployments, and workflows in your namespace and arrows that show how data flows through them.
-.Prerequisite
+.Prerequisites
* Install the Bookinfo sample application.
.Procedure
. Send traffic to the mesh by entering the following command several times.
+
@@ -44,4 +44,3 @@ This command simulates a user visiting the `productpage` microservice of the app
. In the main navigation, click *Graph* to view a namespace graph.
. Select `bookinfo` from the *Namespace* menu.


@@ -16,7 +16,7 @@ These features provide early access to upcoming product features, enabling custo
{ProductName} 2.0.1 introduces technology preview support for the OVN-Kubernetes network type on {product-title} 4.6 and 4.7.
-== WebAsssembly technology preview
+== WebAssembly technology preview
{ProductName} 2.0.0 introduces support for WebAssembly extensions to Envoy Proxy.


@@ -69,7 +69,7 @@ spec:
* `old: {}`
* `intermediate: {}`
* `custom:`
-<2> For the `custom` type, specify a list of TLS ciphers and minimum accepted TLS version.
+<3> For the `custom` type, specify a list of TLS ciphers and minimum accepted TLS version.
. Save the file to apply the changes.
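As a sketch of what the `custom` profile type looks like in full: the same `tlsSecurityProfile` shape is used by several OpenShift APIs, so assuming here (as an illustration, not the module's own example) that the resource is an Ingress Controller, with an illustrative cipher list.

```shell
# Hypothetical Custom tlsSecurityProfile stanza; cipher names are examples
# only, and minTLSVersion sets the minimum accepted TLS version.
cat > custom-tls-profile.yaml <<'EOF'
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers:
      - ECDHE-ECDSA-CHACHA20-POLY1305
      - ECDHE-RSA-CHACHA20-POLY1305
      minTLSVersion: VersionTLS12
EOF
grep -q 'minTLSVersion' custom-tls-profile.yaml && echo "ok"
```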


@@ -30,7 +30,7 @@ spec:
source:
pvc:
 namespace: "<source-namespace>" <2>
-name: "<my-favorite-vm-disk>" <3>)
+name: "<my-favorite-vm-disk>" <3>
storage: <4>
resources:
requests:


@@ -9,7 +9,7 @@ Before you install a cluster on infrastructure that you provision, you must crea
.Prerequisites
-* Deploy and configure a HTTP server to host the {op-system} image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.
+* Deploy and configure an HTTP server to host the {op-system} image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.
[IMPORTANT]
====


@@ -11,7 +11,7 @@ After the {ProductShortName} integration with {ServerlessProductName} and Kourie
[IMPORTANT]
====
-You must set the annotation `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in your Knative service as {ServerlessProductName} versions 1.14.0 and higher use a HTTP probe as the readiness probe for Knative services by default.
+You must set the annotation `sidecar.istio.io/rewriteAppHTTPProbers: "true"` in your Knative service as {ServerlessProductName} versions 1.14.0 and higher use an HTTP probe as the readiness probe for Knative services by default.
====
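The required annotation sits on the service's template metadata; a minimal sketch, assuming a hypothetical service name and a placeholder image:

```shell
# Hypothetical Knative Service carrying the rewriteAppHTTPProbers annotation
# so the Istio sidecar rewrites the HTTP readiness probe.
cat > knative-service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      containers:
      - image: <image>
EOF
grep -q 'rewriteAppHTTPProbers' knative-service.yaml && echo "ok"
```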
include::modules/serverless-ossm-enable-sidecar-injection-with-kourier.adoc[leveloffset=+1]


@@ -1,10 +1,10 @@
[id="ossm-production"]
include::modules/ossm-document-attributes.adoc[]
= Configuring Service Mesh for production
:context: ossm-architecture
toc::[]
When you are ready to move from a basic installation to production, you must configure your control plane, tracing, and security certificates to meet production requirements.
.Prerequisites
@@ -13,6 +13,7 @@ When you are ready to move from a basic installation to production, you must con
include::modules/ossm-smcp-prod.adoc[leveloffset=+1]
-= Additional resources
+[id="additional-resources_ossm-production"]
+== Additional resources

-For more information about tuning {ProductShortName} for performance, see xref:../../service_mesh/v2x/ossm-performance-scalability.adoc#ossm-performance-scalability[Performance and Scalability].
+* For more information about tuning {ProductShortName} for performance, see xref:../../service_mesh/v2x/ossm-performance-scalability.adoc#ossm-performance-scalability[Performance and scalability].


@@ -7,12 +7,13 @@ include::modules/ossm-cr-example.adoc[leveloffset=+1]
include::modules/ossm-cr-threescale.adoc[leveloffset=+1]
-== More information
+[id="additional-resources_ossm-reference"]
+== Additional resources

-For more information about how to configure the features in the `ServiceMeshControlPlane` see the following link.
+* For more information about how to configure the features in the `ServiceMeshControlPlane`, see the following links:
-* xref:../../service_mesh/v2x/ossm-security.adoc#ossm-security-mtls_ossm-security[Security]
+** xref:../../service_mesh/v2x/ossm-security.adoc#ossm-security-mtls_ossm-security[Security]
-* xref:../../service_mesh/v2x/ossm-traffic-manage.adoc#ossm-routing-bookinfo_routing-traffic[Traffic management]
+** xref:../../service_mesh/v2x/ossm-traffic-manage.adoc#ossm-routing-bookinfo_routing-traffic[Traffic management]
-* xref:../../service_mesh/v2x/ossm-observability.adoc#ossm-observability[Metrics and traces]
+** xref:../../service_mesh/v2x/ossm-observability.adoc#ossm-observability[Metrics and traces]


@@ -317,7 +317,7 @@ s| Feature s| {oke} s| {product-title} s| Operator name
| OpenShift Container Storage | Not Included - Requires separate subscription | Not Included - Requires separate subscription | OpenShift Container Storage
s| Feature s| {oke} s| {product-title} s| Operator name
| Ansible Automation Platform Resource Operator | Not Included - Requires separate subscription | Not Included - Requires separate subscription | Ansible Automation Platform Resource Operator
-| Business Automation provided by Red hat | Not Included - Requires separate subscription | Not Included - Requires separate subscription | Business Automation Operator
+| Business Automation provided by Red Hat | Not Included - Requires separate subscription | Not Included - Requires separate subscription | Business Automation Operator
| Data Grid provided by Red Hat | Not Included - Requires separate subscription | Not Included - Requires separate subscription | Data Grid Operator
| Red Hat Integration provided by Red Hat | Not Included - Requires separate subscription | Not Included - Requires separate subscription | Red Hat Integration Operator
| Red Hat Integration - 3Scale provided by Red Hat | Not Included - Requires separate subscription | Not Included - Requires separate subscription | 3scale