Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 21:46:22 +01:00

Commit: Correcting some style issues
@@ -27,7 +27,7 @@ This section uses the `pipelines-tutorial` example to demonstrate the preceding

== Prerequisites

* You have access to an {product-title} cluster.
-* You have installed xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[OpenShift Pipelines] using the {pipelines-title} Operator listed in the OpenShift OperatorHub. Once installed, it is applicable to the entire cluster.
+* You have installed xref:../../cicd/pipelines/installing-pipelines.adoc#installing-pipelines[OpenShift Pipelines] using the {pipelines-title} Operator listed in the OpenShift OperatorHub. After it is installed, it is applicable to the entire cluster.
* You have installed xref:../../cli_reference/tkn_cli/installing-tkn.adoc#installing-tkn[OpenShift Pipelines CLI].
* You have forked the front-end link:https://github.com/openshift/pipelines-vote-ui/tree/{pipelines-ver}[`pipelines-vote-ui`] and back-end link:https://github.com/openshift/pipelines-vote-api/tree/{pipelines-ver}[`pipelines-vote-api`] Git repositories using your GitHub ID, and have administrator access to these repositories.
* Optional: You have cloned the link:https://github.com/openshift/pipelines-tutorial/tree/{pipelines-ver}[`pipelines-tutorial`] Git repository.

@@ -46,7 +46,7 @@ For more information, see xref:../operators/understanding/olm-what-operators-are

To install {product-title} 3.11, you prepared your {op-system-base-full} hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.

-In {product-title} {product-version}, you use the OpenShift installation program to create a minimum set of resources required for a cluster. Once the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, {op-system-first} systems are managed by the Machine Config Operator (MCO) that runs in the {product-title} cluster.
+In {product-title} {product-version}, you use the OpenShift installation program to create a minimum set of resources required for a cluster. After the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, {op-system-first} systems are managed by the Machine Config Operator (MCO) that runs in the {product-title} cluster.

For more information, see xref:../architecture/architecture-installation.adoc#installation-process_architecture-installation[Installation process].
@@ -27,7 +27,7 @@ to bridge the internet into your {product-title} cluster's VPC. The Amazon
Machine Image (AMI) you use does matter. With {op-system-first},
for example, you can provide keys via Ignition, like the installer does.

-. Once you provisioned your Amazon EC2 instance and can SSH into it, you must add
+. After you provisioned your Amazon EC2 instance and can SSH into it, you must add
the SSH key that you associated with your {product-title} installation. This key
can be different from the key for the bastion instance, but does not have to be.
+
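For reference, one way to make the installation key available on the instance is SSH agent forwarding; this is a minimal sketch, and the key file name and address are placeholders:

[source,terminal]
----
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/<installation_key_name>
$ ssh -A core@<instance_ip_address>
----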
@@ -19,7 +19,7 @@ You can add bare metal hosts to the cluster in the web console.
. Specify a unique name for the new bare metal host.
. Set the *Boot MAC address*.
. Set the *Baseboard Management Console (BMC) Address*.
-. Optionally, enable power management for the host. This allows {product-title} to control the power state of the host.
+. Optional: Enable power management for the host. This allows {product-title} to control the power state of the host.
. Enter the user credentials for the host's baseboard management controller (BMC).
. Select to power on the host after creation, and select *Create*.
. Scale up the number of replicas to match the number of available bare metal hosts. Navigate to *Compute* -> *MachineSets*, and increase the number of machine replicas in the cluster by selecting *Edit Machine count* from the *Actions* drop-down menu.
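The same scale-up can also be done from the CLI; a minimal sketch, with the machine set name and replica count as placeholders:

[source,terminal]
----
$ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<number_of_hosts>
----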
@@ -14,7 +14,7 @@ You can configure pods to request bound service account tokens by using volume p

.Procedure

-. Optionally, set the service account issuer.
+. Optional: Set the service account issuer.
+
This step is typically not required if the bound tokens are used only within the cluster.
+
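As a rough illustration of that optional step, the issuer can be set on the cluster `Authentication` resource; the issuer URL is a placeholder and the exact procedure may differ:

[source,terminal]
----
$ oc patch authentications.config.openshift.io cluster --type=merge -p '{"spec":{"serviceAccountIssuer":"https://<issuer_url>"}}'
----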
@@ -54,7 +54,7 @@ spec:
ifndef::openshift-online[]
`Dockerfile`, to build from an inline Dockerfile,
endif::[]
-or `Binary`, to accept binary payloads. It is possible to have multiple sources at once. Refer to the documentation for each source type for details.
+or `Binary`, to accept binary payloads. It is possible to have multiple sources at once. See the documentation for each source type for details.
<5> The `strategy` section describes the build strategy used to execute the build. You can specify a `Source`
ifndef::openshift-online[]
, `Docker`, or `Custom`

@@ -52,7 +52,7 @@ spec:
type: JenkinsPipeline
----
+
-. Once you create a `BuildConfig` object with a `jenkinsPipelineStrategy`, tell the
+. After you create a `BuildConfig` object with a `jenkinsPipelineStrategy`, tell the
pipeline what to do by using an inline `jenkinsfile`:
+
[NOTE]

@@ -18,7 +18,7 @@ The number of primary shards for the index templates is equal to the number of E

The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume.
You can use a `ClusterLogging` custom resource (CR) to increase the number of Elasticsearch nodes, as needed.
-Refer to the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html[Elasticsearch documentation] for considerations involved in configuring storage.
+See the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html[Elasticsearch documentation] for considerations involved in configuring storage.

[NOTE]
====
@@ -2,10 +2,10 @@
// * logging/cluster-logging-dashboards.adoc

[id="cluster-logging-dashboards-access_{context}"]
= Accessing the Elastisearch and Openshift Logging dashboards

You can view the *Logging/Elasticsearch Nodes* and *Openshift Logging* dashboards in the {product-title} web console.

.Procedure

@@ -19,7 +19,7 @@ For the *Logging/Elasticsearch Nodes* dashboard, you can select the Elasticsearc
+
The appropriate dashboard is displayed, showing multiple charts of data.

-. Optionally, select a different time range to display or refresh rate for the data from the *Time Range* and *Refresh Interval* menus.
+. Optional: Select a different time range to display or refresh rate for the data from the *Time Range* and *Refresh Interval* menus.

[NOTE]
====

@@ -34,7 +34,7 @@ status:

The remediation payload is stored in the `spec.current` attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a `MachineConfig` object. For Platform scans, the remediation payload is often a different kind of an object (for example, a `ConfigMap` or `Secret` object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text.

-To see exactly what the remediation does when applied, the `MachineConfig` object contents use the Ignition objects for the configuration. Refer to the link:https://coreos.github.io/ignition/specs/[Ignition specification] for further information about the format. In our example, `the spec.config.storage.files[0].path` attribute specifies the file that is being create by this remediation (`/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf`) and the `spec.config.storage.files[0].contents.source` attribute specifies the contents of that file.
+To see exactly what the remediation does when applied, the `MachineConfig` object contents use the Ignition objects for the configuration. See the link:https://coreos.github.io/ignition/specs/[Ignition specification] for further information about the format. In our example, `the spec.config.storage.files[0].path` attribute specifies the file that is being create by this remediation (`/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf`) and the `spec.config.storage.files[0].contents.source` attribute specifies the contents of that file.
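For context, applying such a remediation from the CLI is typically a matter of setting its `spec.apply` field to `true`; the remediation name here is a placeholder:

[source,terminal]
----
$ oc patch complianceremediations/<remediation_name> --type=merge -p '{"spec":{"apply":true}}'
----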
[NOTE]
====
@@ -78,7 +78,7 @@ You may, optionally, create an S3 bucket within your own AWS account, and config

=== Prerequisites

-You must first create the S3 bucket within your own AWS account, in the same AWS region that your {product-title} cluster is deployed. This S3 bucket can be configured with all public access blocked, including system permissions. Once your S3 bucket is created, you must attach a policy to your bucket as https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy[outlined by AWS].
+You must first create the S3 bucket within your own AWS account, in the same AWS region that your {product-title} cluster is deployed. This S3 bucket can be configured with all public access blocked, including system permissions. After your S3 bucket is created, you must attach a policy to your bucket as https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy[outlined by AWS].
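For illustration, attaching that policy with the AWS CLI might look like the following, assuming the policy JSON from the AWS documentation is saved locally; the bucket and file names are placeholders:

[source,terminal]
----
$ aws s3api put-bucket-policy --bucket <bucket_name> --policy file://bucket-policy.json
----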
=== Configuring the LoadBalancer service
@@ -37,7 +37,7 @@ value. The following `oc patch` command will change the PVC's size:
$ oc patch pvc <pvc_name> -p '{"spec": {"resources": {"requests": {"storage": "8Gi"}}}}'
----

-. Once the cloud provider object has finished re-sizing, the PVC might be set to
+. After the cloud provider object has finished re-sizing, the PVC might be set to
`FileSystemResizePending`. The following command is used to check
the condition:
+

@@ -85,5 +85,5 @@ Mounted By: mysql-1-q4nz7 <3>
$ oc delete pod mysql-1-q4nz7
----

-. Once the pod is running, the newly requested size is available and the
+. After the pod is running, the newly requested size is available and the
`FileSystemResizePending` condition is removed from the PVC.
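A quick way to confirm that the new size was applied, assuming the same PVC name as above, is to read the capacity reported in its status:

[source,terminal]
----
$ oc get pvc <pvc_name> -o jsonpath='{.status.capacity.storage}'
----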
@@ -47,6 +47,6 @@ After the ISO is copied to the USB drive, you can use the USB drive to install {

. Click *Install cluster*.

-. Monitor the installation's progress. Watch the cluster events. Once the installation process finishes writing the discovery image to the server's drive, the server will reboot. Remove the USB drive and reset the BIOS to boot to the server's local media rather than the USB drive.
+. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the discovery image to the server's drive, the server will reboot. Remove the USB drive and reset the BIOS to boot to the server's local media rather than the USB drive.

The server will reboot several times, deploying a control plane followed by a worker.

@@ -107,7 +107,7 @@ endif::ibm-z,ibm-z-kvm[]
+
[NOTE]
====
-Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the `machine-approver` if the Kubelet requests a new certificate with identical parameters.
+Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the `machine-approver` if the Kubelet requests a new certificate with identical parameters.
====
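For reference, listing pending CSRs and approving one looks like this; the CSR name is a placeholder:

[source,terminal]
----
$ oc get csr
$ oc adm certificate approve <csr_name>
----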
+
[NOTE]
@@ -84,10 +84,10 @@ sshKey: '<ssh_pub_key>'
+
<1> Scale the worker machines based on the number of worker nodes that are part of the {product-title} cluster.
ifdef::upstream[]
-<2> Refer to the xref:bmc-addressing_{context}[BMC addressing] sections for more options.
+<2> See the xref:bmc-addressing_{context}[BMC addressing] sections for more options.
endif::[]
ifndef::upstream[]
-<2> Refer to the BMC addressing sections for more options.
+<2> See the BMC addressing sections for more options.
endif::[]

@@ -22,7 +22,7 @@ The installer for installer-provisioned {product-title} clusters validates the h

[NOTE]
====
-Refer to the hardware documentation for the nodes or contact the hardware vendor for information on updating the firmware.
+See the hardware documentation for the nodes or contact the hardware vendor for information on updating the firmware.

There are no known firmware limitations for HP servers.

@@ -48,7 +48,7 @@ test-cluster.example.com
----

ifeval::[{product-version}>4.7]
-{product-title} 4.8 and later releases include functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. Once the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
+{product-title} 4.8 and later releases include functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
endif::[]
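To spot-check that such a record resolves, a simple lookup from a host on the same network can be used; the node and domain names are placeholders:

[source,terminal]
----
$ dig +short <node_name>.<cluster_name>.<base_domain>
----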
ifdef::upstream[]
@@ -79,7 +79,7 @@ spec:
+
Replace `<num>` for the worker number of the bare metal node in the two `name` fields and the `credentialsName` field. Replace `<base64-of-uid>` with the `base64` string of the user name. Replace `<base64-of-pwd>` with the `base64` string of the password. Replace `<NIC1-mac-address>` with the MAC address of the bare metal node's first NIC.
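For example, the `base64` strings for the user name and password can be generated on any Linux host; the values shown are placeholders:

[source,terminal]
----
$ echo -n '<username>' | base64
$ echo -n '<password>' | base64
----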
+
-Refer to the BMC addressing section for additional BMC configuration options. Replace `<protocol>` with the BMC protocol, such as IPMI, RedFish, or others.
+See the BMC addressing section for additional BMC configuration options. Replace `<protocol>` with the BMC protocol, such as IPMI, RedFish, or others.
Replace `<bmc-ip>` with the IP address of the bare metal node's baseboard management controller.
+
[NOTE]
@@ -81,7 +81,7 @@ NAME STATUS PROVISIONING STATUS CONSUMER BM
openshift-worker-<num> OK provisioning openshift-worker-<num>-65tjz ipmi://<out-of-band-ip> unknown true
----
+
-The `provisioning` status remains until the {product-title} cluster provisions the node. This can take 30 minutes or more. Once complete, the status will change to `provisioned`.
+The `provisioning` status remains until the {product-title} cluster provisions the node. This can take 30 minutes or more. After the node is provisioned, the status will change to `provisioned`.
+
[source,bash]
----

@@ -89,7 +89,7 @@ NAME STATUS PROVISIONING STATUS CONSUMER BM
openshift-worker-<num> OK provisioned openshift-worker-<num>-65tjz ipmi://<out-of-band-ip> unknown true
----

-. Once provisioned, ensure the bare metal node is ready.
+. After provisioning completes, ensure the bare metal node is ready.
+
[source,bash]
----

@@ -80,7 +80,7 @@ When deploying a {product-title} cluster without the `provisioning` network, you
====

-. Once you obtain the IP address, log in to the bootstrap VM using the `ssh` command:
+. After you obtain the IP address, log in to the bootstrap VM using the `ssh` command:
+
[NOTE]
====

@@ -6,7 +6,7 @@

= Installing the Kubernetes NMState Operator

-You must install the Kubernetes NMState Operator from the OpenShift web console while logged in with administrator privileges. Once installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.
+You must install the Kubernetes NMState Operator from the OpenShift web console while logged in with administrator privileges. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes.

.Procedure

@@ -22,7 +22,7 @@ You must install the Kubernetes NMState Operator from the OpenShift web console

. Click *Install* to install the Operator.

-. Once the Operator finishes installing, click *View Operator*.
+. After the Operator finishes installing, click *View Operator*.

. Under *Provided APIs*, click *Create Instance* to open the dialog box for creating an instance of `kubernetes-nmstate`.
@@ -17,7 +17,7 @@ Also, the default installation makes it possible to use the OpenShift service fo

By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator's container, and a sidecar container running OpenShift auth-proxy.

-To access the reporting API, the Metering Operator exposes a route. Once that route has been installed, you can run the following command to get the route's hostname.
+To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route's hostname.

[source,terminal]
----

@@ -16,7 +16,7 @@ You begin working with metrics by entering one or several Prometheus Query Langu
. For multiple queries, click *Add Query*.
. For deleting queries, click {kebab} for the query, then select *Delete query*.
. For keeping but not running a query, click the *Disable query* button.
-. Once you finish creating queries, click the *Run Queries* button. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
+. After you finish creating queries, click the *Run Queries* button. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
+
[NOTE]
====

@@ -5,7 +5,7 @@
[id="nodes-containers-init-creating_{context}"]
= Creating Init Containers

-The following example outlines a simple Pod which has two Init Containers. The first waits for `myservice` and the second waits for `mydb`. Once both containers complete, the pod begins.
+The following example outlines a simple Pod which has two Init Containers. The first waits for `myservice` and the second waits for `mydb`. After both containers complete, the pod begins.

.Procedure

@@ -34,16 +34,16 @@ spec:
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: OnFailure <6>
----
-1. Optionally, specify how many pod replicas a job should run in parallel; defaults to `1`.
+<1> Optional: Specify how many pod replicas a job should run in parallel; defaults to `1`.
* For non-parallel jobs, leave unset. When unset, defaults to `1`.
-2. Optionally, specify how many successful pod completions are needed to mark a job completed.
+<2> Optional: Specify how many successful pod completions are needed to mark a job completed.
* For non-parallel jobs, leave unset. When unset, defaults to `1`.
* For parallel jobs with a fixed completion count, specify the number of completions.
* For parallel jobs with a work queue, leave unset. When unset defaults to the `parallelism` value.
-3. Optionally, specify the maximum duration the job can run.
-4. Optionally, specify the number of retries for a job. This field defaults to six.
-5. Specify the template for the pod the controller creates.
-6. Specify the restart policy of the pod:
+<3> Optional: Specify the maximum duration the job can run.
+<4> Optional: Specify the number of retries for a job. This field defaults to six.
+<5> Specify the template for the pod the controller creates.
+<6> Specify the restart policy of the pod:
* `Never`. Do not restart the job.
* `OnFailure`. Restart the job only if it fails.
* `Always`. Always restart the job.
@@ -39,7 +39,7 @@ spec:
+
[NOTE]
=====
-Please refer to the `Configuring SR-IOV network devices` section for a detailed explanation on each option in `SriovNetworkNodePolicy`.
+See the `Configuring SR-IOV network devices` section for a detailed explanation on each option in `SriovNetworkNodePolicy`.

When applying the configuration specified in a `SriovNetworkNodePolicy` object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes.
It may take several minutes for a configuration change to apply.

@@ -74,7 +74,7 @@ spec:
+
[NOTE]
=====
-Please refer to the `Configuring SR-IOV additional network` section for a detailed explanation on each option in `SriovNetwork`.
+See the `Configuring SR-IOV additional network` section for a detailed explanation on each option in `SriovNetwork`.
=====
+
. Create the `SriovNetwork` object by running the following command:

@@ -42,7 +42,7 @@ spec:
+
[NOTE]
=====
-Please refer to `Configuring SR-IOV network devices` section for detailed explanation on each option in `SriovNetworkNodePolicy`.
+See the `Configuring SR-IOV network devices` section for detailed explanation on each option in `SriovNetworkNodePolicy`.

When applying the configuration specified in a `SriovNetworkNodePolicy` object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes.
It may take several minutes for a configuration change to apply.

@@ -78,7 +78,7 @@ spec:
+
[NOTE]
=====
-Please refer to `Configuring SR-IOV additional network` section for detailed explanation on each option in `SriovNetwork`.
+See the `Configuring SR-IOV additional network` section for detailed explanation on each option in `SriovNetwork`.
=====

. Create the `SriovNetworkNodePolicy` object by running the following command:

@@ -45,7 +45,7 @@ spec:
+
[NOTE]
=====
-Please refer to the `Configuring SR-IOV network devices` section for a detailed explanation on each option in `SriovNetworkNodePolicy`.
+See the `Configuring SR-IOV network devices` section for a detailed explanation on each option in `SriovNetworkNodePolicy`.

When applying the configuration specified in a `SriovNetworkNodePolicy` object, the SR-IOV Operator may drain the nodes, and in some cases, reboot nodes.
It may take several minutes for a configuration change to apply.
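While a change rolls out, one hedged way to watch whether the Operator has finished syncing the nodes is to list the node state objects; the namespace shown is the usual default and may differ in your cluster:

[source,terminal]
----
$ oc get sriovnetworknodestates -n openshift-sriov-network-operator
----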
@@ -81,7 +81,7 @@ spec:
+
[NOTE]
=====
-Please refer to `Configuring SR-IOV additional network` section for detailed explanation on each option in `SriovNetwork`.
+See the `Configuring SR-IOV additional network` section for detailed explanation on each option in `SriovNetwork`.
=====

. Create the `SriovNetworkNodePolicy` object by running the following command:

@@ -33,7 +33,7 @@ image::odc_project_metrics.png[]
.. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result.
.. Click *Show PromQL* to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace.
.. Use the drop-down list to set a time range for the data being displayed. You can click *Reset Zoom* to reset it to the default time range.
-.. Optionally, in the *Select Query* list, select *Custom Query* to create a custom Prometheus query and filter relevant metrics.
+.. Optional: In the *Select Query* list, select *Custom Query* to create a custom Prometheus query and filter relevant metrics.

* Use the *Alerts* tab to see the rules that trigger alerts for the applications in your project, identify the alerts firing in the project, and silence them if required.
+

@@ -21,7 +21,7 @@ The *Pipelines* view in the *Developer* perspective lists all the pipelines in a
.Pipeline details
image::op-pipeline-details.png[Pipeline details]
+
-. Optionally, in the *Pipeline details* page:
+. Optional: In the *Pipeline details* page:
* Click the *Metrics* tab to see the following information about pipelines:
** *Pipeline Success Ratio*
** *Number of Pipeline Runs*

@@ -6,7 +6,7 @@
[id="ossm-operatorhub-remove-operators_{context}"]
= Removing the installed Operators

-You must remove the Operators to successfully remove {ProductName}. Once you remove the {ProductName} Operator, you must remove the Kiali Operator, the Jaeger Operator, and the OpenShift Elasticsearch Operator.
+You must remove the Operators to successfully remove {ProductName}. After you remove the {ProductName} Operator, you must remove the Kiali Operator, the Jaeger Operator, and the OpenShift Elasticsearch Operator.

[id="ossm-remove-operator-servicemesh_{context}"]
== Removing the Operators
@@ -94,7 +94,7 @@ A {ProductName} control plane component called Istio OpenShift Routing (IOR) syn

[id="ossm-catch-all-domains_{context}"]
=== Catch-all domains
-Catch-all domains ("\*") are not supported. If one is found in the Gateway definition, {ProductName} _will_ create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will __not__ be a catch all ("*") route, instead it will have a hostname in the form `<route-name>[-<project>].<suffix>`. Refer to the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize it.
+Catch-all domains ("\*") are not supported. If one is found in the Gateway definition, {ProductName} _will_ create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will __not__ be a catch all ("*") route, instead it will have a hostname in the form `<route-name>[-<project>].<suffix>`. See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize it.

[id="ossm-subdomains_{context}"]
=== Subdomains

@@ -106,7 +106,7 @@ A {ProductName} control plane component called Istio OpenShift Routing (IOR) syn

[id="ossm-catch-all-domains_{context}"]
=== Catch-all domains
-Catch-all domains ("\*") are not supported. If one is found in the Gateway definition, {ProductName} _will_ create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will __not__ be a catch all ("*") route, instead it will have a hostname in the form `<route-name>[-<project>].<suffix>`. Refer to the OpenShift documentation for more information about how default hostnames work and how a `cluster-admin` can customize it. If you use {product-dedicated}, refer to the {product-dedicated} the `dedicated-admin` role.
+Catch-all domains ("\*") are not supported. If one is found in the Gateway definition, {ProductName} _will_ create the route, but will rely on OpenShift to create a default hostname. This means that the newly created route will __not__ be a catch all ("*") route, instead it will have a hostname in the form `<route-name>[-<project>].<suffix>`. See the OpenShift documentation for more information about how default hostnames work and how a `cluster-admin` can customize it. If you use {product-dedicated}, refer to the {product-dedicated} the `dedicated-admin` role.

[id="ossm-subdomains_{context}"]
=== Subdomains

@@ -40,7 +40,7 @@ $ oc delete localvolume --all --all-namespaces

.. Click *Remove* in the window that appears.

-. The PVs created by the Local Storage Operator will remain in the cluster until deleted. Once these volumes are no longer in use, delete them by running the following command:
+. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:
+
[source,terminal]
----

@@ -128,7 +128,7 @@ The driver toolkit was introduced to {product-title} 4.6 as of version 4.6.30, i
$ oc create -f 0000-buildconfig.yaml
----

-. Once the builder pod completes successfully, deploy the driver container image as a `DaemonSet`.
+. After the builder pod completes successfully, deploy the driver container image as a `DaemonSet`.

.. The driver container must run with the privileged security context in order to load the kernel modules on the host. The following YAML file contains the RBAC rules and the `DaemonSet` for running the driver container. Save this YAML as `1000-driver-container.yaml`.
+

@@ -204,7 +204,7 @@ spec:
$ oc create -f 1000-drivercontainer.yaml
----

-. Once the pods are running on the worker nodes, verify that the `simple_kmod` kernel module is loaded successfully on the host machines with `lsmod`.
+. After the pods are running on the worker nodes, verify that the `simple_kmod` kernel module is loaded successfully on the host machines with `lsmod`.
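One way to run that `lsmod` check, assuming a debug shell on the node is acceptable; the node name is a placeholder:

[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host lsmod | grep simple_kmod
----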
.. Verify that the pods are running:
+
@@ -46,5 +46,5 @@ Service accounts are represented with the `ServiceAccount` object. Examples:
Each user must authenticate in
some way to access {product-title}. API requests with no authentication
or invalid authentication are authenticated as requests by the `anonymous`
-system user. Once authenticated, policy determines what the user is
+system user. After authenticated, policy determines what the user is
authorized to do.

@@ -20,7 +20,7 @@ container image registry.
//.Additional resources
//* link:https://quay.io[Quay.io]
//* link:https://quay.io/tutorial/[Quay Tutorial]
-//* Refer to link:https://access.redhat.com/documentation/en-us/red_hat_quay/2.9/html-single/getting_started_with_red_hat_quay/[Getting Started with Red Hat Quay]
+//* See link:https://access.redhat.com/documentation/en-us/red_hat_quay/2.9/html-single/getting_started_with_red_hat_quay/[Getting Started with Red Hat Quay]
//for information about setting up your own Red Hat Quay registry.
//* To learn how to set up credentials to access
//Red Hat Quay as a secured registry, refer to Allowing Pods to Reference Images from Other Secured Registries.

@@ -25,7 +25,7 @@ to go to the Create Operator Subscription page.

. Select *Install*. The *Container Security* Operator appears after a few moments on the *Installed Operators* screen.

-. Optionally, you can add custom certificates to the CSO. In this example, create a certificate
+. Optional: You can add custom certificates to the CSO. In this example, create a certificate
named `quay.crt` in the current directory. Then run the following command to add the cert to the CSO:
+
[source,terminal]

@@ -35,7 +35,7 @@ grading system. A freshness grade is a measure of the oldest and most severe
security errata available for an image. "A" is more up to date than "F". See
link:https://access.redhat.com/articles/2803031[Container Health Index grades as used inside the Red Hat Ecosystem Catalog] for more details on this grading system.

-Refer to the link:https://access.redhat.com/security/[Red Hat Product Security Center]
+See the link:https://access.redhat.com/security/[Red Hat Product Security Center]
for details on security updates and vulnerabilities related to Red Hat software.
Check out link:https://access.redhat.com/security/security-updates/#/security-advisories[Red Hat Security Advisories]
to search for specific advisories and CVEs.
@@ -12,4 +12,4 @@ If you experience difficulty with a procedure described in this documentation, v
* Submit a support case to Red Hat Global Support Services (GSS)
* Access other product documentation

-If you have a suggestion for improving this guide or have found an error, please submit a Bugzilla report at http://bugzilla.redhat.com against *Product* for the *Documentation* component. Please provide specific details, such as the section number, guide name, and {ServerlessProductName} version so we can easily locate the content.
+If you have a suggestion for improving this guide or have found an error, please submit a Bugzilla report at http://bugzilla.redhat.com against *Product* for the *Documentation* component. Provide specific details, such as the section number, guide name, and {ServerlessProductName} version so we can easily locate the content.

@@ -10,7 +10,7 @@ To use Topology Manager, you must configure an allocation policy in the `cpumana

.Prequisites

-* Configure the CPU Manager policy to be `static`. Refer to Using CPU Manager in the Scalability and Performance section.
+* Configure the CPU Manager policy to be `static`. See the Using CPU Manager in the Scalability and Performance section.

.Procedure

@@ -18,6 +18,6 @@ To identify issues with your cluster, you can use Insights in {cloud-redhat-com}

// TODO: verify that these settings apply for Service Mesh and OpenShift virtualization, etc.
If you have a suggestion for improving this documentation or have found an
-error, please submit a link:http://bugzilla.redhat.com[Bugzilla report] against the
+error, submit a link:http://bugzilla.redhat.com[Bugzilla report] against the
*OpenShift Container Platform* product for the *Documentation* component. Please
provide specific details, such as the section name and {product-title} version.

@@ -57,7 +57,7 @@ env": [
$ oc logs -f build/rails-app-1
----

-. Once the build is complete, look at the running pods in {product-title}:
+. After the build is complete, look at the running pods in {product-title}:
+
[source,terminal]
----

@@ -245,7 +245,7 @@ claiming host `eldest.example.test` in the namespace `ns1` exists, wildcard
routes in that namespace can claim subdomain `example.test`. When the route for
host `eldest.example.test` is deleted, the next oldest route
`senior.example.test` would become the oldest route and would not affect any
-other routes. Once the route for host `senior.example.test` is deleted, the next
+other routes. After the route for host `senior.example.test` is deleted, the next
oldest route `junior.example.test` becomes the oldest route and block the
wildcard route claimant.
@@ -13,7 +13,7 @@ existing Windows virtual machine.
====
This procedure uses a generic approach to adding drivers to Windows.
The process might differ slightly between versions of Windows.
-Refer to the installation documentation for your version of Windows
+See the installation documentation for your version of Windows
for specific installation steps.
====

@@ -11,7 +11,7 @@ Install the VirtIO drivers from the attached SATA CD driver during Windows insta
[NOTE]
====
This procedure uses a generic approach to the Windows installation and the
-installation method might differ between versions of Windows. Refer to the
+installation method might differ between versions of Windows. See the
documentation for the version of Windows that you are installing.
====

@@ -4,7 +4,7 @@
[id="web-console-overview_{context}"]
= Understanding and accessing the web console

-The web console runs as a pod on the master. The static assets required to run the web console are served by the pod. Once {product-title} is successfully installed using `openshift-install create cluster`, find the URL for the web console and login credentials for your installed cluster in the CLI output of the installation program. For example:
+The web console runs as a pod on the master. The static assets required to run the web console are served by the pod. After {product-title} is successfully installed using `openshift-install create cluster`, find the URL for the web console and login credentials for your installed cluster in the CLI output of the installation program. For example:
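If that installer output is no longer at hand, the console URL can also be retrieved later from a logged-in `oc` session:

[source,terminal]
----
$ oc whoami --show-console
----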
[source,terminal]
.Example output

@@ -19,16 +19,16 @@ However, having remote worker nodes can introduce higher latency, intermittent l
* *Latency spikes or temporary reduction in throughput*: As with any network, any changes in network conditions between your cluster and the remote worker nodes can negatively impact your cluster. These types of situations are beyond the scope of this documentation.

Note the following limitations when planning a cluster with remote worker nodes:

* Remote worker nodes are supported on only bare metal clusters with user-provisioned infrastructure.

* {product-title} does not support remote worker nodes that use a different cloud provider than the on-premise cluster uses.

* Moving workloads from one Kubernetes zone to a different Kubernetes zone can be problematic due to system and environment issues, such as a specific type of memory not being available in a different zone.

-* Proxies and firewalls can present additional limitations that are beyond the scope of this document. Refer to the relevant {product-title} documentation for how to address such limitations, such as xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[Configuring your firewall].
+* Proxies and firewalls can present additional limitations that are beyond the scope of this document. See the relevant {product-title} documentation for how to address such limitations, such as xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[Configuring your firewall].

* You are responsible for configuring and maintaining L2/L3-level network connectivity between the control plane and the network-edge nodes.

include::modules/nodes-edge-remote-workers-network.adoc[leveloffset=+1]

@@ -5,6 +5,6 @@ include::modules/common-attributes.adoc[]

toc::[]

-Each container in a pod has a container image. Once you have created an image and pushed it to a registry, you can then refer to it in the pod.
+Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod.

include::modules/images-image-pull-policy-overview.adoc[leveloffset=+1]
@@ -7,7 +7,7 @@ toc::[]

== Purpose

-Node certificates are signed by the cluster; they come from a certificate authority (CA) that is generated by the bootstrap process. Once the cluster is installed, the node certificates are auto-rotated.
+Node certificates are signed by the cluster; they come from a certificate authority (CA) that is generated by the bootstrap process. After the cluster is installed, the node certificates are auto-rotated.

== Management

@@ -11,7 +11,7 @@ include::modules/technology-preview.adoc[leveloffset=+2]

{FunctionsProductName} enables developers to create and deploy stateless, event-driven functions as a Knative service on {product-title}.

-The `kn func` CLI is provided as a plug-in for the Knative `kn` CLI. {FunctionsProductName} uses the link:https://buildpacks.io/[CNCF Buildpack API] to create container images. Once a container image has been created, you can use the `kn func` CLI to deploy the container image as a Knative service on the cluster.
+The `kn func` CLI is provided as a plug-in for the Knative `kn` CLI. {FunctionsProductName} uses the link:https://buildpacks.io/[CNCF Buildpack API] to create container images. After a container image has been created, you can use the `kn func` CLI to deploy the container image as a Knative service on the cluster.
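As a rough sketch of that workflow with the `kn func` plug-in; the function name, runtime, and registry are placeholders, and flag names can vary between releases:

[source,terminal]
----
$ kn func create <function_name> --runtime node
$ kn func build --registry <registry>/<namespace>
$ kn func deploy
----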
[id="serverless-functions-about-runtimes"]
|
||||
== Supported runtimes
|
||||
|
||||
@@ -31,7 +31,7 @@ include::modules/ossm-installation-activities.adoc[leveloffset=+1]

[WARNING]
====
-Please see xref:../../logging/config/cluster-logging-log-store.adoc[Configuring the log store] for details on configuring the default Jaeger parameters for Elasticsearch in a production environment.
+See xref:../../logging/config/cluster-logging-log-store.adoc[Configuring the log store] for details on configuring the default Jaeger parameters for Elasticsearch in a production environment.
====

== Next steps

@@ -17,7 +17,7 @@ this space, the disk partitions and file system(s) in the virtual machine
might need to be expanded.

The resizing procedure varies based on the operating system that is installed on the virtual machine.
-Refer to the operating system documentation for details.
+See the operating system documentation for details.
====
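For a Linux guest with an XFS root file system, for instance, the expansion might look like the following; the disk, partition, and mount point are assumptions, and other operating systems use different tools:

[source,terminal]
----
$ sudo growpart /dev/vda 1
$ sudo xfs_growfs /
----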
== Prerequisites

@@ -13,7 +13,7 @@ The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built i
====
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.

-The resizing procedure varies based on the operating system installed on the virtual machine. Refer to the operating system documentation for details.
+The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details.
====

== Prerequisites

@@ -199,7 +199,7 @@ and more.
You can xref:../storage/expanding-persistent-volumes.adoc#expanding-persistent-volumes[expand persistent volumes], configure xref:../storage/dynamic-provisioning.adoc#dynamic-provisioning[dynamic provisioning], and use CSI to xref:../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-using-csi[configure], xref:../storage/container_storage_interface/persistent-storage-csi-cloning.adoc#persistent-storage-csi-cloning[clone], and use xref:../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#persistent-storage-csi-snapshots[snapshots] of persistent storage.

- **xref:../operators/understanding/olm-understanding-operatorhub.adoc#olm-understanding-operatorhub[Manage Operators]**: Lists of Red Hat, ISV, and community Operators can
-be reviewed by cluster administrators and xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[installed on their clusters]. After installation, you can xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[run], xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[upgrade], back up, or otherwise manage the Operator on your cluster.
+be reviewed by cluster administrators and xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[installed on their clusters]. After you install them, you can xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[run], xref:../operators/admin/olm-upgrading-operators.adoc#olm-upgrading-operators[upgrade], back up, or otherwise manage the Operator on your cluster.

=== Change cluster components

@@ -246,7 +246,7 @@ cluster can be performed by {product-title} cluster administrators. As a

////
- **xref:../operators/understanding/olm-what-operators-are.adoc#olm-what-operators-are[Manage Operators]**: Lists of Red Hat, ISV, and community Operators can
-be reviewed by cluster administrators and xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[installed on their clusters]. Once installed, you can xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[run], upgrade, back up or otherwise manage the Operator on your cluster (based on what the Operator is designed to do).
+be reviewed by cluster administrators and xref:../operators/admin/olm-adding-operators-to-cluster.adoc#olm-adding-operators-to-a-cluster[installed on their clusters]. After you install them, you can xref:../operators/user/olm-creating-apps-from-installed-operators.adoc#olm-creating-apps-from-installed-operators[run], upgrade, back up or otherwise manage the Operator on your cluster (based on what the Operator is designed to do).
////

- **xref:../administering_a_cluster/dedicated-admin-role.adoc#dedicated-managing-administrators_dedicated-administrator[Manage RBAC authorizations]**: Grant permissions to users or groups and manage service accounts.

@@ -169,7 +169,7 @@ Red Hat Middleware Bundles that include OpenShift embedded in them only contain
{product-title}.

=== OpenShift Serverless
-{oke} does not include OpenShift Serverless support. Please use {product-title}
+{oke} does not include OpenShift Serverless support. Use {product-title}
for this support.

=== Quay Integration compatible