
Terminology style updates for arch

Andrea Hoffer
2020-11-12 10:24:49 -05:00
committed by openshift-cherrypick-robot
parent d898f0d9f4
commit 19125e35d7
11 changed files with 76 additions and 78 deletions

View File

@@ -24,7 +24,7 @@ You can use GitOps tooling to create repeatable and predictable processes for ma
By using {product-title} to automate both your cluster configuration and container development process, you can pick and choose where and when to adopt GitOps practices. Using a CI pipeline that pairs with your GitOps strategy and execution plan is ideal. {product-title} provides the flexibility to choose when and how you integrate this methodology into your business practices and pipelines.
-With GitOps integration, you can declaratively configure and store your OCP cluster configuration
+With GitOps integration, you can declaratively configure and store your {product-title} cluster configuration
GitOps works well with {product-title} because you can both declaratively configure clusters and store the state of the cluster configuration in Git. For more information, see xref:../installing/install_config/customizations.adoc#customizations[Available cluster customizations].

View File

@@ -161,7 +161,7 @@ image::developer-catalog.png[{product-title} Developer Catalog]
[id="understanding-development-registry-options"]
=== Registry options
-Container Registries are where you store container images so you can share them
+Container registries are where you store container images so you can share them
with others and make them available to the platform where they ultimately run.
You can select large, public container registries that offer free accounts or a
premium version that offers more storage and special features. You can also
@@ -183,7 +183,7 @@ link:https://quay.io/[Quay.io]. The Quay.io registry is owned and managed by Red
Hat. Many of the components used in {product-title} are stored in Quay.io,
including container images and the Operators that are used to deploy
{product-title} itself. Quay.io also offers the means of storing other types of
-content, including Helm Charts.
+content, including Helm charts.
If you want your own, private container registry, {product-title} itself
includes a private container registry that is installed with {product-title}
@@ -197,7 +197,7 @@ from those registries. Some of those credentials are presented on a cluster-wide
basis from {product-title}, while other credentials can be assigned to individuals.
[id="creating-kubernetes-manifest-openshift"]
-== Creating a Kubernetes Manifest for {product-title}
+== Creating a Kubernetes manifest for {product-title}
While the container image is the basic building block for a containerized
application, more information is required to manage and deploy that application
@@ -213,34 +213,34 @@ to the next environment, roll it back to earlier versions, if necessary, and
share it with others
[id="understanding-kubernetes-pods"]
-=== About Kubernetes Pods and services
+=== About Kubernetes pods and services
While the container image is the basic unit with docker, the basic units that
Kubernetes works with are called
-link:https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/[Pods].
-Pods represent the next step in building out an application. A Pod can contain
-one or more than one container. The key is that the Pod is the single unit
+link:https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/[pods].
+Pods represent the next step in building out an application. A pod can contain
+one or more than one container. The key is that the pod is the single unit
that you deploy, scale, and manage.
Scalability and namespaces are probably the main items to consider when determining
-what goes in a Pod. For ease of deployment, you might want to deploy a container
-in a Pod and include its own logging and monitoring container in the Pod. Later,
-when you run the Pod and need to scale up an additional instance, those other
-containers are scaled up with it. For namespaces, containers in a Pod share the
+what goes in a pod. For ease of deployment, you might want to deploy a container
+in a pod and include its own logging and monitoring container in the pod. Later,
+when you run the pod and need to scale up an additional instance, those other
+containers are scaled up with it. For namespaces, containers in a pod share the
same network interfaces, shared storage volumes, and resource limitations,
-such as memory and CPU, which makes it easier to manage the contents of the Pod
-as a single unit. Containers in a Pod can also communicate with each other by
+such as memory and CPU, which makes it easier to manage the contents of the pod
+as a single unit. Containers in a pod can also communicate with each other by
using standard inter-process communications, such as System V semaphores or
POSIX shared memory.
-While individual Pods represent a scalable unit in Kubernetes, a
+While individual pods represent a scalable unit in Kubernetes, a
link:https://kubernetes.io/docs/concepts/services-networking/service/[service]
-provides a means of grouping together a set of Pods to create a complete, stable
+provides a means of grouping together a set of pods to create a complete, stable
application that can complete tasks such as load balancing. A service is also
-more permanent than a Pod because the service remains available from the same
+more permanent than a pod because the service remains available from the same
IP address until you delete it. When the service is in use, it is requested by
name and the {product-title} cluster resolves that name into the IP addresses
-and ports where you can reach the Pods that compose the service.
+and ports where you can reach the pods that compose the service.
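For illustration, a minimal sketch of a pod and a service that selects it might look like the following. The `hello` names, label, and image are placeholders, not values from this document:
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello                    # the service below selects pods by this label
spec:
  containers:
  - name: hello
    image: quay.io/example/hello:latest   # placeholder image
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello                    # groups every pod labeled app=hello
  ports:
  - protocol: TCP
    port: 80                      # stable port that clients use
    targetPort: 8080              # port the containers listen on
----
In practice, you rarely create bare pods directly; the controller objects described later in this section manage them for you.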
By their nature, containerized applications are separated from the operating
systems where they run and, by extension, their users. Part of your Kubernetes
@@ -250,20 +250,20 @@ link:https://kubernetes.io/docs/concepts/services-networking/network-policies/[n
that allow fine-grained control over communication with your containerized
applications. To connect incoming requests for HTTP, HTTPS, and other services
from outside your cluster to services inside your cluster, you can use an
-link:https://kubernetes.io/docs/concepts/services-networking/ingress/[Ingress]
+link:https://kubernetes.io/docs/concepts/services-networking/ingress/[`Ingress`]
resource.
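As a sketch, an `Ingress` resource that routes external HTTP traffic for a hypothetical hostname to the service above might look like this. The `networking.k8s.io/v1` API version is an assumption; older clusters use a beta version of this API:
[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
  - host: hello.example.com       # hypothetical external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service   # routes to the service sketched earlier
            port:
              number: 80
----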
If your container requires on-disk storage instead of database storage, which
might be provided through a service, you can add
link:https://kubernetes.io/docs/concepts/storage/volumes/[volumes]
-to your manifests to make that storage available to your Pods. You can configure
+to your manifests to make that storage available to your pods. You can configure
the manifests to create persistent volumes (PVs) or dynamically create volumes that
-are added to your Pod definitions.
+are added to your `Pod` definitions.
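For example, a minimal persistent volume claim sketch (the name and size are assumptions) that a pod can then mount through a `volumes` entry referencing `claimName: hello-data`:
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-data
spec:
  accessModes:
  - ReadWriteOnce                 # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi                # assumed size; adjust for your workload
----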
-After you define a group of Pods that compose your application, you can define
-those Pods in
-link:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/[deployments]
-and xref:../applications/deployments/what-deployments-are.adoc#what-deployments-are[deploymentconfigs].
+After you define a group of pods that compose your application, you can define
+those pods in
+link:https://kubernetes.io/docs/concepts/workloads/controllers/deployment/[`Deployment`]
+and xref:../applications/deployments/what-deployments-are.adoc#what-deployments-are[`DeploymentConfig`] objects.
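As an illustrative sketch, a `Deployment` object that keeps three replicas of the hypothetical pod running:
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3                     # desired number of pod instances
  selector:
    matchLabels:
      app: hello
  template:                       # pod template stamped out for each replica
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello:latest   # placeholder image
----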
[id="application-types"]
=== Application types
@@ -278,23 +278,22 @@ application, consider if the application is:
starts up to produce a report and exits when the report is complete. The
application might not run again then for a month. Suitable {product-title}
objects for these types of applications include
-link:https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/[Jobs]
-and https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/[CronJob] objects.
+link:https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/[`Job`]
+and https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/[`CronJob`] objects.
* Expected to run continuously. For long-running applications, you can write a
-xref:../applications/deployments/what-deployments-are.adoc#deployments-kube-deployments[Deployment]
-or a xref:../applications/deployments/what-deployments-are.adoc#deployments-and-deploymentconfigs[DeploymentConfig].
+xref:../applications/deployments/what-deployments-are.adoc#deployments-kube-deployments[deployment].
* Required to be highly available. If your application requires high
availability, then you want to size your deployment to have more than one
-instance. A Deployment or DeploymentConfig can incorporate a
-link:https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/[ReplicaSet]
-for that type of application. With ReplicaSets, Pods run across multiple nodes
+instance. A `Deployment` or `DeploymentConfig` object can incorporate a
+link:https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/[replica set]
+for that type of application. With replica sets, pods run across multiple nodes
to make sure the application is always available, even if a worker goes down.
* Need to run on every node. Some types of Kubernetes applications are intended
to run in the cluster itself on every master or worker node. DNS and monitoring
applications are examples of applications that need to run continuously on every
node. You can run this type of application as a
-link:https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[DaemonSet].
-You can also run a DaemonSet on a subset of nodes, based on node labels.
+link:https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[daemon set].
+You can also run a daemon set on a subset of nodes, based on node labels.
* Require life-cycle management. When you want to hand off your application so
that others can use it, consider creating an
link:https://coreos.com/operators/[Operator]. Operators let you build in
@@ -305,8 +304,8 @@ Operators to selected namespaces so that users in the cluster can run them.
requirements or numbering requirements. For example, you might be
required to run exactly three instances of the application and to name the
instances `0`, `1`, and `2`. A
-https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[StatefulSet]
-is suitable for this application. StatefulSets are most useful for applications
+https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/[stateful set]
+is suitable for this application. Stateful sets are most useful for applications
that require independent storage, such as databases and zookeeper clusters.
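To illustrate the last case, a sketch of a stateful set (names and sizes are placeholders) whose pods are created as `db-0`, `db-1`, and `db-2`, each with its own volume claim:
[source,yaml]
----
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                 # headless service that gives pods stable network identities
  replicas: 3                     # creates pods db-0, db-1, and db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: quay.io/example/db:latest   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:           # independent storage for each pod
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi            # assumed size
----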
[id="supporting-components"]
@@ -326,8 +325,8 @@ with their applications.
* Templates, which are useful for a one-off type of application, where the
lifecycle of a component is not important after it is installed. A template provides an easy
way to get started developing a Kubernetes application with minimal overhead.
-A template can be a list of resource definitions, which could be deployments,
-services, routes, or other objects. If you want to change names or resources,
+A template can be a list of resource definitions, which could be `Deployment`,
+`Service`, `Route`, or other objects. If you want to change names or resources,
you can set these values as parameters in the template.
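As a sketch, a template that parameterizes the name of a single service (the parameter and object are illustrative); you process it with a command such as `oc process -f template.yaml -p APP_NAME=myapp | oc apply -f -`:
[source,yaml]
----
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: hello-template
parameters:
- name: APP_NAME                  # substituted wherever ${APP_NAME} appears
  value: hello                    # default value
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    selector:
      app: ${APP_NAME}
    ports:
    - port: 80
----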
You can configure the supporting Operators and

View File

@@ -43,7 +43,7 @@ webhooks:
----
<1> Specifies a mutating admission plug-in configuration.
-<2> The name for the webhook object. Replace `<webhook_name>` with the appropriate value.
+<2> The name for the `MutatingWebhookConfiguration` object. Replace `<webhook_name>` with the appropriate value.
<3> The name of the webhook to call. Replace `<webhook_name>` with the appropriate value.
<4> Information about how to connect to, trust, and send data to the webhook server.
<5> The namespace where the front-end service is created.
@@ -97,7 +97,7 @@ webhooks:
----
<1> Specifies a validating admission plug-in configuration.
-<2> The name for the webhook object. Replace `<webhook_name>` with the appropriate value.
+<2> The name for the `ValidatingWebhookConfiguration` object. Replace `<webhook_name>` with the appropriate value.
<3> The name of the webhook to call. Replace `<webhook_name>` with the appropriate value.
<4> Information about how to connect to, trust, and send data to the webhook server.
<5> The namespace where the front-end service is created.
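Putting the callouts together, a minimal sketch of such an object might look like the following. The `<...>` values are placeholders, and the `sideEffects` and `admissionReviewVersions` fields assume the `admissionregistration.k8s.io/v1` API, which requires them:
[source,yaml]
----
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: <webhook_name>               # name for the configuration object
webhooks:
- name: <webhook_name>.example.com   # name of the webhook to call (hypothetical)
  clientConfig:                      # how to connect to the webhook server
    service:
      namespace: <namespace>         # namespace where the front-end service is created
      name: <service_name>
      path: /validate
    caBundle: <ca_bundle>            # PEM CA bundle, base64-encoded
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
----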

View File

@@ -27,7 +27,7 @@ determines on which nodes to start containers and pods. Important services run
stopping container workloads, and a service proxy, which manages communication
for pods across workers.
-In {product-title}, MachineSets control the worker machines. Machines with
+In {product-title}, machine sets control the worker machines. Machines with
the worker role drive compute workloads that are governed by a specific machine
pool that autoscales them. Because {product-title} has the capacity to support
multiple machine types, the worker machines are classed as _compute_ machines.
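As an abbreviated sketch, a machine set resource might look like the following; the cluster ID is a placeholder and the provider-specific `providerSpec` stanza, which varies by cloud, is omitted:
[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <cluster_id>-worker-a     # placeholder name
  namespace: openshift-machine-api
spec:
  replicas: 2                     # desired number of worker machines
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <cluster_id>-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: <cluster_id>-worker-a
    spec:
      providerSpec: {}            # provider-specific machine settings omitted
----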
@@ -46,7 +46,7 @@ than just the Kubernetes services for managing the {product-title} cluster.
Because all of the machines with the control plane role are master machines,
the terms _master_ and _control plane_ are used interchangeably to describe
them. Instead of being grouped into a
-MachineSet, master machines are defined by a series of standalone machine API
+machine set, master machines are defined by a series of standalone machine API
resources. Extra controls apply to master machines to prevent you from deleting
all master machines and breaking your cluster.
@@ -65,7 +65,7 @@ Kubernetes API server, etcd, Kubernetes controller manager, and HAProxy services
|===
|Component |Description
|Kubernetes API server
-|The Kubernetes API server validates and configures the data for pods, Services,
+|The Kubernetes API server validates and configures the data for pods, services,
and replication controllers. It also provides a focal point for the shared state of the cluster.
|etcd
|etcd stores the persistent master state while other components watch etcd for

View File

@@ -41,7 +41,7 @@ to provide fast installation, Operator-based management, and simplified upgrades
{op-system} includes:
* Ignition, which {product-title} uses as a firstboot system configuration for initially bringing up and configuring machines.
-* CRI-O, a Kubernetes native container runtime implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides facilities for running, stopping, and restarting containers. It fully replaces the Docker Container Engine , which was used in {product-title} 3.
+* CRI-O, a Kubernetes native container runtime implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides facilities for running, stopping, and restarting containers. It fully replaces the Docker Container Engine, which was used in {product-title} 3.
* Kubelet, the primary node agent for Kubernetes that is responsible for
launching and monitoring containers.

View File

@@ -206,7 +206,7 @@ spec:
<4> Defines the target port within pods. This example uses port 8443.
<5> Specifies the port used by the readiness probe. This example uses port 8443.
-. Deploy the DaemonSet:
+. Deploy the daemon set:
+
[source,terminal]
----
@@ -275,7 +275,7 @@ items:
$ oc apply -f webhook-service.yaml
----
-. Define a Custom Resource Definition for the webhook server, in a file called `webhook-crd.yaml`:
+. Define a custom resource definition for the webhook server, in a file called `webhook-crd.yaml`:
+
[source,yaml]
----
@@ -292,7 +292,7 @@ spec:
singular: namespacereservation <6>
kind: NamespaceReservation <7>
----
-<1> Reflects Custom Resource Definition `spec` values and is in the format `<plural>.<group>`. This example uses the `namespacereservations` resource.
+<1> Reflects `CustomResourceDefinition` `spec` values and is in the format `<plural>.<group>`. This example uses the `namespacereservations` resource.
<2> REST API group name.
<3> REST API version name.
<4> Accepted values are `Namespaced` or `Cluster`.
@@ -300,7 +300,7 @@ spec:
<6> Alias seen in `oc` output.
<7> The reference for resource manifests.
-. Apply the Custom Resource Definition:
+. Apply the custom resource definition:
+
[source,terminal]
----
@@ -369,7 +369,7 @@ webhooks:
- namespaces
failurePolicy: Fail
----
-<1> Name for the webhook object. This example uses the `namespacereservations` resource.
+<1> Name for the `ValidatingWebhookConfiguration` object. This example uses the `namespacereservations` resource.
<2> Name of the webhook to call. This example uses the `namespacereservations` resource.
<3> Enables access to the webhook server through the aggregated API.
<4> The webhook URL used for admission requests. This example uses the `namespacereservation` resource.

View File

@@ -5,9 +5,9 @@
[id="digging-into-machine-config_{context}"]
= Changing Ignition configs after installation
-Machine Config Pools manage a cluster of nodes and their corresponding Machine
-Configs. Machine Configs contain configuration information for a cluster.
-To list all Machine Config Pools that are known:
+Machine config pools manage a cluster of nodes and their corresponding machine
+configs. Machine configs contain configuration information for a cluster.
+To list all machine config pools that are known:
[source,terminal]
----
@@ -22,7 +22,7 @@ master master-1638c1aea398413bb918e76632f20799 False   False    False
worker worker-2feef4f8288936489a5a832ca8efe953 False   False    False
----
-To list all Machine Configs:
+To list all machine configs:
[source,terminal]
----
@@ -45,17 +45,17 @@ worker-2feef4f8288936489a5a832ca8efe953   4.0.0-0.150.0.0-dirty   3.1.0    
----
The Machine Config Operator acts somewhat differently than Ignition when it
-comes to applying these machineconfigs. The machineconfigs are read in order
-(from 00* to 99*). Labels inside the machineconfigs identify the type of node
+comes to applying these machine configs. The machine configs are read in order
+(from 00* to 99*). Labels inside the machine configs identify the type of node
each is for (master or worker). If the same file appears in multiple
-machineconfig files, the last one wins. So, for example, any file that appears
+machine config files, the last one wins. So, for example, any file that appears
in a 99* file would replace the same file that appeared in a 00* file.
-The input machineconfig objects are unioned into a "rendered" machineconfig
+The input `MachineConfig` objects are unioned into a "rendered" `MachineConfig`
object, which will be used as a target by the operator and is the value you
-can see in the machineconfigpool.
+can see in the machine config pool.
-To see what files are being managed from a machineconfig, look for Path:
-inside a particular machineconfig. For example:
+To see what files are being managed from a machine config, look for "Path:"
+inside a particular `MachineConfig` object. For example:
[source,terminal]
----
@@ -70,6 +70,6 @@ $ oc describe machineconfigs 01-worker-container-runtime | grep Path:
            Path:            /etc/crio/crio.conf
----
-Be sure to give the machineconfig a later name
+Be sure to give the machine config file a later name
(such as 10-worker-container-runtime). Keep in mind that the content of each
-file is in URL-style data. Then apply the new machineconfig to the cluster.
+file is in URL-style data. Then apply the new machine config to the cluster.
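For illustration, a sketch of such a machine config; the Ignition version and the encoded file contents are placeholders to adapt to your cluster:
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 10-worker-container-runtime   # later name, so it wins over 01-*
  labels:
    machineconfiguration.openshift.io/role: worker   # targets the worker pool
spec:
  config:
    ignition:
      version: 3.1.0              # assumed; match the version your cluster expects
    storage:
      files:
      - path: /etc/crio/crio.conf
        mode: 420                 # decimal form of octal 0644
        contents:
          source: data:,<url_encoded_file_contents>   # URL-style data
----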

View File

@@ -51,7 +51,6 @@ $ echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gd
.Example output
[source,terminal]
----
-This is the bootstrap machine; it will be destroyed when the master is fully up.
The primary service is "bootkube.service". To watch its status, run, e.g.:

View File

@@ -65,7 +65,7 @@ managing containers, {op-system} replaces the Docker CLI tool with a compatible
set of container tools. The podman CLI tool supports many container runtime
features, such as running, starting, stopping, listing, and removing containers
and container images. The skopeo CLI tool can copy, authenticate, and sign
-images. You can use the crictl CLI tool to work with containers and pods from the
+images. You can use the `crictl` CLI tool to work with containers and pods from the
CRI-O container engine. While direct use of these tools in {op-system} is
discouraged, you can use them for debugging purposes.
@@ -77,7 +77,7 @@ extracted, and written to disk, then the bootloader is modified to boot into
the new version. The machine will reboot into the update in a rolling manner to
ensure cluster capacity is minimally impacted.
-* **Updated through MachineConfigOperator**:
+* **Updated through the Machine Config Operator**:
In {product-title}, the Machine Config Operator handles operating system upgrades.
Instead of upgrading individual packages, as is done with `yum`
upgrades, `rpm-ostree` delivers upgrades of the OS as an atomic unit. The
@@ -116,7 +116,7 @@ cluster can be accomplished for debugging purposes, you should not directly conf
Instead, if you need to add or change features on your {product-title} nodes,
consider making changes in the following ways:
-* **Kubernetes workload objects (daemon sets, deployments, etc.)**: If you need to
+* **Kubernetes workload objects (`DaemonSet`, `Deployment`, etc.)**: If you need to
add services or other user-level features to your cluster, consider adding them as
Kubernetes workload objects. Keeping those features outside of specific node
configurations is the best way to reduce the risk of breaking the cluster on
@@ -125,7 +125,7 @@ subsequent upgrades.
* **Day-2 customizations**: If possible, bring up a cluster without making any
customizations to cluster nodes and make necessary node changes after the cluster is up.
Those changes are easier to track later and less likely to break updates.
-Creating MachineConfigs or modifying Operator custom resources
+Creating machine configs or modifying Operator custom resources
are ways of making these customizations.
* **Day-1 customizations**: For customizations that you must implement when the
@@ -139,7 +139,7 @@ Here are examples of customizations you could do on day-1:
* **Kernel arguments**: If particular kernel features or tuning is needed on nodes when the cluster first boots.
-* **Disk encryption**: If your security needs require that the root filesystem on the nodes are encrypted, such as with FIPS support.
+* **Disk encryption**: If your security needs require that the root file system on the nodes are encrypted, such as with FIPS support.
* **Kernel modules**: If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel.
@@ -147,8 +147,8 @@ Here are examples of customizations you could do on day-1:
such as the location of time servers.
To accomplish these tasks, you can augment the `openshift-install` process to include additional
-objects such as MachineConfigs.
-Those procedures that result in creating MachineConfigs can be passed to the Machine Config Operator
+objects such as `MachineConfig` objects.
+Those procedures that result in creating machine configs can be passed to the Machine Config Operator
after the cluster is up.
@@ -160,12 +160,12 @@ The Ignition config files that the installation program generates contain certif
[id="rhcos-deployed_{context}"]
== Choosing how to configure {op-system}
-Differences between {op-system} deployments for {product-title} are based on
+Differences between {op-system} installations for {product-title} are based on
whether you are deploying on an infrastructure provisioned by the installer or by the user:
* **Installer provisioned**: Some cloud environments offer pre-configured infrastructures
that allow you to bring up an {product-title} cluster with minimal configuration.
-For these types of deployments, you can supply Ignition configs
+For these types of installations, you can supply Ignition configs
that place content on each node so it is there when the cluster first boots.
* **User provisioned**: If you are provisioning your own infrastructure, you have more flexibility
@@ -175,7 +175,7 @@ However, in most cases where configuration is required on the operating system
itself, it is best to provide that configuration through an Ignition config.
The Ignition facility runs only when the {op-system} system is first set up.
-After that, Ignition configs can be supplied later using the MachineConfigs.
+After that, Ignition configs can be supplied later using the machine config.
[id="rhcos-about-ignition_{context}"]
== About Ignition
@@ -192,7 +192,7 @@ cluster machines. Most of the actual system setup happens on each machine
itself. For each machine,
Ignition takes the {op-system} image and boots the {op-system} kernel. Options
on the kernel command line identify the type of deployment and the location of
-the Ignition-enabled initial Ram Disk (initramfs).
+the Ignition-enabled initial Ram disk (initramfs).
////
////
@@ -271,7 +271,7 @@ files does not matter. Ignition will sort and implement each setting in ways tha
links by depth.
* Because Ignition can start with a completely empty hard disk, it can do
-something cloud-init cant do: set up systems on bare metal from scratch
+something cloud-init cannot do: set up systems on bare metal from scratch
(using features such as PXE boot). In the bare metal case, the Ignition config
is injected into the boot partition so Ignition can find it and configure
the system correctly.

View File

@@ -20,7 +20,7 @@ plane. It monitors all of the cluster nodes and orchestrates their configuration
updates.
* The `machine-config-daemon` daemon set, which runs on
each node in the cluster and updates a machine to configuration as defined by
-MachineConfig and as instructed by the MachineConfigController. When the node detects
+machine config and as instructed by the MachineConfigController. When the node detects
a change, it drains off its pods, applies the update, and reboots. These changes
come in the form of Ignition configuration files that apply the specified
machine configuration and control kubelet configuration. The update itself is
@@ -36,7 +36,7 @@ configuration changes, or other changes to the operating system or {product-titl
configuration.
When you perform node management operations, you create or modify a
-KubeletConfig custom resource (CR).
+`KubeletConfig` custom resource (CR).
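A minimal sketch of such a CR; the `custom-kubelet` label is an assumption and must also be set on the target machine config pool:
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # assumed label on the machine config pool
  kubeletConfig:
    maxPods: 250                # example kubelet setting
----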
//See https://github.com/openshift/machine-config-operator/blob/master/docs/KubeletConfigDesign.md[KubeletConfigDesign] for details.
[IMPORTANT]

View File

@@ -21,7 +21,7 @@ An Operator can be set to an unmanaged state using the following methods:
+
Individual Operators have a `managementState` parameter in their configuration.
This can be accessed in different ways, depending on the Operator. For example,
-the Cluster Logging Operator accomplishes this by modifying a Custom Resource
+the Cluster Logging Operator accomplishes this by modifying a custom resource
(CR) that it manages, while the Cluster Samples Operator uses a cluster-wide
configuration resource.
+