// Module included in the following assemblies:
//
// * architecture/architecture_rhcos.adoc
[id="rhcos-about_{context}"]
= About {op-system}
{op-system-first} represents the next generation of single-purpose
container operating system technology. Created by the same development teams
that created Red Hat Enterprise Linux Atomic Host and CoreOS Container Linux,
{op-system} combines the quality standards of {op-system-base-full}
with the automated, remote upgrade features from Container Linux.
{op-system} is supported only as a component of {product-title}
{product-version} for all {product-title} machines. {op-system} is the only
supported operating system for {product-title} control plane, or master,
machines. While {op-system} is the default operating system for all cluster
machines, you can create compute machines, which are also known as worker machines, that use {op-system-base} as their
operating system. There are two general ways {op-system} is deployed in
{product-title} {product-version}:
* If you install your cluster on infrastructure that the cluster provisions,
{op-system} images are downloaded to the target platform during installation,
and suitable Ignition config files, which control the {op-system} configuration,
are used to deploy the machines.
* If you install your cluster on infrastructure
that you manage, you must follow the installation documentation to obtain the
{op-system} images, generate Ignition config files, and use the Ignition config
files to provision your machines.
[id="rhcos-key-features_{context}"]
== Key {op-system} features
The following list describes key features of the {op-system} operating system:
* **Based on {op-system-base}**: The underlying operating system consists primarily of {op-system-base} components.
The same quality, security, and control measures that support {op-system-base} also support
{op-system}. For example, {op-system} software is in
RPM packages, and each {op-system} system starts up with a {op-system-base} kernel and a set
of services that are managed by the systemd init system.
* **Controlled immutability**: Although it contains {op-system-base} components, {op-system}
is designed to be managed
more tightly than a default {op-system-base} installation. Management is
performed remotely from the {product-title} cluster. When you set up your
{op-system} machines, you can modify only a few system settings. This controlled
immutability allows {product-title} to
store the latest state of {op-system} systems in the cluster so it is always
able to create additional machines and perform updates based on the latest {op-system}
configurations.
* **CRI-O container runtime**: Although {op-system} contains features for running the
OCI- and libcontainer-formatted containers that Docker requires, it incorporates
the CRI-O container engine
instead of the Docker container engine. By focusing on features needed by
Kubernetes platforms, such as {product-title}, CRI-O can offer specific
compatibility with different Kubernetes versions. CRI-O also offers a smaller
footprint and reduced attack surface than is possible with container engines
that offer a larger feature set. At the moment, CRI-O is only available as a
container engine within {product-title} clusters.
* **Set of container tools**: For tasks such as building, copying, and otherwise
managing containers, {op-system} replaces the Docker CLI tool with a compatible
set of container tools. The podman CLI tool supports many container runtime
features, such as running, starting, stopping, listing, and removing containers
and container images. The skopeo CLI tool can copy, authenticate, and sign
images. You can use the `crictl` CLI tool to work with containers and pods from the
CRI-O container engine. While direct use of these tools in {op-system} is
discouraged, you can use them for debugging purposes.
* **rpm-ostree upgrades**: {op-system} features transactional upgrades using
the `rpm-ostree` system.
Updates are delivered by means of container images and are part of the
{product-title} update process. When deployed, the container image is pulled,
extracted, and written to disk, then the bootloader is modified to boot into
the new version. The machine will reboot into the update in a rolling manner to
ensure cluster capacity is minimally impacted.
* **Updated through the Machine Config Operator**:
In {product-title}, the Machine Config Operator handles operating system upgrades.
Instead of upgrading individual packages, as is done with `yum`
upgrades, `rpm-ostree` delivers upgrades of the OS as an atomic unit. The
new OS deployment is staged during upgrades and goes into effect on the next reboot.
If something goes wrong with the upgrade, a single rollback and reboot returns the
system to the previous state. {op-system} upgrades in {product-title} are performed
during cluster updates.
For {op-system} systems, the layout of the `rpm-ostree` file system has the
following characteristics:
* `/usr` is where the operating system binaries and libraries are stored and is
read-only. We do not support altering this.
* `/etc`, `/boot`, and `/var` are writable on the system but are intended to be altered only
by the Machine Config Operator.
* `/var/lib/containers` is the graph storage location for storing container
images.
[id="rhcos-configured_{context}"]
== Choosing how to configure {op-system}
{op-system} is designed to deploy on an {product-title} cluster with a minimal amount
of user configuration. In its most basic form, this consists of:
* Starting with a provisioned infrastructure, such as on AWS, or provisioning
the infrastructure yourself.
* Supplying a few pieces of information, such as credentials and cluster name,
in an `install-config.yaml` file when running `openshift-install`.
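For example, an `install-config.yaml` file for a cluster on installer-provisioned AWS infrastructure can be as small as the following sketch. The domain, cluster name, region, pull secret, and SSH key values are placeholders that you replace with your own:

[source,yaml]
----
apiVersion: v1
baseDomain: example.com        # placeholder: your base DNS domain
metadata:
  name: mycluster              # placeholder: your cluster name
platform:
  aws:
    region: us-east-1          # placeholder: your target AWS region
pullSecret: '{"auths": ...}'   # placeholder: your image pull secret
sshKey: 'ssh-ed25519 AAAA...'  # placeholder: your public SSH key
----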
Because {op-system} systems in {product-title} are designed to be fully managed
from the {product-title} cluster after that, directly changing an {op-system} machine is
discouraged. Although limited direct access to {op-system} machines in a
cluster can be accomplished for debugging purposes, you should not directly configure
{op-system} systems.
Instead, if you need to add or change features on your {product-title} nodes,
consider making changes in the following ways:
* **Kubernetes workload objects (`DaemonSet`, `Deployment`, etc.)**: If you need to
add services or other user-level features to your cluster, consider adding them as
Kubernetes workload objects. Keeping those features outside of specific node
configurations is the best way to reduce the risk of breaking the cluster on
subsequent upgrades.
* **Day-2 customizations**: If possible, bring up a cluster without making any
customizations to cluster nodes and make necessary node changes after the cluster is up.
Those changes are easier to track later and less likely to break updates.
Creating machine configs or modifying Operator custom resources
are ways of making these customizations.
* **Day-1 customizations**: For customizations that you must implement when the
cluster first comes up, there are ways of modifying your cluster so changes are
implemented on first boot.
Day-1 customizations can be done through Ignition configs and manifest files
during `openshift-install` or by adding boot options during user-provisioned
ISO installs.
Here are examples of customizations you could do on day-1:
* **Kernel arguments**: If particular kernel features or tuning is needed on nodes when the cluster first boots.
* **Disk encryption**: If your security needs require that the root file system on the nodes is encrypted, such as with FIPS support.
* **Kernel modules**: If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel.
* **Chronyd**: If you want to provide specific clock settings to your nodes,
such as the location of time servers.
To accomplish these tasks, you can augment the `openshift-install` process to include additional
objects such as `MachineConfig` objects.
Those procedures that result in creating machine configs can be passed to the Machine Config Operator
after the cluster is up.
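For example, the following is a minimal sketch of a `MachineConfig` object that adds a kernel argument to worker nodes. The object name and the `audit=1` argument are illustrative only:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker  # apply to the worker pool
  name: 99-worker-kernel-args                       # illustrative name
spec:
  kernelArguments:
    - audit=1                                       # illustrative kernel argument
----
You can add an object like this to the manifests that `openshift-install` generates so that it is applied on first boot, or apply it to a running cluster and let the Machine Config Operator roll it out.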
[NOTE]
====
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
====
[id="rhcos-deployed_{context}"]
== Choosing how {op-system} is deployed
Differences between {op-system} installations for {product-title} are based on
whether you are deploying on an infrastructure provisioned by the installer or by the user:
* **Installer provisioned**: Some cloud environments offer pre-configured infrastructures
that allow you to bring up an {product-title} cluster with minimal configuration.
For these types of installations, you can supply Ignition configs
that place content on each node so it is there when the cluster first boots.
* **User provisioned**: If you are provisioning your own infrastructure, you have more flexibility
in how you add content to an {op-system} node. For example, you could add kernel
arguments when you boot the {op-system} ISO installer to install each system.
However, in most cases where configuration is required on the operating system
itself, it is best to provide that configuration through an Ignition config.
The Ignition facility runs only when the {op-system} system is first set up.
After the first boot, you supply further configuration changes by using machine configs.
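As a sketch of that pattern, the following `MachineConfig` object carries an embedded Ignition config that writes a simple `/etc/chrony.conf` file to worker nodes. The object name and the file contents are illustrative only, and the Ignition spec version must match what your cluster release expects:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-chrony                  # illustrative name
spec:
  config:
    ignition:
      version: 3.1.0                      # must match the Ignition spec version for your release
    storage:
      files:
        - path: /etc/chrony.conf
          mode: 420                       # decimal for octal 0644
          overwrite: true
          contents:
            # URL-encoded file contents: "pool 0.rhel.pool.ntp.org iburst"
            source: data:,pool%200.rhel.pool.ntp.org%20iburst%0A
----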
[id="rhcos-about-ignition_{context}"]
== About Ignition
Ignition is the utility that is used by {op-system} to manipulate disks during
initial configuration. It completes common disk tasks, including partitioning
disks, formatting partitions, writing files, and configuring users. On first
boot, Ignition reads its configuration from the installation media or the
location that you specify and applies the configuration to the machines.
Whether you are installing your cluster or adding machines to it, Ignition
always performs the initial configuration of the {product-title}
cluster machines. Most of the actual system setup happens on each machine
itself. For each machine,
Ignition takes the {op-system} image and boots the {op-system} kernel. Options
on the kernel command line identify the type of deployment and the location of
the Ignition-enabled initial RAM disk (initramfs).
////
////
[id="about-ignition_{context}"]
=== How Ignition works
To create machines by using Ignition, you need Ignition config files. The
{product-title} installation program creates the Ignition config files that you
need to deploy your cluster. These files are based on the information that you
provide to the installation program directly or through an `install-config.yaml`
file.
The way that Ignition configures machines is similar to how tools like
https://cloud-init.io/[cloud-init] or Linux Anaconda
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/installation_guide/index#chap-kickstart-installations[kickstart]
configure systems, but with some important differences:
////
The order
of information in those files does not matter. For example, if a file needs a
directory several levels deep, if another file needs a directory along that
path, the later file is created first. Ignition sorts and creates all files,
directories, and links by depth.
////
* Ignition runs from an initial RAM disk that is separate
from the system you are installing to. Because of that, Ignition can
repartition disks, set up file systems, and perform other changes to the
machine's permanent file system. In contrast, cloud-init runs as part of a
machine's init system when
the system boots, so making foundational changes to things like disk partitions
cannot be done as easily. With cloud-init, it is also difficult to reconfigure
the boot process while you are in the middle of the node's boot process.
* Ignition is meant to initialize systems, not change existing systems. After a
machine initializes and the kernel is running from the installed system, the
Machine Config Operator from the {product-title} cluster completes all future
machine configuration.
* Instead of completing a defined set of actions, Ignition implements
a declarative configuration. It checks that all partitions, files, services,
and other items are in place before the new machine starts. It then makes the
changes, such as copying files to disk, that are necessary for the new machine to
meet the specified configuration.
* After Ignition finishes configuring a machine, the kernel keeps running but
discards the initial RAM disk and pivots to the installed system on disk. All of
the new system services and other features start without requiring a system
reboot.
* Because Ignition confirms that all new machines meet the declared configuration,
you cannot have a partially-configured machine. If a machine's setup fails,
the initialization process does not finish, and Ignition does not start the new
machine. Your cluster will never contain partially-configured machines. If
Ignition cannot complete, the machine is not added to the cluster. You must add
a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a
failed configuration task are not known until something that depended on it
fails at a later date.
* If there is a problem with an
Ignition config that causes the setup of a machine to fail, Ignition will not try
to use the same config to set up another machine. For example, a failure could
result from an Ignition config made up of a parent and child config that both
want to create the same file. A failure in such a case would prevent that
Ignition config from being used again to set up other machines until the
problem is resolved.
* If you have multiple Ignition config files, you get a union of that set of
configs. Because Ignition is declarative, conflicts between the configs could
cause Ignition to fail to set up the machine. The order of information in those
files does not matter. Ignition will sort and implement each setting in ways that
make the most sense. For example, if one file needs a directory several levels
deep and another file needs a directory along that path, Ignition creates the
shallower directories first, regardless of the order in which they appear in the
configs. Ignition sorts and creates all files, directories, and links by depth.
* Because Ignition can start with a completely empty hard disk, it can do
something cloud-init cannot do: set up systems on bare metal from scratch
(using features such as PXE boot). In the bare metal case, the Ignition config
is injected into the boot partition so Ignition can find it and configure
the system correctly.
[id="ignition-sequence_{context}"]
=== The Ignition sequence
The Ignition process for an {op-system} machine in an {product-title} cluster
involves the following steps:
* The machine gets its Ignition config file. Master machines get their Ignition
config files from the bootstrap machine, and worker machines get Ignition config
files from a master.
* Ignition creates disk partitions, file systems, directories, and links on the
machine. It supports RAID arrays but does not support LVM volumes.
* Ignition mounts the root of the permanent file system to the `/sysroot`
directory in the
initramfs and starts working in that `/sysroot` directory.
* Ignition configures all defined file systems and sets them up to mount appropriately
at runtime.
* Ignition runs the `systemd` temporary files service (`systemd-tmpfiles`) to populate required files in the
`/var` directory.
* Ignition runs the Ignition config files to set up users, systemd unit files,
and other configuration files.
* Ignition unmounts all components in the permanent system that were mounted in
the initramfs.
* Ignition starts the new machine's init process, which, in turn, starts up all other
services on the machine that run during system boot.
The machine is then ready to join the cluster and does not require a reboot.
////
After Ignition finishes its work on an individual machine, the kernel pivots to the
installed system. The initial RAM disk is no longer used and the kernel goes on
to run the init service to start up everything on the host from the installed
disk. When the last machine under the bootstrap machines control is completed, and
the services on those machines come up, the work of the bootstrap machine is over.
////