// Module included in the following assemblies:
//
// * installing/install_config/installing-customizing.adoc
:_mod-docs-content-type: PROCEDURE
[id="installation-special-config-storage_{context}"]
= Encrypting and mirroring disks during installation
During an {product-title} installation, you can enable boot disk encryption and mirroring on the cluster nodes.
[id="installation-special-config-encrypt-disk_{context}"]
== About disk encryption
You can enable encryption for the boot disks on the control plane and compute nodes at installation time.
{product-title} supports the Trusted Platform Module (TPM) v2 and Tang encryption modes.
TPM v2:: This is the preferred mode.
TPM v2 stores passphrases in a secure cryptoprocessor on the server.
You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server.
Tang:: Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE).
You can bind the boot disk data on your cluster nodes to one or more Tang servers.
This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible.
Clevis is an automated decryption framework used to implement decryption on the client side.
[IMPORTANT]
====
The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure.
====
In earlier versions of {op-system-first}, disk encryption was configured by specifying `/etc/clevis.json` in the Ignition config.
That file is not supported in clusters created with {product-title} 4.7 or later.
Configure disk encryption by using the following procedure.
When the TPM v2 or Tang encryption modes are enabled, the {op-system} boot disks are encrypted using the LUKS2 format.
This feature:
* Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments
* For Assisted Installer deployments:
- Each cluster can have only a single encryption method, Tang or TPM v2
- Encryption can be enabled on some or all nodes
- There is no Tang threshold; all servers must be valid and operational
- Encryption applies to the installation disks only, not to the workload disks
* Is supported on {op-system-first} systems only
* Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward
* Requires no user intervention for providing passphrases
* Uses AES-256-XTS encryption
[id="installation-special-config-encryption-threshold_{context}"]
=== Configuring an encryption threshold
In {product-title}, you can specify a requirement for more than one Tang server.
You can also configure the TPM v2 and Tang encryption modes simultaneously.
This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network.
You can use the `threshold` attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur.
The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed by using an included advertisement, and that advertisement is used only if the number of reachable online servers does not meet the set threshold.
For example, the `threshold` value of `2` in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers:
.Example Butane configuration for disk encryption
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
metadata:
  name: worker-storage
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  layout: x86_64 <1>
  luks:
    tpm2: true <2>
    tang: <3>
      - url: http://tang1.example.com:7500
        thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX
      - url: http://tang2.example.com:7500
        thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF
      - url: http://tang3.example.com:7500
        thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9
        advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" <4>
    threshold: 2 <5>
openshift:
  fips: true
----
<1> Set this field to the instruction set architecture of the cluster nodes.
Examples include `x86_64`, `aarch64`, and `ppc64le`.
<2> Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system.
<3> Include this section if you want to use one or more Tang servers.
<4> Optional: Include this field for offline provisioning. Ignition provisions the Tang server binding by using the included advertisement instead of fetching the advertisement from the server at runtime, so the Tang server does not need to be reachable at provisioning time.
<5> Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur.
[IMPORTANT]
====
The default `threshold` value is `1`.
If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met.
====
[NOTE]
====
If you require TPM v2 _and_ Tang for decryption, the value of the `threshold` attribute must equal the total number of stated Tang servers plus one.
If the `threshold` value is lower, it is possible to reach the threshold value by using a single encryption mode.
For example, if you set `tpm2` to `true` and specify two Tang servers, a threshold of `2` can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available.
====
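For example, the following sketch requires both the TPM v2 secure cryptoprocessor and all of the stated Tang servers, because the `threshold` value of `3` equals the number of Tang servers plus one. The URLs and thumbprints are placeholders for your own values.
.Example Butane configuration requiring TPM v2 and all Tang servers
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
metadata:
  name: worker-storage
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  layout: x86_64
  luks:
    tpm2: true
    tang:
      - url: http://tang1.example.com:7500
        thumbprint: <tang1_thumbprint>
      - url: http://tang2.example.com:7500
        thumbprint: <tang2_thumbprint>
    threshold: 3
----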
[id="installation-special-config-mirrored-disk_{context}"]
== About disk mirroring
During {product-title} installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices.
A node continues to function after storage device failure provided one device remains available.
Mirroring does not support replacement of a failed disk.
Reprovision the node to restore the mirror to a pristine, non-degraded state.
[NOTE]
====
For user-provisioned infrastructure deployments, mirroring is available only on {op-system} systems.
Support for mirroring is available on `x86_64` nodes booted with BIOS or UEFI and on `ppc64le` nodes.
====
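For example, a minimal Butane sketch that mirrors the boot disk across two devices without enabling encryption might look like the following. The device names are placeholders for the disks in your nodes.
.Example Butane configuration for boot disk mirroring only
[source,yaml,subs="attributes+"]
----
variant: openshift
version: {product-version}.0
metadata:
  name: worker-storage
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  mirror:
    devices:
      - /dev/sda
      - /dev/sdb
----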
[id="installation-special-config-storage-procedure_{context}"]
== Configuring disk encryption and mirroring
You can enable and configure encryption and mirroring during an {product-title} installation.
.Prerequisites
* You have downloaded the {product-title} installation program on your installation node.
* You installed Butane on your installation node.
+
[NOTE]
====
Butane is a command-line utility that {product-title} uses to offer convenient, short-hand syntax for writing and validating machine configs.
For more information, see "Creating machine configs with Butane".
====
+
* You have access to a {op-system-base-full} 8 machine that can be used to generate a thumbprint of the Tang exchange key.
.Procedure
. If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node.
This is required on most Dell systems.
Check the manual for your specific system.
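+
As a quick check, assuming you can boot the host into a {op-system-base} or other Linux environment before installation, you can confirm that the firmware exposes a TPM v2 device. The paths shown are typical but might differ on some platforms:
+
[source,terminal]
----
$ ls /dev/tpm0 /dev/tpmrm0
$ cat /sys/class/tpm/tpm0/tpm_version_major
----
+
A value of `2` indicates a TPM v2 device.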
. If you want to use Tang to encrypt your cluster, follow these preparatory steps:
.. Set up a Tang server or access an existing one.
See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/configuring-automated-unlocking-of-encrypted-volumes-using-policy-based-decryption_security-hardening#network-bound-disk-encryption_configuring-automated-unlocking-of-encrypted-volumes-using-policy-based-decryption[Network-bound disk encryption] for instructions.
.. Install the `clevis` package on a {op-system-base} 8 machine, if it is not already installed:
+
[source,terminal]
----
$ sudo yum install clevis
----
.. On the {op-system-base} 8 machine, run the following command to generate a thumbprint of the exchange key.
Replace `\http://tang1.example.com:7500` with the URL of your Tang server:
+
[source,terminal]
----
$ clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null <1>
----
<1> In this example, `tangd.socket` is listening on port `7500` on the Tang server.
+
[NOTE]
====
The `clevis-encrypt-tang` command generates a thumbprint of the exchange key.
No data passes to the encryption command during this step; the command uses `/dev/null` as input instead of plaintext.
The encrypted output is also sent to `/dev/null`, because it is not required for this procedure.
====
+
.Example output
[source,terminal]
----
The advertisement contains the following signing keys:
PLjNyRdGw03zlRoGjQYMahSZGu9 <1>
----
<1> The thumbprint of the exchange key.
+
When the `Do you wish to trust these keys? [ynYN]` prompt displays, type `Y`.
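+
Alternatively, if you have shell access to the Tang server itself, you can print the thumbprint of the signing key directly on that machine. This assumes the `tang` package provides the `tang-show-keys` utility and that `tangd.socket` is listening on port `7500`:
+
[source,terminal]
----
$ sudo tang-show-keys 7500
----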
.. Optional: For offline Tang provisioning:
... Obtain the advertisement from the server using the `curl` command. Replace `\http://tang2.example.com:7500` with the URL of your Tang server:
+
[source,terminal]
----
$ curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws
----
+
.Expected output
[source,text]
----
{"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"}
----
... Provide the advertisement file to Clevis for encryption:
+
[source,terminal]
----
$ clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null
----
.. If the nodes are configured with static IP addressing, run `coreos-installer iso customize --dest-karg-append` or use the `coreos-installer` `--append-karg` option when installing {op-system} nodes to set the IP address of the installed system.
Append the `ip=` and other arguments needed for your network.
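+
For example, a sketch of appending static networking kernel arguments to a live ISO before installation. The ISO file name, interface name, and addresses are placeholders for your environment:
+
[source,terminal]
----
$ coreos-installer iso customize rhcos-live.iso \
    --dest-karg-append ip=10.0.0.11::10.0.0.1:255.255.255.0:node1.example.com:ens192:none \
    --dest-karg-append nameserver=10.0.0.2
----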
+
[IMPORTANT]
====
Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption.
These include the `coreos-installer` `--copy-network` option, the `coreos-installer iso customize` `--network-keyfile` option, and the `coreos-installer pxe customize` `--network-keyfile` option, as well as adding `ip=` arguments to the kernel command line of the live ISO or PXE image during installation.
Incorrect static IP configuration causes the second boot of the node to fail.
====
. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster:
+
[source,terminal]
----
$ ./openshift-install create manifests --dir <installation_directory> <1>
----
<1> Replace `<installation_directory>` with the path to the directory that you want to store the installation files in.
. Create a Butane config that configures disk encryption, mirroring, or both.
For example, to configure storage for compute nodes, create a `$HOME/clusterconfig/worker-storage.bu` file.
+
[source,yaml,subs="attributes+"]
.Butane config example for a boot device
----
variant: openshift
version: {product-version}.0
metadata:
  name: worker-storage <1>
  labels:
    machineconfiguration.openshift.io/role: worker <1>
boot_device:
  layout: x86_64 <2>
  luks: <3>
    tpm2: true <4>
    tang: <5>
      - url: http://tang1.example.com:7500 <6>
        thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 <7>
      - url: http://tang2.example.com:7500
        thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF
        advertisement: "{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}" <8>
    threshold: 1 <9>
  mirror: <10>
    devices: <11>
      - /dev/sda
      - /dev/sdb
openshift:
  fips: true <12>
----
+
<1> For control plane configurations, replace `worker` with `master` in both of these locations.
<2> Set this field to the instruction set architecture of the cluster nodes.
Examples include `x86_64`, `aarch64`, and `ppc64le`.
<3> Include this section if you want to encrypt the root file system.
For more details, see "About disk encryption".
<4> Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system.
<5> Include this section if you want to use one or more Tang servers.
<6> Specify the URL of a Tang server.
In this example, `tangd.socket` is listening on port `7500` on the Tang server.
<7> Specify the exchange key thumbprint, which was generated in a preceding step.
<8> Optional: Specify the advertisement for your offline Tang server in valid JSON format.
<9> Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur.
The default value is `1`.
For more information about this topic, see "Configuring an encryption threshold".
<10> Include this section if you want to mirror the boot disk.
For more details, see "About disk mirroring".
<11> List all disk devices that should be included in the boot disk mirror, including the disk that {op-system} will be installed onto.
<12> Include this directive to enable FIPS mode on your cluster.
+
[IMPORTANT]
====
To enable FIPS mode for your cluster, you must run the installation program from a {op-system-base-full} computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/security_hardening/assembly_installing-the-system-in-fips-mode_security-hardening[Installing the system in FIPS mode].

If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file.

If you are configuring disk encryption on a node with FIPS mode enabled, you must include the `fips` directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest.
====
. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the `<installation_directory>/openshift` directory.
For example, to create a manifest for the compute nodes, run the following command:
+
[source,terminal]
----
$ butane $HOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml
----
+
Repeat this step for each node type that requires disk encryption or mirroring.
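+
For example, if you also created a control plane configuration in a hypothetical `$HOME/clusterconfig/master-storage.bu` file, you could generate its manifest in the same way:
+
[source,terminal]
----
$ butane $HOME/clusterconfig/master-storage.bu -o <installation_directory>/openshift/99-master-storage.yaml
----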
. If you enable encryption, edit the manifest that was produced by the previous step and replace the cipher `aes-cbc-essiv:sha256` with `aes-xts-plain64`.
The following excerpt shows a sample encryption configuration after this change:
+
[source,yaml]
----
# ...
luks:
  # ...
  options:
    - --cipher
    - aes-xts-plain64
----
. Save the Butane configuration file in case you need to update the manifests in the future.
. Continue with the remainder of the {product-title} installation.
+
[TIP]
====
You can monitor the console log on the {op-system} nodes during installation for error messages relating to disk encryption or mirroring.
====
+
[IMPORTANT]
====
If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested.
====
.Verification
After installing {product-title}, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes.
. From the installation host, access a cluster node by using a debug pod:
.. Start a debug pod for the node, for example:
+
[source,terminal]
----
$ oc debug node/compute-1
----
+
.. Set `/host` as the root directory within the debug shell.
The debug pod mounts the root file system of the node in `/host` within the pod.
By changing the root directory to `/host`, you can run binaries contained in the executable paths on the node:
+
[source,terminal]
----
# chroot /host
----
+
[NOTE]
====
{product-title} cluster nodes running {op-system-first} are immutable and rely on Operators to apply cluster changes.
Accessing cluster nodes using SSH is not recommended.
However, if the {product-title} API is not available, or `kubelet` is not properly functioning on the target node, `oc` operations will be impacted.
In such situations, it is possible to access nodes using `ssh core@<node>.<cluster_name>.<base_domain>` instead.
====
. If you configured boot disk encryption, verify if it is enabled:
.. From the debug shell, review the status of the root mapping on the node:
+
[source,terminal]
----
# cryptsetup status root
----
+
.Example output
[source,terminal]
----
/dev/mapper/root is active and is in use.
  type:    LUKS2 <1>
  cipher:  aes-xts-plain64 <2>
  keysize: 512 bits
  key location: keyring
  device:  /dev/sda4 <3>
  sector size:  512
  offset:  32768 sectors
  size:    15683456 sectors
  mode:    read/write
----
<1> The encryption format.
When the TPM v2 or Tang encryption modes are enabled, the {op-system} boot disks are encrypted using the LUKS2 format.
<2> The encryption algorithm used to encrypt the LUKS2 volume.
<3> The device that contains the encrypted LUKS2 volume.
If mirroring is enabled, the value will represent a software mirror device, for example `/dev/md126`.
+
.. List the Clevis plugins that are bound to the encrypted device:
+
[source,terminal]
----
# clevis luks list -d /dev/sda4 <1>
----
<1> Specify the device that is listed in the `device` field in the output of the preceding step.
+
.Example output
[source,terminal]
----
1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' <1>
----
<1> In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the `/dev/sda4` device.
. If you configured mirroring, verify if it is enabled:
.. From the debug shell, list the software RAID devices on the node:
+
[source,terminal]
----
# cat /proc/mdstat
----
+
.Example output
[source,terminal]
----
Personalities : [raid1]
md126 : active raid1 sdb3[1] sda3[0] <1>
      393152 blocks super 1.0 [2/2] [UU]

md127 : active raid1 sda4[0] sdb4[1] <2>
      51869632 blocks super 1.2 [2/2] [UU]

unused devices: <none>
----
<1> The `/dev/md126` software RAID mirror device uses the `/dev/sda3` and `/dev/sdb3` disk devices on the cluster node.
<2> The `/dev/md127` software RAID mirror device uses the `/dev/sda4` and `/dev/sdb4` disk devices on the cluster node.
+
.. Review the details of each of the software RAID devices listed in the output of the preceding command.
The following example lists the details of the `/dev/md126` device:
+
[source,terminal]
----
# mdadm --detail /dev/md126
----
+
.Example output
[source,terminal]
----
/dev/md126:
           Version : 1.0
     Creation Time : Wed Jul 7 11:07:36 2021
        Raid Level : raid1 <1>
        Array Size : 393152 (383.94 MiB 402.59 MB)
     Used Dev Size : 393152 (383.94 MiB 402.59 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jul 7 11:18:24 2021
             State : clean <2>
    Active Devices : 2 <3>
   Working Devices : 2 <3>
    Failed Devices : 0 <4>
     Spare Devices : 0

Consistency Policy : resync

              Name : any:md-boot <5>
              UUID : ccfa3801:c520e0b5:2bee2755:69043055
            Events : 19

    Number   Major   Minor   RaidDevice State
       0      252       3        0      active sync   /dev/sda3 <6>
       1      252      19        1      active sync   /dev/sdb3 <6>
----
<1> Specifies the RAID level of the device.
`raid1` indicates RAID 1 disk mirroring.
<2> Specifies the state of the RAID device.
<3> States the number of underlying disk devices that are active and working.
<4> States the number of underlying disk devices that are in a failed state.
<5> The name of the software RAID device.
<6> Provides information about the underlying disk devices used by the software RAID device.
+
.. List the file systems mounted on the software RAID devices:
+
[source,terminal]
----
# mount | grep /dev/md
----
+
.Example output
[source,terminal]
----
/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota)
/dev/md126 on /boot type ext4 (rw,relatime,seclabel)
----
+
In the example output, the `/boot` file system is mounted on the `/dev/md126` software RAID device and the root file system is mounted on `/dev/md127`.
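+
Optionally, you can review the full block device topology, including the mirror member disks and any LUKS layer, by running the `lsblk` command from the debug shell:
+
[source,terminal]
----
# lsblk
----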
. Repeat the verification steps for each {product-title} node type.
[role="_additional-resources"]
.Additional resources
* For more information about the TPM v2 and Tang encryption modes, see link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/configuring-automated-unlocking-of-encrypted-volumes-using-policy-based-decryption_security-hardening[Configuring automated unlocking of encrypted volumes using policy-based decryption].