From 909685131658a8d24af4d74f29eb5be0a40cc917 Mon Sep 17 00:00:00 2001
From: Laura Hinson
Date: Tue, 1 Apr 2025 15:34:51 -0400
Subject: [PATCH] Isolation details for HCP

---
 .../hcp-prepare/hcp-distribute-workloads.adoc | 15 ++++---
 modules/hcp-isolation.adoc                    | 43 +++++++++++++++++++
 2 files changed, 52 insertions(+), 6 deletions(-)
 create mode 100644 modules/hcp-isolation.adoc

diff --git a/hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc b/hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc
index 4bc6b4ca15..4d7255708c 100644
--- a/hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc
+++ b/hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc
@@ -8,13 +8,15 @@ toc::[]
 
 Before you get started with {hcp} for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:
 
-* To ensure high availability and proper workload deployment. For example, you can set the `node-role.kubernetes.io/infra` label to avoid having the control-plane workload count toward your {product-title} subscription.
+* To ensure high availability and proper workload deployment. For example, to avoid having the control plane workload count toward your {product-title} subscription, you can set the `node-role.kubernetes.io/infra` label.
 * To ensure that control plane workloads are separate from other workloads in the management cluster.
-//lahinson - sept. 2023 - commenting out the following lines until those levels are supported for self-managed hypershift
-//* To ensure that control plane workloads are configured at one of the following multi-tenancy distribution levels:
-//** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
-//** Request serving isolation: Serving pods are requested in their own dedicated nodes.
-//** Nothing shared: Every control plane has its own dedicated nodes.
+* To ensure that control plane workloads are configured at the correct multi-tenancy distribution level for your deployment. The distribution levels are as follows:
+
+** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
+** Request serving isolation: The request-serving pods for each hosted cluster run on their own dedicated nodes.
+** Nothing shared: Every control plane has its own dedicated nodes.
+
+For more information about dedicating a node to a single hosted cluster, see "Labeling management cluster nodes".
 
 [IMPORTANT]
 ====
@@ -24,3 +26,4 @@ Do not use the management cluster for your workload. Workloads must not run on n
 include::modules/hcp-labels-taints.adoc[leveloffset=+1]
 include::modules/hcp-priority-classes.adoc[leveloffset=+1]
 include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
+include::modules/hcp-isolation.adoc[leveloffset=+1]
\ No newline at end of file
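For illustration, the `node-role.kubernetes.io/infra` label that the updated bullet mentions can be applied from the CLI, as in the following sketch. The node name is a placeholder and is not defined in this patch:

[source,terminal]
----
$ oc label node <node_name> node-role.kubernetes.io/infra=""
----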
diff --git a/modules/hcp-isolation.adoc b/modules/hcp-isolation.adoc
new file mode 100644
index 0000000000..ec54b1e14e
--- /dev/null
+++ b/modules/hcp-isolation.adoc
@@ -0,0 +1,43 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="hcp-isolation_{context}"]
+= Control plane isolation
+
+You can configure {hcp} to isolate network traffic and control plane pods.
+
+== Network policy isolation
+
+Each hosted control plane runs in a dedicated Kubernetes namespace. By default, the namespace denies all network traffic.
+
+The following network traffic is allowed through the network policy that the Kubernetes Container Network Interface (CNI) plugin enforces:
+
+* Ingress pod-to-pod communication in the same namespace (intra-tenant)
+* Ingress on port 6443 to the hosted `kube-apiserver` pod for the tenant
+* Ingress for metrics scraping from the management cluster namespace that has the `network.openshift.io/policy-group: monitoring` label
+
+== Control plane pod isolation
+
+In addition to network policies, each hosted control plane pod runs with the `restricted` security context constraint. This policy denies access to all host features and requires each pod to run with a UID and an SELinux context that are uniquely allocated to the namespace that hosts the control plane.
+
+The policy ensures the following constraints:
+
+* Pods cannot run as privileged.
+* Pods cannot mount host directory volumes.
+* Pods must run as a user in a pre-allocated range of UIDs.
+* Pods must run with a pre-allocated MCS label.
+* Pods cannot access the host network namespace.
+* Pods cannot expose host network ports.
+* Pods cannot access the host PID namespace.
+* By default, pods drop the following Linux capabilities: `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
+
+The management components, such as `kubelet` and `crio`, on each management cluster worker node are protected by an SELinux label that is not accessible to the SELinux context of the pods that support {hcp}.
+
+The following SELinux labels are used for key processes and sockets:
+
+* *kubelet*: `system_u:system_r:unconfined_service_t:s0`
+* *crio*: `system_u:system_r:container_runtime_t:s0`
+* *crio.sock*: `system_u:object_r:container_var_run_t:s0`
+* *Example hosted control plane pod*: `system_u:system_r:container_t:s0:c14,c24`
\ No newline at end of file
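For illustration, the isolation behavior that the new module describes can be observed on a running management cluster. The following commands are a minimal sketch; the namespace, pod, and node names are placeholders and are not defined in this patch:

[source,terminal]
----
# List the network policies that are enforced in a hosted control plane namespace
$ oc get networkpolicy -n <hosted_control_plane_namespace>

# Show which security context constraint admitted a control plane pod
$ oc get pod <pod_name> -n <hosted_control_plane_namespace> -o yaml | grep openshift.io/scc

# Check the SELinux labels of the kubelet and CRI-O processes and the CRI-O socket on a worker node
$ oc debug node/<node_name> -- chroot /host sh -c 'ps -eZ | grep -E "kubelet|crio"; ls -Z /var/run/crio/crio.sock'
----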