Mirror of https://github.com/openshift/openshift-docs.git, synced 2026-02-05 12:46:18 +01:00

Commit 9096851316: Isolation details for HCP
Committed by: openshift-cherrypick-robot
Parent commit: 007c830623
@@ -8,13 +8,15 @@ toc::[]

Before you get started with {hcp} for {product-title}, you must properly label nodes so that the pods of hosted clusters can be scheduled into infrastructure nodes. Node labeling is also important for the following reasons:

* To ensure high availability and proper workload deployment. For example, to avoid having the control plane workload count toward your {product-title} subscription, you can set the `node-role.kubernetes.io/infra` label.
* To ensure that control plane workloads are separate from other workloads in the management cluster.
* To ensure that control plane workloads are configured at the correct multi-tenancy distribution level for your deployment. The distribution levels are as follows:
** Everything shared: Control planes for hosted clusters can run on any node that is designated for control planes.
** Request serving isolation: Serving pods are scheduled on their own dedicated nodes.
** Nothing shared: Every control plane has its own dedicated nodes.
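
The `node-role.kubernetes.io/infra` label is an ordinary node label. As a purely illustrative sketch (the node name is hypothetical), a labeled node object could look like this:

```yaml
# Hypothetical Node excerpt: the empty-valued infra role label marks the
# node so that control plane workloads scheduled on it do not count
# toward the {product-title} subscription.
apiVersion: v1
kind: Node
metadata:
  name: worker-infra-1          # example node name
  labels:
    node-role.kubernetes.io/infra: ""
```

In practice, the label is typically applied with `oc label node`, as described in "Labeling management cluster nodes".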

For more information about dedicating a node to a single hosted cluster, see "Labeling management cluster nodes".

[IMPORTANT]
====
@@ -24,3 +26,4 @@ Do not use the management cluster for your workload. Workloads must not run on n

include::modules/hcp-labels-taints.adoc[leveloffset=+1]
include::modules/hcp-priority-classes.adoc[leveloffset=+1]
include::modules/hcp-virt-taints-tolerations.adoc[leveloffset=+1]
include::modules/hcp-isolation.adoc[leveloffset=+1]

modules/hcp-isolation.adoc (new file, 43 lines)
@@ -0,0 +1,43 @@

// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-prepare/hcp-distribute-workloads.adoc

:_mod-docs-content-type: CONCEPT
[id="hcp-isolation_{context}"]
= Control plane isolation

You can configure {hcp} to isolate network traffic or control plane pods.

== Network policy isolation

Each hosted control plane is assigned to run in a dedicated Kubernetes namespace. By default, the Kubernetes namespace denies all network traffic.

The following network traffic is allowed through the network policy that is enforced by the Kubernetes Container Network Interface (CNI):

* Ingress pod-to-pod communication in the same namespace (intra-tenant)
* Ingress on port 6443 to the hosted `kube-apiserver` pod for the tenant
* Metric scraping from the management cluster Kubernetes namespace that has the `network.openshift.io/policy-group: monitoring` label
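
As a sketch of the port 6443 rule, a policy in this spirit could look like the following. The namespace and label names are hypothetical, and the actual policies are created by the operator rather than written by hand:

```yaml
# Illustrative NetworkPolicy: allow ingress on TCP 6443 to the hosted
# kube-apiserver pods in a hosted control plane namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kas-ingress-example      # hypothetical name
  namespace: clusters-example    # hypothetical hosted control plane namespace
spec:
  podSelector:
    matchLabels:
      app: kube-apiserver        # hypothetical pod label
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 6443                 # API traffic to the hosted kube-apiserver
```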

== Control plane pod isolation

In addition to network policies, each hosted control plane pod runs with the `restricted` security context constraint (SCC). This policy denies access to all host features and requires pods to run with a UID and an SELinux context that are allocated uniquely to each namespace that hosts a customer control plane.

The policy enforces the following constraints:

* Pods cannot run as privileged.
* Pods cannot mount host directory volumes.
* Pods must run as a user in a pre-allocated range of UIDs.
* Pods must run with a pre-allocated MCS label.
* Pods cannot access the host network namespace.
* Pods cannot expose host network ports.
* Pods cannot access the host PID namespace.
* By default, pods drop the following Linux capabilities: `KILL`, `MKNOD`, `SETUID`, and `SETGID`.
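
A pod specification that satisfies these constraints might carry a security context like the following hypothetical excerpt. In practice, the UID and MCS level are assigned from the namespace's pre-allocated ranges rather than hard-coded:

```yaml
# Illustrative pod excerpt conforming to the restricted SCC; the name,
# image, UID, and MCS label values are examples only.
apiVersion: v1
kind: Pod
metadata:
  name: example-control-plane-pod
spec:
  hostNetwork: false               # no host network namespace access
  hostPID: false                   # no host PID namespace access
  containers:
  - name: example
    image: registry.example.com/example:latest
    securityContext:
      privileged: false
      runAsUser: 1000700000        # from the namespace's UID range
      seLinuxOptions:
        level: "s0:c14,c24"        # pre-allocated MCS label
      capabilities:
        drop: ["KILL", "MKNOD", "SETUID", "SETGID"]
```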

The management components, such as `kubelet` and `crio`, on each management cluster worker node are protected by an SELinux label that is not accessible from the SELinux context of pods that support {hcp}.

The following SELinux labels are used for key processes and sockets:

* *kubelet*: `system_u:system_r:unconfined_service_t:s0`
* *crio*: `system_u:system_r:container_runtime_t:s0`
* *crio.sock*: `system_u:object_r:container_var_run_t:s0`
* *<example user container processes>*: `system_u:system_r:container_t:s0:c14,c24`
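
An SELinux label has the form `user:role:type:level`. The following illustrative shell snippet (not part of the product tooling) splits the example `container_t` label into its type and MCS level fields:

```shell
# Split an SELinux label into its fields; the label value is the
# example container process label from the list above.
label="system_u:system_r:container_t:s0:c14,c24"
se_type=$(echo "$label" | cut -d: -f3)    # third field: type
se_level=$(echo "$label" | cut -d: -f4-)  # remaining fields: MLS/MCS level
echo "type=${se_type} level=${se_level}"
```

The level field `s0:c14,c24` carries the MCS categories that make each namespace's pods mutually inaccessible.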