mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

Compliance Operator v0.1.55 release notes and enhancements added

engineering feedback applied

peer review addressed

Changed version number to 0.1.56

Updated to 0.1.57

QE feedback applied
Authored by Andrew Taylor on 2022-09-22 16:23:04 -04:00; committed by openshift-cherrypick-robot
parent dd2079cc2f
commit e247646f0d
15 changed files with 416 additions and 6 deletions

View File

@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc
:_content-type: CONCEPT
[id="compliance-applying-resource-requests-and-limits_{context}"]
= Applying resource requests and limits
When the kubelet starts a container as part of a Pod, the kubelet passes that container's requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined.
The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution.
If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set `memory.min` and `memory.low` values.
If a container attempts to allocate more memory than its limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an `emptyDir` volume.
The kubelet tracks `tmpfs` `emptyDir` volumes as container memory use, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod's container might be evicted.
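To see what the container runtime actually configured in the kernel, you can read the cgroup interface files on a node. The following is a minimal sketch, assuming cgroups v2 with the systemd cgroup driver; the slice and scope names shown are hypothetical placeholders and vary by Pod QoS class and UID:

[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host \
  cat /sys/fs/cgroup/kubepods.slice/<pod_slice>/<container_scope>/cpu.max
----

Output such as `50000 100000` means the container can consume at most 50 ms of CPU time in every 100 ms period, which corresponds to a CPU limit of `500m`.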
[IMPORTANT]
====
A container is not allowed to exceed its CPU limit for extended periods. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see _Troubleshooting the Compliance Operator_.
====

View File

@@ -0,0 +1,94 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc
:_content-type: PROCEDURE
[id="compliance-custom-node-pools_{context}"]
= Scanning custom node pools
The Compliance Operator does not maintain a copy of each node pool configuration. The Compliance Operator aggregates consistent configuration options for all nodes within a single node pool into one copy of the configuration file. The Compliance Operator then uses the configuration file for a particular node pool to evaluate rules against nodes within that pool.
If your cluster uses custom node pools outside the default `worker` and `master` node pools, you must supply additional variables to ensure the Compliance Operator aggregates a configuration file for that node pool.
.Procedure
. To check the configuration against all pools in an example cluster containing `master`, `worker`, and custom `example` node pools, set the value of the `ocp4-var-role-master` and `ocp4-var-role-worker` fields to `example` in the `TailoredProfile` object:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: cis-example-tp
spec:
  extends: ocp4-cis
  title: My modified NIST profile to scan example nodes
  setValues:
  - name: ocp4-var-role-master
    value: example
    rationale: test for example nodes
  - name: ocp4-var-role-worker
    value: example
    rationale: test for example nodes
  description: cis-example-scan
----
. Add the `example` role to the `ScanSetting` object that will be referenced by the `ScanSettingBinding` CR:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  rotation: 3
  size: 1Gi
roles:
- worker
- master
- example
scanTolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
schedule: '0 1 * * *'
----
. Create a scan that uses the `ScanSettingBinding` CR:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis-node
- apiGroup: compliance.openshift.io/v1alpha1
  kind: TailoredProfile
  name: cis-example-tp
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
----
The Compliance Operator checks the runtime `KubeletConfig` through the `Node/Proxy` API object and then uses variables such as `ocp4-var-role-master` and `ocp4-var-role-worker` to determine the nodes against which it performs the check. In the `ComplianceCheckResult`, the `KubeletConfig` rules are shown as `ocp4-cis-kubelet-*`. The scan passes only if all selected nodes pass this check.
.Verification
* The `KubeletConfig` rules of the `Platform` check type are checked through the `Node/Proxy` object. You can find those rules by running the following command:
+
[source,terminal]
----
$ oc get rules -o json | jq '.items[] | select(.checkType == "Platform") | select(.metadata.name | contains("ocp4-kubelet-")) | .metadata.name'
----
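+
The exact names depend on the profiles installed in your cluster. A sketch of the expected output, assuming the default CIS content is present:
+
[source,terminal]
----
"ocp4-kubelet-configure-event-creation"
"ocp4-kubelet-configure-tls-cipher-suites"
"ocp4-kubelet-eviction-thresholds-set-hard-imagefs-available"
----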

View File

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc
:_content-type: CONCEPT
[id="compliance-evaluate-kubeletconfig-rules_{context}"]
= Evaluating KubeletConfig rules against default configuration values
{product-title} infrastructure might contain incomplete configuration files at run time, and nodes assume default configuration values for missing configuration options. Some configuration options can be passed as command line arguments. As a result, the Compliance Operator cannot verify if the configuration file on the node is complete because it might be missing options used in the rule checks.
To prevent false negative results where the default configuration value passes a check, the Compliance Operator uses the `Node/Proxy` API to fetch the configuration for each node in a node pool. All configuration options that are consistent across the nodes in a node pool are then stored in one file that represents the configuration for all nodes within that node pool. This increases the accuracy of the scan results.
No additional configuration changes are required to use this feature with the default `master` and `worker` node pool configurations.
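Although no configuration changes are needed, you can inspect the runtime kubelet configuration that this feature consumes. A minimal sketch, assuming your user can reach the `Node/Proxy` API endpoint and `jq` is installed:

[source,terminal]
----
$ oc get --raw /api/v1/nodes/<node_name>/proxy/configz | jq '.kubeletconfig'
----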

View File

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-troubleshooting.adoc
:_content-type: PROCEDURE
[id="compliance-increasing-operator-limits_{context}"]
= Increasing Compliance Operator resource limits
In some cases, the Compliance Operator might require more memory than the default limits allow. The best way to mitigate this issue is to set custom resource limits.
To increase the default memory and CPU limits of scanner pods, see _`ScanSetting` Custom Resource_.
.Procedure
. To increase the Operator's memory limits to 500 Mi, create the following patch file named `co-memlimit-patch.yaml`:
+
[source,yaml]
----
spec:
  config:
    resources:
      limits:
        memory: 500Mi
----
. Apply the patch file:
+
[source,terminal]
----
$ oc patch sub compliance-operator -n openshift-compliance --patch-file co-memlimit-patch.yaml --type=merge
----
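. Optional: Verify that OLM propagated the limit to the Operator deployment. A minimal sketch, assuming the default deployment name `compliance-operator`:
+
[source,terminal]
----
$ oc -n openshift-compliance get deployment compliance-operator \
  -o=jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}'
----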

View File

@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc
:_content-type: PROCEDURE
[id="compliance-kubeletconfig-sub-pool-remediation_{context}"]
= Remediating `KubeletConfig` sub pools
`KubeletConfig` remediation labels can be applied to `MachineConfigPool` sub-pools.
.Procedure
* Add a label to the sub-pool `MachineConfigPool` CR:
+
[source,terminal]
----
$ oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=
----
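+
The trailing `=` applies the label with an empty value. For example, a minimal sketch for a hypothetical custom pool named `example`:
+
[source,terminal]
----
$ oc label mcp example pools.operator.machineconfiguration.openshift.io/example=
----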

View File

@@ -0,0 +1,54 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc
:_content-type: PROCEDURE
[id="compliance-priorityclass_{context}"]
= Setting `PriorityClass` for `ScanSetting` scans
In large scale environments, the priority of the default `PriorityClass` object can be too low to guarantee that Pods execute scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the `PriorityClass` variable to ensure that the Compliance Operator is always given priority in resource-constrained situations.
.Procedure
* Set the `PriorityClass` variable:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
strictNodeScan: true
metadata:
  name: default
  namespace: openshift-compliance
priorityClass: compliance-high-priority <1>
kind: ScanSetting
showNotApplicable: false
rawResultStorage:
  nodeSelector:
    node-role.kubernetes.io/master: ''
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3
  size: 1Gi
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
schedule: 0 1 * * *
roles:
- master
- worker
scanTolerations:
- operator: Exists
----
<1> If the `PriorityClass` referenced in the `ScanSetting` cannot be found, the Operator leaves the `PriorityClass` empty, issues a warning, and continues scheduling scans without a `PriorityClass`.
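To take effect, the referenced `PriorityClass` object must exist in the cluster. A minimal sketch of such an object, assuming the name `compliance-high-priority` used above and an illustrative priority value:

[source,yaml]
----
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: compliance-high-priority
value: 1000000 <1>
globalDefault: false
description: Used by Compliance Operator scan pods.
----
<1> An illustrative value; choose a priority appropriate for your cluster.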

View File

@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc
:_content-type: CONCEPT
[id="compliance-scansetting-cr_{context}"]
= `ScanSetting` Custom Resource
The `ScanSetting` Custom Resource now allows you to override the default CPU and memory limits of scanner pods through the `scanLimits` attribute. The Compliance Operator uses defaults of 500Mi memory and 100m CPU for the scanner container, and 200Mi memory and 100m CPU for the `api-resource-collector` container. To set the memory limits of the Operator itself, modify the `Subscription` object if the Operator was installed through OLM, or modify the Operator deployment directly.
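A minimal sketch of overriding the scanner pod limits, assuming the `scanLimits` attribute accepts standard Kubernetes resource quantities, as the defaults above suggest:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
scanLimits:
  memory: 1024Mi <1>
----
<1> An illustrative value that doubles the default scanner memory limit of 500Mi.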
To increase the default CPU and memory limits of the Compliance Operator, see _Increasing Compliance Operator resource limits_.
[IMPORTANT]
====
Increase the memory limit for the Compliance Operator or the scanner pods if the default limits are not sufficient and the Operator or scanner pods are terminated by the Out Of Memory (OOM) killer.
====

View File

@@ -0,0 +1,59 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc
:_content-type: CONCEPT
[id="compliance-scheduling-pods-with-resource-requests_{context}"]
= Scheduling Pods with resource requests
When a Pod is created, the scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each resource type in the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.
Although actual memory or CPU resource usage on a node can be very low, the scheduler might still refuse to place a Pod on the node if the capacity check fails. This protects against a resource shortage on the node if resource usage later increases.
For each container, you can specify the following resource requests and limits:
[source,terminal]
----
spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>
----
[NOTE]
====
Although you can specify requests and limits for only individual containers, it is also useful to consider the overall resource requests and limits for a pod. For a particular resource, a pod resource request or limit is the sum of the resource requests or limits of that type for each container in the pod.
====
.Example Pod resource requests and limits
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests: <1>
        memory: "64Mi"
        cpu: "250m"
      limits: <2>
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
----
<1> The container requests 64 Mi of memory and 250 m CPU.
<2> The container's limits are defined as 128 Mi of memory and 500 m CPU. Because the Pod runs two such containers, its total request is 128 Mi of memory and 500 m CPU.

View File

@@ -86,12 +86,14 @@ The Compliance Operator provides the following compliance profiles:
|0.1.47+
|link:https://www.pcisecuritystandards.org/document_library?document=pci_dss[PCI Security Standards &#174; Council Document Library]
|`x86_64`
`ppc64le`
|ocp4-pci-dss-node
|PCI-DSS v3.2.1 Control Baseline for Red Hat OpenShift Container Platform 4
|0.1.47+
|link:https://www.pcisecuritystandards.org/document_library?document=pci_dss[PCI Security Standards &#174; Council Document Library]
|`x86_64`
`ppc64le`
|ocp4-high
|NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level

View File

@@ -0,0 +1,36 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-troubleshooting.adoc
:_content-type: REFERENCE
[id="operator-resource-constraints_{context}"]
= Configuring Operator resource constraints
The `resources` field defines resource constraints for all the containers in the Pod created by the Operator Lifecycle Manager (OLM).
[NOTE]
====
Resource constraints applied in this process overwrite the existing resource constraints.
====
.Procedure
* Inject a request of 0.25 CPU and 64 Mi of memory, and a limit of 0.5 CPU and 128 Mi of memory in each container by editing the `Subscription` object:
+
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: custom-operator
spec:
  package: etcd
  channel: alpha
  config:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
----
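For the Compliance Operator, the same `config` stanza can be added to its existing `Subscription` object. A minimal sketch, assuming the default installation namespace and subscription name used earlier in this guide:

[source,terminal]
----
$ oc -n openshift-compliance edit subscription compliance-operator
----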

View File

@@ -10,6 +10,8 @@ The Compliance Operator includes options for advanced users for the purpose of d
include::modules/compliance-objects.adoc[leveloffset=+1]
include::modules/compliance-priorityclass.adoc[leveloffset=+1]
include::modules/compliance-raw-tailored.adoc[leveloffset=+1]
include::modules/compliance-rescan.adoc[leveloffset=+1]

View File

@@ -13,6 +13,52 @@ These release notes track the development of the Compliance Operator in the {pro
For an overview of the Compliance Operator, see xref:../../security/compliance_operator/compliance-operator-understanding.adoc#understanding-compliance-operator[Understanding the Compliance Operator].
[id="compliance-operator-release-notes-0-1-57"]
== OpenShift Compliance Operator 0.1.57
The following advisory is available for the OpenShift Compliance Operator 0.1.57:
* link:https://access.redhat.com/errata/RHBA-2022:6657[RHBA-2022:6657 - OpenShift Compliance Operator bug fix update]
[id="compliance-operator-0-1-57-new-features-and-enhancements"]
=== New features and enhancements
* `KubeletConfig` checks changed from `Node` to `Platform` type. `KubeletConfig` checks verify the default configuration of the `KubeletConfig` object. The configuration files are aggregated from all nodes into a single location per node pool. See xref:../../security/compliance_operator/compliance-operator-remediation.adoc#compliance-evaluate-kubeletconfig-rules_compliance-remediation[Evaluating `KubeletConfig` rules against default configuration values].
* The `ScanSetting` Custom Resource now allows users to override the default CPU and memory limits of scanner pods through the `scanLimits` attribute. For more information, see xref:../../security/compliance_operator/compliance-operator-troubleshooting.adoc#compliance-increasing-operator-limits_compliance-troubleshooting[Increasing Compliance Operator resource limits].
* A `PriorityClass` object can now be set through `ScanSetting`. This ensures the Compliance Operator is prioritized and minimizes the chance that the cluster falls out of compliance. For more information, see xref:../../security/compliance_operator/compliance-operator-advanced.adoc#compliance-priorityclass_compliance-advanced[Setting `PriorityClass` for `ScanSetting` scans].
[id="compliance-operator-0-1-57-bug-fixes"]
=== Bug fixes
* Previously, the Compliance Operator hard-coded notifications to the default `openshift-compliance` namespace. If the Operator were installed in a non-default namespace, the notifications would not work as expected. Now, notifications work in non-default `openshift-compliance` namespaces. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2060726[*BZ#2060726*])
* Previously, the Compliance Operator was unable to evaluate default configurations used by kubelet objects, resulting in inaccurate results and false positives. xref:../../security/compliance_operator/compliance-operator-remediation.adoc#compliance-evaluate-kubeletconfig-rules_compliance-remediation[This new feature] evaluates the kubelet configuration and now reports accurately. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2075041[*BZ#2075041*])
* Previously, the Compliance Operator reported the `ocp4-kubelet-configure-event-creation` rule in a `FAIL` state after applying an automatic remediation because the `eventRecordQPS` value was set higher than the default value. Now, the `ocp4-kubelet-configure-event-creation` rule remediation sets the default value, and the rule applies correctly. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2082416[*BZ#2082416*])
* The `ocp4-configure-network-policies` rule requires manual intervention to perform effectively. New descriptive instructions and rule updates increase applicability of the `ocp4-configure-network-policies` rule for clusters using Calico CNIs. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2091794[*BZ#2091794*])
* Previously, the Compliance Operator would not clean up pods used to scan infrastructure when using the `debug=true` option in the scan settings. This caused pods to be left on the cluster even after deleting the `ScanSettingBinding`. Now, pods are always deleted when a `ScanSettingBinding` is deleted. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2092913[*BZ#2092913*])
* Previously, the Compliance Operator used an older version of the `operator-sdk` command that caused alerts about deprecated functionality. Now, an updated version of the `operator-sdk` command is included and there are no more alerts for deprecated functionality. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2098581[*BZ#2098581*])
* Previously, the Compliance Operator would fail to apply remediations if it could not determine the relationship between kubelet and machine configurations. Now, the Compliance Operator has improved handling of the machine configurations and is able to determine if a kubelet configuration is a subset of a machine configuration. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2102511[*BZ#2102511*])
* Previously, the rule for `ocp4-cis-node-master-kubelet-enable-cert-rotation` did not properly describe success criteria. As a result, the requirements for `RotateKubeletClientCertificate` were unclear. Now, the rule for `ocp4-cis-node-master-kubelet-enable-cert-rotation` reports accurately regardless of the configuration present in the kubelet configuration file. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2105153[*BZ#2105153*])
* Previously, the rule for checking idle streaming timeouts did not consider default values, resulting in inaccurate rule reporting. Now, more robust checks ensure increased accuracy in results based on default configuration values. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2105878[*BZ#2105878*])
* Previously, the Compliance Operator would fail to fetch API resources when parsing machine configurations without Ignition specifications, which caused the `api-check-pods` processes to crash loop. Now, the Compliance Operator handles Machine Config Pools that do not have Ignition specifications correctly. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2117268[*BZ#2117268*])
* Previously, rules evaluating the `modprobe` configuration would fail even after applying remediations due to a mismatch in values for the `modprobe` configuration. Now, the same values are used for the `modprobe` configuration in checks and remediations, ensuring consistent results. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2117747[*BZ#2117747*])
[id="compliance-operator-0-1-57-deprecations"]
=== Deprecations
* Specifying *Install into all namespaces in the cluster* or setting the `WATCH_NAMESPACES` environment variable to `""` no longer affects all namespaces. Any API resources installed in namespaces that were not specified at the time of Compliance Operator installation are no longer operational. API resources might require creation in the selected namespace, or the `openshift-compliance` namespace by default. This change improves the Compliance Operator's memory usage.
[id="compliance-operator-release-notes-0-1-53"]
== OpenShift Compliance Operator 0.1.53
@@ -118,17 +164,17 @@ The following advisory is available for the OpenShift Compliance Operator 0.1.49
* Previously, the `openshift-compliance` content did not include platform-specific checks for network types. As a result, OVN- and SDN-specific checks would show as `failed` instead of `not-applicable` based on the network configuration. Now, new rules contain platform checks for networking rules, resulting in a more accurate assessment of network-specific checks. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1994609[*BZ#1994609*])
* Previously, the `ocp4-moderate-routes-protected-by-tls` rule incorrectly checked TLS settings, which resulted in the rule failing the check even if the connection used a secure SSL/TLS protocol. Now, the check properly evaluates TLS settings that are consistent with the networking guidance and profile recommendations. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2002695[*BZ#2002695*])
* Previously, `ocp-cis-configure-network-policies-namespace` used pagination when requesting namespaces. This caused the rule to fail because the deployments truncated lists of more than 500 namespaces. Now, the entire namespace list is requested, and the rule for checking configured network policies works for deployments with more than 500 namespaces. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2038909[*BZ#2038909*])
* Previously, remediations using the `sshd jinja` macros were hard-coded to specific sshd configurations. As a result, the configurations were inconsistent with the content the rules were checking for and the check would fail. Now, the sshd configuration is parameterized and the rules apply successfully. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2049141[*BZ#2049141*])
* Previously, the `ocp4-cluster-version-operator-verify-integrity` rule always checked the first entry in the Cluster Version Operator (CVO) history. As a result, the check would fail in situations where subsequent versions of {product-name} were verified. Now, the compliance check result for `ocp4-cluster-version-operator-verify-integrity` is able to detect verified versions and is accurate with the CVO history. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2053602[*BZ#2053602*])
* Previously, the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule did not check for a list of empty admission controller plug-ins. As a result, the rule would always fail, even if all admission plug-ins were enabled. Now, more robust checking of the `ocp4-api-server-no-adm-ctrl-plugins-disabled` rule accurately passes with all admission controller plug-ins enabled. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2058631[*BZ#2058631*])
* Previously, scans did not contain platform checks for running against Linux worker nodes. As a result, running scans against worker nodes that were not Linux-based resulted in a never-ending scan loop. Now, the scan schedules appropriately based on platform type and labels, and completes successfully. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2056911[*BZ#2056911*])
[id="compliance-operator-release-notes-0-1-48"]
== OpenShift Compliance Operator 0.1.48

View File

@@ -14,6 +14,12 @@ include::modules/compliance-review.adoc[leveloffset=+1]
include::modules/compliance-apply-remediation-for-customized-mcp.adoc[leveloffset=+1]
include::modules/compliance-evaluate-kubeletconfig-rules.adoc[leveloffset=+1]
include::modules/compliance-custom-node-pools.adoc[leveloffset=+1]
include::modules/compliance-kubeletconfig-sub-pool-remediation.adoc[leveloffset=+1]
include::modules/compliance-applying.adoc[leveloffset=+1]
include::modules/compliance-manual.adoc[leveloffset=+1]

View File

@@ -43,4 +43,8 @@ $ date -d @1596184628.955853 --utc
include::modules/compliance-anatomy.adoc[leveloffset=+1]
include::modules/compliance-increasing-operator-limits.adoc[leveloffset=+1]
include::modules/operator-resource-constraints.adoc[leveloffset=+1]
include::modules/support.adoc[leveloffset=+1]

View File

@@ -21,4 +21,11 @@ $ oc explain scansettingbindings
----
include::modules/running-compliance-scans.adoc[leveloffset=+1]
include::modules/running-compliance-scans-worker-node.adoc[leveloffset=+1]
include::modules/compliance-scansetting-cr.adoc[leveloffset=+1]
include::modules/compliance-applying-resource-requests-and-limits.adoc[leveloffset=+1]
include::modules/compliance-scheduling-pods-with-resource-requests.adoc[leveloffset=+1]