mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-06 15:46:57 +01:00

CMP-1618 update Compliance Operator docs

QE Feedback applied

Peer review feedback applied
This commit is contained in:
Andrew Taylor
2022-10-18 16:50:03 -04:00
parent d787f409c3
commit 37796aa38d
31 changed files with 590 additions and 333 deletions

View File

@@ -13,34 +13,34 @@ The compliance content is stored in `Profile` objects that are generated from a
[source,terminal]
----
$ oc get profilebundle.compliance
$ oc get -n openshift-compliance profilebundle.compliance
----
[source,terminal]
----
$ oc get profile.compliance
$ oc get -n openshift-compliance profile.compliance
----
The `ProfileBundle` objects are processed by deployments labeled with the `Bundle` name. To troubleshoot an issue with the `Bundle`, you can find the deployment and view the logs of the pods in the deployment:
[source,terminal]
----
$ oc logs -lprofile-bundle=ocp4 -c profileparser
$ oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser
----
[source,terminal]
----
$ oc get deployments,pods -lprofile-bundle=ocp4
$ oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4
----
[source,terminal]
----
$ oc logs pods/<pod-name>
$ oc logs -n openshift-compliance pods/<pod-name>
----
[source,terminal]
----
$ oc describe pod/<pod-name> -c profileparser
$ oc describe -n openshift-compliance pod/<pod-name> -c profileparser
----
[id="compliance-anatomy-scan-setting-scan-binding-lifecycle_{context}"]
@@ -123,14 +123,15 @@ In this phase, several config maps that contain either environment for the scann
[source,terminal]
----
$ oc get cm -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=
$ oc -n openshift-compliance get cm \
-l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=
----
These config maps are used by the scanner pods. If you need to modify the scanner behavior, change the scanner debug level, or print the raw results, modify the config maps. Afterwards, a persistent volume claim is created per scan to store the raw ARF results:
[source,terminal]
----
$ oc get pvc -lcompliance.openshift.io/scan-name=<scan_name>
$ oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
----
The PVCs are mounted by a per-scan `ResultServer` deployment. A `ResultServer` is a simple HTTP server to which the individual scanner pods upload the full ARF results. Each server can run on a different node. The full ARF results might be very large, and you cannot assume that it is possible to create a volume that can be mounted from multiple nodes at the same time. After the scan finishes, the `ResultServer` deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the `ResultServer` is protected by mutual TLS.
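After the `ResultServer` deployment is scaled down, the raw results can be retrieved by mounting the PVC from a helper pod. The following is a minimal sketch, assuming a PVC named `rhcos4-e8-worker`; the pod and volume names are hypothetical:
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pv-extract
  namespace: openshift-compliance
spec:
  containers:
  - name: pv-extract-pod
    image: registry.access.redhat.com/ubi8/ubi
    command: ["sleep", "3000"]
    volumeMounts:
    - mountPath: /scan-results
      name: scan-vol
  volumes:
  - name: scan-vol
    persistentVolumeClaim:
      claimName: rhcos4-e8-worker
----
While the pod is running, you can copy the raw ARF results out with `oc cp`; delete the pod afterwards so the PVC can be reused by the next scan.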
@@ -147,8 +148,9 @@ $ oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scan
----
NAME READY STATUS RESTARTS AGE LABELS
rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner
At this point, the scan proceeds to the Running phase.
----
+
The scan then proceeds to the Running phase.
[id="compliance-scan-running-phase_{context}"]
=== Running phase
@@ -158,7 +160,7 @@ The running phase waits until the scanner pods finish. The following terms and p
* *scanner*: This container runs the scan. For node scans, the container mounts the node filesystem as `/host` and mounts the content delivered by the init container. The container also mounts the `entrypoint` `ConfigMap` created in the Launching phase and executes it. The default script in the entrypoint `ConfigMap` executes OpenSCAP and stores the result files in the `/results` directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the `debug` flag.
* *logcollector*: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the `ResultServer` and separately uploads the XCCDF results along with the scan result and OpenSCAP result code as a `ConfigMap`. These result config maps are labeled with the scan name (`compliance.openshift.io/scan-name=<scan_name>`):
* *logcollector*: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the `ResultServer` and separately uploads the XCCDF results along with the scan result and OpenSCAP result code as a `ConfigMap`. These result config maps are labeled with the scan name (`compliance.openshift.io/scan-name=rhcos4-e8-worker`):
+
[source,terminal]
----
@@ -243,7 +245,8 @@ It is also possible to trigger a re-run of a scan in the Done phase by annotatin
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
----
After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the machine config pool to which the scan maps and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.
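As a sketch, a `ScanSetting` with automatic remediation enabled might look like the following; the object name and storage values here are hypothetical:
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: auto-remediation-example
  namespace: openshift-compliance
autoApplyRemediations: true
autoUpdateRemediations: true
schedule: "0 1 * * *"
rawResultStorage:
  size: "2Gi"
  rotation: 5
roles:
- worker
- master
----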
@@ -299,14 +302,16 @@ The remediation loop ends once the rendered machine config is updated, if needed
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
----
The scan will run and finish. Check for the remediation to pass:
[source,terminal]
----
$ oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
$ oc -n openshift-compliance \
get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
----
.Example output

View File

@@ -15,7 +15,7 @@ Do not set `protectKernelDefaults: false` in the `KubeletConfig` file, because t
+
[source,terminal]
----
$ oc get nodes
$ oc get nodes -n openshift-compliance
----
+
.Example output
@@ -34,7 +34,9 @@ ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.25.0
+
[source,terminal]
----
$ oc label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=
$ oc -n openshift-compliance \
label node ip-10-0-166-81.us-east-2.compute.internal \
node-role.kubernetes.io/<machine_config_pool_name>=
----
+
.Example output

View File

@@ -14,5 +14,6 @@ Although you can use the `autoApplyRemediations` boolean parameter in a `Complia
[source,terminal]
----
$ oc annotate compliancesuites/<suite-_name> compliance.openshift.io/apply-remediations=
$ oc -n openshift-compliance \
annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=
----

View File

@@ -9,7 +9,9 @@ The boolean attribute `spec.apply` controls whether the remediation should be ap
[source,terminal]
----
$ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":true}}' --type=merge
$ oc -n openshift-compliance \
patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects \
--patch '{"spec":{"apply":true}}' --type=merge
----
After the Compliance Operator processes the applied remediation, the `status.ApplicationState` attribute changes to *Applied*, or to *Error* if incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a `MachineConfig` object named `75-$scan-name-$suite-name`. That `MachineConfig` object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine control daemon running on each node.

View File

@@ -14,7 +14,8 @@ In some cases, a scan with newer content might mark remediations as `OUTDATED`.
[source,terminal]
----
$ oc annotate compliancesuites/<suite_name> compliance.openshift.io/remove-outdated=
$ oc -n openshift-compliance \
annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=
----
Alternatively, set the `autoUpdateRemediations` flag in a `ScanSetting` or `ComplianceSuite` object to update the remediations automatically.

View File

@@ -46,5 +46,6 @@ status: FAIL <2>
To get all the check results from a suite, run the following command:
[source,terminal]
----
oc get compliancecheckresults -l compliance.openshift.io/suite=<suit name>
oc get compliancecheckresults \
-l compliance.openshift.io/suite=workers-compliancesuite
----

View File

@@ -53,17 +53,20 @@ spec:
To get all the remediations from a suite, run the following command:
[source,terminal]
----
oc get complianceremediations -l compliance.openshift.io/suite=<suite name>
oc get complianceremediations \
-l compliance.openshift.io/suite=workers-compliancesuite
----
To list all failing checks that can be remediated automatically, run the following command:
[source,terminal]
----
oc get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'
oc get compliancecheckresults \
-l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'
----
To list all failing checks that can be remediated manually, run the following command:
[source,terminal]
----
oc get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'
oc get compliancecheckresults \
-l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'
----

View File

@@ -5,7 +5,7 @@
:_content-type: CONCEPT
[id="profile-bundle-object_{context}"]
= ProfileBundle object
When you install the Compliance Operator, it includes ready-to-run `ProfileBundle` object. The Compliance Operator parses the `ProfileBundle` object and creates a `Profile` object for each profile in the bundle. It also parses `Rule` and `Variable` objects, which are used by the `Profile` object.
When you install the Compliance Operator, it includes ready-to-run `ProfileBundle` objects. The Compliance Operator parses the `ProfileBundle` object and creates a `Profile` object for each profile in the bundle. It also parses `Rule` and `Variable` objects, which are used by the `Profile` object.
.Example `ProfileBundle` object
@@ -15,15 +15,10 @@ apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
name: <profile bundle name>
namespace: openshift-compliance
spec:
contentFile: ssg-ocp4-ds.xml <1>
contentImage: quay.io/complianceascode/ocp4:latest <2>
status:
dataStreamStatus: VALID <3>
dataStreamStatus: VALID <1>
----
<1> Specify a path from the root directory (/) where the profile file is located.
<2> Specify the container image that encapsulates the profile files.
<3> Indicates whether the Compliance Operator was able to parse the content files.
<1> Indicates whether the Compliance Operator was able to parse the content files.
[NOTE]
====

View File

@@ -14,26 +14,79 @@ By default, the Compliance Operator creates the following `ScanSetting` objects:
.Example `ScanSetting` object
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
name: <name of the scan>
autoApplyRemediations: false <1>
autoUpdateRemediations: false <2>
schedule: "0 1 * * *" <3>
rawResultStorage:
size: "2Gi" <4>
rotation: 10 <5>
roles: <6>
- worker
- master
Name: default-auto-apply
Namespace: openshift-compliance
Labels: <none>
Annotations: <none>
API Version: compliance.openshift.io/v1alpha1
Auto Apply Remediations: true
Auto Update Remediations: true
Kind: ScanSetting
Metadata:
Creation Timestamp: 2022-10-18T20:21:00Z
Generation: 1
Managed Fields:
API Version: compliance.openshift.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:autoApplyRemediations: <1>
f:autoUpdateRemediations: <2>
f:rawResultStorage:
.:
f:nodeSelector:
.:
f:node-role.kubernetes.io/master:
f:pvAccessModes:
f:rotation:
f:size:
f:tolerations:
f:roles:
f:scanTolerations:
f:schedule:
f:showNotApplicable:
f:strictNodeScan:
Manager: compliance-operator
Operation: Update
Time: 2022-10-18T20:21:00Z
Resource Version: 38840
UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
Raw Result Storage:
Node Selector:
node-role.kubernetes.io/master:
Pv Access Modes:
ReadWriteOnce
Rotation: 3 <3>
Size: 1Gi <4>
Tolerations:
Effect: NoSchedule
Key: node-role.kubernetes.io/master
Operator: Exists
Effect: NoExecute
Key: node.kubernetes.io/not-ready
Operator: Exists
Toleration Seconds: 300
Effect: NoExecute
Key: node.kubernetes.io/unreachable
Operator: Exists
Toleration Seconds: 300
Effect: NoSchedule
Key: node.kubernetes.io/memory-pressure
Operator: Exists
Roles: <6>
master
worker
Scan Tolerations:
Operator: Exists
Schedule: "0 1 * * *" <5>
Show Not Applicable: false
Strict Node Scan: true
Events: <none>
----
<1> Set to `true` to enable auto remediations. Set to `false` to disable auto remediations.
<2> Set to `true` to enable auto remediations for content updates. Set to `false` to disable auto remediations for content updates.
<3> Specify how often the scan should be run in cron format.
<3> Specify the number of stored scans in the raw result format. The default value is `3`. As the older results get rotated, the administrator must store the results elsewhere before the rotation happens.
<4> Specify the storage size that should be created for the scan to store the raw results. The default value is `1Gi`.
<5> Specify the amount of scans for which the raw results will be stored. The default value is `3`. As the older results get rotated, the administrator has to store the results elsewhere before the rotation happens.
<5> Specify how often the scan should be run in cron format.
+
[NOTE]
====

View File

@@ -47,7 +47,7 @@ With the `TailoredProfile` object, it is possible to create a new `Profile` obje
+
[source,yaml]
----
compliance.openshift.io/product-type: <scan type>
compliance.openshift.io/product-type: Platform/Node
----
+
[NOTE]

View File

@@ -63,7 +63,7 @@ In some environments, you must create a custom Security Context Constraints (SCC
+
[source,terminal]
----
$ oc create -f restricted-adjusted-compliance.yaml
$ oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml
----
+
.Example output
@@ -77,7 +77,7 @@ securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance
+
[source,terminal]
----
$ oc get scc restricted-adjusted-compliance
$ oc get -n openshift-compliance scc restricted-adjusted-compliance
----
+
.Example output

View File

@@ -12,14 +12,16 @@ List checks that belong to a specific suite:
[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite
$ oc get -n openshift-compliance compliancecheckresults \
-l compliance.openshift.io/suite=workers-compliancesuite
----
List checks that belong to a specific scan:
[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan
$ oc get -n openshift-compliance compliancecheckresults \
-l compliance.openshift.io/scan=workers-scan
----
Not all `ComplianceCheckResult` objects create `ComplianceRemediation` objects. Only `ComplianceCheckResult` objects that can be remediated automatically do so. A `ComplianceCheckResult` object has a related remediation if it is labeled with the `compliance.openshift.io/automated-remediation` label. The name of the remediation is the same as the name of the check.
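For illustration, a check result that has an automated remediation carries that label; the check name, suite, and severity in this sketch are hypothetical:
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceCheckResult
metadata:
  name: workers-scan-no-direct-root-logins
  namespace: openshift-compliance
  labels:
    compliance.openshift.io/check-severity: medium
    compliance.openshift.io/check-status: FAIL
    compliance.openshift.io/scan-name: workers-scan
    compliance.openshift.io/suite: workers-compliancesuite
    compliance.openshift.io/automated-remediation: ""
severity: medium
status: FAIL
----
A `ComplianceRemediation` object with the same name, `workers-scan-no-direct-root-logins`, would hold the automated fix.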
@@ -28,14 +30,47 @@ List all failing checks that can be remediated automatically:
[source,terminal]
----
$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'
$ oc get -n openshift-compliance compliancecheckresults \
-l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'
----
List all failing checks with `high` severity:
[source,terminal]
----
$ oc get compliancecheckresults -n openshift-compliance \
-l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'
----
.Example output
[source,terminal]
----
NAME STATUS SEVERITY
nist-moderate-modified-master-configure-crypto-policy FAIL high
nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high
nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high
nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high
nist-moderate-modified-master-enable-fips-mode FAIL high
nist-moderate-modified-master-no-empty-passwords FAIL high
nist-moderate-modified-master-selinux-state FAIL high
nist-moderate-modified-worker-configure-crypto-policy FAIL high
nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high
nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high
nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high
nist-moderate-modified-worker-enable-fips-mode FAIL high
nist-moderate-modified-worker-no-empty-passwords FAIL high
nist-moderate-modified-worker-selinux-state FAIL high
ocp4-moderate-configure-network-policies-namespaces FAIL high
ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high
----
List all failing checks that must be remediated manually:
[source,terminal]
----
$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'
$ oc get -n openshift-compliance compliancecheckresults \
-l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'
----
The manual remediation steps are typically stored in the `description` attribute in the `ComplianceCheckResult` object.

View File

@@ -20,5 +20,6 @@ If possible, a remediation is still created so that the cluster can converge to
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
----

View File

@@ -47,5 +47,6 @@ status:
+
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
----

View File

@@ -0,0 +1,39 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-tailor.adoc
:_content-type: PROCEDURE
[id="compliance-new-tailored-profiles_{context}"]
= Creating a new tailored profile
You can write a tailored profile from scratch by using the `TailoredProfile` object. Set an appropriate `title` and `description` and leave the `extends` field empty. Indicate to the Compliance Operator what type of scan this custom profile will generate:
* Node scan: Scans the Operating System.
* Platform scan: Scans the OpenShift configuration.
.Procedure
Set the following annotation on the `TailoredProfile` object:
+
.Example `new-profile.yaml`
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
name: new-profile
annotations:
compliance.openshift.io/product-type: Node <1>
spec:
extends:
description: My custom profile <2>
title: Custom profile <3>
----
<1> Set `Node` or `Platform` accordingly.
<2> Use the `description` field to describe the function of the new `TailoredProfile` object.
<3> Give your `TailoredProfile` object a title with the `title` field.
+
[NOTE]
====
Adding the `-node` suffix to the `name` field of the `TailoredProfile` object is similar to adding the `Node` product type annotation and generates an Operating System scan.
====

View File

@@ -17,33 +17,18 @@ To remove the Compliance Operator, you must first delete the Compliance Operator
To remove the Compliance Operator by using the {product-title} web console:
. Remove CRDs that were installed by the Compliance Operator:
. Navigate to the *Operators* -> *Installed Operators* page.
.. Switch to the *Administration* -> *CustomResourceDefinitions* page.
. Delete all `ScanSettingBinding`, `ComplianceSuite`, `ComplianceScan`, and `ProfileBundle` objects.
.. Search for `compliance.openshift.io` in the *Name* field.
. Switch to the *Administration* -> *Operators* -> *Installed Operators* page.
.. Click the Options menu {kebab} next to each of the following CRDs, and select *Delete Custom Resource Definition*:
. Click the Options menu {kebab} on the *Compliance Operator* entry and select *Uninstall Operator*.
* `ComplianceCheckResult`
* `ComplianceRemediation`
* `ComplianceScan`
* `ComplianceSuite`
* `ProfileBundle`
* `Profile`
* `Rule`
* `ScanSettingBinding`
* `ScanSetting`
* `TailoredProfile`
* `Variable`
+
. Remove the OpenShift Compliance project:
.. Switch to the *Home* -> *Projects* page.
.. Click the Options menu {kebab} next to the *openshift-compliance* project, and select *Delete Project*.
.. Confirm the deletion by typing `openshift-compliance` in the dialog box, and click *Delete*.
. Switch to the *Home* -> *Projects* page.
. Search for 'compliance'.
. Click the Options menu {kebab} next to the *openshift-compliance* project, and select *Delete Project*.
.. Confirm the deletion by typing `openshift-compliance` in the dialog box, and click *Delete*.

View File

@@ -2,24 +2,40 @@
//
// * security/compliance_operator/compliance-operator-manage.adoc
:_content-type: CONCEPT
[id="compliance-profilebundle_{context}"]
= ProfileBundle CR example
The bundle object needs two pieces of information: the URL of a container image that contains the compliance content (`contentImage`) and the file within that image that contains the content (`contentFile`). The `contentFile` parameter is relative to the root of the file system. The built-in `rhcos4` `ProfileBundle` object can be defined in the example below:
The `ProfileBundle` object requires two pieces of information: the URL of a container image that contains the compliance content (`contentImage`) and the file within that image that contains the content (`contentFile`). The `contentFile` parameter is relative to the root of the file system. You can define the built-in `rhcos4` `ProfileBundle` object as shown in the following example:
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
name: rhcos4
spec:
contentImage: quay.io/complianceascode/ocp4:latest <1>
contentFile: ssg-rhcos4-ds.xml <2>
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
creationTimestamp: "2022-10-19T12:06:30Z"
finalizers:
- profilebundle.finalizers.compliance.openshift.io
generation: 1
name: rhcos4
namespace: openshift-compliance
resourceVersion: "46741"
uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
spec:
contentFile: ssg-rhcos4-ds.xml <1>
contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... <2>
status:
conditions:
- lastTransitionTime: "2022-10-19T12:07:51Z"
message: Profile bundle successfully parsed
reason: Valid
status: "True"
type: Ready
dataStreamStatus: VALID
----
<1> Content image location.
<2> Location of the file containing the compliance content.
<1> Location of the file containing the compliance content.
<2> Content image location.
+
[IMPORTANT]
====
The base image used for the content images must include `coreutils`.
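As a hypothetical sketch of a custom content image, a minimal UBI base can be used as long as `coreutils` is installed explicitly; the image and file names here are assumptions:
[source,dockerfile]
----
# ubi-minimal does not ship coreutils, so install it explicitly.
FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf install -y coreutils
# Copy the SCAP data stream to the image root so that a
# `contentFile: ssg-rhcos4-ds.xml` reference resolves.
COPY ssg-rhcos4-ds.xml /
----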

View File

@@ -2,6 +2,7 @@
//
// * security/compliance_operator/compliance-operator-understanding.adoc
:_content-type: CONCEPT
[id="compliance_profiles_{context}"]
= Compliance Operator profiles
@@ -11,13 +12,6 @@ There are several profiles available as part of the Compliance Operator installa
+
[source,terminal]
----
$ oc get -n <namespace> profiles.compliance
----
+
This example displays the profiles in the default `openshift-compliance` namespace:
+
[source,terminal]
----
$ oc get -n openshift-compliance profiles.compliance
----
+
@@ -25,30 +19,26 @@ $ oc get -n openshift-compliance profiles.compliance
[source,terminal]
----
NAME AGE
ocp4-cis 32m
ocp4-cis-node 32m
ocp4-e8 32m
ocp4-moderate 32m
ocp4-moderate-node 32m
ocp4-nerc-cip 32m
ocp4-nerc-cip-node 32m
ocp4-pci-dss 32m
ocp4-pci-dss-node 32m
rhcos4-e8 32m
rhcos4-moderate 32m
rhcos4-nerc-cip 32m
ocp4-cis 94m
ocp4-cis-node 94m
ocp4-e8 94m
ocp4-high 94m
ocp4-high-node 94m
ocp4-moderate 94m
ocp4-moderate-node 94m
ocp4-nerc-cip 94m
ocp4-nerc-cip-node 94m
ocp4-pci-dss 94m
ocp4-pci-dss-node 94m
rhcos4-e8 94m
rhcos4-high 94m
rhcos4-moderate 94m
rhcos4-nerc-cip 94m
----
+
These profiles represent different compliance benchmarks. Each profile has the product name that it applies to added as a prefix to the profile's name. `ocp4-e8` applies the Essential 8 benchmark to the {product-title} product, while `rhcos4-e8` applies the Essential 8 benchmark to the {op-system-first} product.
* View the details of a profile:
+
[source,terminal]
----
$ oc get -n <namespace> -oyaml profiles.compliance <profile name>
----
+
This example displays the details of the `rhcos4-e8` profile:
* Run the following command to view the details of the `rhcos4-e8` profile:
+
[source,terminal]
----
@@ -59,139 +49,141 @@ $ oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
description: |-
This profile contains configuration checks for Red Hat
Enterprise Linux CoreOS that align to the Australian
Cyber Security Centre (ACSC) Essential Eight.
A copy of the Essential Eight in Linux Environments guide can
be found at the ACSC website: ...
id: xccdf_org.ssgproject.content_profile_e8
kind: Profile
metadata:
annotations:
compliance.openshift.io/image-digest: pb-rhcos426smj
compliance.openshift.io/product: redhat_enterprise_linux_coreos_4
compliance.openshift.io/product-type: Node
labels:
compliance.openshift.io/profile-bundle: rhcos4
name: rhcos4-e8
namespace: openshift-compliance
ownerReferences:
- apiVersion: compliance.openshift.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: ProfileBundle
name: rhcos4
rules:
- rhcos4-accounts-no-uid-except-zero
- rhcos4-audit-rules-dac-modification-chmod
- rhcos4-audit-rules-dac-modification-chown
- rhcos4-audit-rules-execution-chcon
- rhcos4-audit-rules-execution-restorecon
- rhcos4-audit-rules-execution-semanage
- rhcos4-audit-rules-execution-setfiles
- rhcos4-audit-rules-execution-setsebool
- rhcos4-audit-rules-execution-seunshare
- rhcos4-audit-rules-kernel-module-loading-delete
- rhcos4-audit-rules-kernel-module-loading-finit
- rhcos4-audit-rules-kernel-module-loading-init
- rhcos4-audit-rules-login-events
- rhcos4-audit-rules-login-events-faillock
- rhcos4-audit-rules-login-events-lastlog
- rhcos4-audit-rules-login-events-tallylog
- rhcos4-audit-rules-networkconfig-modification
- rhcos4-audit-rules-sysadmin-actions
- rhcos4-audit-rules-time-adjtimex
- rhcos4-audit-rules-time-clock-settime
- rhcos4-audit-rules-time-settimeofday
- rhcos4-audit-rules-time-stime
- rhcos4-audit-rules-time-watch-localtime
- rhcos4-audit-rules-usergroup-modification
- rhcos4-auditd-data-retention-flush
- rhcos4-auditd-freq
- rhcos4-auditd-local-events
- rhcos4-auditd-log-format
- rhcos4-auditd-name-format
- rhcos4-auditd-write-logs
- rhcos4-configure-crypto-policy
- rhcos4-configure-ssh-crypto-policy
- rhcos4-no-empty-passwords
- rhcos4-selinux-policytype
- rhcos4-selinux-state
- rhcos4-service-auditd-enabled
- rhcos4-sshd-disable-empty-passwords
- rhcos4-sshd-disable-gssapi-auth
- rhcos4-sshd-disable-rhosts
- rhcos4-sshd-disable-root-login
- rhcos4-sshd-disable-user-known-hosts
- rhcos4-sshd-do-not-permit-user-env
- rhcos4-sshd-enable-strictmodes
- rhcos4-sshd-print-last-log
- rhcos4-sshd-set-loglevel-info
- rhcos4-sysctl-kernel-dmesg-restrict
- rhcos4-sysctl-kernel-kptr-restrict
- rhcos4-sysctl-kernel-randomize-va-space
- rhcos4-sysctl-kernel-unprivileged-bpf-disabled
- rhcos4-sysctl-kernel-yama-ptrace-scope
- rhcos4-sysctl-net-core-bpf-jit-harden
title: Australian Cyber Security Centre (ACSC) Essential Eight
description: 'This profile contains configuration checks for Red Hat Enterprise Linux
CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight.
A copy of the Essential Eight in Linux Environments guide can be found at the ACSC
website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers'
id: xccdf_org.ssgproject.content_profile_e8
kind: Profile
metadata:
annotations:
compliance.openshift.io/image-digest: pb-rhcos4hrdkm
compliance.openshift.io/product: redhat_enterprise_linux_coreos_4
compliance.openshift.io/product-type: Node
creationTimestamp: "2022-10-19T12:06:49Z"
generation: 1
labels:
compliance.openshift.io/profile-bundle: rhcos4
name: rhcos4-e8
namespace: openshift-compliance
ownerReferences:
- apiVersion: compliance.openshift.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: ProfileBundle
name: rhcos4
uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
resourceVersion: "43699"
uid: 86353f70-28f7-40b4-bf0e-6289ec33675b
rules:
- rhcos4-accounts-no-uid-except-zero
- rhcos4-audit-rules-dac-modification-chmod
- rhcos4-audit-rules-dac-modification-chown
- rhcos4-audit-rules-execution-chcon
- rhcos4-audit-rules-execution-restorecon
- rhcos4-audit-rules-execution-semanage
- rhcos4-audit-rules-execution-setfiles
- rhcos4-audit-rules-execution-setsebool
- rhcos4-audit-rules-execution-seunshare
- rhcos4-audit-rules-kernel-module-loading-delete
- rhcos4-audit-rules-kernel-module-loading-finit
- rhcos4-audit-rules-kernel-module-loading-init
- rhcos4-audit-rules-login-events
- rhcos4-audit-rules-login-events-faillock
- rhcos4-audit-rules-login-events-lastlog
- rhcos4-audit-rules-login-events-tallylog
- rhcos4-audit-rules-networkconfig-modification
- rhcos4-audit-rules-sysadmin-actions
- rhcos4-audit-rules-time-adjtimex
- rhcos4-audit-rules-time-clock-settime
- rhcos4-audit-rules-time-settimeofday
- rhcos4-audit-rules-time-stime
- rhcos4-audit-rules-time-watch-localtime
- rhcos4-audit-rules-usergroup-modification
- rhcos4-auditd-data-retention-flush
- rhcos4-auditd-freq
- rhcos4-auditd-local-events
- rhcos4-auditd-log-format
- rhcos4-auditd-name-format
- rhcos4-auditd-write-logs
- rhcos4-configure-crypto-policy
- rhcos4-configure-ssh-crypto-policy
- rhcos4-no-empty-passwords
- rhcos4-selinux-policytype
- rhcos4-selinux-state
- rhcos4-service-auditd-enabled
- rhcos4-sshd-disable-empty-passwords
- rhcos4-sshd-disable-gssapi-auth
- rhcos4-sshd-disable-rhosts
- rhcos4-sshd-disable-root-login
- rhcos4-sshd-disable-user-known-hosts
- rhcos4-sshd-do-not-permit-user-env
- rhcos4-sshd-enable-strictmodes
- rhcos4-sshd-print-last-log
- rhcos4-sshd-set-loglevel-info
- rhcos4-sysctl-kernel-dmesg-restrict
- rhcos4-sysctl-kernel-kptr-restrict
- rhcos4-sysctl-kernel-randomize-va-space
- rhcos4-sysctl-kernel-unprivileged-bpf-disabled
- rhcos4-sysctl-kernel-yama-ptrace-scope
- rhcos4-sysctl-net-core-bpf-jit-harden
title: Australian Cyber Security Centre (ACSC) Essential Eight
----
* View the rules within a desired profile:
* Run the following command to view the details of the `rhcos4-audit-rules-login-events` rule:
+
[source,terminal]
----
$ oc get -n <namespace> -oyaml rules.compliance <rule_name>
----
+
This example displays the `rhcos4-audit-rules-login-events` rule in the `rhcos4` profile:
+
[source,terminal]
----
$ oc get -n openshift-compliance -oyaml rules.compliance rhcos4-audit-rules-login-events
$ oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events
----
+
.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
checkType: Node
description: |-
  The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events:
  -w /var/log/tallylog -p wa -k logins
  -w /var/run/faillock -p wa -k logins
  -w /var/log/lastlog -p wa -k logins
  If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events:
  -w /var/log/tallylog -p wa -k logins
  -w /var/run/faillock -p wa -k logins
  -w /var/log/lastlog -p wa -k logins
id: xccdf_org.ssgproject.content_rule_audit_rules_login_events
kind: Rule
metadata:
  annotations:
    compliance.openshift.io/image-digest: pb-rhcos4hrdkm
    compliance.openshift.io/rule: audit-rules-login-events
    control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a)
    control.compliance.openshift.io/PCI-DSS: Req-10.2.3
    policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3
    policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS
  creationTimestamp: "2022-10-19T12:07:08Z"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: rhcos4
  name: rhcos4-audit-rules-login-events
  namespace: openshift-compliance
  ownerReferences:
  - apiVersion: compliance.openshift.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ProfileBundle
    name: rhcos4
    uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
  resourceVersion: "44819"
  uid: 75872f1f-3c93-40ca-a69d-44e5438824a4
rationale: Manual editing of these files may indicate nefarious activity, such as
  an attacker attempting to remove evidence of an intrusion.
severity: medium
title: Record Attempts to Alter Logon and Logout Events
warning: Manual editing of these files may indicate nefarious activity, such as an
  attacker attempting to remove evidence of an intrusion.
----

View File

@@ -14,7 +14,9 @@ The `ComplianceSuite` object contains an optional `TailoringConfigMap` attribute
+
[source,terminal]
----
$ oc create configmap <scan_name> --from-file=tailoring.xml=/path/to/the/tailoringFile.xml
$ oc -n openshift-compliance \
create configmap nist-moderate-modified \
--from-file=tailoring.xml=/path/to/the/tailoringFile.xml
----
. Reference the tailoring file in a scan that belongs to a suite:
@@ -34,7 +36,7 @@ spec:
contentImage: quay.io/complianceascode/ocp4:latest
debug: true
tailoringConfigMap:
name: <scan_name>
name: nist-moderate-modified
nodeSelector:
node-role.kubernetes.io/worker: ""
----
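Pieced together, a minimal `ComplianceSuite` that consumes the tailoring config map might look like the following sketch. The suite and scan names are illustrative; the profile, content file, and node selector follow the fragment above.

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: workers-compliancesuite
  namespace: openshift-compliance
spec:
  scans:
  - name: workers-scan
    profile: xccdf_org.ssgproject.content_profile_moderate
    content: ssg-rhcos4-ds.xml
    contentImage: quay.io/complianceascode/ocp4:latest
    debug: true
    tailoringConfigMap:
      name: nist-moderate-modified
    nodeSelector:
      node-role.kubernetes.io/worker: ""
----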

View File

@@ -9,11 +9,11 @@
.Procedure
. Locate the `scan-name` and compliance check for the `one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available` remediation:
+
[source,terminal]
----
$ oc get remediation one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml
$ oc -n openshift-compliance get remediation \
  one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml
----
+
.Example output
@@ -69,14 +69,17 @@ If the remediation invokes an `evictionHard` kubelet configuration, you must spe
+
[source,terminal]
----
$ oc patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{"spec":{"apply":false}}' --type=merge
$ oc -n openshift-compliance patch \
complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available \
-p '{"spec":{"apply":false}}' --type=merge
----
+
.. Using the `scan-name`, find the `KubeletConfig` object that the remediation was applied to:
+
[source,terminal]
----
$ oc get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master
$ oc -n openshift-compliance get kubeletconfig \
--selector compliance.openshift.io/scan-name=one-rule-tp-node-master
----
+
.Example output
@@ -89,7 +92,7 @@ compliance-operator-kubelet-master 2m34s
+
[source,terminal]
----
$ oc edit KubeletConfig compliance-operator-kubelet-master
$ oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master
----
+
[IMPORTANT]

View File

@@ -8,7 +8,8 @@ Typically you will want to re-run a scan on a defined schedule, like every Monda
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
$ oc -n openshift-compliance \
annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
----
A rescan generates four additional `MachineConfig` objects for the `rhcos4-moderate` profile:

View File

@@ -14,16 +14,25 @@ The Compliance Operator generates and stores the raw results in a persistent vol
+
[source,terminal]
----
$ oc get compliancesuites nist-moderate-modified -o json \
| jq '.status.scanStatuses[].resultsStorage'
{
"name": "rhcos4-moderate-worker",
"namespace": "openshift-compliance"
}
{
"name": "rhcos4-moderate-master",
"namespace": "openshift-compliance"
}
$ oc get compliancesuites nist-moderate-modified \
-o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'
----
+
.Example output
[source,json]
----
{
"name": "ocp4-moderate",
"namespace": "openshift-compliance"
}
{
"name": "nist-moderate-modified-master",
"namespace": "openshift-compliance"
}
{
"name": "nist-moderate-modified-worker",
"namespace": "openshift-compliance"
}
----
+
This shows the persistent volume claims where the raw results are accessible.
@@ -44,7 +53,12 @@ rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi
. Fetch the raw results by spawning a pod that mounts the volume and copying the results:
+
.Example pod
[source,terminal]
----
$ oc create -n openshift-compliance -f pod.yaml
----
+
.Example pod.yaml
[source,yaml]
----
apiVersion: "v1"
@@ -69,7 +83,7 @@ spec:
+
[source,terminal]
----
$ oc cp pv-extract:/workers-scan-results .
$ oc cp pv-extract:/workers-scan-results -n openshift-compliance .
----
+
[IMPORTANT]
@@ -81,5 +95,5 @@ Spawning a pod that mounts the persistent volume will keep the claim as `Bound`.
+
[source,terminal]
----
$ oc delete pod pv-extract
$ oc delete pod pv-extract -n openshift-compliance
----

View File

@@ -4,7 +4,7 @@
:_content-type: PROCEDURE
[id="compliance-tailored-profiles_{context}"]
= Using tailored profiles
= Using tailored profiles to extend existing ProfileBundles
While the `TailoredProfile` CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file that you can reuse.
The `ComplianceSuite` object contains an optional `TailoringConfigMap` attribute that you can point to a custom tailoring file. The value of the `TailoringConfigMap` attribute is a name of a config map, which must contain a key called `tailoring.xml` and the value of this key is the tailoring contents.
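For illustration, a config map satisfying this contract could look like the following sketch; the config map name and the XCCDF contents are placeholders:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: nist-moderate-modified
  namespace: openshift-compliance
data:
  tailoring.xml: | <1>
    <?xml version="1.0" encoding="UTF-8"?>
    <xccdf-1.2:Tailoring xmlns:xccdf-1.2="http://checklists.nist.gov/xccdf/1.2"
        id="xccdf_compliance.openshift.io_tailoring_example">
      <!-- XCCDF tailoring contents go here -->
    </xccdf-1.2:Tailoring>
----
<1> The key must be named `tailoring.xml`; its value is the contents of the tailoring file.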

View File

@@ -12,8 +12,9 @@ It might be required to unapply a remediation that was previously applied.
+
[source,terminal]
----
$ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects -p '{"spec":{"apply":false}}' --type=merge
$ oc -n openshift-compliance \
patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects \
--patch '{"spec":{"apply":false}}' --type=merge
----
. The remediation status will change to `NotApplied` and the composite `MachineConfig` object would be re-rendered to not include the remediation.

View File

@@ -2,23 +2,43 @@
//
// * security/compliance_operator/compliance-operator-manage.adoc
:_content-type: CONCEPT
[id="compliance-update_{context}"]
= Updating security content
Security content is shipped as container images that the `ProfileBundle` objects refer to. To accurately track updates to `ProfileBundles` and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag:
Security content is included as container images that the `ProfileBundle` objects refer to. To accurately track updates to `ProfileBundles` and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag:
[source,terminal]
----
$ oc -n openshift-compliance get profilebundles rhcos4 -oyaml
----
.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
name: rhcos4
spec:
contentImage: quay.io/user/ocp4-openscap-content@sha256:a1749f5150b19a9560a5732fe48a89f07bffc79c0832aa8c49ee5504590ae687 <1>
contentFile: ssg-rhcos4-ds.xml
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
creationTimestamp: "2022-10-19T12:06:30Z"
finalizers:
- profilebundle.finalizers.compliance.openshift.io
generation: 1
name: rhcos4
namespace: openshift-compliance
resourceVersion: "46741"
uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d
spec:
contentFile: ssg-rhcos4-ds.xml
contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... <1>
status:
conditions:
- lastTransitionTime: "2022-10-19T12:07:51Z"
message: Profile bundle successfully parsed
reason: Valid
status: "True"
type: Ready
dataStreamStatus: VALID
----
<1> Security container image.
Each `ProfileBundle` is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles.

View File

@@ -16,7 +16,8 @@ The previously applied remediation contents would then be stored in the `spec.ou
+
[source,terminal]
----
$ oc get complianceremediations -lcomplianceoperator.openshift.io/outdated-remediation=
$ oc -n openshift-compliance get complianceremediations \
-l complianceoperator.openshift.io/outdated-remediation=
----
+
.Example output
@@ -32,14 +33,15 @@ The currently applied remediation is stored in the `Outdated` attribute and the
+
[source,terminal]
----
$ oc patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{"op":"remove", "path":/spec/outdated}]'
$ oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords \
--type json -p '[{"op":"remove", "path":/spec/outdated}]'
----
. The remediation state will switch from `Outdated` to `Applied`:
+
[source,terminal]
----
$ oc get complianceremediations workers-scan-no-empty-passwords
$ oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords
----
+
.Example output

View File

@@ -10,17 +10,7 @@ Although it is possible to run scans as scheduled jobs, you must often re-run a
.Procedure
* Triggering a re-scan with the Compliance Operator requires use of an annotation on the scan object. However, with the `oc-compliance` plug-in you can re-scan with a single command:
+
[source,terminal]
----
$ oc compliance rerun-now <scan-object> <object-name>
----
+
* `<scan-object>` can be `compliancescan`, `compliancesuite`, or `scansettingbinding`.
* `<object-name>` is the name of the given `scan-object`.
+
For example, to re-run the scans for the `ScanSettingBinding` object named `my-binding`:
* Rerunning a scan with the Compliance Operator requires use of an annotation on the scan object. However, with the `oc-compliance` plug-in you can rerun a scan with a single command. Enter the following command to rerun the scans for the `ScanSettingBinding` object named `my-binding`:
+
[source,terminal]
----

View File

@@ -23,25 +23,71 @@ $ oc describe scansettings default -n openshift-compliance
----
+
.Example output
[source,terminal]
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
name: default
namespace: openshift-compliance
rawResultStorage:
pvAccessModes:
- ReadWriteOnce <1>
rotation: 3 <2>
size: 1Gi <3>
roles:
- worker <4>
- master <4>
scanTolerations: <5>
default:
- operator: Exists
schedule: 0 1 * * * <6>
Name: default
Namespace: openshift-compliance
Labels: <none>
Annotations: <none>
API Version: compliance.openshift.io/v1alpha1
Kind: ScanSetting
Metadata:
Creation Timestamp: 2022-10-10T14:07:29Z
Generation: 1
Managed Fields:
API Version: compliance.openshift.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:rawResultStorage:
.:
f:nodeSelector:
.:
f:node-role.kubernetes.io/master:
f:pvAccessModes:
f:rotation:
f:size:
f:tolerations:
f:roles:
f:scanTolerations:
f:schedule:
f:showNotApplicable:
f:strictNodeScan:
Manager: compliance-operator
Operation: Update
Time: 2022-10-10T14:07:29Z
Resource Version: 56111
UID: c21d1d14-3472-47d7-a450-b924287aec90
Raw Result Storage:
Node Selector:
node-role.kubernetes.io/master:
Pv Access Modes:
ReadWriteOnce <1>
Rotation: 3 <2>
Size: 1Gi <3>
Tolerations:
Effect: NoSchedule
Key: node-role.kubernetes.io/master
Operator: Exists
Effect: NoExecute
Key: node.kubernetes.io/not-ready
Operator: Exists
Toleration Seconds: 300
Effect: NoExecute
Key: node.kubernetes.io/unreachable
Operator: Exists
Toleration Seconds: 300
Effect: NoSchedule
Key: node.kubernetes.io/memory-pressure
Operator: Exists
Roles:
master <4>
worker <4>
Scan Tolerations: <5>
Operator: Exists
Schedule: 0 1 * * * <6>
Show Not Applicable: false
Strict Node Scan: true
Events: <none>
----
<1> The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode `ReadWriteOnce` because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, `ReadWriteOnce` access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the `ReadWriteOnce` access mode can be mounted by only one pod at a time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans.
<2> The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated.
@@ -54,25 +100,73 @@ As an alternative to the default scan setting, you can use `default-auto-apply`,
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
name: default-auto-apply
namespace: openshift-compliance
autoUpdateRemediations: true <1>
autoApplyRemediations: true <1>
rawResultStorage:
pvAccessModes:
- ReadWriteOnce
rotation: 3
size: 1Gi
schedule: 0 1 * * *
roles:
- worker
- master
scanTolerations:
default:
- operator: Exists
Name: default-auto-apply
Namespace: openshift-compliance
Labels: <none>
Annotations: <none>
API Version: compliance.openshift.io/v1alpha1
Auto Apply Remediations: true
Auto Update Remediations: true
Kind: ScanSetting
Metadata:
Creation Timestamp: 2022-10-18T20:21:00Z
Generation: 1
Managed Fields:
API Version: compliance.openshift.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:autoApplyRemediations: <1>
f:autoUpdateRemediations: <1>
f:rawResultStorage:
.:
f:nodeSelector:
.:
f:node-role.kubernetes.io/master:
f:pvAccessModes:
f:rotation:
f:size:
f:tolerations:
f:roles:
f:scanTolerations:
f:schedule:
f:showNotApplicable:
f:strictNodeScan:
Manager: compliance-operator
Operation: Update
Time: 2022-10-18T20:21:00Z
Resource Version: 38840
UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84
Raw Result Storage:
Node Selector:
node-role.kubernetes.io/master:
Pv Access Modes:
ReadWriteOnce
Rotation: 3
Size: 1Gi
Tolerations:
Effect: NoSchedule
Key: node-role.kubernetes.io/master
Operator: Exists
Effect: NoExecute
Key: node.kubernetes.io/not-ready
Operator: Exists
Toleration Seconds: 300
Effect: NoExecute
Key: node.kubernetes.io/unreachable
Operator: Exists
Toleration Seconds: 300
Effect: NoSchedule
Key: node.kubernetes.io/memory-pressure
Operator: Exists
Roles:
master
worker
Scan Tolerations:
Operator: Exists
Schedule: 0 1 * * *
Show Not Applicable: false
Strict Node Scan: true
Events: <none>
----
<1> Setting `autoUpdateRemediations` and `autoApplyRemediations` flags to `true` allows you to easily create `ScanSetting` objects that auto-remediate without extra steps.
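As a sketch, you bind profiles to the `default-auto-apply` scan setting with a `ScanSettingBinding` object; the binding name and profile are examples:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default-auto-apply
----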

View File

@@ -8,12 +8,10 @@ toc::[]
This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom `ProfileBundle` object.
include::modules/compliance-update.adoc[leveloffset=+1]
include::modules/compliance-imagestreams.adoc[leveloffset=+1]
include::modules/compliance-profilebundle.adoc[leveloffset=+1]
include::modules/compliance-update.adoc[leveloffset=+1]
[id="additional-resources_managing-the-compliance-operator"]
[role="_additional-resources"]
== Additional resources

View File

@@ -4,13 +4,12 @@
include::_attributes/common-attributes.adoc[]
:context: compliance-tailor
toc::[]
While the Compliance Operator comes with ready-to-use profiles, they must be modified to fit the organization's needs and requirements. The process of modifying a profile is called _tailoring_.
The Compliance Operator provides an object to easily tailor profiles called a `TailoredProfile`. This assumes that you are extending a pre-existing profile, and allows you to enable and disable rules and values which come from the `ProfileBundle`.
The Compliance Operator provides the `TailoredProfile` object to help tailor profiles.
[NOTE]
====
You can use only rules and variables that are available as part of the `ProfileBundle` to which the profile you want to extend belongs.
====
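A minimal `TailoredProfile` sketch, assuming you extend the `rhcos4-e8` profile and that the disabled rule exists in its `ProfileBundle`; the name, title, and rationale are illustrative:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: rhcos4-e8-custom
  namespace: openshift-compliance
spec:
  extends: rhcos4-e8 <1>
  title: Modified Essential Eight profile
  disableRules:
  - name: rhcos4-sshd-print-last-log
    rationale: This check is not required in this environment.
----
<1> The profile being extended; enabled and disabled rules must come from its `ProfileBundle`.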
include::modules/compliance-new-tailored-profiles.adoc[leveloffset=+1]
include::modules/compliance-tailored-profiles.adoc[leveloffset=+1]

View File

@@ -19,14 +19,15 @@ Or view events for an object like a scan using the command:
+
[source,terminal]
----
$ oc describe compliancescan/<scan_name>
$ oc describe -n openshift-compliance compliancescan/cis-compliance
----
* The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a `ComplianceRemediation` cannot be applied, view the messages from the `remediationctrl` controller. You can filter the messages from a single controller by parsing with `jq`:
+
[source,terminal]
----
$ oc logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == "profilebundlectrl")'
$ oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f \
| jq -c 'select(.logger == "profilebundlectrl")'
----
* The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use `date -d @timestamp --utc`, for example:
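For instance, converting an arbitrary epoch timestamp (the value below is made up) to a human-readable UTC date:

```shell
# Convert a UNIX epoch timestamp (seconds) to a human-readable UTC date.
# 1665410849 is an arbitrary example value, not taken from real logs.
date -d @1665410849 --utc
# Mon Oct 10 14:07:29 UTC 2022 (in the C locale)
```

This requires GNU `date`; on macOS, the equivalent is `date -u -r 1665410849`.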