mirror of https://github.com/openshift/openshift-docs.git
synced 2026-02-05 12:46:18 +01:00

osdocs-1466 Compliance Operator docs

Typo and syntax corrections; peer review feedback applied; final QE-requested changes.

committed by openshift-cherrypick-robot
parent 7ba56e2fbf
commit 2b61d4e038
@@ -390,10 +390,27 @@ Topics:
     File: disabling-web-console
 Distros: openshift-enterprise,openshift-webscale,openshift-origin
 ---
-Name: Security
+Name: Security and compliance
 Dir: security
 Distros: openshift-enterprise,openshift-webscale,openshift-origin,openshift-aro
 Topics:
+- Name: Compliance Operator
+  Dir: compliance_operator
+  Topics:
+  - Name: Understanding the Compliance Operator
+    File: compliance-operator-understanding
+  - Name: Managing the Compliance Operator
+    File: compliance-operator-manage
+  - Name: Managing Compliance Operator remediation
+    File: compliance-operator-remediation
+  - Name: Tailoring the Compliance Operator
+    File: compliance-operator-tailor
+  - Name: Retrieving Compliance Operator raw results
+    File: compliance-operator-raw-results
+  - Name: Performing advanced Compliance Operator tasks
+    File: compliance-operator-advanced
+  - Name: Troubleshooting the Compliance Operator
+    File: compliance-operator-troubleshooting
 - Name: Container security
   Dir: container_security
   Topics:
284 modules/compliance-anatomy.adoc Normal file
@@ -0,0 +1,284 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-troubleshooting.adoc

[id="compliance_anatomy_{context}"]
= Anatomy of a scan

The following sections outline the components and stages of Compliance Operator scans.

== Compliance sources
The compliance content is stored in `Profile` objects that are generated from a `ProfileBundle`. The Compliance Operator creates a `ProfileBundle` for the cluster and another for the cluster nodes.

[source,terminal]
----
$ oc get profilebundle.compliance
$ oc get profile.compliance
----

The `ProfileBundle` objects are processed by deployments labeled with the `Bundle` name. To troubleshoot an issue with the `Bundle`, you can find the deployment and view the logs of the pods in the deployment:

[source,terminal]
----
$ oc logs -lprofile-bundle=ocp4
$ oc get deployments,pods -lprofile-bundle=ocp4
$ oc logs pods/...
----
== The ScanSetting and ScanSettingBinding lifecycle and debugging
With valid compliance content sources, the high-level `ScanSetting` and `ScanSettingBinding` objects can be used to generate `ComplianceSuite` and `ComplianceScan` objects:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: my-companys-constraints
debug: true
# For each role, a separate scan will be created pointing
# to a node-role specified in roles
roles:
- worker
---
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-compliance-requirements
profiles:
# Node checks
- name: rhcos4-e8
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
# Cluster checks
- name: ocp4-e8
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: my-companys-constraints
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
----

Both `ScanSetting` and `ScanSettingBinding` objects are handled by the same controller tagged with `logger=scansettingbindingctrl`. These objects have no status. Any issues are communicated in the form of events:

[source,terminal]
----
Events:
  Type    Reason        Age    From                    Message
  ----    ------        ----   ----                    -------
  Normal  SuiteCreated  9m52s  scansettingbindingctrl  ComplianceSuite openshift-compliance/my-companys-compliance-requirements created
----

Now a `ComplianceSuite` object is created. The flow continues to reconcile the newly created `ComplianceSuite`.
== ComplianceSuite lifecycle and debugging
The `ComplianceSuite` CR is a wrapper around `ComplianceScan` CRs. The `ComplianceSuite` CR is handled by a controller tagged with `logger=suitectrl`.
This controller handles creating Scans from a Suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a Suite is set to execute periodically, the `suitectrl` also handles creating a `CronJob` CR that re-runs the Scans in the Suite after the initial run is done:

[source,terminal]
----
$ oc get cronjobs
----

.Example output
[source,terminal]
----
NAME                                           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
my-companys-compliance-requirements-rerunner   0 1 * * *   False     0        <none>          151m
----

For the most important issues, Events are emitted. View them with `oc describe compliancesuites/$name`. The Suite objects also have a Status subresource that is updated when any of the Scan objects that belong to the suite update their Status subresource. After all expected scans are created, control is passed to the scan controller.
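For example, using the suite name shown in the `SuiteCreated` event above:

[source,terminal]
----
$ oc describe compliancesuites/my-companys-compliance-requirements -n openshift-compliance
----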
== ComplianceScan lifecycle and debugging
The `ComplianceScan` CRs are handled by the `scanctrl` controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:

=== Pending phase
The scan is validated for correctness in this phase. If some parameters, such as storage size, are invalid, the scan transitions to DONE with an ERROR result; otherwise, it proceeds to the Launching phase.
=== Launching phase
In this phase, several `ConfigMap` objects are created that contain either the environment for the scanner pods or directly the script that the scanner pods will evaluate. List the `ConfigMap` objects:

[source,terminal]
----
$ oc get cm -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=
----

These `ConfigMap` objects will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the `ConfigMap` objects is the way to go. Afterwards, a `PersistentVolumeClaim` is created per scan to store the raw ARF results:

[source,terminal]
----
$ oc get pvc -lcompliance.openshift.io/scan-name=<scan_name>
----

The PVCs are mounted by a per-scan `ResultServer` deployment. A `ResultServer` is a simple HTTP server to which the individual scanner pods upload the full ARF results. Each server can run on a different node. The full ARF results might be very large, and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the `ResultServer` deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the `ResultServer` is protected by mutual TLS.

Finally, the scanner pods are launched in this phase; one scanner pod for a `Platform` scan instance and one scanner pod per matching node for a `node` scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the `ComplianceScan` name:

[source,terminal]
----
$ oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels
----

.Example output
[source,terminal]
----
NAME                                                              READY   STATUS      RESTARTS   AGE   LABELS
rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod   0/2     Completed   0          39m   compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner
----

At this point, the scan proceeds to the Running phase.
=== Running phase
The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase:

* *init container*: There is one init container called `content-container`. It runs the *contentImage* container and executes a single command that copies the *contentFile* to the `/content` directory shared with the other containers in this pod.

* *scanner*: This container runs the scan. For node scans, the container mounts the node filesystem as `/host` and mounts the content delivered by the init container. The container also mounts the `entrypoint` `ConfigMap` created in the Launching phase and executes it. The default script in the entrypoint `ConfigMap` executes OpenSCAP and stores the result files in the `/results` directory shared between the pod's containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the `debug` flag.

* *logcollector*: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the `ResultServer` and separately uploads the XCCDF results along with the scan result and OpenSCAP result code as a `ConfigMap`. These result `ConfigMap` objects are labeled with the scan name (`compliance.openshift.io/scan-name=$scan_name`):
+
[source,terminal]
----
$ oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
Name:         rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
Namespace:    openshift-compliance
Labels:       compliance.openshift.io/scan-name=rhcos4-e8-worker
              complianceoperator.openshift.io/scan-result=
Annotations:  compliance-remediations/processed:
              compliance.openshift.io/scan-error-msg:
              compliance.openshift.io/scan-result: NON-COMPLIANT
              OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal

Data
====
exit-code:
----
2
results:
----
<?xml version="1.0" encoding="UTF-8"?>
...
----

Scanner pods for `Platform` scans are similar, except:

* There is one extra init container called `api-resource-collector` that reads the OpenSCAP content provided by the `content-container` init container, figures out which API resources the content needs to examine, and stores those API resources in a shared directory where the `scanner` container reads them from.

* The `scanner` container does not need to mount the host filesystem.

When the scanner pods are done, the scans move on to the Aggregating phase.
=== Aggregating phase
In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result `ConfigMap` objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a `ComplianceRemediation` object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.

When a `ConfigMap` is processed by an aggregator pod, it is labeled with the `compliance-remediations/processed` label. The results of this phase are `ComplianceCheckResult` objects:

[source,terminal]
----
$ oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
NAME                                                  STATUS   SEVERITY
rhcos4-e8-worker-accounts-no-uid-except-zero          PASS     high
rhcos4-e8-worker-audit-rules-dac-modification-chmod   FAIL     medium
----

and `ComplianceRemediation` objects:

[source,terminal]
----
$ oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
NAME                                                  STATE
rhcos4-e8-worker-audit-rules-dac-modification-chmod   NotApplied
rhcos4-e8-worker-audit-rules-dac-modification-chown   NotApplied
rhcos4-e8-worker-audit-rules-execution-chcon          NotApplied
rhcos4-e8-worker-audit-rules-execution-restorecon     NotApplied
rhcos4-e8-worker-audit-rules-execution-semanage       NotApplied
rhcos4-e8-worker-audit-rules-execution-setfiles       NotApplied
----

After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.
=== Done phase
In the final scan phase, the scan resources are cleaned up if needed, and the `ResultServer` deployment is either scaled down, if the scan was one-time, or deleted if the scan is continuous; the next scan instance then recreates the deployment.

It is also possible to trigger a re-run of a scan in the Done phase by annotating it:

[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
----

After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations=true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the `MachineConfigPool` to which the scan maps and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.
== ComplianceRemediation lifecycle and debugging
The example scan has reported some findings. One of the remediations can be enabled by toggling its `apply` attribute to `true`:

[source,terminal]
----
$ oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge
----

The ComplianceRemediation controller (`logger=remediationctrl`) reconciles the modified object. The result of the reconciliation is a change of status of the reconciled remediation object, but also a change of the rendered per-suite `MachineConfig` object that contains all the applied remediations.

The `MachineConfig` object name always begins with `75-` and is named after the scan and the suite:

[source,terminal]
----
$ oc get mc | grep 75-
75-rhcos4-e8-worker-my-companys-compliance-requirements   2.2.0   2m46s
----

The remediations that the `MachineConfig` object currently consists of are listed in its annotations:

[source,terminal]
----
$ oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements
Name:         75-rhcos4-e8-worker-my-companys-compliance-requirements
Labels:       machineconfiguration.openshift.io/role=worker
Annotations:  remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:
----

The ComplianceRemediation controller's algorithm works like this:

* All currently applied remediations are read into an initial remediation set.
* If the reconciled remediation is supposed to be applied, it is added to the set.
* A `MachineConfig` object is rendered from the set and annotated with the names of the remediations in the set. If the set is empty (the last remediation was unapplied), the rendered `MachineConfig` object is removed.
* If and only if the rendered `MachineConfig` is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted).
* Creating or modifying a `MachineConfig` object triggers a reboot of nodes that match the `machineconfiguration.openshift.io/role` label - see the MachineConfig Operator documentation for more details.

The remediation loop ends once the rendered `MachineConfig` is updated, if needed, and the reconciled remediation object status is updated. In this example, applying the remediation triggers a reboot. After the reboot, annotate the scan to re-run it:

[source,terminal]
----
$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
----

The scan will run and finish. Check for the remediation to pass:

[source,terminal]
----
$ oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
NAME                                                  STATUS   SEVERITY
rhcos4-e8-worker-audit-rules-dac-modification-chmod   PASS     medium
----
== Useful labels

Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is labeled with the `compliance.openshift.io/scan-name` label. The workload identifier is labeled with the `workload` label.

The Compliance Operator schedules the following workloads:

* *scanner*: Performs the compliance scan.

* *resultserver*: Stores the raw results for the compliance scan.

* *aggregator*: Aggregates the results, detects inconsistencies, and outputs result objects (check results and remediations).

* *suitererunner*: Tags a suite to be re-run (when a schedule is set).

* *profileparser*: Parses a datastream and creates the appropriate profiles, rules, and variables.

When debug output and logs are required for a certain workload, run:

[source,terminal]
----
$ oc logs -l workload=<workload_name>
----
24 modules/compliance-applying.adoc Normal file
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

[id="compliance-applying_{context}"]
= Applying a remediation

The boolean attribute `spec.apply` controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to `true`:

[source,terminal]
----
$ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":true}}' --type=merge
----

After the Compliance Operator processes the applied remediation, the `status.ApplicationState` attribute changes to *Applied* or to *Error* if incorrect. When a `MachineConfig` remediation is applied, that remediation along with all other applied remediations is rendered into a `MachineConfig` object named `75-$scan-name-$suite-name`. That `MachineConfig` object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a `MachineConfigPool` by an instance of the machine config daemon running on each node.

Note that when the Machine Config Operator applies a new `MachineConfig` object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite `75-$scan-name-$suite-name` `MachineConfig` object. To prevent applying the remediation immediately, you can pause the `MachineConfigPool` by setting the `.spec.paused` attribute of a `MachineConfigPool` to `true`.
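As a sketch, pausing the pool before applying several remediations could look like the following, assuming the remediations target the `worker` `MachineConfigPool`:

[source,terminal]
----
$ oc patch machineconfigpools/worker --patch '{"spec":{"paused":true}}' --type=merge
----

Setting `.spec.paused` back to `false` resumes the rollout.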
The Compliance Operator can apply remediations automatically. Set `autoApplyRemediations: true` in the `ScanSetting` top-level object.

[WARNING]
====
Applying remediations automatically should only be done with careful consideration.
====
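A minimal `ScanSetting` with automatic remediation enabled might look like the following sketch; the `default-auto-apply` name is illustrative, and the other fields follow the examples elsewhere in this document:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-auto-apply
  namespace: openshift-compliance
autoApplyRemediations: true
schedule: '0 1 * * *'
roles:
- worker
----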
43 modules/compliance-custom-storage.adoc Normal file
@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc

[id="compliance-custom-storage_{context}"]
= Setting custom storage size for results
While custom resources such as `ComplianceCheckResult` represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the `etcd` key-value store. Instead, every scan creates a `PersistentVolume` that defaults to 1GB in size. Depending on your environment, you might want to increase the PV size accordingly. This is done using the `rawResultStorage.size` attribute that is exposed in both the `ScanSetting` and `ComplianceScan` resources.

A related parameter is `rawResultStorage.rotation`, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment.

== Using custom result storage values
Because {product-title} can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator tries to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the `rawResultStorage.StorageClassName` attribute.

[IMPORTANT]
====
If your cluster does not specify a default storage class, this attribute must be set.
====

Configure the `ScanSetting` custom resource to use a standard storage class and create `PersistentVolume` objects that are 10GB in size and keep the last 10 results:

.Example ScanSetting CR

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  storageClassName: standard
  rotation: 10
  size: 10Gi
roles:
- worker
- master
scanTolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
schedule: '0 1 * * *'
----
63 modules/compliance-imagestreams.adoc Normal file
@@ -0,0 +1,63 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-manage.adoc

[id="compliance-imagestreams_{context}"]
= Using ImageStreams

The `contentImage` reference points to a valid `ImageStreamTag`, and the Compliance Operator ensures that the content stays up to date automatically.

[NOTE]
====
`ProfileBundle` objects also accept `ImageStream` references.
====

.Example ImageStream
[source,terminal]
----
$ oc get is
----

.Example output
[source,terminal]
----
NAME               IMAGE REPOSITORY                                                                        TAGS     UPDATED
openscap-ocp4-ds   image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds   latest   32 seconds ago
----

.Procedure
. Ensure that the lookup policy is set to local:
+
[source,terminal]
----
$ oc patch is openscap-ocp4-ds \
    -p '{"spec":{"lookupPolicy":{"local":true}}}' \
    --type=merge
imagestream.image.openshift.io/openscap-ocp4-ds patched
----

. Use the name of the `ImageStreamTag` for the `ProfileBundle` by retrieving the `istag` name:
+
[source,terminal]
----
$ oc get istag
NAME                      IMAGE REFERENCE                                                                                                                                          UPDATED
openscap-ocp4-ds:latest   image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds@sha256:46d7ca9b7055fe56ade818ec3e62882cfcc2d27b9bf0d1cbae9f4b6df2710c96   3 minutes ago
----

. Create the `ProfileBundle`:
+
[source,terminal]
----
$ cat << EOF | oc create -f -
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: mybundle
spec:
  contentImage: openscap-ocp4-ds:latest
  contentFile: ssg-rhcos4-ds.xml
EOF
----

This `ProfileBundle` tracks the image, and any changes that are applied to it, such as updating the tag to point to a different hash, are immediately reflected in the `ProfileBundle`.
24 modules/compliance-inconsistent.adoc Normal file
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

[id="compliance-inconsistent_{context}"]
= Inconsistent remediations
The `ScanSetting` lists the node roles that the `ComplianceScan` objects generated from the `ScanSetting` or `ScanSettingBinding` scan. Each node role usually maps to a `MachineConfigPool`.

[IMPORTANT]
====
It is expected that all machines in a `MachineConfigPool` are identical, and all scan results from the nodes in a pool should be identical.
====

If some of the results are different from others, the Compliance Operator creates a `ComplianceCheckResult` object where some of the nodes report as `INCONSISTENT`. These `ComplianceCheckResult` objects are also labeled with `compliance.openshift.io/inconsistent-check`.

Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from it. The most common state is stored in the `compliance.openshift.io/most-common-status` annotation, and the `compliance.openshift.io/inconsistent-source` annotation contains `hostname:status` pairs of check statuses that differ from the most common status. If no common state can be found, all the `hostname:status` pairs are listed in the `compliance.openshift.io/inconsistent-source` annotation.
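To find the affected results, you can filter by the `compliance.openshift.io/inconsistent-check` label:

[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/inconsistent-check
----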
If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible, and correcting the difference between nodes must then be done manually. The `ComplianceScan` must be re-run to get a consistent result by annotating the scan with the `compliance.openshift.io/rescan=` option:

[source,terminal]
----
$ oc annotate compliancescans/<scan_name> \
    compliance.openshift.io/rescan=
----
51 modules/compliance-manual.adoc Normal file
@@ -0,0 +1,51 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

[id="compliance-manual_{context}"]
= Remediating a platform check manually

Checks for Platform scans typically have to be remediated manually by the administrator for two reasons:

* It is not always possible to automatically determine the value that must be set. For example, one of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow.

* Different checks modify different API objects, requiring automated remediation to possess `root` or superuser access to modify objects in the cluster, which is not advised.

.Procedure
. The example below uses the `ocp4-ocp-allowed-registries-for-import` rule, which would fail on a default {product-title} installation. Inspect the rule with `oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml`. The rule limits the registries that users are allowed to import images from by setting the `allowedRegistriesForImport` attribute. The `warning` attribute of the rule also shows the API object checked, so it can be modified to remediate the issue:
+
[source,terminal]
----
$ oc edit image.config.openshift.io/cluster
----
+
.Example output
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  annotations:
    release.openshift.io/create-only: "true"
  creationTimestamp: "2020-09-10T10:12:54Z"
  generation: 2
  name: cluster
  resourceVersion: "363096"
  selfLink: /apis/config.openshift.io/v1/images/cluster
  uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e
spec:
  allowedRegistriesForImport:
  - domainName: registry.redhat.io
status:
  externalRegistryHostnames:
  - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
----

. Re-run the scan:
+
[source,terminal]
----
$ oc annotate compliancescans/<scan_name> \
    compliance.openshift.io/rescan=
----
40 modules/compliance-objects.adoc Normal file
@@ -0,0 +1,40 @@
// Module included in the following assemblies:
|
||||
//
|
||||
// * security/compliance_operator/compliance-operator-advanced.adoc
|
||||
|
||||
[id="compliance-objects_{context}"]
|
||||
= Using the ComplianceSuite and ComplianceScan objects directly
|
||||
|
||||
While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the Suites and Scans, there are valid use cases to define the ComplianceSuites directly:
|
||||
|
||||
* Specifying only a single rule to scan. This can be useful for debugging together with the `debug: true` attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information.
|
||||
|
||||
* Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool.
|
||||
|
||||
* Pointing the Scan to a bespoke ConfigMap with a tailoring file.
|
||||
|
||||
* For testing or development when the overhead of parsing profiles from bundles is not required.
|
||||
|
||||
The following example shows a ComplianceSuite that scans the worker machines with only a single rule:
|
||||
|
||||
[source,yaml]
|
||||
----
|
||||
apiVersion: compliance.openshift.io/v1alpha1
|
||||
kind: ComplianceSuite
|
||||
metadata:
|
||||
name: workers-compliancesuite
|
||||
spec:
|
||||
scans:
|
||||
- name: workers-scan
|
||||
profile: xccdf_org.ssgproject.content_profile_moderate
|
||||
content: ssg-rhcos4-ds.xml
|
||||
contentImage: quay.io/complianceascode/ocp4:latest
|
||||
debug: true
|
||||
rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins
|
||||
nodeSelector:
|
||||
node-role.kubernetes.io/worker: ""
|
||||
----
|
||||
|
||||
The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects.
|
||||
|
||||
To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundles like Rules or Profiles. Those objects contain the `xccdf_org` identifiers you can use to refer to them from a ComplianceSuite.
26
modules/compliance-profilebundle.adoc
Normal file
@@ -0,0 +1,26 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-manage.adoc

[id="compliance-profilebundle_{context}"]
= ProfileBundle example

The bundle object needs two pieces of information: the URL of a container image that contains the compliance content (`contentImage`) and the file that contains the content (`contentFile`). The `contentFile` parameter is relative to the root of the file system. The built-in `rhcos4` `ProfileBundle` is defined in the example below:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: rhcos4
spec:
  contentImage: quay.io/complianceascode/ocp4:latest <1>
  contentFile: ssg-rhcos4-ds.xml <2>
----
<1> Location of the content image.
<2> File containing the compliance content, relative to the image root.

[IMPORTANT]
====
The base image used for the content images must include `coreutils`.
====
159
modules/compliance-profiles.adoc
Normal file
@@ -0,0 +1,159 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-understanding.adoc

[id="compliance_profiles_{context}"]
= Compliance Operator profiles

There are several profiles available as part of the Compliance Operator installation.

View the available profiles:

[source,terminal]
----
$ oc get -n $NAMESPACE profiles.compliance
----

.Example output
[source,terminal]
----
NAME              AGE
ocp4-cis          4h52m
ocp4-cis-node     4h52m
ocp4-e8           4h52m
ocp4-moderate     4h52m
ocp4-ncp          4h52m
rhcos4-e8         4h52m
rhcos4-moderate   4h52m
rhcos4-ncp        4h52m
----

These profiles represent different compliance benchmarks.

View the details of a profile:

[source,terminal]
----
$ oc get -n $NAMESPACE -oyaml profiles.compliance <profile name>
----

.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
description: |-
  This profile contains configuration checks for Red Hat
  Enterprise Linux CoreOS that align to the Australian
  Cyber Security Centre (ACSC) Essential Eight.
  A copy of the Essential Eight in Linux Environments guide can
  be found at the ACSC website: ...
id: xccdf_org.ssgproject.content_profile_e8
kind: Profile
metadata:
  annotations:
    compliance.openshift.io/product: redhat_enterprise_linux_coreos_4
    compliance.openshift.io/product-type: Node
  creationTimestamp: "2020-09-07T11:42:51Z"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: rhcos4
  name: rhcos4-e8
  namespace: openshift-compliance
rules:
- rhcos4-accounts-no-uid-except-zero
- rhcos4-audit-rules-dac-modification-chmod
- rhcos4-audit-rules-dac-modification-chown
- rhcos4-audit-rules-execution-chcon
- rhcos4-audit-rules-execution-restorecon
- rhcos4-audit-rules-execution-semanage
- rhcos4-audit-rules-execution-setfiles
- rhcos4-audit-rules-execution-setsebool
- rhcos4-audit-rules-execution-seunshare
- rhcos4-audit-rules-kernel-module-loading
- rhcos4-audit-rules-login-events
- rhcos4-audit-rules-login-events-faillock
- rhcos4-audit-rules-login-events-lastlog
- rhcos4-audit-rules-login-events-tallylog
- rhcos4-audit-rules-networkconfig-modification
- rhcos4-audit-rules-sysadmin-actions
- rhcos4-audit-rules-time-adjtimex
- rhcos4-audit-rules-time-clock-settime
- rhcos4-audit-rules-time-settimeofday
- rhcos4-audit-rules-time-stime
- rhcos4-audit-rules-time-watch-localtime
- rhcos4-audit-rules-usergroup-modification
- rhcos4-auditd-data-retention-flush
- rhcos4-auditd-freq
- rhcos4-auditd-local-events
- rhcos4-auditd-log-format
- rhcos4-auditd-name-format
- rhcos4-auditd-write-logs
- rhcos4-configure-crypto-policy
- rhcos4-configure-ssh-crypto-policy
- rhcos4-no-empty-passwords
- rhcos4-selinux-policytype
- rhcos4-selinux-state
- rhcos4-service-auditd-enabled
- rhcos4-sshd-disable-empty-passwords
- rhcos4-sshd-disable-gssapi-auth
- rhcos4-sshd-disable-rhosts
- rhcos4-sshd-disable-root-login
- rhcos4-sshd-disable-user-known-hosts
- rhcos4-sshd-do-not-permit-user-env
- rhcos4-sshd-enable-strictmodes
- rhcos4-sshd-print-last-log
- rhcos4-sshd-set-loglevel-info
- rhcos4-sshd-use-priv-separation
- rhcos4-sysctl-kernel-dmesg-restrict
- rhcos4-sysctl-kernel-kexec-load-disabled
- rhcos4-sysctl-kernel-kptr-restrict
- rhcos4-sysctl-kernel-randomize-va-space
- rhcos4-sysctl-kernel-unprivileged-bpf-disabled
- rhcos4-sysctl-kernel-yama-ptrace-scope
- rhcos4-sysctl-net-core-bpf-jit-harden
title: Australian Cyber Security Centre (ACSC) Essential Eight
----

View the rules within a desired profile:

[source,terminal]
----
$ oc get -n $NAMESPACE -oyaml rules.compliance <rule name>
----

.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
description: '<code>auditd</code><code>augenrules</code><code>.rules</code><code>/etc/audit/rules.d</code><pre>-w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins</pre><code>auditd</code><code>auditctl</code><code>/etc/audit/audit.rules</code><pre>-w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins</pre>file in order to watch for unattempted manual edits of files involved in storing logon events:'
id: xccdf_org.ssgproject.content_rule_audit_rules_login_events
kind: Rule
metadata:
  annotations:
    compliance.openshift.io/rule: audit-rules-login-events
    control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a)
    policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a)
    policies.open-cluster-management.io/standards: NIST-800-53
  creationTimestamp: "2020-09-07T11:43:03Z"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: rhcos4
  name: rhcos4-audit-rules-login-events
  namespace: openshift-compliance
rationale: |-
  Manual editing of these files may indicate nefarious activity,
  such as an attacker attempting to remove evidence of an
  intrusion.
severity: medium
title: Record Attempts to Alter Logon and Logout Events
warning: |-
  <ul><li><code>audit_rules_login_events_tallylog</code></li>
  <li><code>audit_rules_login_events_faillock</code></li>
  <li><code>audit_rules_login_events_lastlog</code></li></ul>
  This rule checks for multiple syscalls related to login
  events and was written with DISA STIG in mind.
  Other policies should use separate rule for
  each syscall that needs to be checked.
----

Each profile has the product name that it applies to added as a prefix to the profile's name. `ocp4-e8` applies the Essential Eight benchmark to the {product-title} product, while `rhcos4-e8` applies the Essential Eight benchmark to the Red Hat CoreOS product.
39
modules/compliance-raw-tailored.adoc
Normal file
@@ -0,0 +1,39 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc

[id="compliance-raw-tailored_{context}"]
= Using raw tailored profiles

While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file and can reuse it.

The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is the name of a ConfigMap, which must contain a key called `tailoring.xml`; the value of this key is the tailoring contents.

.Procedure

. Create the ConfigMap from a file:
+
[source,terminal]
----
$ oc create configmap my-scan-tailoring --from-file=tailoring.xml=/path/to/the/tailoringFile.xml
----

. Reference the tailoring file in a Scan that belongs to a Suite:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: workers-compliancesuite
spec:
  debug: true
  scans:
    - name: workers-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      debug: true
      tailoringConfigMap:
        name: my-scan-tailoring
      nodeSelector:
        node-role.kubernetes.io/worker: ""
----
13
modules/compliance-rescan.adoc
Normal file
@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc

[id="compliance-rescan_{context}"]
= Performing a rescan

Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single scan, annotate the scan with the `compliance.openshift.io/rescan=` option:

[source,terminal]
----
$ oc annotate compliancescans/<scan_name> \
    compliance.openshift.io/rescan=
----
84
modules/compliance-results.adoc
Normal file
@@ -0,0 +1,84 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-raw-results.adoc

[id="compliance-results_{context}"]
= Obtaining Compliance Operator raw results from a Persistent Volume

The Compliance Operator generates and stores the raw results in a Persistent Volume. These results are in Asset Reporting Format (ARF).

.Procedure

. Explore the ComplianceSuite object:
+
[source,terminal]
----
$ oc get compliancesuites nist-moderate -o json \
    | jq '.status.scanStatuses[].resultsStorage'
{
  "name": "rhcos4-moderate-worker",
  "namespace": "openshift-compliance"
}
{
  "name": "rhcos4-moderate-master",
  "namespace": "openshift-compliance"
}
----
+
This shows the Persistent Volume Claims where the raw results are accessible.

. Verify the raw data location by using the name and namespace of one of the results:
+
[source,terminal]
----
$ oc get pvc -n openshift-compliance rhcos4-moderate-worker
----
+
.Example output
[source,terminal]
----
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rhcos4-moderate-worker   Bound    pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a   1Gi        RWO            gp2            92m
----

. Fetch the raw results by spawning a pod that mounts the volume and copying the results:
+
.Example pod
[source,yaml]
----
apiVersion: "v1"
kind: Pod
metadata:
  name: pv-extract
spec:
  containers:
    - name: pv-extract-pod
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "3000"]
      volumeMounts:
        - mountPath: "/workers-scan-results"
          name: workers-scan-vol
  volumes:
    - name: workers-scan-vol
      persistentVolumeClaim:
        claimName: rhcos4-moderate-worker
----

. After the pod is running, download the results:
+
[source,terminal]
----
$ oc cp pv-extract:/workers-scan-results .
----
+
[IMPORTANT]
====
Spawning a pod that mounts the Persistent Volume keeps the claim as `Bound`. If the volume's access mode is `ReadWriteOnce`, the volume is only mountable by one pod at a time. You must delete the pod upon completion, or it will not be possible for the Operator to schedule a pod and continue storing results in this location.
====

. After the extraction is complete, the pod can be deleted:
+
[source,terminal]
----
$ oc delete pod pv-extract
----
56
modules/compliance-review.adoc
Normal file
@@ -0,0 +1,56 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

[id="compliance-review_{context}"]
= Reviewing a remediation

Review both the `ComplianceRemediation` object and the `ComplianceCheckResult` object that owns the remediation. The `ComplianceCheckResult` object contains human-readable descriptions of what the check does and the hardening that the check is trying to enforce, as well as other `metadata` like the severity and the associated security controls. The `ComplianceRemediation` object represents a way to fix the problem described in the `ComplianceCheckResult`.

Below is an example of a check and a remediation called `sysctl-net-ipv4-conf-all-accept-redirects`. This example is redacted to only show `spec` and `status` and omits `metadata`:

[source,yaml]
----
spec:
  apply: false
  current:
    object:
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      spec:
        config:
          ignition:
            version: 2.2.0
          storage:
            files:
            - contents:
                source: data:,net.ipv4.conf.all.accept_redirects%3D0
              filesystem: root
              mode: 420
              path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf
  outdated: {}
status:
  applicationState: NotApplied
----

The remediation payload is stored in the `spec.current` attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of object (for example, a ConfigMap or a Secret), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text.

To see exactly what the remediation does when applied, note that the MachineConfig object contents use the Ignition objects for the configuration. Refer to the link:https://coreos.com/ignition/docs/latest/configuration-v2_2.html[Ignition specification] for further information about the format. In our example, the `spec.config.storage.files[0].path` attribute specifies the file that is being created by this remediation (`/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf`) and the `spec.config.storage.files[0].contents.source` attribute specifies the contents of that file.

[NOTE]
====
The contents of the files are URL-encoded.
====

Use the following Python script to view the contents:

[source,terminal]
----
$ echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))"
----

.Example output
[source,terminal]
----
net.ipv4.conf.all.accept_redirects=0
----
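Conversely, if you author a similar `MachineConfig`-based remediation by hand, the file contents must be URL-encoded into the Ignition `data:,` source. A minimal sketch using only the Python standard library (the helper name is illustrative):

```python
import urllib.parse

def to_data_url(contents: str) -> str:
    # URL-encode the file contents into an Ignition "data:," source
    return "data:," + urllib.parse.quote(contents, safe="")

print(to_data_url("net.ipv4.conf.all.accept_redirects=0"))
# data:,net.ipv4.conf.all.accept_redirects%3D0
```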
90
modules/compliance-tailored-profiles.adoc
Normal file
@@ -0,0 +1,90 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-tailor.adoc

[id="compliance-tailored-profiles_{context}"]
= Using tailored profiles

While the `TailoredProfile` CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file and can reuse it.

The `ComplianceSuite` object contains an optional `TailoringConfigMap` attribute that you can point to a custom tailoring file. The value of the `TailoringConfigMap` attribute is the name of a ConfigMap, which must contain a key called `tailoring.xml`; the value of this key is the tailoring contents.

.Procedure

. Browse the available rules for the {op-system-first} `ProfileBundle`:
+
[source,terminal]
----
$ oc get rules.compliance \
    -l compliance.openshift.io/profile-bundle=rhcos4
----

. Browse the available variables in the same `ProfileBundle`:
+
[source,terminal]
----
$ oc get variables.compliance \
    -l compliance.openshift.io/profile-bundle=rhcos4
----

. Choose which rules you want to add to the `TailoredProfile`. This `TailoredProfile` example disables two rules and changes one value. Use the `rationale` value to describe why these changes were made:
+
.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: nist-moderate-modified
spec:
  extends: rhcos4-moderate
  title: My modified NIST moderate profile
  disableRules:
  - name: rhcos4-file-permissions-node-config
    rationale: This breaks X application.
  - name: rhcos4-account-disable-post-pw-expiration
    rationale: No need to check this as it comes from the IdP
  setValues:
  - name: rhcos4-var-selinux-state
    rationale: Organizational requirements
    value: permissive
----

. Add the profile to the `ScanSettingBinding` object:
+
[source,terminal]
----
$ cat nist-moderate-modified.yaml
----
+
.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: nist-moderate-modified
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-moderate
- apiGroup: compliance.openshift.io/v1alpha1
  kind: TailoredProfile
  name: nist-moderate-modified
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default
----

. Create the `ScanSettingBinding`:
+
[source,terminal]
----
$ oc create -n $NAMESPACE -f nist-moderate-modified.yaml
----
+
.Example output
[source,terminal]
----
scansettingbinding.compliance.openshift.io/nist-moderate-modified created
----
22
modules/compliance-unapplying.adoc
Normal file
@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

[id="compliance-unapplying_{context}"]
= Unapplying a remediation

It might be required to unapply a remediation that was previously applied.

.Procedure

. Toggle the `apply` flag to `false`:
+
[source,terminal]
----
$ oc patch complianceremediations/sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":false}}' --type=merge
----

. The remediation status changes to `NotApplied` and the composite `MachineConfig` object is re-rendered to not include the remediation.
+
[IMPORTANT]
====
All affected nodes with the remediation will be rebooted.
====
24
modules/compliance-update.adoc
Normal file
@@ -0,0 +1,24 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-manage.adoc

[id="compliance-update_{context}"]
= Updating security content

Security content is shipped as container images that the `ProfileBundle` objects refer to. To accurately track updates to `ProfileBundles` and the custom resources parsed from the bundles, such as Rules or Profiles, identify the container image with the compliance content using a digest instead of a tag:

.Example output
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: rhcos4
spec:
  contentImage: quay.io/user/ocp4-openscap-content@sha256:a1749f5150b19a9560a5732fe48a89f07bffc79c0832aa8c49ee5504590ae687 <1>
  contentFile: ssg-rhcos4-ds.xml
----

<1> Security container image.

Each `ProfileBundle` is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles.
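As a quick sanity check outside the cluster, you can verify that a `contentImage` value is pinned by digest rather than by tag. A minimal sketch (the helper name is illustrative, not part of any Operator API):

```python
import re

def is_digest_pinned(content_image: str) -> bool:
    # A digest-pinned image reference ends with "@sha256:" followed by
    # 64 hexadecimal characters; a tag reference (":latest") does not.
    return re.search(r"@sha256:[0-9a-f]{64}$", content_image) is not None

print(is_digest_pinned("quay.io/complianceascode/ocp4:latest"))
# False
print(is_digest_pinned(
    "quay.io/user/ocp4-openscap-content@sha256:"
    "a1749f5150b19a9560a5732fe48a89f07bffc79c0832aa8c49ee5504590ae687"))
# True
```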
51
modules/compliance-updating.adoc
Normal file
@@ -0,0 +1,51 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc

[id="compliance-updating_{context}"]
= Updating remediations

When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator will keep the old version of the remediation applied. The {product-title} administrator is also notified of the new version to review and apply. A `ComplianceRemediation` object that had been applied earlier, but whose contents were updated, changes its status to *Outdated*. The outdated objects are labeled so that they can be searched for easily.

The previously applied remediation contents are stored in the `spec.outdated` attribute of a `ComplianceRemediation` object and the new updated contents are stored in the `spec.current` attribute. After updating the content to a newer version, the administrator needs to review the remediation. As long as the `spec.outdated` attribute exists, it is used to render the resulting `MachineConfig` object. After the `spec.outdated` attribute is removed, the Compliance Operator re-renders the resulting `MachineConfig` object, which causes the Operator to push the configuration to the nodes.

.Procedure

. Search for any outdated remediations:
+
[source,terminal]
----
$ oc get complianceremediations -lcomplianceoperator.openshift.io/outdated-remediation=
----
+
.Example output
[source,terminal]
----
NAME                              STATE
workers-scan-no-empty-passwords   Outdated
----
+
The currently applied remediation is stored in the `spec.outdated` attribute and the new, unapplied remediation is stored in the `spec.current` attribute. If you are satisfied with the new version, remove the `spec.outdated` field.

. Apply the newer version of the remediation:
+
[source,terminal]
----
$ oc patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{"op":"remove", "path":"/spec/outdated"}]'
----

. The remediation state switches from `Outdated` to `Applied`:
+
[source,terminal]
----
$ oc get complianceremediations workers-scan-no-empty-passwords
----
+
.Example output
[source,terminal]
----
NAME                              STATE
workers-scan-no-empty-passwords   Applied
----

. The nodes will apply the newer remediation version and reboot.
@@ -0,0 +1,16 @@
[id="compliance-operator-advanced"]
= Performing advanced Compliance Operator tasks
include::modules/common-attributes.adoc[]
:context: compliance-advanced

toc::[]

The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.

include::modules/compliance-objects.adoc[leveloffset=+1]

include::modules/compliance-raw-tailored.adoc[leveloffset=+1]

include::modules/compliance-rescan.adoc[leveloffset=+1]

include::modules/compliance-custom-storage.adoc[leveloffset=+1]
14
security/compliance_operator/compliance-operator-manage.adoc
Normal file
@@ -0,0 +1,14 @@
[id="compliance-operator-manage"]
= Managing the Compliance Operator
include::modules/common-attributes.adoc[]
:context: managing-compliance

toc::[]

This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom `ProfileBundle` object.

include::modules/compliance-update.adoc[leveloffset=+1]

include::modules/compliance-imagestreams.adoc[leveloffset=+1]

include::modules/compliance-profilebundle.adoc[leveloffset=+1]
@@ -0,0 +1,8 @@
[id="compliance-operator-raw-results"]
= Retrieving Compliance Operator raw results
include::modules/common-attributes.adoc[]
:context: compliance-raw-results

When proving compliance for your {product-title} cluster, you might need to provide the scan results for auditing purposes.

include::modules/compliance-results.adoc[leveloffset=+1]
@@ -0,0 +1,20 @@
[id="compliance-operator-remediation"]
= Managing Compliance Operator remediation
include::modules/common-attributes.adoc[]
:context: compliance-remediation

toc::[]

Each `ComplianceCheckResult` represents a result of one compliance rule check. If the rule can be remediated automatically, a `ComplianceRemediation` object with the same name, owned by the `ComplianceCheckResult`, is created. Unless requested, the remediations are not applied automatically, which gives an {product-title} administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified.

include::modules/compliance-review.adoc[leveloffset=+1]

include::modules/compliance-applying.adoc[leveloffset=+1]

include::modules/compliance-manual.adoc[leveloffset=+1]

include::modules/compliance-updating.adoc[leveloffset=+1]

include::modules/compliance-unapplying.adoc[leveloffset=+1]

include::modules/compliance-inconsistent.adoc[leveloffset=+1]
15
security/compliance_operator/compliance-operator-tailor.adoc
Normal file
@@ -0,0 +1,15 @@
[id="compliance-operator-tailor"]
= Tailoring the Compliance Operator
include::modules/common-attributes.adoc[]
:context: compliance-tailor

While the Compliance Operator comes with ready-to-use profiles, you might need to modify them to fit your organization's needs and requirements. The process of modifying a profile is called _tailoring_.

The Compliance Operator provides an object to easily tailor profiles called a `TailoredProfile`. This assumes that you are extending a pre-existing profile, and allows you to enable and disable rules and values that come from the ProfileBundle.

[NOTE]
====
You will only be able to use rules and variables that are available as part of the ProfileBundle that the Profile you want to extend belongs to.
====

include::modules/compliance-tailored-profiles.adoc[leveloffset=+1]
@@ -0,0 +1,44 @@
[id="compliance-operator-troubleshooting"]
= Troubleshooting the Compliance Operator
include::modules/common-attributes.adoc[]
:context: compliance-troubleshooting

toc::[]

This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips:

* The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command:
+
[source,terminal]
----
$ oc get events -n openshift-compliance
----
+
Or view events for an object like a scan using the command:
+
[source,terminal]
----
$ oc describe compliancescan/$scan_name
----

* The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a `ComplianceRemediation` cannot be applied, view the messages from the `remediationctrl` controller. You can filter the messages from a single controller by parsing with `jq`:
+
[source,terminal]
----
$ oc logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == "remediationctrl")'
----

* The timestamps are logged as seconds since the UNIX epoch in UTC. To convert them to a human-readable date, use `date -d @timestamp --utc`. For example:
+
[source,terminal]
----
$ date -d @1596184628.955853 --utc
----

* Many CustomResources, most importantly `ComplianceSuite` and `ScanSetting`, allow the `debug` option to be set. Enabling this option increases the verbosity of the OpenSCAP scanner pods, as well as some other helper pods.

* If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule. Find the rule ID from the corresponding `ComplianceCheckResult` object and use it as the `rule` attribute value in a Scan CR. Then, with the `debug` option enabled, the `scanner` container logs in the scanner pod show the raw OpenSCAP logs.

include::modules/compliance-anatomy.adoc[leveloffset=+1]

include::modules/support.adoc[leveloffset=+1]
@@ -0,0 +1,15 @@
[id="understanding-compliance-operator"]
= Understanding the Compliance Operator
include::modules/common-attributes.adoc[]
:context: understanding-compliance

toc::[]

The Compliance Operator lets {product-title} administrators describe the desired compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of {product-title}, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content.

[IMPORTANT]
====
The Compliance Operator is available for Red Hat CoreOS deployments only.
====

include::modules/compliance-profiles.adoc[leveloffset=+1]