mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Updating Compliance Operator docs

This commit is contained in:
Ashley Hardin
2021-02-22 17:17:17 -05:00
committed by openshift-cherrypick-robot
parent 2324fc887c
commit a6bbfa7ecc
16 changed files with 438 additions and 22 deletions

View File

@@ -576,16 +576,20 @@ Topics:
- Name: Compliance Operator
Dir: compliance_operator
Topics:
- Name: Installing the Compliance Operator
File: compliance-operator-installation
- Name: Compliance Operator scans
File: compliance-scans
- Name: Understanding the Compliance Operator
File: compliance-operator-understanding
- Name: Managing the Compliance Operator
File: compliance-operator-manage
- Name: Managing Compliance Operator remediation
File: compliance-operator-remediation
- Name: Tailoring the Compliance Operator
File: compliance-operator-tailor
- Name: Retrieving Compliance Operator raw results
File: compliance-operator-raw-results
- Name: Managing Compliance Operator remediation
File: compliance-operator-remediation
- Name: Performing advanced Compliance Operator tasks
File: compliance-operator-advanced
- Name: Troubleshooting the Compliance Operator

View File

@@ -2,17 +2,22 @@
//
// * security/compliance_operator/compliance-operator-troubleshooting.adoc
[id="compliance-anatomy_{context}"]
= Anatomy of a scan
The following sections outline the components and stages of Compliance Operator scans.
[id="compliance-anatomy-compliance-sources_{context}"]
== Compliance sources
The compliance content is stored in `Profile` objects that are generated from a `ProfileBundle` object. The Compliance Operator creates a `ProfileBundle` object for the cluster and another for the cluster nodes.
[source,terminal]
----
$ oc get profilebundle.compliance
----
[source,terminal]
----
$ oc get profile.compliance
----
@@ -21,12 +26,25 @@ The `ProfileBundle` objects are processed by deployments labeled with the `Bundl
[source,terminal]
----
$ oc logs -lprofile-bundle=ocp4 -c profileparser
----
[source,terminal]
----
$ oc get deployments,pods -lprofile-bundle=ocp4
----
[source,terminal]
----
$ oc logs pods/<pod-name>
----
[source,terminal]
----
$ oc describe pod/<pod-name> -c profileparser
----
[id="compliance-anatomy-scan-setting-scan-binding-lifecycle_{context}"]
== The ScanSetting and ScanSettingBinding object lifecycle and debugging
With valid compliance content sources, the high-level `ScanSetting` and `ScanSettingBinding` objects can be used to generate `ComplianceSuite` and `ComplianceScan` objects:
[source,yaml]
@@ -72,7 +90,8 @@ Events:
Now a `ComplianceSuite` object is created. The flow continues to reconcile the newly created `ComplianceSuite`.
[id="compliance-suite-lifecycle-debugging_{context}"]
== ComplianceSuite custom resource lifecycle and debugging
The `ComplianceSuite` CR is a wrapper around `ComplianceScan` CRs. The `ComplianceSuite` CR is handled by a controller tagged with `logger=suitectrl`.
This controller handles creating scans from a suite, and reconciling and aggregating individual scan statuses into a single suite status. If a suite is set to execute periodically, the `suitectrl` also handles creating a `CronJob` CR that re-runs the scans in the suite after the initial run is done:
@@ -90,12 +109,15 @@ NAME SCHEDULE SUSPEND ACTIVE LA
For the most important issues, events are emitted. View them with `oc describe compliancesuites/<name>`. The `Suite` objects also have a `Status` subresource that is updated when any of the `Scan` objects that belong to the suite update their `Status` subresource. After all expected scans are created, control is passed to the scan controller.
[id="compliance-scan-lifecycle-debugging_{context}"]
== ComplianceScan custom resource lifecycle and debugging
The `ComplianceScan` CRs are handled by the `scanctrl` controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases:
[id="compliance-scan-pending-phase_{context}"]
=== Pending phase
The scan is validated for correctness in this phase. If some parameters, such as the storage size, are invalid, the scan transitions to the DONE phase with the ERROR result; otherwise, it proceeds to the Launching phase.
[id="compliance-scan-launching-phase_{context}"]
=== Launching phase
In this phase, several config maps are created that contain either the environment for the scanner pods or the script that the scanner pods evaluate. List the config maps:
@@ -128,6 +150,7 @@ rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Comple
At this point, the scan proceeds to the Running phase.
----
[id="compliance-scan-running-phase_{context}"]
=== Running phase
The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase:
@@ -139,7 +162,12 @@ The running phase waits until the scanner pods finish. The following terms and p
+
[source,terminal]
----
$ oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
----
+
.Example output
[source,terminal]
----
Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod
Namespace: openshift-compliance
Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker
@@ -169,6 +197,7 @@ Scanner pods for `Platform` scans are similar, except:
When the scanner pods are done, the scans move on to the Aggregating phase.
[id="compliance-scan-aggregating-phase_{context}"]
=== Aggregating phase
In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result `ConfigMap` objects, read the results, and create the corresponding Kubernetes object for each check result. If the check failure can be automatically remediated, a `ComplianceRemediation` object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.
@@ -177,6 +206,11 @@ When a config map is processed by an aggregator pod, it is labeled the `complian
[source,terminal]
----
$ oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
----
.Example output
[source,terminal]
----
NAME STATUS SEVERITY
rhcos4-e8-worker-accounts-no-uid-except-zero PASS high
rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium
@@ -186,6 +220,11 @@ and `ComplianceRemediation` objects:
[source,terminal]
----
$ oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker
----
.Example output
[source,terminal]
----
NAME STATE
rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied
rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied
@@ -197,6 +236,7 @@ rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied
After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.
[id="compliance-scan-done-phase_{context}"]
=== Done phase
In the final scan phase, the scan resources are cleaned up if needed and the `ResultServer` deployment is either scaled down, if the scan was one-time, or deleted, if the scan is continuous; the next scan instance then recreates the deployment.
@@ -209,7 +249,8 @@ $ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=
After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with `autoApplyRemediations: true`. The {product-title} administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the `ComplianceSuite` controller takes over in the Done phase, pauses the machine config pool to which the scan maps, and applies all the remediations in one go. If a remediation is applied, the `ComplianceRemediation` controller takes over.
[id="compliance-remediation-lifecycle-debugging_{context}"]
== ComplianceRemediation controller lifecycle and debugging
The example scan has reported some findings. One of the remediations can be enabled by toggling its `apply` attribute to `true`:
[source,terminal]
@@ -224,6 +265,11 @@ The `MachineConfig` object always begins with `75-` and is named after the scan
[source,terminal]
----
$ oc get mc | grep 75-
----
.Example output
[source,terminal]
----
75-rhcos4-e8-worker-my-companys-compliance-requirements 2.2.0 2m46s
----
@@ -232,6 +278,11 @@ The remediations the `mc` currently consists of are listed in the machine config
[source,terminal]
----
$ oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements
----
.Example output
[source,terminal]
----
Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements
Labels: machineconfiguration.openshift.io/role=worker
Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:
@@ -257,16 +308,21 @@ The scan will run and finish. Check for the remediation to pass:
[source,terminal]
----
$ oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
----
.Example output
[source,terminal]
----
NAME STATUS SEVERITY
rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium
----
[id="compliance-operator-useful-labels_{context}"]
== Useful labels
Each pod that is spawned by the Compliance Operator is labeled specifically with the scan it belongs to and the work it does. The scan is identified by the `compliance.openshift.io/scan-name` label. The workload is identified by the `workload` label.
The Compliance Operator schedules the following workloads:
* *scanner*: Performs the compliance scan.
@@ -282,5 +338,5 @@ When debugging and logs are required for a certain workload, run:
[source,terminal]
----
$ oc logs -l workload=<workload_name> -c <container_name>
----

View File

@@ -0,0 +1,17 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc
[id="compliance-apply-remediations-from-scans_{context}"]
= Applying remediations generated by suite scans
Although you can use the `autoApplyRemediations` boolean parameter in a `ComplianceSuite` object, you can alternatively annotate the object with `compliance.openshift.io/apply-remediations`. This allows the Operator to apply all of the created remediations.
.Procedure
* Apply the `compliance.openshift.io/apply-remediations` annotation by running:
[source,terminal]
----
$ oc annotate compliancesuites/<suite_name> compliance.openshift.io/apply-remediations=
----

View File

@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-advanced.adoc
[id="automatically-update-remediations_{context}"]
= Automatically update remediations
In some cases, a scan with newer content might mark remediations as `OUTDATED`. As an administrator, you can apply the `compliance.openshift.io/remove-outdated` annotation to apply new remediations and remove the outdated ones.
.Procedure
* Apply the `compliance.openshift.io/remove-outdated` annotation:
[source,terminal]
----
$ oc annotate compliancesuites/<suite_name> compliance.openshift.io/remove-outdated=
----
Alternatively, set the `autoUpdateRemediations` flag in a `ScanSetting` or `ComplianceSuite` object to update the remediations automatically.
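For reference, here is a minimal sketch of how the flag might be set in a `ScanSetting` object. The field placement mirrors the `default-auto-apply` scan setting that the Operator creates; the object name is illustrative:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: my-scansetting # illustrative name
  namespace: openshift-compliance
autoUpdateRemediations: true # apply updated remediations and remove outdated ones automatically
----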

View File

@@ -8,7 +8,7 @@ While the custom resources such as `ComplianceCheckResult` represent an aggregat
A related parameter is `rawResultStorage.rotation`, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100 MB per raw ARF scan report, you can calculate the right PV size for your environment.
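As an illustrative back-of-the-envelope sizing check (not an official formula), the shell arithmetic below multiplies the default rotation by the 100 MB per-report estimate; scale it by the number of scans your bindings generate:

[source,terminal]
----
$ echo "$((3 * 100)) MB" <1>
----
<1> Retained scans (rotation, 3) multiplied by the ~100 MB estimate per raw ARF report.

.Example output
[source,terminal]
----
300 MB
----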
[id="using-custom-result-storage-values_{context}"]
== Using custom result storage values
Because {product-title} can be deployed in a variety of public clouds or on bare metal, the Compliance Operator cannot determine the available storage configurations. By default, the Compliance Operator tries to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the `rawResultStorage.storageClassName` attribute.
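A minimal sketch of the relevant `ScanSetting` fields follows; the class name `standard` is an assumption, so use a storage class that actually exists in your cluster:

[source,yaml]
----
rawResultStorage:
  storageClassName: standard # assumed example; must match an existing StorageClass
  size: 2Gi
----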

View File

@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-remediation.adoc
[id="filtering-failed-compliance-check-results_{context}"]
= Filters for failed compliance check results
By default, the `ComplianceCheckResult` objects are labeled with several useful labels that allow you to query the checks and decide on the next steps after the results are generated.
List checks that belong to a specific suite:
[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/suite=example-compliancesuite
----
List checks that belong to a specific scan:
[source,terminal]
----
$ oc get compliancecheckresults -l compliance.openshift.io/scan=example-compliancescan
----
Not all `ComplianceCheckResult` objects create `ComplianceRemediation` objects. Only `ComplianceCheckResult` objects that can be remediated automatically do. A `ComplianceCheckResult` object has a related remediation if it is labeled with the `compliance.openshift.io/automated-remediation` label. The name of the remediation is the same as the name of the check.
List all failing checks that can be remediated automatically:
[source,terminal]
----
$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'
----
List all failing checks that must be remediated manually:
[source,terminal]
----
$ oc get compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'
----
The manual remediation steps are typically stored in the `description` attribute in the `ComplianceCheckResult` object.
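For orientation, here is a trimmed sketch of what such an object might contain. The object name and field values are invented for illustration; check `oc explain compliancecheckresult` for the authoritative schema:

[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceCheckResult
metadata:
  name: example-compliancescan-some-check # illustrative name
  labels:
    compliance.openshift.io/scan-name: example-compliancescan
status: FAIL
severity: medium
description: |-
  Human-readable manual remediation steps would appear here.
----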

View File

@@ -4,18 +4,18 @@
[id="compliance-inconsistent_{context}"]
= Inconsistent remediations
The `ScanSetting` object lists the node roles that the compliance scans generated from the `ScanSetting` or `ScanSettingBinding` objects would scan. Each node role usually maps to a machine config pool.
[IMPORTANT]
====
It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a pool should be identical.
====
If some of the results are different from others, the Compliance Operator flags a `ComplianceCheckResult` object where some of the nodes will report as `INCONSISTENT`. All `ComplianceCheckResult` objects are also labeled with `compliance.openshift.io/inconsistent-check`.
Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from the common state. The most common state is stored in the `compliance.openshift.io/most-common-status` annotation and the annotation `compliance.openshift.io/inconsistent-source` contains pairs of `hostname:status` of check statuses that differ from the most common status. If no common state can be found, all the `hostname:status` pairs are listed in the `compliance.openshift.io/inconsistent-source` annotation.
If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The compliance scan must be re-run to get a consistent result by annotating the scan with the `compliance.openshift.io/rescan=` option:
[source,terminal]
----

View File

@@ -0,0 +1,94 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-installation.adoc
[id="installing-compliance-operator-cli_{context}"]
= Installing the Compliance Operator using the CLI
.Prerequisites
* You must have `admin` privileges.
.Procedure
. Define a `Namespace` object:
+
.Example `Namespace` object
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-compliance
----
. Create the `Namespace` object by running:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
. Define an `OperatorGroup` object:
+
.Example `OperatorGroup` object
[source,yaml]
----
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  targetNamespaces:
  - openshift-compliance
----
. Create the `OperatorGroup` object by running:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
. Set the {product-title} major and minor version as an environment variable, which is used as the channel value in the next step:
+
[source,terminal]
----
$ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)
----
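+
To see what that pipeline extracts, you can run it against sample `oc version` output. This is an illustrative check only; the version string is invented:
+
[source,terminal]
----
$ echo "openshiftVersion: 4.6.16" | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1
----
+
.Example output
[source,terminal]
----
4.6
----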
. Define a `Subscription` object:
+
.Example `Subscription` object
[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator-sub
  namespace: openshift-compliance
spec:
  channel: "${OC_VERSION}"
  installPlanApproval: Automatic
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----
+
Because shell variables are not expanded in a file passed to `oc create -f`, replace `${OC_VERSION}` with the value set in the previous step.
. Create the `Subscription` object by running:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----
.Verification
. Verify that the installation succeeded by inspecting the cluster service version (CSV):
+
[source,terminal]
----
$ oc get csv -n openshift-compliance
----
. Verify that the Compliance Operator is up and running:
+
[source,terminal]
----
$ oc get deploy -n openshift-compliance
----

View File

@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-operator-installation.adoc
[id="installing-compliance-operator-web-console_{context}"]
= Installing the Compliance Operator through the web console
.Prerequisites
* You must have `admin` privileges.
.Procedure
. In the {product-title} web console, navigate to *Operators* -> *OperatorHub*.
. Search for the Compliance Operator, then click *Install*.
. Keep the default selection of *Installation mode* and *namespace* to ensure that the Operator will be installed to the `openshift-compliance` namespace.
. Click *Install*.
.Verification
To confirm that the installation is successful:
. Navigate to the *Operators* -> *Installed Operators* page.
. Check that the Compliance Operator is installed in the `openshift-compliance` namespace and its status is `Succeeded`.
If the Operator is not installed successfully:
. Navigate to the *Operators* -> *Installed Operators* page and inspect the `Status` column for any errors or failures.
. Navigate to the *Workloads* -> *Pods* page and check the logs in any pods in the `openshift-compliance` project that are reporting issues.

View File

@@ -3,9 +3,9 @@
// * security/compliance_operator/compliance-operator-manage.adoc
[id="compliance-profilebundle_{context}"]
= ProfileBundle CR example
The bundle object needs two pieces of information: the URL of a container image that contains the compliance content (`contentImage`) and the file within that image that contains it (`contentFile`). The `contentFile` parameter is relative to the root of the file system. The built-in `rhcos4` `ProfileBundle` object can be defined as in the example below:
[source,yaml]
----
@@ -18,7 +18,7 @@ The bundle object needs two pieces of information: the URL of a container image
contentFile: ssg-rhcos4-ds.xml <2>
----
<1> Content image location.
<2> Location of the file containing the compliance content.
[IMPORTANT]
====

View File

@@ -0,0 +1,112 @@
// Module included in the following assemblies:
//
// * security/compliance_operator/compliance-scans.adoc
[id="running-compliance-scans_{context}"]
= Running compliance scans
You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a `ScanSetting` object with reasonable defaults on startup. This `ScanSetting` object is named `default`.
.Procedure
. Inspect the `ScanSetting` object by running:
+
[source,terminal]
----
$ oc describe scansettings default -n openshift-compliance
----
+
.Example output
[source,terminal]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  pvAccessModes:
  - ReadWriteOnce <1>
  rotation: 3 <2>
  size: 1Gi <3>
roles:
- worker <4>
- master <4>
scanTolerations: <5>
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
schedule: 0 1 * * * <6>
----
<1> The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV will use access mode `ReadWriteOnce` because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, `ReadWriteOnce` access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the `ReadWriteOnce` access mode can be mounted by only one pod at a time, so it is important to remember to delete the helper pods. Otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans.
<2> The Compliance Operator keeps results of three subsequent scans in the volume; older scans are rotated.
<3> The Compliance Operator will allocate one GB of storage for the scan results.
<4> If the scan setting uses any profiles that scan cluster nodes, scan these node roles.
<5> The default scan setting object also scans the master nodes.
<6> The default scan setting object runs scans at 01:00 each day.
+
As an alternative to the default scan setting, you can use `default-auto-apply`, which has the following settings:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default-auto-apply
  namespace: openshift-compliance
autoUpdateRemediations: true <1>
autoApplyRemediations: true <1>
rawResultStorage:
  pvAccessModes:
  - ReadWriteOnce
  rotation: 3
  size: 1Gi
schedule: 0 1 * * *
roles:
- worker
- master
scanTolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
----
<1> Setting `autoUpdateRemediations` and `autoApplyRemediations` flags to `true` allows you to easily create `ScanSetting` objects that auto-remediate without extra steps.
. Create a `ScanSettingBinding` object that binds to the default `ScanSetting` object and scans the cluster using the `cis` and `cis-node` profiles. For example:
+
[source,yaml]
----
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
profiles:
- name: ocp4-cis-node
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
- name: ocp4-cis
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
----
. Create the `ScanSettingBinding` object by running:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml -n openshift-compliance
----
+
At this point in the process, the `ScanSettingBinding` object is reconciled and, based on the `Binding` and the `Bound` settings, the Compliance Operator creates a `ComplianceSuite` object and the associated `ComplianceScan` objects.
. Follow the compliance scan progress by running:
+
[source,terminal]
----
$ oc get compliancescan -w -n openshift-compliance
----
+
The scans progress through the scanning phases and eventually reach the `DONE` phase when complete. In most cases, the result of the scan is `NON-COMPLIANT`. You can review the scan results and start applying remediations to make the cluster compliant. See xref:../../security/compliance_operator/compliance-operator-remediation.adoc#compliance-operator-remediation[Managing Compliance Operator remediation] for more information.

View File

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]
toc::[]
The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.
include::modules/compliance-objects.adoc[leveloffset=+1]
@@ -14,3 +14,7 @@ include::modules/compliance-raw-tailored.adoc[leveloffset=+1]
include::modules/compliance-rescan.adoc[leveloffset=+1]
include::modules/compliance-custom-storage.adoc[leveloffset=+1]
include::modules/compliance-apply-remediations-from-scans.adoc[leveloffset=+1]
include::modules/compliance-auto-update-remediations.adoc[leveloffset=+1]

View File

@@ -0,0 +1,12 @@
[id="compliance-operator-installation"]
= Installing the Compliance Operator
include::modules/common-attributes.adoc[]
:context: compliance-operator-installation
toc::[]
Before you can use the Compliance Operator, you must ensure it is deployed in the cluster.
include::modules/compliance-operator-console-installation.adoc[leveloffset=+1]
include::modules/compliance-operator-cli-installation.adoc[leveloffset=+1]

View File

@@ -12,3 +12,8 @@ include::modules/compliance-update.adoc[leveloffset=+1]
include::modules/compliance-imagestreams.adoc[leveloffset=+1]
include::modules/compliance-profilebundle.adoc[leveloffset=+1]
[id="additional-resources_managing-the-compliance-operator"]
== Additional resources
* The Compliance Operator is supported in a restricted network environment. For more information, see xref:../../operators/admin/olm-restricted-networks.adoc#olm-restricted-networks[Using Operator Lifecycle Manager on restricted networks].

View File

@@ -18,3 +18,5 @@ include::modules/compliance-updating.adoc[leveloffset=+1]
include::modules/compliance-unapplying.adoc[leveloffset=+1]
include::modules/compliance-inconsistent.adoc[leveloffset=+1]
include::modules/compliance-filtering-failed-results.adoc[leveloffset=+1]

View File

@@ -0,0 +1,22 @@
[id="compliance-operator-scans"]
= Compliance Operator scans
include::modules/common-attributes.adoc[]
:context: compliance-operator-scans
toc::[]
The `ScanSetting` and `ScanSettingBinding` APIs are recommended for running compliance scans with the Compliance Operator. For more information about these API objects, run:
[source,terminal]
----
$ oc explain scansettings
----
or
[source,terminal]
----
$ oc explain scansettingbindings
----
include::modules/running-compliance-scans.adoc[leveloffset=+1]