mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

Created Container Security Guide

This commit is contained in:
Chris Negus
2020-06-23 11:17:33 -04:00
committed by openshift-cherrypick-robot
parent 9883331430
commit 76d9d62a7f
65 changed files with 2069 additions and 1 deletions

@@ -335,8 +335,39 @@ Topics:
---
Name: Security
Dir: security
Distros: openshift-enterprise,openshift-webscale,openshift-origin
Distros: openshift-enterprise,openshift-webscale,openshift-origin,openshift-aro
Topics:
- Name: Container security
  Dir: container_security
  Topics:
  - Name: Understanding container security
    File: security-understanding
  - Name: Understanding host and VM security
    File: security-hosts-vms
  - Name: Hardening Red Hat Enterprise Linux CoreOS
    File: security-hardening
    Distros: openshift-enterprise,openshift-webscale,openshift-aro
  - Name: Hardening Fedora CoreOS
    File: security-hardening
    Distros: openshift-origin
  - Name: Understanding compliance
    File: security-compliance
  - Name: Securing container content
    File: security-container-content
  - Name: Using container registries securely
    File: security-registries
  - Name: Securing the build process
    File: security-build
  - Name: Deploying containers
    File: security-deploy
  - Name: Securing the container platform
    File: security-platform
  - Name: Securing networks
    File: security-network
  - Name: Securing attached storage
    File: security-storage
  - Name: Monitoring cluster events and logs
    File: security-monitoring
- Name: Configuring certificates
  Dir: certificates
  Distros: openshift-enterprise,openshift-webscale,openshift-origin

Binary image files added (contents not shown): images/build_process1.png, images/build_process2.png, images/orchestration.png, and four other images.

@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * security/container_security/security-build.adoc
[id="security-build-designing_{context}"]
= Designing your build process
You can design your container image management and build process to use container layers so that you can separate control over each layer.

image::build_process2.png["Designing Your Build Process", align="center"]

For example, an operations team manages base images, while architects manage
middleware, runtimes, databases, and other solutions. Developers can then focus
on application layers and focus on writing code.
Because new vulnerabilities are identified daily, you need to proactively check
container content over time. To do this, you should integrate automated security
testing into your build or CI process. For example:

* Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) tools.
* Scanners for real-time checking against known vulnerabilities. Tools like these
catalog the open source packages in your container, notify you of any known
vulnerabilities, and update you when new vulnerabilities are discovered in
previously scanned packages.
Your CI process should include policies that flag builds with issues discovered
by security scans so that your team can take appropriate action to address those
issues. You should sign your custom built containers to ensure that nothing is
tampered with between build and deployment.
Using GitOps methodology, you can use the same CI/CD mechanisms to
manage not only your application configurations, but also your
{product-title} infrastructure.

@@ -0,0 +1,49 @@
// Module included in the following assemblies:
//
// * security/container_security/security-build.adoc
[id="security-build-inputs_{context}"]
= Securing inputs during builds
In some scenarios, build operations require credentials to access dependent
resources, but it is undesirable for those credentials to be available in the
final application image produced by the build. You can define input secrets for
this purpose.
For example, when building a Node.js application, you can set up your private
mirror for Node.js modules. In order to download modules from that private
mirror, you must supply a custom `.npmrc` file for the build that contains
a URL, user name, and password. For security reasons, you do not want to expose
your credentials in the application image.
Using this example scenario, you can add an input secret to a new `BuildConfig`:

. Create the secret, if it does not exist:
+
----
$ oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc
----
+
This creates a new secret named `secret-npmrc`, which contains the base64-encoded
content of the `~/.npmrc` file.
. Add the secret to the `source` section in the existing `BuildConfig`:
+
[source,yaml]
----
source:
git:
uri: https://github.com/sclorg/nodejs-ex.git
secrets:
- destinationDir: .
secret:
name: secret-npmrc
----
. To include the secret in a new `BuildConfig`, run the following command:
+
----
$ oc new-build \
openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \
--build-secret secret-npmrc
----
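For reference, the custom `.npmrc` described above might look like the following. The mirror URL and credential values here are placeholders, not values from this procedure:

```
registry=https://npm-mirror.example.com/
//npm-mirror.example.com/:username=builder
//npm-mirror.example.com/:_password=<base64-encoded-password>
```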

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * security/container_security/security-build.adoc
[id="security-build-knative_{context}"]
= Building Knative serverless applications
Relying on Kubernetes and Kourier, you can build, deploy,
and manage serverless applications using
link:https://knative.dev/[Knative] in {product-title}.
As with other builds, you can use S2I images to build your containers,
then serve them using Knative services.
View Knative application builds through the
*Topology* view of the {product-title} web console.

@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * security/container_security/security-build.adoc
[id="security-build-management_{context}"]
= Managing builds
You can use Source-to-Image (S2I) to combine source code and base images.
_Builder images_ make use of S2I to enable your development and operations teams
to collaborate on a reproducible build environment.
With Red Hat S2I images available as Universal Base Image (UBI) images,
you can now freely redistribute your software with
base images built from real {op-system-base} RPM packages.
Red Hat has removed subscription restrictions to allow this.
When developers commit code with Git for an application using build images,
{product-title} can perform the following functions:

* Trigger, either by using webhooks on the code repository or other automated
continuous integration (CI) process, to automatically assemble a new image from
available artifacts, the S2I builder image, and the newly committed code.
* Automatically deploy the newly built image for testing.
* Promote the tested image to production where it can be automatically deployed
using a CI process.

image::build_process1.png["Source-to-Image Builds", align="center"]

You can use the integrated OpenShift Container Registry to manage access to final images.
Both S2I and native build images are automatically pushed to your OpenShift Container
Registry.
In addition to the included Jenkins for CI, you can also integrate your own
build and CI environment with {product-title} using RESTful APIs, as well as use
any API-compliant image registry.

@@ -0,0 +1,31 @@
// Module included in the following assemblies:
//
// * security/container_security/security-build.adoc
[id="security-build-once_{context}"]
= Building once, deploying everywhere
Using {product-title} as the standard platform for container builds enables you
to guarantee the security of the build environment. Adhering to a "build once,
deploy everywhere" philosophy ensures that the product of the build process is
exactly what is deployed in production.
It is also important to maintain the immutability of your containers. You should
not patch running containers, but rebuild and redeploy them.
As your software moves through the stages of building, testing, and production, it is
important that the tools making up your software supply chain be trusted. The
following figure illustrates the process and tools that could be incorporated
into a trusted software supply chain for containerized software:

image::trustedsupplychain.png["", align="center"]

{product-title} can be integrated with trusted code repositories (such as GitHub)
and development platforms (such as Che) for creating and managing secure code.
Unit testing could rely on
link:https://cucumber.io/[Cucumber] and link:https://junit.org/[JUnit].
You could inspect your containers for vulnerabilities and compliance issues
with link:https://anchore.com[Anchore] or Twistlock,
and use image scanning tools such as AtomicScan or Clair.
Tools such as link:https://sysdig.com[Sysdig] could provide ongoing monitoring
of your containerized applications.

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * security/container_security/security-compliance.adoc
[id="security-compliance-nist_{context}"]
= Understanding compliance and risk management
FIPS compliance is one of the most critical components required in
highly secure environments to ensure that only supported cryptographic
technologies are allowed on nodes. To understand Red Hat's view of {product-title} compliance frameworks, refer
to the Risk Management and Regulatory Readiness chapter of the
link:https://access.redhat.com/articles/5059881[OpenShift Security Guide Book].

@@ -0,0 +1,262 @@
// Module included in the following assemblies:
//
// * security/container_security/security-container-content.adoc
[id="security-container-content-external-scanning_{context}"]
= Integrating external scanning
{product-title} makes use of link:https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/[object annotations]
to extend functionality. External tools, such as vulnerability scanners, can
annotate image objects with metadata to summarize results and control pod
execution. This section describes the recognized format of this annotation so it
can be reliably used in consoles to display useful data to users.
[id="security-image-metadata_{context}"]
== Image metadata
There are different types of image quality data, including package
vulnerabilities and open source software (OSS) license compliance. Additionally,
there may be more than one provider of this metadata. To that end, the following
annotation format has been reserved:
----
quality.images.openshift.io/<qualityType>.<providerId>: {}
----
.Annotation key format
[options="header"]
|===
|Component |Description |Acceptable values
|`qualityType`
|Metadata type
|`vulnerability` +
`license` +
`operations` +
`policy`
|`providerId`
|Provider ID string
|`openscap` +
`redhatcatalog` +
`redhatinsights` +
`blackduck` +
`jfrog`
|===
[id="security-example-annotation-keys_{context}"]
=== Example annotation keys
----
quality.images.openshift.io/vulnerability.blackduck: {}
quality.images.openshift.io/vulnerability.jfrog: {}
quality.images.openshift.io/license.blackduck: {}
quality.images.openshift.io/vulnerability.openscap: {}
----
The value of the image quality annotation is structured data that must adhere to
the following format:
.Annotation value format
[options="header"]
|===
|Field |Required? |Description |Type
|`name`
|Yes
|Provider display name
|String
|`timestamp`
|Yes
|Scan timestamp
|String
|`description`
|No
|Short description
|String
|`reference`
|Yes
|URL of the information source or of more details. Required so the user can validate the data.
|String
|`scannerVersion`
|No
|Scanner version
|String
|`compliant`
|No
|Compliance pass or fail
|Boolean
|`summary`
|No
|Summary of issues found
|List (see table below)
|===
The `summary` field must adhere to the following format:
.Summary field value format
[options="header"]
|===
|Field |Description |Type
|`label`
|Display label for component (for example, "critical," "important," "moderate,"
"low," or "health")
|String
|`data`
|Data for this component (for example, count of vulnerabilities found or score)
|String
|`severityIndex`
|Component index allowing for ordering and assigning graphical
representation. The value is range `0..3` where `0` = low.
|Integer
|`reference`
|URL of information source or more details. Optional.
|String
|===
[id="security-example-annotation-values_{context}"]
=== Example annotation values
This example shows an OpenSCAP annotation for an image with
vulnerability summary data and a compliance boolean:
.OpenSCAP annotation
[source,json]
----
{
"name": "OpenSCAP",
"description": "OpenSCAP vulnerability score",
"timestamp": "2016-09-08T05:04:46Z",
"reference": "https://www.open-scap.org/930492",
"compliant": true,
"scannerVersion": "1.2",
"summary": [
{ "label": "critical", "data": "4", "severityIndex": 3, "reference": null },
{ "label": "important", "data": "12", "severityIndex": 2, "reference": null },
{ "label": "moderate", "data": "8", "severityIndex": 1, "reference": null },
{ "label": "low", "data": "26", "severityIndex": 0, "reference": null }
]
}
----
This example shows the
link:https://catalog.redhat.com/software/containers/explore[Red Hat Container Catalog]
annotation for an image with health index data
with an external URL for additional details:
.Red Hat Container Catalog annotation
[source,json]
----
{
"name": "Red Hat Container Catalog",
"description": "Container health index",
"timestamp": "2016-09-08T05:04:46Z",
"reference": "https://access.redhat.com/errata/RHBA-2016:1566",
"compliant": null,
"scannerVersion": "1.2",
"summary": [
{ "label": "Health index", "data": "B", "severityIndex": 1, "reference": null }
]
}
----
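Because external consoles rely on this structure, it can be worth validating annotation values before writing them. The following is a minimal validation sketch in Python, checking the required fields and the `severityIndex` range from the tables above; the sample value is hypothetical, modeled on the OpenSCAP example:

```python
import json

# Required fields from the annotation value format table above.
REQUIRED_FIELDS = ("name", "timestamp", "reference")

def validate_annotation(value):
    """Return a list of problems found in an image quality annotation value."""
    problems = []
    data = json.loads(value)
    for field in REQUIRED_FIELDS:
        if field not in data:
            problems.append("missing required field: " + field)
    # severityIndex must be an integer in the range 0..3.
    for entry in data.get("summary", []):
        idx = entry.get("severityIndex")
        if not isinstance(idx, int) or not 0 <= idx <= 3:
            problems.append("severityIndex out of range: %r" % (idx,))
    return problems

# Hypothetical scanner output, modeled on the OpenSCAP example above.
sample = json.dumps({
    "name": "OpenSCAP",
    "timestamp": "2016-09-08T05:04:46Z",
    "reference": "https://www.open-scap.org/930492",
    "compliant": True,
    "summary": [{"label": "critical", "data": "4", "severityIndex": 3, "reference": None}],
})
print(validate_annotation(sample))  # → []
```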
[id="security-annotating-image-objects_{context}"]
== Annotating image objects
While image stream objects
are what an end user of {product-title} operates against,
image objects are annotated with
security metadata. Image objects are cluster-scoped, pointing to a single image
that may be referenced by many image streams and tags.
[id="security-example-annotate-CLI_{context}"]
=== Example annotate CLI command
Replace `<image>` with an image digest, for example
`sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2`:
----
$ oc annotate image <image> \
    quality.images.openshift.io/vulnerability.redhatcatalog='{
    "name": "Red Hat Container Catalog",
    "description": "Container health index",
    "timestamp": "2020-06-01T05:04:46Z",
    "compliant": null,
    "scannerVersion": "1.2",
    "reference": "https://access.redhat.com/errata/RHBA-2020:2347",
    "summary": [
      { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] }'
----
[id="controlling-pod-execution_{context}"]
== Controlling pod execution
Use the `images.openshift.io/deny-execution` image policy
to programmatically control whether an image can be run.
[id="security-controlling-pod-execution-example-annotation_{context}"]
=== Example annotation
[source,yaml]
----
annotations:
  images.openshift.io/deny-execution: 'true'
----
[id="security-integration-reference_{context}"]
== Integration reference
In most cases, external tools such as vulnerability scanners develop a
script or plug-in that watches for image updates, performs scanning, and
annotates the associated image object with the results. Typically this
automation calls the {product-title} {product-version} REST APIs to write the
annotation. See the {product-title} REST API documentation for general
information about the REST APIs.
[id="security-integration-reference-example-api-call_{context}"]
=== Example REST API call
The following example call using `curl` overrides the value of the
annotation. Be sure to replace the values for `<token>`, `<openshift_server>`,
`<image_id>`, and `<image_annotation>`.
.Patch API call
----
$ curl -X PATCH \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/merge-patch+json" \
  https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \
  --data '{ <image_annotation> }'
----
The following is an example of `PATCH` payload data:
.Patch call data
----
{
  "metadata": {
    "annotations": {
      "quality.images.openshift.io/vulnerability.redhatcatalog":
        "{ \"name\": \"Red Hat Container Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": [{\"label\": \"Health index\", \"data\": \"4\", \"severityIndex\": 1, \"reference\": null}] }"
    }
  }
}
----
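Because the annotation value is itself JSON stored as a string, serializing it programmatically avoids the quoting pitfalls mentioned above. A sketch in Python follows; the annotation content mirrors the example payload, and nothing here is sent to a cluster:

```python
import json

# Annotation value from the example payload; it must be valid JSON on its own.
annotation_value = {
    "name": "Red Hat Container Catalog",
    "description": "Container health index",
    "timestamp": "2020-06-01T05:04:46Z",
    "compliant": None,
    "reference": "https://access.redhat.com/errata/RHBA-2020:2347",
    "summary": [
        {"label": "Health index", "data": "4", "severityIndex": 1, "reference": None}
    ],
}

# Build the merge-patch body, nesting the annotation value as a JSON string.
patch = json.dumps({
    "metadata": {
        "annotations": {
            "quality.images.openshift.io/vulnerability.redhatcatalog":
                json.dumps(annotation_value)
        }
    }
})

# Round trip: the annotation value can be parsed back out of the patch body.
key = "quality.images.openshift.io/vulnerability.redhatcatalog"
parsed = json.loads(json.loads(patch)["metadata"]["annotations"][key])
print(parsed["summary"][0]["label"])  # → Health index
```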
ifdef::openshift-origin[]
[NOTE]
====
Due to the complexity of this API call and challenges with escaping characters,
an API developer tool such as link:https://www.getpostman.com/[Postman] may
assist in creating API calls.
====
endif::[]

@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * security/container_security/security-container-content.adoc
[id="security-container-content-inside_{context}"]
= Securing inside the container
Applications and infrastructures are composed of readily available components,
many of which are open source packages, such as the Linux operating system,
JBoss Web Server, PostgreSQL, and Node.js.
Containerized versions of these packages are also available. However, you need
to know where the packages originally came from, what versions are used, who built them, and whether
there is any malicious code inside them.
Some questions to answer include:

* Will what is inside the containers compromise your infrastructure?
* Are there known vulnerabilities in the application layer?
* Are the runtime and operating system layers current?
By building your containers from Red Hat
link:https://access.redhat.com/articles/4238681[Universal Base Images] (UBI) you are
assured of a foundation for your container images that consists of
the same RPM-packaged software that is included in Red Hat Enterprise Linux.
No subscriptions are required to either use or redistribute UBI images.
To assure ongoing security of the containers themselves, security
scanning features, used directly from {op-system-base} or added to {product-title},
can alert you when
an image you are using has vulnerabilities. OpenSCAP image scanning is
available in {op-system-base} and the
link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#container-security-operator-setup[Container Security Operator] can be added
to check container images used in {product-title}.

@@ -0,0 +1,34 @@
// Module included in the following assemblies:
//
// * security/container_security/security-container-content.adoc
[id="security-container-content-scanning_{context}"]
= Security scanning in {op-system-base}
For {op-system-base-full} systems, OpenSCAP scanning is available
from the `openscap-utils` package. In {op-system-base} 8, you can use the `oscap-podman`
command to scan images for vulnerabilities. See
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/security_hardening/index#scanning-the-system-for-configuration-compliance-and-vulnerabilities_security-hardening[Scanning containers and container images for vulnerabilities] in the Red Hat Enterprise Linux documentation.
{product-title} enables you to leverage {op-system-base} scanners with your CI/CD process.
For example, you can integrate static code analysis tools that test for security
flaws in your source code and software composition analysis tools that identify
open source libraries in order to provide metadata on those libraries such as
known vulnerabilities.
[id="quay-security-scan_{context}"]
== Scanning OpenShift images
For the container images that are running in {product-title}
and are pulled from Red Hat Quay registries, you can use an Operator to list the
vulnerabilities of those images. The
link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#container-security-operator-setup[Container Security Operator]
can be added to {product-title} to provide vulnerability reporting
for images added to selected namespaces.
Container image scanning for Red Hat Quay is performed by the
link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#quay-security-scanner[Clair security scanner].
In Red Hat Quay, Clair can search for and report vulnerabilities in
images built from {op-system-base}, CentOS, Oracle, Alpine, Debian, and Ubuntu
operating system software.

@@ -0,0 +1,46 @@
// Module included in the following assemblies:
//
// * security/container_security/security-container-content.adoc
[id="security-container-content-universal_{context}"]
= Creating redistributable images with UBI
To create containerized applications, you typically start with a trusted base
image that offers the components that are usually provided by the operating system.
These include the libraries, utilities, and other features the application
expects to see in the operating system's file system.
Red Hat Universal Base Images (UBI) were created to encourage anyone building their
own containers to start with one that is made entirely from Red Hat Enterprise
Linux rpm packages and other content. These UBI images are updated regularly
to keep up with security patches and free to use and redistribute with
container images built to include your own software.
Search the
link:https://catalog.redhat.com/software/containers/explore[Red Hat Ecosystem Catalog]
to both find and check the health of different UBI images.
As someone creating secure container images, you might
be interested in these two general types of UBI images:

* **UBI**: There are standard UBI images for RHEL 7 and 8 (`ubi7/ubi` and
`ubi8/ubi`), as well as minimal images based on those systems (`ubi7/ubi-minimal`
and `ubi8/ubi-minimal`). All of these images are preconfigured to point to free
repositories of {op-system-base} software that you can add to the container images you build,
using standard `yum` and `dnf` commands.
Red Hat encourages people to use these images on other distributions,
such as Fedora and Ubuntu.
* **Red Hat Software Collections**: Search the Red Hat Ecosystem Catalog
for `rhscl/` to find images created to use as base images for specific types
of applications. For example, there are Apache httpd ([x-]`rhscl/httpd-*`),
Python ([x-]`rhscl/python-*`), Ruby ([x-]`rhscl/ruby-*`), Node.js
([x-]`rhscl/nodejs-*`) and Perl ([x-]`rhscl/perl-*`) rhscl images.
Keep in mind that while UBI images are freely available and redistributable,
Red Hat support for these images is only available through Red Hat
product subscriptions.
See
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index#using_red_hat_universal_base_images_standard_minimal_and_runtimes[Using Red Hat Universal Base Images]
in the Red Hat Enterprise Linux documentation for information on how to use and build on
standard, minimal and init UBI images.

@@ -0,0 +1,13 @@
// Module included in the following assemblies:
//
// * security/container_security/security-deploy.adoc
[id="security-deploy-continuous_{context}"]
= Automating continuous deployment
You can integrate your own continuous deployment (CD) tooling with
{product-title}.
By leveraging CI/CD and {product-title}, you can automate the process of
rebuilding the application to incorporate the latest fixes, testing, and
ensuring that it is deployed everywhere within the environment.

@@ -0,0 +1,73 @@
// Module included in the following assemblies:
//
// * security/container_security/security-deploy.adoc
[id="security-deploy-image-sources_{context}"]
= Controlling what image sources can be deployed
It is important that the intended images are actually being deployed, that the
images, including their content, come from trusted sources, and that they have
not been altered. Cryptographic signing
provides this assurance. {product-title} enables cluster administrators to apply
security policy that is broad or narrow, reflecting deployment environment and
security requirements. Two parameters define this policy:

* One or more registries, with optional project namespace
* Trust type, such as accept, reject, or require public key(s)
You can use these policy parameters to allow, deny, or require a trust
relationship for entire registries, parts of registries, or individual
images. Using trusted public keys, you can ensure that the source is
cryptographically verified.
The policy rules apply to nodes. Policy may be
applied uniformly across all nodes or targeted for different node workloads (for
example, build, zone, or environment).
.Example image signature policy file
[source,json]
----
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"access.redhat.com": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
}
]
},
"atomic": {
"172.30.1.1:5000/openshift": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
}
],
"172.30.1.1:5000/production": [
{
"type": "signedBy",
"keyType": "GPGKeys",
"keyPath": "/etc/pki/example.com/pubkey"
}
],
"172.30.1.1:5000": [{"type": "reject"}]
}
}
}
----
The policy can be saved onto a node as `/etc/containers/policy.json`.
Saving this file to a node is best accomplished using a new
MachineConfig object. This
example enforces the following rules:
* Require images from the Red Hat Registry (`access.redhat.com`) to be
signed by the Red Hat public key.
* Require images from your OpenShift Container Registry in the `openshift`
namespace to be signed by the Red Hat public key.
* Require images from your OpenShift Container Registry in the `production`
namespace to be signed by the public key for `example.com`.
* Reject all other registries not specified by the global `default` definition.
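To make those rules concrete, the example policy can be loaded and queried. The lookup helper below is a deliberate simplification (the actual matching in the container runtime also considers the transport and scope prefixes); the policy JSON is copied from the example above:

```python
import json

# The example policy from above, embedded so its rules can be checked.
policy = json.loads("""
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "access.redhat.com": [
        {"type": "signedBy", "keyType": "GPGKeys",
         "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"}
      ]
    },
    "atomic": {
      "172.30.1.1:5000/openshift": [
        {"type": "signedBy", "keyType": "GPGKeys",
         "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"}
      ],
      "172.30.1.1:5000/production": [
        {"type": "signedBy", "keyType": "GPGKeys",
         "keyPath": "/etc/pki/example.com/pubkey"}
      ],
      "172.30.1.1:5000": [{"type": "reject"}]
    }
  }
}
""")

def requirement(scope):
    """Return the policy requirement type for a registry scope,
    falling back to the global default. Simplified: exact-match only."""
    for transport in policy["transports"].values():
        if scope in transport:
            return transport[scope][0]["type"]
    return policy["default"][0]["type"]

print(requirement("172.30.1.1:5000/production"))  # → signedBy
print(requirement("quay.example.com"))            # → reject
```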

@@ -0,0 +1,33 @@
// Module included in the following assemblies:
//
// * security/container_security/security-deploy.adoc
[id="security-deploy-secrets_{context}"]
= Creating Secrets and ConfigMaps
The Secret object type provides a mechanism to hold sensitive information such
as passwords, {product-title} client configuration files, `dockercfg` files,
and private source repository credentials. Secrets decouple sensitive content
from pods. You can mount secrets into containers using a volume plug-in or the
system can use secrets to perform actions on behalf of a pod.
For example, to add a secret to your deployment configuration
so that it can access a private image repository, do the following:

.Procedure
. Log in to the {product-title} web console.
. Create a new project.
. Navigate to *Resources* -> *Secrets* and create a new secret. Set `Secret Type` to
`Image Secret` and `Authentication Type` to `Image Registry Credentials` to
enter credentials for accessing a private image repository.
. When creating a deployment configuration (for example, from the *Add to Project* ->
*Deploy Image* page), set the `Pull Secret` to your new secret.
ConfigMaps are similar to secrets, but are designed to support working with
strings that do not contain sensitive information. The ConfigMap object holds
key-value pairs of configuration data that can be consumed in pods or used to
store configuration data for system components such as controllers.

@@ -0,0 +1,46 @@
// Module included in the following assemblies:
//
// * security/container_security/security-deploy.adoc
[id="security-deploy-signature_{context}"]
= Using signature transports
A signature transport is a way to store and retrieve the binary signature blob.
There are two types of signature transports:

* `atomic`: Managed by the {product-title} API.
* `docker`: Served as a local file or by a web server.
The {product-title} API manages signatures that use the `atomic` transport type.
You must store the images that use this signature type in
your OpenShift Container Registry. Because the docker/distribution `extensions` API
auto-discovers the image signature endpoint, no additional
configuration is required.
Signatures that use the `docker` transport type are served by local file or web
server. These signatures are more flexible; you can serve images from any
container image registry and use an independent server to deliver binary
signatures.
However, the `docker` transport type requires additional configuration. You must
configure the nodes with the URI of the signature server by placing
arbitrarily-named YAML files into a directory on the host system,
`/etc/containers/registries.d` by default. The YAML configuration files contain a
registry URI and a signature server URI, or _sigstore_:
.Example registries.d file
[source,yaml]
----
docker:
access.redhat.com:
sigstore: https://access.redhat.com/webassets/docker/content/sigstore
----
In this example, the Red Hat Registry, `access.redhat.com`, is the signature
server that provides signatures for the `docker` transport type. Its URI is
defined in the `sigstore` parameter. You might name this file
`/etc/containers/registries.d/redhat.com.yaml` and use the Machine Config
Operator to
automatically place the file on each node in your cluster. No service
restart is required since policy and `registries.d` files are dynamically
loaded by the container runtime.
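For orientation, the lookup a node performs can be sketched: given the `sigstore` URI from the file above, signatures are fetched from a per-image path. The path layout used here (repository, then `@<algorithm>=<digest>`, then `signature-<index>`) follows the containers/image convention, but treat it as an assumption and verify it against your container runtime's documentation; the repository and digest below are placeholders:

```python
def signature_url(sigstore, repository, digest, index=1):
    """Build the URL a node would fetch a docker-transport signature from.
    The path layout is an assumption based on the containers/image convention."""
    algo, _, hexval = digest.partition(":")
    return "%s/%s@%s=%s/signature-%d" % (sigstore, repository, algo, hexval, index)

url = signature_url(
    "https://access.redhat.com/webassets/docker/content/sigstore",
    "rhel7/rhel",            # placeholder repository
    "sha256:" + "ab" * 32,   # placeholder digest
)
print(url)
```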

@@ -0,0 +1,32 @@
// Module included in the following assemblies:
//
// * security/container_security/security-deploy.adoc
[id="security-deploy-trigger_{context}"]
= Controlling container deployments with triggers
If something happens during the build process, or if a vulnerability is
discovered after an image has been deployed, you can use tooling for automated,
policy-based deployment to remediate. You can use triggers to rebuild and replace images,
keeping your containers immutable, rather than patching running containers,
which is not recommended.

image::secure_deployments.png["Secure Deployments", align="center"]

For example, you build an application using three container image layers: core,
middleware, and applications. An issue is discovered in the core image and that
image is rebuilt. After the build is complete, the image is pushed to your
OpenShift Container Registry. {product-title} detects that the image has changed
and automatically rebuilds and deploys the application image, based on the
defined triggers. This change incorporates the fixed libraries and ensures that
the production code is identical to the most current image.
You can use the `oc set triggers` command to set a deployment trigger.
For example, to set a trigger for a deployment called
`deployment-example`:
----
$ oc set triggers deploy/deployment-example \
--from-image=example:latest \
--containers=web
----

@@ -0,0 +1,58 @@
// Module included in the following assemblies:
//
// * security/container_security/security-hardening.adoc
[id="security-hardening-how_{context}"]
= Choosing how to harden {op-system}
Direct modification of {op-system} systems in {product-title} is discouraged.
Instead, you should modify systems in pools of nodes, such
as worker nodes and master nodes. When a new node is needed in
non-bare metal installations, you can request a new node of the type
you want, and it is created from an {op-system} image plus the
modifications you made earlier.
There are opportunities for modifying {op-system} before installation,
during installation, and after the cluster is up and running.
[id="security-harden-before-installation_{context}"]
== Hardening before installation
For bare metal installations, you can add hardening features to
{op-system} before beginning the {product-title} installation. For example,
you can add kernel options when you boot the {op-system} installer
to turn security features on or off, such as SELinux or various
low-level settings, such as symmetric multithreading.
Although bare metal {op-system} installations are more difficult,
they offer the opportunity to get operating system
changes in place before starting the {product-title} installation. This can be important when you need to ensure that certain
features, such as disk encryption or special networking settings, are
set up at the earliest possible moment.
[id="security-harden-during-installation_{context}"]
== Hardening during installation
You can interrupt the {product-title} installation process and change
Ignition configs. Through Ignition configs, you can add your own files
and systemd services to the {op-system} nodes.
You can also make some basic security-related changes to the `install-config.yaml` file
used for installation.
Contents added in this way are available at each node's first boot.
[id="security-harden-after-installation_{context}"]
== Hardening after the cluster is running
After the {product-title} cluster is up and running, there are
several ways to apply hardening features to {op-system}:
* DaemonSet: If you need a service to run on every node, you can add
that service with a
link:https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/[Kubernetes DaemonSet].
* MachineConfig: MachineConfig objects contain a subset of Ignition configs in the same format.
By applying MachineConfigs to all worker or control plane nodes,
you can ensure that the next node of the same type that is added
to the cluster has the same changes applied.
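For example, a minimal MachineConfig object that adds a kernel argument to all worker nodes might look like the following sketch; the object name and the `audit=1` argument are illustrative:

----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-kargs-audit        # example name
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  kernelArguments:
  - audit=1                          # example kernel argument
----

The Machine Config Operator applies the change across the pool and ensures that newly added worker nodes receive it as well.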
All of the features noted here are described in the {product-title}
product documentation.

// Module included in the following assemblies:
//
// * security/container_security/security-hardening.adoc
[id="security-hardening-what_{context}"]
= Choosing what to harden in {op-system}
ifdef::openshift-origin[]
The link:https://docs.fedoraproject.org/en-US/Fedora/19/html/Security_Guide/chap-Security_Guide-Basic_Hardening.html[{op-system-base} Security Hardening] guide describes how you should approach security for any
{op-system-base} system.
endif::[]
ifdef::openshift-enterprise,openshift-webscale,openshift-aro[]
The link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/security_hardening/index#scanning-container-and-container-images-for-vulnerabilities_scanning-the-system-for-security-compliance-and-vulnerabilities[{op-system-base} 8 Security Hardening] guide describes how you should approach security for any
{op-system-base} system.
endif::[]
Use this guide to learn how to approach cryptography, evaluate
vulnerabilities, and assess threats to various services.
Likewise, you can learn how to scan for compliance standards, check file
integrity, perform auditing, and encrypt storage devices.
With the knowledge of what features you want to harden, you can then
decide how to harden them in {op-system}.

// Module included in the following assemblies:
//
// * security/container_security/security-hosts-vms.adoc
[id="security-hosts-vms-openshift_{context}"]
= Securing {product-title}
When you deploy {product-title}, you have the choice of an
installer-provisioned infrastructure (there are several available platforms)
or your own user-provisioned infrastructure.
Some low-level security-related configuration, such as enabling FIPS
compliance or adding kernel modules required at first boot, might
benefit from a user-provisioned infrastructure.
Likewise, user-provisioned infrastructure is appropriate for disconnected {product-title} deployments.
Keep in mind that, when it comes to making security enhancements and other
configuration changes to {product-title}, the goals should include:
* Keeping the underlying nodes as generic as possible. You want to be able to
easily throw away and spin up similar nodes quickly and in prescriptive ways.
* Managing modifications to nodes through {product-title} as much as possible,
rather than making direct, one-off changes to the nodes.
In pursuit of those goals, most node changes should be done during installation through Ignition
or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator.
Examples of security-related configuration changes you can do in this way include:
* Adding kernel arguments
* Adding kernel modules
* Enabling support for FIPS cryptography
* Configuring disk encryption
* Configuring the chrony time service
Besides the Machine Config Operator, there are several other Operators available to configure {product-title} infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of
{product-title} cluster updates.

// Module included in the following assemblies:
//
// * security/container_security/security-hosts-vms.adoc
[id="security-hosts-vms-rhcos_{context}"]
= Securing containers on {op-system-first}
Containers simplify the act of deploying many applications to run on the same host,
using the same kernel and container runtime to spin up each container.
The applications can be owned by many users and, because the containers are kept
separate, different, and even incompatible, versions of those applications
can run at the same time without issue.
In Linux, containers are just a special type of process, so securing
containers is similar in many ways to securing any other running process.
An environment for running containers starts with an operating system
that can secure the host kernel from
containers and other processes running on the host, as well as
secure containers from each other.
Because {product-title} {product-version} runs on {op-system} hosts,
with the option of using {op-system-base-full} as worker nodes,
the following concepts apply by default to any deployed {product-title}
cluster. These {op-system-base} security features are at the core of what
makes running containers in OpenShift more secure:
* _Linux namespaces_ enable creating an abstraction of a particular global system
resource to make it appear as a separate instance to processes within a
namespace. Consequently, several containers can use the same computing resource
simultaneously without creating a conflict.
Container namespaces that are separate from the host by default include mount table, process table,
network interface, user, control group, UTS, and IPC namespaces.
Containers that need direct access to host namespaces must have
elevated permissions to request that access.
ifdef::openshift-enterprise,openshift-webscale,openshift-aro[]
See
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/overview_of_containers_in_red_hat_systems/introduction_to_linux_containers#linux_containers_architecture[Overview of Containers in Red Hat Systems]
from the {op-system-base} 7 container documentation for details on the types of namespaces.
endif::[]
* _SELinux_ provides an additional layer of security to keep containers isolated
from each other and from the host. SELinux allows administrators to enforce
mandatory access controls (MAC) for every user, application, process, and file.
* _CGroups_ (control groups) limit, account for, and isolate the resource usage
(CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are
used to ensure that containers on the same host are not impacted by each other.
* _Secure computing mode (seccomp)_ profiles can be associated with a container to
restrict available system calls. See page 94 of the
link:https://access.redhat.com/articles/5059881[OpenShift Security Guide] for details about seccomp.
* Deploying containers using _{op-system}_ reduces the attack surface by
minimizing the host environment and tuning it for containers.
The link:https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/cri-o_runtime/index[CRI-O container engine] further reduces that attack surface by
implementing only those features required by Kubernetes and OpenShift to
run and manage containers, as opposed to other container engines
that implement desktop-oriented standalone features.
{op-system} is a version of {op-system-base-full} that is specially
configured to work as control plane (master) and worker nodes
on {product-title} clusters.
So {op-system} is tuned to efficiently run container workloads, along with
Kubernetes and OpenShift services.
To further protect {op-system} systems in {product-title} clusters,
most containers, except those managing or monitoring the host system itself,
should run as a non-root user. Dropping the privilege level or
creating containers with the least amount of privileges possible is recommended
best practice for protecting your own {product-title} clusters.
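As a sketch of this practice, a pod spec can explicitly refuse to run as root and drop unneeded privileges; the pod name and image shown are examples:

----
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-example              # example name
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:latest   # example image
    securityContext:
      runAsNonRoot: true             # refuse to start if the image would run as UID 0
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL                        # drop all Linux capabilities
----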

// Module included in the following assemblies:
//
// * security/container_security/security-hosts-vms.adoc
[id="security-hosts-vms-vs-containers_{context}"]
= Comparing virtualization and containers
Traditional virtualization provides another way to keep application
environments separate on the same physical host. However, virtual machines
work in a different way than containers.
Virtualization relies on a hypervisor spinning up guest
virtual machines (VMs), each of which has its own operating system (OS),
represented by a running kernel, as well as the running application and its dependencies.
With VMs, the hypervisor isolates the guests from each other and from the host
kernel. Fewer individuals and processes have access to the hypervisor, reducing
the attack surface on the physical server. That said, security must still be
monitored: one guest VM might be able to use hypervisor bugs to gain access to
another VM or the host kernel. And, when the OS needs to be patched, it must be
patched on all guest VMs using that OS.
Containers can be run inside guest VMs, and there might be use cases where this is
desirable. For example, you might be deploying a traditional application in a
container, perhaps in order to lift-and-shift an application to the cloud.
Container separation on a single host, however, provides a more lightweight,
flexible, and easier-to-scale deployment solution. This deployment model is
particularly appropriate for cloud-native applications. Containers are
generally much smaller than VMs and consume less memory and CPU.
ifndef::openshift-origin[]
See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/overview_of_containers_in_red_hat_systems/introduction_to_linux_containers#linux_containers_compared_to_kvm_virtualization[Linux Containers Compared to KVM Virtualization]
in the {op-system-base} 7 container documentation to learn about the differences between container and VMs.
endif::[]

// Module included in the following assemblies:
//
// * security/container_security/security-monitoring.adoc
[id="security-monitoring-audit-logs_{context}"]
= Audit logs
With _audit logs_, you can follow a sequence of activities associated with how a
user, administrator, or other {product-title} component is behaving.
API audit logging is done on each server.

// Module included in the following assemblies:
//
// * security/container_security/security-monitoring.adoc
[id="security-monitoring-cluster-logging_{context}"]
= Logging
Using the `oc logs` command, you can view logs for containers, build
configurations, and deployment configurations in real time. Different users have
different levels of access to logs:
* Users who have access to a project are able to see the logs for that project by default.
* Users with admin roles can access all container logs.
To save your logs for further audit and analysis, you can enable the `cluster-logging` add-on
feature to collect, manage, and view system, container, and audit logs.
You can deploy, manage, and upgrade cluster logging through the Elasticsearch Operator
and Cluster Logging Operator.

// Module included in the following assemblies:
//
// * security/container_security/security-monitoring.adoc
[id="security-monitoring-events_{context}"]
= Watching cluster events
Cluster administrators are encouraged to familiarize themselves with the _Event_ resource
type and review the list of system events to
determine which events are of interest.
Events are associated with a namespace, either the namespace of the
resource they are related to or, for cluster events, the `default`
namespace. The default namespace holds relevant events for monitoring or auditing a cluster,
such as _Node_ events and resource events related to infrastructure components.
The master API and `oc` command do not provide parameters to scope a listing of events to only those
related to nodes. A simple approach is to use `grep`:
----
$ oc get event -n default | grep Node
1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ...
----
A more flexible approach is to output the events in a form that other
tools can process. For example, the following example uses the `jq`
tool against JSON output to extract only _NodeHasDiskPressure_ events:
----
$ oc get events -n default -o json \
| jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")'
{
"apiVersion": "v1",
"count": 3,
"involvedObject": {
"kind": "Node",
"name": "origin-node-1.example.local",
"uid": "origin-node-1.example.local"
},
"kind": "Event",
"reason": "NodeHasDiskPressure",
...
}
----
Events related to resource creation, modification, or deletion can also be
good candidates for detecting misuse of the cluster. The following query,
for example, can be used to look for excessive pulling of images:
----
$ oc get events --all-namespaces -o json \
| jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length'
4
----
[NOTE]
====
When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent
filling up etcd storage. Events are
not stored as a permanent record and frequent polling is necessary to capture statistics over time.
====

// Module included in the following assemblies:
//
// * security/container_security/security-network.adoc
[id="security-network-egress_{context}"]
= Securing egress traffic
{product-title} provides the ability to control egress traffic using either
a router or firewall method. For example, you can use IP whitelisting to control
database access.
A cluster administrator can assign one or more egress IP addresses to a project
in an {product-title} SDN network provider.
Likewise, a cluster administrator can prevent egress traffic from
going outside of an {product-title} cluster using an egress firewall.
By assigning a fixed egress IP address, you can have all outgoing traffic
assigned to that IP address for a particular project.
With the egress firewall, you can prevent a pod from connecting to an
external network, prevent a pod from connecting to an internal network,
or limit a pod's access to specific internal subnets.
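For example, an egress firewall for a project can allow traffic to one internal subnet and deny everything else. The following sketch assumes the OpenShift SDN network provider; the subnet is an example:

----
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.168.100.0/24   # example internal subnet
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0          # deny all other egress traffic
----

Rules are evaluated in order, so the final `Deny` entry acts as a default for any destination not matched above it.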

// Module included in the following assemblies:
//
// * security/container_security/security-network.adoc
[id="security-network-ingress_{context}"]
= Securing ingress traffic
There are many security implications related to how you configure
access to your Kubernetes services from outside of your {product-title} cluster.
Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up
NodePort or LoadBalancer ingress types. A NodePort exposes an application's
service on a dedicated port on each cluster node. A LoadBalancer lets you assign an
external load balancer to the associated service API object
in your {product-title} cluster.
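For HTTP and HTTPS routes, a common hardening step is to terminate TLS at the router and redirect insecure traffic. A sketch of such a route follows; the host and service names are examples:

----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: secure-route                 # example name
spec:
  host: www.example.com              # example public host name
  to:
    kind: Service
    name: frontend                   # example service
  tls:
    termination: edge                # TLS is terminated at the router
    insecureEdgeTerminationPolicy: Redirect   # send HTTP clients to HTTPS
----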

// Module included in the following assemblies:
//
// * security/container_security/security-network.adoc
[id="security-network-isolating_{context}"]
= Isolating applications
{product-title} enables you to segment network traffic on a single cluster to
make multitenant clusters that isolate users, teams, applications, and
environments from non-global resources.

// Module included in the following assemblies:
//
// * security/container_security/security-network.adoc
[id="security-network-multiple-pod_{context}"]
= Using multiple pod networks
Each running container has only one network interface by default.
The Multus CNI plug-in lets you create multiple CNI networks, and then
attach any of those networks to your pods. In that way, you can do
things like separate private data onto a more restricted network
and have multiple network interfaces on each node.
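As a sketch, after a cluster administrator defines an additional network with a NetworkAttachmentDefinition, a pod can request that network through an annotation; the network and image names here are examples:

----
apiVersion: v1
kind: Pod
metadata:
  name: multinet-example             # example name
  annotations:
    k8s.v1.cni.cncf.io/networks: restricted-net   # example additional network
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:latest      # example image
----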

// Module included in the following assemblies:
//
// * security/container_security/security-network.adoc
[id="security-network-namespaces_{context}"]
= Using network namespaces
{product-title} uses software-defined networking (SDN) to provide a unified
cluster network that enables communication between containers across the
cluster.
Network policy mode, by default, makes all pods in a project accessible from
other pods and network endpoints.
To isolate one or more pods in a project, you can create NetworkPolicy objects
in that project to indicate the allowed incoming connections.
Using multitenant mode, you can provide project-level isolation for pods and services.

// Module included in the following assemblies:
//
// * security/container_security/security-network.adoc
[id="security-network-policies_{context}"]
= Isolating pods with network policies
Using _network policies_, you can isolate pods from each other in the same project.
Network policies can deny all network access to a pod,
only allow connections for the ingress controller, reject connections from
pods in other projects, or set similar rules for how networks behave.
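For example, a minimal deny-by-default policy blocks all inbound connections to every pod in a project; more specific policies can then be layered on top to allow selected traffic:

----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default              # example name
spec:
  podSelector: {}                    # select all pods in the project
  ingress: []                        # no rules: all inbound connections are denied
----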

// Module included in the following assemblies:
//
// * security/container_security/security-platform.adoc
[id="security-platform-admission_{context}"]
= Protecting control plane with admission plug-ins
While RBAC controls access rules between users and groups and available projects,
_admission plug-ins_ define access to the {product-title} master API.
Admission plug-ins form a chain of rules that consist of:
* Default admission plug-ins: These implement a default set of
policies and resource limits that are applied to components of the {product-title}
control plane.
* Mutating admission plug-ins: These plug-ins dynamically extend the admission chain.
They call out to a webhook server and can both authenticate a request and modify the selected resource.
* Validating admission plug-ins: These plug-ins validate requests for a selected resource
and can ensure that the resource does not change again.
API requests go through admissions plug-ins in a chain, with any failure along
the way causing the request to be rejected. Each admission plug-in is associated with particular resources and only
responds to requests for those resources.
[id="security-deployment-sccs_{context}"]
== Security Context Constraints (SCCs)
You can use _security context constraints_ (SCCs) to define a set of conditions
that a pod must run with in order to be accepted
into the system.
Some aspects that can be managed by SCCs include:
- Running of privileged containers
- Capabilities a container can request to be added
- Use of host directories as volumes
- SELinux context of the container
- Container user ID
If you have the required permissions, you can adjust the default SCC policies to
be more permissive, if required.
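A sketch of a custom SCC that expresses several of these conditions follows; the name is an example, and in practice you typically start from one of the default SCCs:

----
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: restricted-example           # example name
allowPrivilegedContainer: false      # no privileged containers
allowHostDirVolumePlugin: false      # no host directory volumes
runAsUser:
  type: MustRunAsRange               # user ID must fall in the project's range
seLinuxContext:
  type: MustRunAs                    # SELinux context assigned from the project
requiredDropCapabilities:
- KILL
- MKNOD
----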
[id="security-service-account_{context}"]
== Granting roles to service accounts
You can assign roles to service accounts, in the same way that
users are assigned role-based access.
There are three default service accounts created for each project.
A service account:
* is limited in scope to a particular project
* derives its name from its project
* is automatically assigned an API token and credentials to access the
OpenShift Container Registry
Service accounts associated with platform components automatically
have their keys rotated.
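For example, a RoleBinding can grant the default `view` cluster role to a service account within a single project; the project and service account names are examples:

----
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-view                      # example name
  namespace: my-project              # example project
subjects:
- kind: ServiceAccount
  name: monitor                      # example service account
  namespace: my-project
roleRef:
  kind: ClusterRole
  name: view                         # read-only access to most objects
  apiGroup: rbac.authorization.k8s.io
----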

// Module included in the following assemblies:
//
// * security/container_security/security-platform.adoc
[id="security-platform-authentication_{context}"]
= Authentication and authorization
[id="security-platform-auth-controlling-access_{context}"]
== Controlling access using OAuth
You can secure your container platform by controlling API access with
authentication and authorization. The {product-title} master includes a built-in OAuth
server. Users can obtain OAuth access tokens to authenticate themselves to the
API.
As an administrator, you can configure OAuth to authenticate using an _identity
provider_, such as LDAP, GitHub, or Google. You can configure an identity
provider at initial installation time or add one after installation.
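As a sketch, an identity provider is configured by updating the cluster OAuth resource; the LDAP details shown are placeholders:

----
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my-ldap-provider           # example provider name
    mappingMethod: claim
    type: LDAP
    ldap:
      url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"   # example URL
      insecure: false
      attributes:
        id:
        - dn
        preferredUsername:
        - uid
----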
[id="security-platform-api-access-control_{context}"]
== API access control and management
Applications can have multiple, independent API services which have different
endpoints that require management. {product-title} includes a containerized
version of the 3scale API gateway so that you can manage your APIs and control
access.
3scale gives you a variety of standard options for API authentication and
security, which can be used alone or in combination to issue credentials and
control access: standard API keys, application ID and key pair, and OAuth 2.0.
You can restrict access to specific endpoints, methods, and services and apply
access policy for groups of users. Application plans allow you to set rate
limits for API usage and control traffic flow for groups of developers.
For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see
link:https://support.3scale.net/docs/deployment-options/apicast-openshift[Running APIcast on Red Hat OpenShift]
in the 3scale documentation.
[id="security-platform-red-hat-sso_{context}"]
== Red Hat Single Sign-On
The Red Hat Single Sign-On server enables you to secure your
applications by providing web single sign-on capabilities based on standards, including
SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID
Connect-based identity provider (IdP), mediating with your enterprise user
directory or third-party identity provider for identity information and your
applications using standards-based tokens. You can integrate Red Hat Single Sign-On with
LDAP-based directory services including Microsoft Active Directory and Red Hat
Enterprise Linux Identity Management.
[id="security-platform-auth-secure-self-service-web-console_{context}"]
== Secure self-service web console
{product-title} provides a self-service web console to ensure that teams do not
access other environments without authorization. {product-title} ensures a
secure multitenant master by providing the following:
- Access to the master uses Transport Layer Security (TLS)
- Access to the API Server uses X.509 certificates or OAuth access tokens
- Project quota limits the damage that a rogue token could do
- The etcd service is not exposed directly to the cluster

// Module included in the following assemblies:
//
// * security/container_security/security-platform.adoc
[id="security-platform-certificates_{context}"]
= Managing certificates for the platform
{product-title} has multiple components within its framework that use REST-based
HTTPS communication leveraging encryption via TLS certificates.
{product-title}'s installer configures these certificates during
installation. There are some primary components that generate this traffic:
* masters (API server and controllers)
* etcd
* nodes
* registry
* router
[id="security-platform-config-custom-certs_{context}"]
== Configuring custom certificates
You can configure custom serving certificates for the public host names of the
API server and web console during initial installation or when redeploying
certificates. You can also use a custom CA.
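For example, a custom serving certificate for the API server's public host name can be referenced from the cluster APIServer resource; the host name and secret name are examples, and the secret must already exist in the `openshift-config` namespace:

----
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  servingCerts:
    namedCertificates:
    - names:
      - api.example.com              # example public host name
      servingCertificate:
        name: custom-api-cert        # example secret holding the certificate
----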

// Module included in the following assemblies:
//
// * security/container_security/security-platform.adoc
[id="security-platform-multi-tenancy_{context}"]
= Isolating containers with multitenancy
Multitenancy allows applications on an {product-title} cluster that are owned
by multiple users, and run across multiple hosts and namespaces,
to remain isolated from each other and from outside attacks.
You obtain multitenancy by applying role-based access control (RBAC)
to Kubernetes namespaces.
In Kubernetes, _namespaces_ are areas where applications can run
in ways that are separate from other applications.
{product-title} uses and extends namespaces by adding extra
annotations, including MCS labeling in SELinux, and identifying
these extended namespaces as _projects_. Within the scope of
a project, users can maintain their own cluster resources,
including service accounts, policies, constraints,
and various other objects.
RBAC objects are assigned to projects to authorize selected users
to have access to those projects. That authorization takes the form
of rules, roles, and bindings:
* Rules define what a user can create or access in a project.
* Roles are collections of rules that you can bind to selected users or groups.
* Bindings define the association between users or groups and roles.
Local RBAC roles and bindings attach a user or group to a
particular project. Cluster RBAC can attach cluster-wide roles and bindings
to all projects in a cluster. There are default
cluster roles that can be assigned to provide `admin`, `basic-user`, `cluster-admin`,
and `cluster-status` access.
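The rule, role, and binding relationship can be sketched with a local role that allows reading pods, bound to a single user; all names here are examples:

----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader                   # example role: a collection of rules
  namespace: my-project              # example project
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]    # read-only access to pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods                    # example binding
  namespace: my-project
subjects:
- kind: User
  name: alice                        # example user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
----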

// Module included in the following assemblies:
//
// * security/container_security/security-registries.adoc
[id="security-registries-ecosystem_{context}"]
= Getting containers from Red Hat Registry and Ecosystem Catalog
Red Hat lists certified container images for Red Hat products and partner offerings from the
link:https://catalog.redhat.com/software/containers/explore[Container Images]
section of the Red Hat Ecosystem Catalog. From that catalog,
you can see details of each image, including CVE, software packages listings, and health
scores.
Red Hat images are actually stored in what is referred to as the _Red Hat Registry_,
which is represented by a public container registry (`registry.access.redhat.com`)
and an authenticated registry (`registry.redhat.io`).
Both include basically the same set of container images, with
`registry.redhat.io` including some additional images that require authentication
with Red Hat subscription credentials.
Container content is monitored for vulnerabilities by Red Hat and updated
regularly. When Red Hat releases security updates, such as fixes to _glibc_,
link:https://access.redhat.com/security/vulnerabilities/drown[DROWN], or
link:https://access.redhat.com/blogs/766093/posts/2757141[Dirty Cow],
any affected container images are also rebuilt and pushed
to the Red Hat Registry.
Red Hat uses a _Health Index_ to reflect the security risk for each container provided through
the Red Hat Ecosystem Catalog. Because containers consume software provided by Red
Hat and the errata process, older, stale containers are less secure, whereas newer,
fresh containers are more secure.
To illustrate the age of containers, the Red Hat Container Catalog uses a
grading system. A freshness grade is a measure of the oldest and most severe
security errata available for an image. "A" is more up to date than "F". See
link:https://access.redhat.com/articles/2803031[Container Health Index grades as used inside the Red Hat Container Catalog] for more details on this grading system.
Refer to the link:https://access.redhat.com/security/[Red Hat Product Security Center]
for details on security updates and vulnerabilities related to Red Hat software.
Check out link:https://access.redhat.com/security/security-updates/#/security-advisories[Red Hat Security Advisories]
to search for specific advisories and CVEs.

// Module included in the following assemblies:
//
// * security/container_security/security-registries.adoc
[id="security-registries-immutable_{context}"]
= Immutable and certified containers
Consuming security updates is particularly important when managing _immutable
containers_. Immutable containers are containers that will never be changed
while running. When you deploy immutable containers, you do not step into the
running container to replace one or more binaries. From an operational
standpoint, you rebuild and redeploy an updated container image
to replace a container instead of changing it.
Red Hat certified images are:
* Free of known vulnerabilities in the platform components or layers
* Compatible across the {op-system-base} platforms, from bare metal to cloud
* Supported by Red Hat
The list of known vulnerabilities is constantly evolving, so you must track the
contents of your deployed container images, as well as newly downloaded images,
over time. You can use
link:https://access.redhat.com/security/security-updates/#/security-advisories[Red Hat Security Advisories (RHSAs)]
to alert you to any newly discovered issues in
Red Hat certified container images, and direct you to the updated image.
Alternatively, you can go to the Red Hat Ecosystem Catalog
to look up that and other security-related issues for each Red Hat image.

// Module included in the following assemblies:
//
// * security/container_security/security-registries.adoc
[id="security-registries-openshift_{context}"]
= OpenShift Container Registry
{product-title} includes the _OpenShift Container Registry_, a private registry
running as an integrated component of the platform that you can use to manage your container
images. The OpenShift Container Registry provides role-based access controls
that allow you to manage who can pull and push which container images.
{product-title} also supports integration with other private registries that you might
already be using, such as Red Hat Quay.

// Module included in the following assemblies:
//
// * security/container_security/security-registries.adoc
[id="security-registries-quay_{context}"]
= Storing containers using Red Hat Quay
link:https://access.redhat.com/products/red-hat-quay[Red Hat Quay] is an
enterprise-quality container registry product from Red Hat.
Development for Red Hat Quay is done through the upstream
link:https://docs.projectquay.io/welcome.html[Project Quay].
Red Hat Quay is available to deploy on-premise or through the hosted
version of Red Hat Quay at link:https://quay.io[Quay.io].
Security-related features of Red Hat Quay include:
* *Time Machine*: Allows images with older tags to expire after a set
period of time or based on a user-selected expiration time.
* *link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#repo-mirroring-in-red-hat-quay[Repository mirroring]*: Lets you mirror
other registries for security reasons, such as hosting a public repository
on Red Hat Quay behind a company firewall, or for performance reasons, to
keep registries closer to where they are used.
* *Action log storage*: Save Red Hat Quay logging output to link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#proc_manage-log-storage[Elasticsearch storage]
to allow for later search and analysis.
* *link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#quay-security-scanner[Clair security scanning]*: Scan images against a variety of Linux
vulnerability databases, based on the origins of each container image.
* *Internal authentication*: Use the default local database to handle RBAC
authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack),
JWT Custom Authentication, or External Application Token authentication.
* *External authorization (OAuth)*: Allow authorization to Red Hat Quay
from GitHub, GitHub Enterprise, or Google Authentication.
* *Access settings*: Generate tokens to allow access to Red Hat Quay
from docker, rkt, anonymous access, user-created accounts, encrypted
client passwords, or prefix username autocompletion.
Ongoing integration of Red Hat Quay with {product-title} continues,
with several {product-title} Operators of particular interest.
The link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#quay-bridge-operator[Quay Bridge Operator]
lets you replace the internal {product-title} registry with Red Hat Quay.
The link:https://access.redhat.com/documentation/en-us/red_hat_quay/3/html-single/manage_red_hat_quay/index#container-security-operator-setup[Quay Container Security Operator]
lets you check vulnerabilities of images running in {product-title} that were
pulled from Red Hat Quay registries.


@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * security/container_security/security-registries.adoc
[id="security-registries-where_{context}"]
= Knowing where containers come from
There are tools you can use to scan and track the contents of your downloaded
and deployed container images. However, there are many public sources of
container images. When using public container registries, you can add a layer of
protection by using trusted sources.
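
For illustration, on a host that uses the `containers-registries.conf` file, you could restrict image pulls to trusted sources. This is a minimal sketch; the registry list and blocked entry are examples that you would adapt to your organization's policy:

[source,toml]
----
# Search only trusted registries for unqualified image names
unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io"]

# Example: block pulls from an untrusted public registry
[[registry]]
location = "docker.io"
blocked = true
----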


@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * security/container_security/security-storage.adoc
[id="security-network-storage-block_{context}"]
= Block storage
For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent
Disks, and iSCSI, {product-title} uses SELinux capabilities to secure the root
of the mounted volume for non-privileged pods, making the mounted volume owned
by and only visible to the container with which it is associated.


@@ -0,0 +1,40 @@
// Module included in the following assemblies:
//
// * security/container_security/security-storage.adoc
[id="security-network-storage-persistent_{context}"]
= Persistent volume plug-ins
Containers are useful for both stateless and stateful applications.
Protecting attached storage is a key element of securing stateful services.
Using the Container Storage Interface (CSI), {product-title} can
incorporate storage from any storage back end that supports the CSI interface.
{product-title} provides plug-ins for multiple types of storage, including:
* Red Hat OpenShift Container Storage *
* AWS Elastic Block Stores (EBS) *
* AWS Elastic File System (EFS) *
* Azure Disk *
* Azure File *
* OpenStack Cinder *
* GCE Persistent Disks *
* VMware vSphere *
* Network File System (NFS)
* FlexVolume
* Fibre Channel
* iSCSI
Plug-ins for those storage types with dynamic provisioning are marked with
an asterisk (*). Data in transit is encrypted via HTTPS for all
{product-title} components communicating with each other.
You can mount a `PersistentVolume` (PV) on a host in any way supported by your
storage type. Different types of storage have different capabilities and each
PV's access modes are set to the specific modes supported by that particular
volume.
For example, NFS can support multiple read/write clients, but a specific NFS PV
might be exported on the server as read-only. Each PV has its own set of access
modes describing that specific PV's capabilities, such as `ReadWriteOnce`,
`ReadOnlyMany`, and `ReadWriteMany`.
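
For example, an NFS-backed PV that the server exports read-only might declare only the `ReadOnlyMany` access mode. This is a sketch; the name, server, and path are placeholders:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-readonly-pv            # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadOnlyMany                   # matches the read-only export on the server
  nfs:
    server: nfs.example.com        # hypothetical NFS server
    path: /exports/data
----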


@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * security/container_security/security-storage.adoc
[id="security-network-storage-shared_{context}"]
= Shared storage
For shared storage providers like NFS, the PV registers its
group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed
by the pod, the annotated GID is added to the supplemental groups of the pod,
giving that pod access to the contents of the shared storage.
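
As a sketch, the GID annotation on a shared-storage PV looks like the following; the name, GID value, server, and path are placeholders:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv                     # hypothetical name
  annotations:
    pv.beta.kubernetes.io/gid: "5555"     # GID added to the claiming pod's supplemental groups
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com               # hypothetical NFS server
    path: /exports/shared
----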


@@ -0,0 +1,36 @@
// Module included in the following assemblies:
//
// * security/container_security/security-understanding.adoc
[id="security-understanding-containers_{context}"]
= What are containers?
Containers package an application and all its dependencies into a single image
that can be promoted from development, to test, to production, without change.
A container might be part of a larger application that works closely with other
containers.
Containers provide consistency across environments and multiple deployment
targets: physical servers, virtual machines (VMs), and private or public cloud.
Some of the benefits of using containers include:
// image::whatarecontainers.png["What Are Containers?", align="center"]
[options="header"]
|===
|Infrastructure |Applications
|Sandboxed application processes on a shared Linux operating system kernel
|Package my application and all of its dependencies
|Simpler, lighter, and denser than virtual machines
|Deploy to any environment in seconds and enable CI/CD
|Portable across different environments
|Easily access and share containerized components
|===
See link:https://www.redhat.com/en/topics/containers[Understanding Linux containers] from the Red Hat customer portal
to find out more about Linux containers. To learn about RHEL container tools, see
link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/building_running_and_managing_containers/index[Building, running, and managing containers] in the RHEL product documentation.


@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * security/container_security/security-understanding.adoc
[id="security-understanding-openshift_{context}"]
= What is {product-title}?
Automating how containerized applications are deployed, run, and managed is the
job of a platform such as {product-title}. At its core, {product-title} relies
on the Kubernetes project to provide the engine for orchestrating containers
across many nodes in scalable data centers.
Kubernetes is an open source project that can run using different operating
systems and add-on components, which offer no guarantees of supportability
from the project. As a result, the security of different Kubernetes platforms
can vary.
{product-title} is designed to lock down Kubernetes security and integrate
the platform with a variety of extended components. To do this,
{product-title} draws on the extensive Red Hat ecosystem of open source
technologies that include the operating systems, authentication, storage,
networking, development tools, base container images, and many other
components.
{product-title} can leverage Red Hat's experience in uncovering
and rapidly deploying fixes for vulnerabilities in the platform itself
as well as the containerized applications running on the platform.
Red Hat's experience also extends to efficiently integrating new
components with {product-title} as they become available and
adapting technologies to individual customer needs.


@@ -0,0 +1 @@
../../images/


@@ -0,0 +1 @@
../../modules/


@@ -0,0 +1,35 @@
[id="security-build"]
= Securing the build process
include::modules/common-attributes.adoc[]
:context: security-build
toc::[]
In a container environment, the software build process is the stage in the life
cycle where application code is integrated with the required runtime libraries.
Managing this build process is key to securing the software stack.
// Build once, deploy everywhere
include::modules/security-build-once.adoc[leveloffset=+1]
// Build management and security
include::modules/security-build-management.adoc[leveloffset=+1]
// Securing inputs during builds
include::modules/security-build-inputs.adoc[leveloffset=+1]
// Designing your build process
include::modules/security-build-designing.adoc[leveloffset=+1]
// Knative builds
include::modules/security-build-knative.adoc[leveloffset=+1]
.Additional resources
* xref:../../builds/understanding-image-builds.adoc#understanding-image-builds[Understanding image builds]
* xref:../../builds/triggering-builds-build-hooks.adoc#triggering-builds-build-hooks[Triggering and modifying builds]
* xref:../../builds/creating-build-inputs.adoc#creating-build-inputs[Creating build inputs]
* xref:../../builds/creating-build-inputs.adoc#builds-input-secrets-configmaps_creating-build-inputs[Input Secrets and ConfigMaps]
ifndef::openshift-origin[]
* xref:../../architecture/cicd_gitops.adoc#cicd_gitops[The CI/CD methodology and practice]
* xref:../../serverless/knative_serving/serverless-knative-serving.adoc#serverless-knative-serving[How Knative Serving works]
endif::[]
* xref:../../applications/application_life_cycle_management/odc-viewing-application-composition-using-topology-view.adoc#odc-viewing-application-composition-using-topology-view[Viewing application composition using the Topology view]


@@ -0,0 +1,18 @@
[id="security-compliance"]
= Understanding compliance
include::modules/common-attributes.adoc[]
:context: security-compliance
toc::[]
For many {product-title} customers, some level of regulatory readiness, or
compliance, is required before any systems can be put into production.
That regulatory readiness can be imposed by national standards, industry
standards, or the organization's corporate governance framework.
// Compliance and the NIST risk management model
include::modules/security-compliance-nist.adoc[leveloffset=+1]
ifndef::openshift-origin[]
.Additional resources
* xref:../../installing/installing-fips.adoc#installing-fips-mode_installing-fips[Installing a cluster in FIPS mode]
endif::[]


@@ -0,0 +1,27 @@
[id="security-container-content"]
= Securing container content
include::modules/common-attributes.adoc[]
:context: security-container-content
toc::[]
To ensure the security of the content inside your containers
you need to start with trusted base images, such as Red Hat
Universal Base Images, and add trusted software. To check the
ongoing security of your container images, there are both
Red Hat and third-party tools for scanning images.
// Security inside the container
include::modules/security-container-content-inside.adoc[leveloffset=+1]
// Red Hat Universal Base Images
include::modules/security-container-content-universal.adoc[leveloffset=+1]
// Container content scanning
include::modules/security-container-content-scanning.adoc[leveloffset=+1]
// Integrating external scanning tools with OpenShift
include::modules/security-container-content-external-scanning.adoc[leveloffset=+1]
.Additional resources
* xref:../../openshift_images/images-understand.adoc#images-imagestream-use_images-understand[Image stream objects]
* xref:../../rest_api/index.adoc#rest-api[{product-title} {product-version} REST APIs]


@@ -0,0 +1,29 @@
[id="security-deploy"]
= Deploying containers
include::modules/common-attributes.adoc[]
:context: security-deploy
toc::[]
You can use a variety of techniques to make sure that the containers you
deploy hold the latest production-quality content and that they have not
been tampered with. These techniques include setting up build triggers to
incorporate the latest code and using signatures to ensure that the container
comes from a trusted source and has not been modified.
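
As a sketch of the trigger technique, a `DeploymentConfig` can redeploy automatically whenever a new image is pushed to an image stream tag; the application and image stream names here are hypothetical:

[source,yaml]
----
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp                    # hypothetical application name
spec:
  replicas: 1
  # pod template omitted for brevity
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true            # redeploy when the tag gets a new image
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest       # hypothetical image stream tag
----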
// Controlling container deployments with triggers
include::modules/security-deploy-trigger.adoc[leveloffset=+1]
// Controlling what image sources can be deployed
include::modules/security-deploy-image-sources.adoc[leveloffset=+1]
// Signature transports
include::modules/security-deploy-signature.adoc[leveloffset=+1]
// Secrets and ConfigMaps
include::modules/security-deploy-secrets.adoc[leveloffset=+1]
// Continuous deployment tooling
include::modules/security-deploy-continuous.adoc[leveloffset=+1]
.Additional resources
* xref:../../builds/creating-build-inputs.adoc#builds-input-secrets-configmaps_creating-build-inputs[Input Secrets and ConfigMaps]


@@ -0,0 +1,47 @@
[id="security-hardening"]
= Hardening {op-system}
include::modules/common-attributes.adoc[]
:context: security-hardening
toc::[]
{op-system} was created and tuned to be deployed in {product-title} with
few, if any, changes needed to {op-system} nodes.
Every organization adopting {product-title} has its own requirements for
system hardening. As a {op-system-base} system with OpenShift-specific modifications and
features added (such as Ignition, ostree, and a read-only `/usr` to provide
limited immutability),
{op-system} can be hardened just as you would any {op-system-base} system.
Differences lie in the ways you manage the hardening.
A key feature of {product-title} and its Kubernetes engine is the ability
to quickly scale applications and infrastructure up and down as needed.
Unless it is unavoidable, you do not want to make direct changes to {op-system} by
logging into a host and adding software or changing settings. You want
to have the {product-title} installer and control plane manage changes
to {op-system} so new nodes can be spun up without manual intervention.
So, if you are setting out to harden {op-system} nodes in {product-title} to meet
your security needs, you should consider both what to harden
and how to go about doing that hardening.
// Choosing what to harden in {op-system}
include::modules/security-hardening-what.adoc[leveloffset=+1]
// Choosing how to harden {op-system}
include::modules/security-hardening-how.adoc[leveloffset=+1]
.Additional resources
* link:https://access.redhat.com/articles/5059881[OpenShift Security Guide]
* xref:../../architecture/architecture-rhcos.adoc#rhcos-deployed_architecture-rhcos[Choosing how to configure {op-system}]
* xref:../../nodes/nodes/nodes-nodes-managing.adoc#nodes-nodes-managing[Modifying Nodes]
* xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installation-initializing-manual_installing-bare-metal[Manually creating the installation configuration file]
* xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installation-user-infra-generate-k8s-manifest-ignition_installing-bare-metal[Creating the Kubernetes manifest and Ignition config files]
* xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installation-user-infra-machines-iso_installing-bare-metal[Creating {op-system-first} machines using an ISO image]
* xref:../../installing/install_config/installing-customizing.adoc#installing-customizing[Customizing nodes]
* xref:../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-kernel-arguments_nodes-nodes-working[Adding kernel arguments to Nodes]
* xref:../../installing/installing_aws/installing-aws-customizations.adoc#installation-configuration-parameters_installing-aws-customizations[Installation configuration parameters] - see `fips`
ifndef::openshift-origin[]
* xref:../../installing/installing-fips.adoc#installing-fips[Support for FIPS cryptography]
* link:https://access.redhat.com/articles/3359851[{op-system-base} core crypto components]
endif::[]


@@ -0,0 +1,37 @@
[id="security-hosts-vms"]
= Understanding host and VM security
include::modules/common-attributes.adoc[]
:context: security-hosts-vms
toc::[]
Both containers and virtual machines provide ways of separating
applications running on a host from the operating system itself.
Understanding {op-system}, which is the operating system used by
{product-title}, will help you see how the host
systems protect containers and hosts from each other.
// How containers are secured on {op-system}
include::modules/security-hosts-vms-rhcos.adoc[leveloffset=+1]
.Additional resources
* xref:../../nodes/nodes/nodes-nodes-resources-configuring.adoc#allocate-node-enforcement_nodes-nodes-resources-configuring[How nodes enforce resource constraints]
* xref:../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing Security Context Constraints]
* xref:../../architecture/architecture-installation.adoc#available-platforms_architecture-installation[Available platforms]
* xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installation-requirements-user-infra_installing-bare-metal[Machine requirements for a cluster with user-provisioned infrastructure]
* xref:../../architecture/architecture-rhcos.adoc#rhcos-configured_architecture-rhcos[Choosing how to configure {op-system}]
* xref:../../architecture/architecture-rhcos.adoc#rhcos-about-ignition_architecture-rhcos[Ignition]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-kargs_installing-customizing[Kernel arguments]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-kmod_installing-customizing[Kernel modules]
ifndef::openshift-origin[]
* xref:../../installing/installing-fips.adoc#installing-fips[FIPS cryptography]
endif::[]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-encrypt-disk_installing-customizing[Disk encryption]
* xref:../../installing/install_config/installing-customizing.adoc#installation-special-config-crony_installing-customizing[Chrony time service]
* xref:../../updating/updating-cluster-between-minor.adoc#update-service-overview_updating-cluster-between-minor[{product-title} cluster updates]
// Virtualization versus containers
include::modules/security-hosts-vms-vs-containers.adoc[leveloffset=+1]
// Securing OpenShift
include::modules/security-hosts-vms-openshift.adoc[leveloffset=+1]


@@ -0,0 +1,27 @@
[id="security-monitoring"]
= Monitoring cluster events and logs
include::modules/common-attributes.adoc[]
:context: security-monitoring
toc::[]
The ability to monitor and audit an {product-title} cluster is an
important part of safeguarding the cluster and its users against
inappropriate usage.
There are two main sources of cluster-level information that
are useful for this purpose: events and logging.
// Cluster events
include::modules/security-monitoring-events.adoc[leveloffset=+1]
// Logging
include::modules/security-monitoring-cluster-logging.adoc[leveloffset=+1]
// Audit logging
include::modules/security-monitoring-audit-logging.adoc[leveloffset=+1]
.Additional resources
* xref:../../nodes/clusters/nodes-containers-events.adoc#nodes-containers-events[List of system events]
* xref:../../logging/cluster-logging.adoc#cluster-logging[Understanding cluster logging]
* xref:../../nodes/nodes/nodes-nodes-audit-log.adoc#nodes-nodes-audit-log[Viewing node audit logs]


@@ -0,0 +1,42 @@
[id="security-network"]
= Securing networks
include::modules/common-attributes.adoc[]
:context: security-network
toc::[]
Network security can be managed at several levels. At the pod level,
network namespaces can prevent containers from seeing other pods or
the host system by restricting network access. Network policies
give you control over allowing and rejecting connections.
You can manage ingress and egress traffic to and from your
containerized applications.
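
For example, a common starting point is a deny-by-default policy that blocks all inbound traffic to pods in a project; this is a minimal sketch, and the policy name is a placeholder:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-by-default          # hypothetical policy name
spec:
  podSelector: {}                # empty selector: applies to all pods in the project
  ingress: []                    # no ingress rules: all inbound traffic is denied
----

You would then add more permissive policies on top of this baseline to allow only the connections your applications require.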
// Network namespaces
include::modules/security-network-namespaces.adoc[leveloffset=+1]
// Network policies
include::modules/security-network-policies.adoc[leveloffset=+1]
.Additional resources
* xref:../../networking/configuring-networkpolicy.adoc#configuring-networkpolicy[Configuring network policy]
// Multiple pod networks
include::modules/security-network-multiple-pod.adoc[leveloffset=+1]
.Additional resources
* xref:../../networking/multiple_networks/understanding-multiple-networks.adoc#understanding-multiple-networks[Using multiple networks]
// Isolating applications
include::modules/security-network-isolating.adoc[leveloffset=+1]
.Additional resources
* xref:../../networking/openshift_sdn/multitenant-isolation.adoc#configuring-multitenant-isolation[Configuring network isolation using OpenShiftSDN]
// Ingress traffic
include::modules/security-network-ingress.adoc[leveloffset=+1]
.Additional resources
* xref:../../networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.adoc#configuring-ingress-cluster-traffic-ingress-controller[Configuring ingress cluster traffic]
// Egress traffic
include::modules/security-network-egress.adoc[leveloffset=+1]
.Additional resources
* xref:../../networking/openshift_sdn/configuring-egress-firewall.adoc#configuring-egress-firewall[Configuring an egress firewall to control access to external IP addresses]
* xref:../../networking/openshift_sdn/assigning-egress-ips.adoc#assigning-egress-ips[Configuring egress IPs for a project]


@@ -0,0 +1,50 @@
[id="security-platform"]
= Securing the container platform
include::modules/common-attributes.adoc[]
:context: security-platform
toc::[]
{product-title} and Kubernetes APIs are key to automating container management at scale. APIs are used to:
* Validate and configure the data for pods, services, and replication controllers.
* Perform project validation on incoming requests and invoke triggers on other
major system components.
Security-related features in {product-title} that are based on Kubernetes include:
* Multitenancy, which combines Role-Based Access Controls and network policies
to isolate containers at multiple levels.
* Admission plug-ins, which form boundaries between an API and those
making requests to the API.
{product-title} uses Operators to automate and simplify the management of
Kubernetes-level security features.
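
As a sketch of the RBAC side of multitenancy, a `Role` and `RoleBinding` can grant a single user read-only access to pods in one project; the project and user names are hypothetical:

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-project          # hypothetical project
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-project
subjects:
- kind: User
  name: alice                    # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
----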
// Multitenancy
include::modules/security-platform-multi-tenancy.adoc[leveloffset=+1]
// Admission plug-ins
include::modules/security-platform-admission.adoc[leveloffset=+1]
// Authentication and authorization
include::modules/security-platform-authentication.adoc[leveloffset=+1]
// Managing certificates for the platform
include::modules/security-platform-certificates.adoc[leveloffset=+1]
.Additional resources
* xref:../../architecture/architecture.adoc#architecture-platform-introduction_architecture[Introduction to {product-title}]
* xref:../../authentication/using-rbac.adoc#using-rbac[Using RBAC to define and apply permissions]
ifndef::openshift-origin[]
* xref:../../architecture/admission-plug-ins.adoc#admission-plug-ins[About Admission plug-ins]
endif::[]
* xref:../../authentication/managing-security-context-constraints.adoc#managing-pod-security-policies[Managing Security Context Constraints]
* xref:../../authentication/managing-security-context-constraints.adoc#security-context-constraints-command-reference_configuring-internal-oauth[SCC reference commands]
* xref:../../authentication/understanding-and-creating-service-accounts.adoc#service-accounts-granting-roles_understanding-service-accounts[Examples of granting roles to service accounts]
* xref:../../authentication/configuring-internal-oauth.adoc#configuring-internal-oauth[Configuring the internal OAuth server]
* xref:../../authentication/understanding-identity-provider.adoc#understanding-identity-provider[Understanding identity provider configuration]
* xref:../../security/certificate-types-descriptions.adoc#ocp-certificates[Certificate types and descriptions]
* xref:../../security/certificate-types-descriptions.adoc#proxy-certificates_ocp-certificates[Proxy certificates]


@@ -0,0 +1,47 @@
[id="security-registries"]
= Using container registries securely
include::modules/common-attributes.adoc[]
:context: security-registries
toc::[]
Container registries store container images to:
* Make images accessible to others
* Organize images into repositories that can include multiple versions of an image
* Optionally limit access to images, based on different authentication methods, or
make them publicly available
There are public container registries, such as Quay.io and Docker Hub
where many people and organizations share their images.
The Red Hat Registry offers supported Red Hat and partner images,
while the Red Hat Ecosystem Catalog offers detailed descriptions
and health checks for those images.
To manage your own registry, you could purchase a container
registry such as
link:https://access.redhat.com/products/red-hat-quay[Red Hat Quay].
From a security standpoint, some registries provide special features to
check and improve the health of your containers.
For example, Red Hat Quay offers container vulnerability scanning
with Clair security scanner, build triggers to automatically rebuild
images when source code changes in GitHub and other locations, and
the ability to use role-based access control (RBAC) to
secure access to images.
// Where do your containers come from?
include::modules/security-registries-where.adoc[leveloffset=+1]
// Immutable and certified containers
include::modules/security-registries-immutable.adoc[leveloffset=+1]
// Red Hat Registry and Red Hat Ecosystem (Container) Catalog
include::modules/security-registries-ecosystem.adoc[leveloffset=+1]
// OpenShift Container Registry
include::modules/security-registries-openshift.adoc[leveloffset=+1]
.Additional resources
* xref:../../registry/architecture-component-imageregistry.adoc#architecture-component-imageregistry[Integrated {product-title} Registry]
// Quay Container Registry
include::modules/security-registries-quay.adoc[leveloffset=+1]


@@ -0,0 +1,28 @@
[id="security-storage"]
= Securing attached storage
include::modules/common-attributes.adoc[]
:context: security-storage
toc::[]
{product-title} supports multiple types of storage, both
for on-premise and cloud providers. In particular,
{product-title} can use storage types that support the Container
Storage Interface.
// Persistent volume plug-ins
include::modules/security-storage-persistent.adoc[leveloffset=+1]
// Shared storage
include::modules/security-storage-shared.adoc[leveloffset=+1]
// Block storage
include::modules/security-storage-block.adoc[leveloffset=+1]
.Additional resources
* xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[Understanding persistent storage]
* xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-using-csi[Configuring CSI volumes]
* xref:../../storage/dynamic-provisioning.adoc#dynamic-provisioning[Dynamic provisioning]
* xref:../../storage/persistent_storage/persistent-storage-nfs.adoc#persistent-storage-using-nfs[Persistent storage using NFS]
* xref:../../storage/persistent_storage/persistent-storage-aws.adoc#persistent-storage-using-aws-ebs[Persistent storage using AWS Elastic Block Store]
* xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk]


@@ -0,0 +1,59 @@
[id="security-understanding"]
= Understanding container security
include::modules/common-attributes.adoc[]
:context: security-understanding
toc::[]
Securing a containerized application relies on multiple levels of security:
* Container security begins with a trusted base container image and continues
through the container build process as it moves through your CI/CD pipeline.
* When a container is deployed, its security depends on it running
on secure operating systems and networks, and
establishing firm boundaries between the container itself and
the users and hosts that interact with it.
* Continued security relies on being able to scan container images for
vulnerabilities and having an efficient way to correct and
replace vulnerable images.
Beyond what a platform such as {product-title} offers out of the box,
your organization will likely have its own security demands. Some level
of compliance verification might be needed before you can even bring
{product-title} into your data center.
Likewise, you might need to add your own agents, specialized hardware drivers,
or encryption features to {product-title} before it can meet your
organization's security standards.
This guide provides a high-level walkthrough of the container security measures
available in {product-title}, including solutions for the host layer, the
container and orchestration layer, and the build and application layer.
It then points you to specific {product-title} documentation to
help you achieve those security measures.
This guide contains the following information:
* Why container security is important and how it compares with existing security standards.
* Which container security measures are provided by the host ({op-system} and {op-system-base}) layer and
which are provided by {product-title}.
* How to evaluate your container content and sources for vulnerabilities.
* How to design your build and deployment process to proactively check container content.
* How to control access to containers through authentication and authorization.
* How networking and attached storage are secured in {product-title}.
* Containerized solutions for API management and SSO.
The goal of this guide is to help you understand the security benefits of
using {product-title} for your containerized workloads and how the entire
Red Hat ecosystem plays a part in making and keeping containers secure.
It also helps you understand how you can engage with {product-title}
to achieve your organization's security goals.
// What are containers?
include::modules/security-understanding-containers.adoc[leveloffset=+1]
// What is OpenShift?
include::modules/security-understanding-openshift.adoc[leveloffset=+1]
.Additional resources
* xref:../../architecture/architecture.adoc#architecture[{product-title} architecture]
* link:https://access.redhat.com/articles/5059881[OpenShift Security Guide]