
[enterprise-4.5] OKD: fix docs build and update OKD documentation

Authored by Vadim Rutkovsky on 2020-03-28 12:02:37 +01:00
Committed by Michael Burke
parent 4f5e9f4bb9
commit 78d2c1ddac
12 changed files with 332 additions and 24 deletions

View File

@@ -66,7 +66,7 @@ Topics:
 ---
 Name: Release notes
 Dir: release_notes
-Distros: openshift-enterprise,openshift-webscale
+Distros: openshift-enterprise,openshift-webscale,openshift-origin
 Topics:
 - Name: OpenShift Container Platform 4.5 release notes
   File: ocp-4-5-release-notes
@@ -87,9 +87,12 @@ Topics:
   Distros: openshift-enterprise,openshift-webscale,openshift-origin,openshift-dedicated,openshift-online
 - Name: Understanding OpenShift development
   File: understanding-development
+- Name: Fedora CoreOS
+  File: architecture-rhcos
+  Distros: openshift-origin
 - Name: Red Hat Enterprise Linux CoreOS
   File: architecture-rhcos
-  Distros: openshift-enterprise,openshift-webscale,openshift-origin
+  Distros: openshift-enterprise,openshift-webscale
 - Name: The CI/CD methodology and practice
   File: cicd_gitops
   Distros: openshift-enterprise,openshift-webscale
@@ -268,6 +271,7 @@ Topics:
     File: updating-cluster-cli
   - Name: Updating a cluster that includes RHEL compute machines
     File: updating-cluster-rhel-compute
+    Distros: openshift-enterprise,openshift-webscale
 #- Name: Updating a disconnected cluster
 #  File: updating-disconnected-cluster
 # - Name: Troubleshooting an update
@@ -924,8 +928,10 @@ Topics:
   File: creating-infrastructure-machinesets
 - Name: Adding a RHEL compute machine
   File: adding-rhel-compute
+  Distros: openshift-enterprise,openshift-webscale
 - Name: Adding more RHEL compute machines
   File: more-rhel-compute
+  Distros: openshift-enterprise,openshift-webscale
 - Name: Deploying machine health checks
   File: deploying-machine-health-checks
 ---
@@ -1355,7 +1361,7 @@ Topics:
     File: odo-release-notes
 - Name: Helm CLI
   Dir: helm_cli
-  Distros: openshift-enterprise,openshift-webscale
+  Distros: openshift-enterprise,openshift-webscale,openshift-origin
   Topics:
   - Name: Getting started with Helm on OpenShift Container Platform
     File: getting-started-with-helm-on-openshift-container-platform

View File

@@ -16,7 +16,7 @@ image::odc_add_view.png[Add View]
 * *YAML*: Use the editor to add YAML or JSON definitions to create and modify resources.
 * *Database*: See the *Developer Catalog* to select the required database service and add it to your application.
-ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
+ifdef::openshift-enterprise,openshift-webscale[]
 [NOTE]
 ====
 Serverless options in the *Developer* perspective are displayed only if the xref:../../serverless/installing_serverless/installing-openshift-serverless.adoc#serverless-install-web-console_installing-openshift-serverless[*OpenShift Serverless Operator*] is installed in your cluster.
@@ -30,7 +30,7 @@ To create applications using the *Developer* perspective ensure that:
 * You are in the xref:../../web_console/odc-about-developer-perspective.adoc#odc-about-developer-perspective[*Developer* perspective].
 * You have the appropriate xref:../../authentication/using-rbac.adoc#default-roles_using-rbac[roles and permissions] in a project to create applications and other workloads in {product-title}.
-ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
+ifdef::openshift-enterprise,openshift-webscale[]
 To create serverless applications, in addition to the preceding prerequisites, ensure that:

View File

@@ -72,11 +72,11 @@ The following diagram displays the process of building and pushing an image:
 .Create a simple containerized application and push it to a registry
 image::create-push-app.png[Creating and pushing a containerized application]
-If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating
+If you use a computer that runs {op-system-base-full} as the operating
 system, the process of creating a containerized application requires the
 following steps:
-. Install container build tools: RHEL contains a set of tools that includes
+. Install container build tools: {op-system-base} contains a set of tools that includes
 podman, buildah, and skopeo that you use to build and manage containers.
 . Create a Dockerfile to combine base image and software: Information about
 building your container goes into a file that is named `Dockerfile`. In that
@@ -84,7 +84,7 @@ file, you identify the base image you build from, the software packages you
 install, and the software you copy into the container. You also identify
 parameter values like network ports that you expose outside the container and
 volumes that you mount inside the container. Put your Dockerfile and the
-software you want to containerize in a directory on your RHEL system.
+software you want to containerize in a directory on your {op-system-base} system.
 . Run buildah or docker build: Run the `buildah build-using-dockerfile` or
 the `docker build` command to pull your chosen base image to the local system and
 create a container image that is stored locally. You can also build container
@@ -110,7 +110,7 @@ endif::openshift-origin,openshift-enterprise,openshift-webscale[]
 === Container build tool options
 While the Docker Container Engine and `docker` command are popular tools
-to work with containers, with RHEL and many other Linux systems, you can
+to work with containers, with {op-system-base} and many other Linux systems, you can
 instead choose a different set of container tools that includes podman, skopeo,
 and buildah. You can still use Docker Container Engine tools to create
 containers that will run in {product-title} and any other container platform.
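As a rough sketch of the workflow described in this module (the image name and registry are illustrative, not part of this commit):

[source,terminal]
----
$ buildah build-using-dockerfile -t quay.io/example/myapp:latest .  # build from the Dockerfile in the current directory
$ buildah push quay.io/example/myapp:latest                         # push the image to a registry
----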

View File

@@ -15,8 +15,10 @@ You can install the OpenShift CLI (`oc`) either by downloading the binary or by
 // Installing the CLI by downloading the binary
 include::modules/cli-installing-cli.adoc[leveloffset=+2]
+ifndef::openshift-origin[]
 // Installing the CLI by using an RPM
 include::modules/cli-installing-cli-rpm.adoc[leveloffset=+2]
+endif::[]
 // Logging in to the CLI
 include::modules/cli-logging-in.adoc[leveloffset=+1]

View File

@@ -30,8 +30,9 @@ include::modules/installation-special-config-encrypt-disk-tpm2.adoc[leveloffset=
 include::modules/installation-special-config-encrypt-disk-tang.adoc[leveloffset=+2]
 include::modules/installation-special-config-crony.adoc[leveloffset=+1]
+ifndef::openshift-origin[]
 == Additional resources
 See xref:../../installing/installing-fips.adoc#installing-fips[Support for FIPS cryptography]
 for information on FIPS support.
+endif::[]

View File

@@ -20,8 +20,8 @@ It is not possible to upgrade your existing {product-title} 3 cluster to {produc
[id="migration-comparing-ocp-3-4"]
== Comparing {product-title} 3 and {product-title} 4
With {product-title} 3, administrators individually deployed {op-system-base-full} hosts, and then installed {product-title} on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates.
With {product-title} 3, administrators individually deployed Red Hat Enterprise Linux (RHEL) hosts, and then installed {product-title} on top of these hosts to form a cluster. Administrators were responsible for properly configuring these hosts and performing updates.
{product-title} 4 represents a significant change in the way that {product-title} clusters are deployed and managed. {product-title} 4 includes new technologies and functionality, such as Operators, MachineSets, and {op-system-first}, which are core to the operation of the cluster. This technology shift enables clusters to self-manage some functions previously performed by administrators. This also ensures platform stability and consistency, and simplifies installation and scaling.
@@ -33,7 +33,7 @@ For more information, see xref:../../architecture/architecture.adoc#architecture
 [discrete]
 ==== Immutable infrastructure
-{product-title} 4 uses {op-system-first}, which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. {op-system} is an immutable container host, rather than a customizable operating system like RHEL. {op-system} enables {product-title} 4 to manage and automate the deployment of the underlying container host. {op-system} is a part of {product-title}, which means that everything runs inside a container and is deployed using {product-title}.
+{product-title} 4 uses {op-system-first}, which is designed to run containerized applications, and provides efficient installation, Operator-based management, and simplified upgrades. {op-system} is an immutable container host, rather than a customizable operating system like {op-system-base}. {op-system} enables {product-title} 4 to manage and automate the deployment of the underlying container host. {op-system} is a part of {product-title}, which means that everything runs inside a container and is deployed using {product-title}.
 In {product-title} 4, control plane nodes must run {op-system}, ensuring that full-stack automation is maintained for the control plane. This makes rolling out updates and upgrades a much easier process than in {product-title} 3.
@@ -52,13 +52,15 @@ For more information, see xref:../../operators/olm-what-operators-are.adoc#olm-w
 [discrete]
 ==== Installation process
-To install {product-title} 3.11, you prepared your Red Hat Enterprise Linux (RHEL) hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.
+To install {product-title} 3.11, you prepared your {op-system-base-full} hosts, set all of the configuration values your cluster needed, and then ran an Ansible playbook to install and set up your cluster.
 In {product-title} 4.4, you use the OpenShift installation program to create a minimum set of resources required for a cluster. Once the cluster is running, you use Operators to further configure your cluster and to install new services. After first boot, {op-system-first} systems are managed by the Machine Config Operator (MCO) that runs in the {product-title} cluster.
 For more information, see xref:../../architecture/architecture-installation.adoc#installation-process_architecture-installation[Installation process].
+ifndef::openshift-origin[]
 If you want to add RHEL worker machines to your {product-title} 4.4 cluster, you use an Ansible playbook to join the RHEL worker machines after the cluster is running. For more information, see xref:../../machine_management/adding-rhel-compute.adoc#adding-rhel-compute[Adding RHEL compute machines to an {product-title} cluster].
+endif::[]
[discrete]
==== Infrastructure options
@@ -146,7 +148,9 @@ For more information, see xref:../../networking/configuring-networkpolicy.adoc#n
 In {product-title} 3.11, you could use IPsec to encrypt traffic between hosts. {product-title} 4.4 does not support IPsec. It is recommended to use Red Hat OpenShift Service Mesh to enable mutual TLS between services.
+ifndef::openshift-origin[]
 For more information, see xref:../../service_mesh/service_mesh_arch/understanding-ossm.adoc#understanding-ossm[Understanding Red Hat OpenShift Service Mesh].
+endif::[]
 [id="migration-preparing-logging"]
 === Logging considerations

View File

@@ -5,7 +5,7 @@
[id="cli-installing-cli-rpm_{context}"]
= Installing the CLI by using an RPM
For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI (`oc`) as an RPM if you have an active {product-title} subscription on your Red Hat account.
For {op-system-base-full}, you can install the OpenShift CLI (`oc`) as an RPM if you have an active {product-title} subscription on your Red Hat account.
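For context, the RPM flow this module describes looks roughly like the following on a subscribed {op-system-base} 8 system; the exact repository name varies by {product-title} version and is illustrative here:

[source,terminal]
----
# subscription-manager repos --enable="rhocp-4.5-for-rhel-8-x86_64-rpms"
# yum install openshift-clients
----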
.Prerequisites

View File

@@ -41,9 +41,14 @@ install the new version of `oc`.
 ====
 .Procedure
+ifdef::openshift-origin[]
+. Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system.
+. Download `oc.tar.gz`.
+endif::[]
+ifndef::openshift-origin[]
 . From the link:https://cloud.redhat.com/openshift/install[Infrastructure Provider] page on the {cloud-redhat-com} site, navigate to the page for your installation type and
 click *Download Command-line Tools*.
+endif::[]
 . Click the folder for your operating system and architecture and click the
 compressed file.
 +
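A minimal sketch of the OKD download path added here, assuming a Linux client (the folder layout on the mirror may differ):

[source,terminal]
----
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz
$ tar xzf oc.tar.gz
$ sudo mv oc /usr/local/bin/
$ oc version   # verify the client is on your PATH
----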

View File

@@ -9,6 +9,14 @@
 :prewrap!:
 :op-system-first: Red Hat Enterprise Linux CoreOS (RHCOS)
 :op-system: RHCOS
+:op-system-base: RHEL
+:op-system-base-full: Red Hat Enterprise Linux (RHEL)
+ifdef::openshift-origin[]
+:op-system-first: Fedora CoreOS (FCOS)
+:op-system: FCOS
+:op-system-base: Fedora
+:op-system-base-full: Fedora
+endif::[]
 :tsb-name: Template Service Broker
 :kebab: image:kebab.png[title="Options menu"]
 :rh-openstack-first: Red Hat OpenStack Platform (RHOSP)
@@ -16,4 +24,4 @@
 :cloud-redhat-com: Red Hat OpenShift Cluster Manager
 :rh-virtualization-first: Red Hat Virtualization (RHV)
 :rh-virtualization: RHV
-:launch: image:app-launcher.png[title="Application Launcher"]
\ No newline at end of file
+:launch: image:app-launcher.png[title="Application Launcher"]
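These distro attributes drive the `ifdef::openshift-origin[]`/`ifndef::openshift-origin[]` conditionals used throughout the diffs above: when a build defines `openshift-origin`, the FCOS/Fedora values override the RHCOS/RHEL defaults. A quick local check, assuming a standalone Asciidoctor run against a hypothetical module file (the real docs build uses its own tooling):

[source,terminal]
----
$ asciidoctor -a openshift-origin architecture-rhcos.adoc   # renders the Fedora CoreOS (FCOS) text
$ asciidoctor architecture-rhcos.adoc                       # renders the Red Hat Enterprise Linux CoreOS (RHCOS) text
----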

View File

@@ -26,7 +26,7 @@ The following diagram shows a subset of the installation targets and dependencie
 .{product-title} installation targets and dependencies
 image::targets-and-dependencies.png[{product-title} installation targets and dependencies]
-After installation, each cluster machine uses {op-system-first} as the operating system. {op-system} is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. It includes the `kubelet`, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.
+After installation, each cluster machine uses {op-system-first} as the operating system. {op-system} is the immutable container host version of {op-system-base-full} and features a {op-system-base} kernel with SELinux enabled by default. It includes the `kubelet`, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.
 Every control plane machine in an {product-title} {product-version} cluster must
 use {op-system}, which includes a critical first-boot provisioning tool called

View File

@@ -8,14 +8,14 @@
 {op-system-first} represents the next generation of single-purpose
 container operating system technology. Created by the same development teams
 that created Red Hat Enterprise Linux Atomic Host and CoreOS Container Linux,
-{op-system} combines the quality standards of Red Hat Enterprise Linux (RHEL)
+{op-system} combines the quality standards of {op-system-base-full}
 with the automated, remote upgrade features from Container Linux.
 {op-system} is supported only as a component of {product-title}
 {product-version} for all {product-title} machines. {op-system} is the only
 supported operating system for {product-title} control plane, or master,
 machines. While {op-system} is the default operating system for all cluster
-machines, you can create compute machines, which are also known as worker machines, that use RHEL as their
+machines, you can create compute machines, which are also known as worker machines, that use {op-system-base} as their
 operating system. There are two general ways {op-system} is deployed in
 {product-title} {product-version}:
@@ -34,15 +34,15 @@ files to provision your machines.
 The following list describes key features of the {op-system} operating system:
-* **Based on RHEL**: The underlying operating system consists primarily of RHEL components.
-The same quality, security, and control measures that support RHEL also support
+* **Based on {op-system-base}**: The underlying operating system consists primarily of {op-system-base} components.
+The same quality, security, and control measures that support {op-system-base} also support
 {op-system}. For example, {op-system} software is in
-RPM packages, and each {op-system} system starts up with a RHEL kernel and a set
+RPM packages, and each {op-system} system starts up with a {op-system-base} kernel and a set
 of services that are managed by the systemd init system.
-* **Controlled immutability**: Although it contains RHEL components, {op-system}
+* **Controlled immutability**: Although it contains {op-system-base} components, {op-system}
 is designed to be managed
-more tightly than a default RHEL installation. Management is
+more tightly than a default {op-system-base} installation. Management is
 performed remotely from the {product-title} cluster. When you set up your
 {op-system} machines, you can modify only a few system settings. This controlled
 immutability allows {product-title} to
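The few permitted settings are applied through the Machine Config Operator rather than by editing hosts directly. A minimal sketch of a MachineConfig that writes a file to all worker nodes (the name and file contents are illustrative):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example.conf
        filesystem: root
        mode: 420                                    # 0644
        contents:
          source: data:,example%20setting%0A         # URL-encoded inline contents
----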

View File

@@ -0,0 +1,282 @@
// Module included in the following assemblies:
//
// * whatsnew/index.adoc
[[whats-new-features-and-enhancements]]
= New features and enhancements
This release adds improvements related to the following components and concepts.
[id="ocp-operators"]
== Operators
xref:../operators/olm-what-operators-are.adoc#olm-what-operators-are[Operators]
are pieces of software that ease the operational complexity of running another
piece of software. They act like an extension of the software vendor's
engineering team, watching over a Kubernetes environment (such as
{product-title}) and using its current state to make decisions in real time.
Advanced Operators are designed to handle upgrades seamlessly, react to failures
automatically, and not take shortcuts, like skipping a software backup process
to save time.
[id="ocp-operator-lifecycle-manager"]
=== Operator Lifecycle Manager (OLM)
This feature is now fully supported in OpenShift v4.
The OLM aids cluster administrators in installing, upgrading, and granting
access to Operators running on their cluster:
* Includes a catalog of curated Operators, with the ability to load other Operators into the cluster
* Handles rolling updates of all Operators to new versions
* Supports role-based access control (RBAC) for certain teams to use certain Operators
See
xref:../operators/understanding_olm/olm-understanding-olm.adoc#olm-understanding-olm[Understanding the Operator Lifecycle Manager (OLM)] for more information.
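In practice, granting a team an Operator through OLM comes down to creating a Subscription object; a sketch with placeholder package and channel names:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  channel: stable                        # update channel to track
  name: example-operator                 # package name in the catalog
  source: community-operators            # CatalogSource that provides the package
  sourceNamespace: openshift-marketplace
----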
[id="ocp-installation-and-upgrade"]
== Installation and upgrade
Red Hat OpenShift v4 uses installer-provisioned infrastructure, where
the installation program controls all areas of the installation process.
Installer-provisioned infrastructure also provides an opinionated, best-practices
deployment of OpenShift v4 for AWS instances only. This provides a
slimmer default installation, with incremental feature buy-in through
OperatorHub.
You can also install on user-provisioned infrastructure on
AWS, bare metal, or vSphere hosts. If you use the installer-provisioned
infrastructure installation, the cluster provisions and manages all of the
cluster infrastructure for you.
Upgrading from 3.x to 4 is currently not available. You must perform a new
installation of OpenShift v4.
Easy, over-the-air upgrades for asynchronous z-stream releases of
OpenShift v4 are available. Cluster administrators can upgrade by using the
*Cluster Settings* tab in the web console.
See
xref:../updating/updating-cluster.adoc#updating-cluster[Updating a cluster]
for more information.
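The same over-the-air update can also be driven from the CLI; for example:

[source,terminal]
----
$ oc adm upgrade                   # list available updates in the current channel
$ oc adm upgrade --to-latest=true  # start an update to the newest available version
----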
[id="ocp-operator-hub"]
=== OperatorHub
OperatorHub is available to administrators and helps with easy discovery and
installation of all optional components and applications. It includes offerings
from Red Hat products, Red Hat partners, and the community.
.Features provided with base installation and OperatorHub
[cols="3",options="header"]
|===
|Feature |New installer |OperatorHub

|Console and authentication
|* [x]
| -

|Prometheus cluster monitoring
|* [x]
| -

|Over-the-air updates
|* [x]
| -

|Machine management
|* [x]
| -

|Optional service brokers
| -
|* [x]

|Optional {product-title} components
| -
|* [x]

|Red Hat product Operators
| -
|* [x]

|Red Hat partner Operators
| -
|* [x]

|Community Operators
| -
|* [x]
|===
See
xref:../operators/olm-understanding-operatorhub.adoc#olm-understanding-operatorhub[Understanding the OperatorHub] for more information.
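The catalog content behind OperatorHub is also queryable from the CLI, for example:

[source,terminal]
----
$ oc get packagemanifests -n openshift-marketplace
----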
[id="ocp-storage"]
== Storage
Storage support in OpenShift v4 is the same as in OpenShift v3, with the
exception of the following features, which are available in Technology Preview:
EFS (CSI driver handled via Amazon), the Manila provisioner/Operator, and Snapshot.
[id="ocp-scale"]
== Scale
[id="ocp-scale-cluster-limits"]
=== Cluster maximums
Updated guidance around
xref:../scalability_and_performance/planning-your-environment-according-to-object-maximums.adoc[Cluster
maximums] for OpenShift v4 is now available.
Use the link:https://access.redhat.com/labs/ocplimitscalculator/[{product-title}
Limit Calculator] to estimate cluster limits for your environment.
[id="ocp-node-tuning-operator"]
=== Node Tuning Operator
The
xref:../scalability_and_performance/using-node-tuning-operator.adoc#using-node-tuning-operator[Node
Tuning Operator] is now part of a standard {product-title} installation in
OpenShift v4.
The Node Tuning Operator helps you manage node-level tuning by orchestrating the
tuned daemon. The majority of high-performance applications require some level
of kernel tuning. The Node Tuning Operator provides a unified management
interface to users of node-level sysctls and more flexibility to add custom
tuning, which is currently a Technology Preview feature, specified by user
needs. The Operator manages the containerized tuned daemon for OpenShift
Container Platform as a Kubernetes DaemonSet. It ensures the custom tuning
specification is passed to all containerized tuned daemons running in the
cluster in the format that the daemons understand. The daemons run on all nodes
in the cluster, one per node.
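To inspect the Operator's managed objects on a running cluster (the namespace and object names reflect the defaults of this release and are best treated as illustrative):

[source,terminal]
----
$ oc get tuned -n openshift-cluster-node-tuning-operator       # default and custom tuning resources
$ oc get daemonset -n openshift-cluster-node-tuning-operator   # the containerized tuned daemon, one pod per node
----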
[id="ocp-cluster-monitoring"]
== Cluster monitoring
[id="ocp-autoscale-pods-horizontally-based-on-custom-metrics-api"]
=== Autoscale pods horizontally based on the custom metrics API (Technology Preview)
This feature, currently in Technology Preview, enables you to configure
horizontal pod autoscaling (HPA) based on the custom metrics API. As part of
this Technology Preview, a Prometheus Adapter component can be deployed to
provide any app metrics for the custom metrics API.
Limitations:
* The adapter only connects to a single Prometheus instance (or a set of
load-balanced replicas, using Kubernetes services).
* You must manually deploy the adapter and configure it to use Prometheus.
* The syntax for the Prometheus Adapter configuration could change in the future.
* The `APIService` configuration to wire Kubernetes' API aggregation to the
instance of the custom metrics adapter will be overwritten in future releases,
if {product-title} ships an out-of-the-box custom metrics adapter.
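Once the adapter serves an application metric through the custom metrics API, an HPA can target it. A sketch using the `autoscaling/v2beta2` API; the metric name and target value are illustrative:

[source,yaml]
----
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # served by the Prometheus Adapter
      target:
        type: AverageValue
        averageValue: "50"
----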
[id="ocp-cluster-monitoring-alerting-UI"]
=== New alerting user interface
An alerting UI is now natively integrated into the {product-title} web console.
You can now view cluster-level alerts and alerting rules from a single place, as
well as configure silences.
[id="ocp-cluster-monitoring-telemeter"]
=== Telemeter
Telemeter collects anonymized cluster-related metrics to proactively help
customers with their {product-title} clusters. This helps:
* Gather crucial health metrics of {product-title} installations.
* Enable a viable feedback loop of {product-title} upgrades.
* Gather the number of nodes per cluster and their size (CPU cores and
RAM).
* Gather the size of etcd.
* Gather details about the health and status of any OpenShift framework
component installed on an OpenShift cluster.
[id="ocp-cluster-monitoring-autoscale"]
=== Autoscale pods horizontally based on the resource metrics API
By default, OpenShift Cluster Monitoring exposes CPU and memory utilization
through the Kubernetes resource metrics API. There is no longer a requirement to
install a separate metrics server.
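A plain CPU-based autoscaler therefore works out of the box, for example:

[source,terminal]
----
$ oc autoscale deployment/frontend --min=1 --max=10 --cpu-percent=80
----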
[id="ocp-developer-experience"]
== Developer experience
[id="ocp-multistage-builds"]
=== Multi-stage Dockerfile Builds Generally Available
Multi-stage Dockerfiles are now supported in all `Docker` strategy builds.
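A minimal example of the kind of multi-stage Dockerfile a `Docker` strategy build now accepts (the images and paths are illustrative):

[source,dockerfile]
----
FROM golang:1.12 AS builder                        # stage 1: compile the application
WORKDIR /src
COPY . .
RUN go build -o /app .

FROM registry.access.redhat.com/ubi8/ubi-minimal   # stage 2: slim runtime image
COPY --from=builder /app /usr/local/bin/app        # copy only the built artifact
ENTRYPOINT ["/usr/local/bin/app"]
----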
[id="ocp-registry"]
== Registry
[id="ocp-registry-managed-by-operator"]
=== The registry is now managed by an Operator
The registry is now managed by an Operator instead of `oc adm registry`.
[id="ocp-networking"]
== Networking
[id="ocp-cno"]
=== Cluster Network Operator (CNO)
The cluster network is now configured and managed by an Operator. The Operator
upgrades and monitors the cluster network.
[id="ocp-openshift-sdn"]
=== OpenShift SDN
The default mode is now `NetworkPolicy`.
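With `NetworkPolicy` mode, isolation is opt-in per project. For example, a policy that admits traffic only from pods in the same namespace might look like this sketch:

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}        # applies to all pods in the project
  ingress:
  - from:
    - podSelector: {}    # any pod in the same namespace
----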
[id="ocp-multus"]
=== Multus
Multus is a meta plug-in for Kubernetes Container Network Interface (CNI), which
enables a user to create multiple network interfaces per pod.
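A secondary interface is declared as a NetworkAttachmentDefinition and requested through a pod annotation; a sketch with an illustrative macvlan configuration:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{ "cniVersion": "0.3.1", "type": "macvlan",
             "master": "eth0", "ipam": { "type": "dhcp" } }'
----

A pod then attaches to it with the annotation `k8s.v1.cni.cncf.io/networks: macvlan-net`.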
[id="ocp-sriov"]
=== SR-IOV
OpenShift v4 includes the Technology Preview capability to use specific
SR-IOV hardware on {product-title} nodes, which enables the user to
attach SR-IOV virtual function (VF) interfaces to Pods in addition to other
network interfaces.
[id="ocp-f5"]
=== F5 router plug-in support
F5 router plug-in is no longer supported as part of {product-title} directly.
However, F5 has developed a container connector that replaces the functionality.
It is recommended to work with F5 support to implement their solution.
[id="ocp-web-console"]
== Web console
[id="ocp-developer-catalog"]
=== Developer Catalog
OpenShift v4 features a redesigned Developer Catalog that brings all of
the new Operators and existing broker services together, with new ways to
discover, sort, and understand how to best use each type of offering. The
Developer Catalog is the entry point for a developer to access all services
available to them. It merges all capabilities from Operators, the Service
Catalog, brokers, and Source-to-Image (S2I).
[id="ocp-new-management-screens"]
=== New management screens
New management screens in OpenShift v4 support automated operations.
Examples include the management of machine sets and machines, taints,
tolerations, and cluster settings.
[id="ocp-security"]
== Security
In OpenShift v4, Operators are utilized to install, configure, and
manage the various certificate signing servers. Certificates are managed
as secrets stored within the cluster itself.