mirror of
https://github.com/openshift/openshift-docs.git
Post-installation configuration
@@ -307,6 +307,21 @@ Topics:
# - Name: Troubleshooting an update
#   File: updating-troubleshooting
---
Name: Post-installation configuration
Dir: post_installation_configuration
Distros: openshift-origin,openshift-enterprise,openshift-webscale
Topics:
- Name: Cluster tasks
  File: cluster-tasks
- Name: Node tasks
  File: node-tasks
- Name: Network configuration
  File: network-configuration
- Name: Storage configuration
  File: storage-configuration
- Name: Preparing for users
  File: preparing-for-users
---
Name: Support
Dir: support
Distros: openshift-enterprise,openshift-webscale,openshift-online,openshift-dedicated

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * security/encrypting-etcd.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="about-etcd_{context}"]
= About etcd encryption

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
// * post_installation_configuration/node-tasks.adoc

[id="accessing-an-example-node-tuning-operator-specification_{context}"]
= Accessing an example Node Tuning Operator specification

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * authentication/removing-kubeadmin.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="understanding-kubeadmin_{context}"]
= The kubeadmin user

@@ -2,6 +2,7 @@
//
// * authentication/understanding-authentication.adoc
// * authentication/understanding-identity-provider.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="removing-kubeadmin_{context}"]
= Removing the kubeadmin user

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * storage/optimizing-storage.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="available-persistent-storage-options_{context}"]
= Available persistent storage options

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * disaster_recovery/backing-up-etcd.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="backing-up-etcd-data_{context}"]
= Backing up etcd data

@@ -1,3 +1,7 @@
// Module included in the following assemblies:
// * scalability_and_performance/routing-optimization.adoc
// * post_installation_configuration/network-configuration.adoc

[id="baseline-router-performance_{context}"]
= Baseline Ingress Controller (router) performance

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/applying-autoscaling.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="cluster-autoscaler-about_{context}"]
= About the ClusterAutoscaler

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/applying-autoscaling.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="cluster-autoscaler-cr_{context}"]
= ClusterAutoscaler resource definition

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
// * post_installation_configuration/node-tasks.adoc

[id="custom-tuning-default-profiles-set_{context}"]
= Default profiles set on a cluster

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc
// * post_installation_configuration/node-tasks.adoc

[id="configuring-huge-pages_{context}"]
= Configuring huge pages

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/recommended-host-practices.adoc
// * post_installation_configuration/node-tasks.adoc

[id="create-a-kubeletconfig-crd-to-edit-kubelet-parameters_{context}"]
= Create a KubeletConfig CRD to edit kubelet parameters

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
// * post_installation_configuration/node-tasks.adoc

[id="custom-tuning-specification_{context}"]
= Custom tuning specification

@@ -5,6 +5,7 @@
// Module included in the following assemblies:
//
// * machine_management/applying-autoscaling.adoc
// * post_installation_configuration/cluster-tasks.adoc

@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * post_installation_configuration/node-tasks.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="differences-between-machinesets-and-machineconfigpool_{context}"]
= Understanding the difference between MachineSets and the MachineConfigPool

MachineSets describe {product-title} nodes with respect to the cloud or machine
provider.

The MachineConfigPool allows MachineConfigController components to define and
provide the status of machines in the context of upgrades.

The MachineConfigPool allows users to configure how upgrades are rolled out to the
{product-title} nodes in the MachineConfigPool.

NodeSelector can be replaced with a reference to MachineSets.
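A minimal sketch of a `MachineConfigPool` may help illustrate this relationship. The selectors shown are the conventional `worker` role defaults, not values taken from this commit:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  machineConfigSelector: # selects which MachineConfigs the pool renders
    matchLabels:
      machineconfiguration.openshift.io/role: worker
  nodeSelector: # selects which nodes receive the rendered configuration
    matchLabels:
      node-role.kubernetes.io/worker: ""
----
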
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * security/encrypting-etcd.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="disabling-etcd-encryption_{context}"]
= Disabling etcd encryption

@@ -1,11 +1,13 @@
// Module included in the following assemblies:
//
// * disaster_recovery/scenario-2-restoring-cluster-state.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="dr-scenario-2-restoring-cluster-state_{context}"]
= Restoring to a previous cluster state

You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts.

.Prerequisites

@@ -15,29 +17,29 @@ You can use a saved etcd backup to restore back to a previous cluster state. You

.Procedure

. Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.

. Establish SSH connectivity to each of the control plane nodes, including the recovery host.
+
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
+
[IMPORTANT]
====
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
====

. Copy the etcd backup directory to the recovery control plane host.
+
This procedure assumes that you copied the `backup` directory containing the etcd snapshot and the resources for the static Pods to the `/home/core/` directory of your recovery control plane host.

. Stop the static Pods on all other control plane nodes.
+
[NOTE]
====
It is not required to manually stop the Pods on the recovery host. The recovery script will stop the Pods on the recovery host.
====

.. Access a control plane host that is not the recovery host.

.. Move the existing etcd Pod file out of the kubelet manifest directory:
+
@@ -71,7 +73,7 @@ The output of this command should be empty.

.. Repeat this step on each of the other control plane hosts that is not the recovery host.

. Access the recovery control plane host.

. If the cluster-wide proxy is enabled, be sure that you have exported the `NO_PROXY`, `HTTP_PROXY`, and `HTTPS_PROXY` environment variables.
@@ -81,7 +83,7 @@ The output of this command should be empty.
You can check whether the proxy is enabled by reviewing the output of `oc get proxy cluster -o yaml`. The proxy is enabled if the `httpProxy`, `httpsProxy`, and `noProxy` fields have values set.
====

. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:
+
[source,terminal]
----

@@ -1,22 +1,23 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="about_{context}"]
= About dynamic provisioning

The StorageClass resource object describes and classifies storage that can
be requested, as well as provides a means for passing parameters for
dynamically provisioned storage on demand. StorageClass objects can also
serve as a management mechanism for controlling different levels of
storage and access to the storage. Cluster Administrators (`cluster-admin`)
or Storage Administrators (`storage-admin`) define and create the
StorageClass objects that users can request without needing any detailed
knowledge about the underlying storage volume sources.

The {product-title} persistent volume framework enables this functionality
and allows administrators to provision a cluster with persistent storage.
The framework also gives users a way to request those resources without
having any knowledge of the underlying infrastructure.

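The request side of this model is a PersistentVolumeClaim that names only a StorageClass and a size. A hedged sketch, with placeholder names:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2 # placeholder StorageClass name
----
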
Many storage types are available for use as persistent volumes in

@@ -1,6 +1,7 @@
// Module included in the following assemblies
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="storage-class-annotations_{context}"]
= StorageClass annotations
@@ -26,7 +27,7 @@ metadata:
----

This enables any Persistent Volume Claim (PVC) that does not specify a
specific StorageClass to automatically be provisioned through the
default StorageClass.

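The annotation in question is the default-class marker. As a hedged sketch (the `standard` name is a placeholder, not taken from this commit):

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard # placeholder name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # marks this class as the cluster default
----
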
[NOTE]

@@ -1,6 +1,7 @@
// Module included in the following assemblies
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="available-plug-ins_{context}"]
= Available dynamic provisioning plug-ins

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="aws-definition_{context}"]
= AWS Elastic Block Store (EBS) object definition

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="azure-disk-definition_{context}"]
= Azure Disk object definition

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// storage/persistent_storage/persistent-storage-azure-file.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="azure-file-considerations_{context}"]
= Considerations when using Azure File

@@ -1,6 +1,8 @@
// Module included in the following assemblies
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="azure-file-definition_{context}"]
= Azure File object definition

@@ -2,6 +2,8 @@
//
// * storage/dynamic-provisioning.adoc
// * virt/virtual_machines/importing_vms/virt-importing-rhv-vm.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="change-default-storage-class_{context}"]
= Changing the default StorageClass

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="openstack-cinder-storage-class_{context}"]
= {rh-openstack} Cinder object definition
@@ -19,11 +20,10 @@ parameters:
  fsType: ext4 <3>
----
<1> Volume type created in Cinder. Default is empty.
<2> Availability Zone. If not specified, volumes are generally
round-robined across all active zones where the {product-title} cluster
has a node.
<3> File system that is created on dynamically provisioned volumes. This
value is copied to the `fsType` field of dynamically provisioned
persistent volumes and the file system is created when the volume is
mounted for the first time. The default value is `ext4`.

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="defining-storage-classes_{context}"]
= Defining a StorageClass

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="gce-persistentdisk-storage-class_{context}"]
= GCE PersistentDisk (gcePD) object definition

@@ -1,12 +1,13 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="basic-storage-class-definition_{context}"]
= Basic StorageClass object definition

The following resource shows the parameters and default values that you
use to configure a StorageClass. This example uses the AWS
ElasticBlockStore (EBS) object definition.

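The definition itself is elided in this view; a hedged reconstruction of the usual shape, with placeholder names and values:

[source,yaml]
----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2 # placeholder name
provisioner: kubernetes.io/aws-ebs # plug-in that provisions the volumes
parameters:
  type: gp2 # provisioner-specific parameters
reclaimPolicy: Delete # Delete or Retain
----
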
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * storage/dynamic-provisioning.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="vsphere-definition_{context}"]
= VMware vSphere object definition

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * security/encrypting-etcd.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="enabling-etcd-encryption_{context}"]
= Enabling etcd encryption

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed-by-apps.adoc
// * post_installation_configuration/node-tasks.adoc

[id="how-huge-pages-are-consumed-by-apps_{context}"]
= How huge pages are consumed by apps

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * authentication/understanding-identity-provider.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="identity-provider-default-CR_{context}"]
= Sample identity provider CR

@@ -12,12 +12,13 @@
// * authentication/identity_providers/configuring-gitlab-identity-provider.adoc
// * authentication/identity_providers/configuring-google-identity-provider.adoc
// * authentication/identity_providers/configuring-oidc-identity-provider.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="identity-provider-overview_{context}"]
= About identity providers in {product-title}

By default, only a `kubeadmin` user exists on your cluster. To specify an
identity provider, you must create a Custom Resource (CR) that describes
that identity provider and add it to the cluster.

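A hedged sketch of such a CR, using an HTPasswd provider as an example; the provider and Secret names are placeholders:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider # placeholder provider name
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret # placeholder Secret containing the htpasswd file
----
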
[NOTE]

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * authentication/understanding-identity-provider.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="identity-provider-parameters_{context}"]
= Identity provider parameters

@@ -2,6 +2,7 @@
//
// * registry/configuring-registry-operator.adoc
// * openshift_images/image-configuration.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="images-configuration-cas_{context}"]
= Configuring additional trust stores for image registry access

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * openshift_images/image-configuration.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="images-configuration-file_{context}"]
= Configuring image settings
@@ -68,5 +69,5 @@ pods. For instance, whether or not to allow insecure access. It does not contain
configuration for the internal cluster registry.
<5> `insecureRegistries`: Registries which do not have a valid TLS certificate or
only support HTTP connections.
<6> `blockedRegistries`: Denylisted for image pull and push actions. All other
registries are allowed.

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * openshift_images/image-configuration.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="images-configuration-insecure_{context}"]
= Importing insecure registries and blocking registries
@@ -51,7 +52,7 @@ status:
  internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
----
<1> Specify an insecure registry.
<2> Specify registries that should be denylisted for image pull and push actions. All other
registries are allowed. Either `blockedRegistries` or `allowedRegistries` can be set, but not both.
<3> Specify registries that should be permitted for image pull and push actions. All other registries are denied. Either `blockedRegistries` or `allowedRegistries` can be set, but not both.
+

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * openshift_images/image-configuration.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="images-configuration-parameters_{context}"]
= Image controller configuration parameters
@@ -58,10 +59,10 @@ field in ImageStreams. The value must be in `hostname[:port]` format.
`insecureRegistries`: Registries which do not have a valid TLS certificate or
only support HTTP connections.

`blockedRegistries`: Denylisted for image pull and push actions. All other
registries are allowed.

`allowedRegistries`: Allowlisted for image pull and push actions. All other
registries are blocked.

Only one of `blockedRegistries` or `allowedRegistries` may be set
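A hedged sketch of how these parameters sit in the cluster-wide `Image` configuration resource; the registry hostnames are placeholders:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    insecureRegistries:
    - insecure.example.com # placeholder
    blockedRegistries:
    - untrusted.example.com # placeholder; mutually exclusive with allowedRegistries
----
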
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * openshift_images/image-configuration.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="images-configuration-registry-mirror_{context}"]
= Configuring image registry repository mirroring
@@ -69,8 +70,8 @@ on a Red Hat Enterprise Linux
[source,terminal]
----
$ skopeo copy \
  docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:c505667389712dc337986e29ffcb65116879ef27629dc3ce6e1b17727c06e78f \
  docker://example.io/example/ubi-minimal
----
+
In this example, you have a container image registry that is named

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/creating-infrastructure-machinesets.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="infrastructure-components_{context}"]
= {product-title} infrastructure components
@@ -15,4 +16,4 @@ The following {product-title} components are infrastructure components:
* Service brokers

Any node that runs any other container, pod, or component is a worker node that
your subscription must cover.

@@ -12,6 +12,7 @@
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * machine_management/adding-rhel-compute.adoc
// * machine_management/more-rhel-compute.adoc
// * post_installation_configuration/node-tasks.adoc

ifeval::["{context}" == "installing-ibm-z"]
:ibm-z:

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/applying-autoscaling.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machine-autoscaler-about_{context}"]
= About the MachineAutoscaler

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/applying-autoscaling.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machine-autoscaler-cr_{context}"]
= MachineAutoscaler resource definition
@@ -29,7 +30,7 @@ which MachineSet this MachineAutoscaler scales, specify or include the name of
the MachineSet to scale. The MachineSet name takes the following form:
`<clusterid>-<machineset>-<aws-region-az>`
<2> Specify the minimum number of Machines of the specified type that must remain in the
specified zone after the ClusterAutoscaler initiates cluster scaling. If running in AWS, GCP, or Azure, this value can be set to `0`. For other providers, do not set this value to `0`.
<3> Specify the maximum number of Machines of the specified type that the ClusterAutoscaler can deploy in the
specified AWS zone after it initiates cluster scaling. Ensure that the `maxNodesTotal` value in the `ClusterAutoscaler` definition is large enough to allow the MachineAutoScaler to deploy this number of machines.
<4> In this section, provide values that describe the existing MachineSet to
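The resource definition itself is elided in this view. A hedged sketch consistent with the visible callouts, with placeholder names and counts:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a # placeholder
  namespace: openshift-machine-api
spec:
  minReplicas: 1  # callout <2>: minimum Machines that must remain
  maxReplicas: 12 # callout <3>: maximum Machines the ClusterAutoscaler can deploy
  scaleTargetRef: # callout <4>: describes the existing MachineSet to scale
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a # placeholder, <clusterid>-<machineset>-<aws-region-az>
----
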
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/deploying-machine-health-checks.adoc
// * post_installation_configuration/node-tasks.adoc

[id="machine-health-checks-about_{context}"]
= About MachineHealthChecks

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/deploying-machine-health-checks.adoc
// * post_installation_configuration/node-tasks.adoc

[id="machine-health-checks-creating_{context}"]
= Creating a MachineHealthCheck resource

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * machine_management/deploying-machine-health-checks.adoc
// * post_installation_configuration/node-tasks.adoc

[id="machine-health-checks-resource_{context}"]

@@ -5,6 +5,7 @@
// * machine_management/creating-machinesets.adoc
// * machine_management/deploying-machine-health-checks.adoc
// * machine_management/manually-scaling-machinesets.adoc
// * post_installation_configuration/node-tasks.adoc

[IMPORTANT]
====

@@ -5,6 +5,7 @@
// * machine_management/creating_machinesets/creating-machineset-azure.adoc
// * machine_management/creating_machinesets/creating-machineset-gcp.adoc
// * machine_management/creating_machinesets/creating-machineset-osp.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machineset-creating_{context}"]
= Creating a MachineSet

@@ -1,11 +1,17 @@
// Module included in the following assemblies:
//
// * machine_management/manually-scaling-machineset.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machineset-manually-scaling_{context}"]
= Scaling a MachineSet manually

If you must add or remove an instance of a machine in a MachineSet, you can
manually scale the MachineSet.

This guidance is relevant to fully automated, installer provisioned
infrastructure installations. Customized, user provisioned infrastructure
installations do not have MachineSets.
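As a usage sketch (the MachineSet name and replica count are placeholders), manual scaling is typically a single `oc scale` command:

[source,terminal]
----
$ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api
----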

.Prerequisites

@@ -2,6 +2,7 @@
//
// * machine_management/creating-infrastructure-machinesets.adoc
// * machine_management/creating_machinesets/creating-machineset-aws.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machineset-yaml-aws_{context}"]
= Sample YAML for a MachineSet Custom Resource on AWS

@@ -2,6 +2,7 @@
//
// * machine_management/creating-infrastructure-machinesets.adoc
// * machine_management/creating-machineset-azure.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machineset-yaml-azure_{context}"]
= Sample YAML for a MachineSet Custom Resource on Azure

@@ -2,6 +2,7 @@
//
// * machine_management/creating-infrastructure-machinesets.adoc
// * machine_management/creating-machineset-gcp.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="machineset-yaml-gcp_{context}"]
= Sample YAML for a MachineSet Custom Resource on GCP

@@ -1,11 +1,12 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/recommended-host-practices.adoc
// * post_installation_configuration/node-tasks.adoc

[id="master-node-sizing_{context}"]
= Control plane node sizing

The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing.

[options="header",cols="3*"]
|===
@@ -27,7 +28,7 @@ The master node resource requirements depend on the number of nodes in the clust

[IMPORTANT]
====
Because you cannot modify the control plane node size in a running {product-title} {product-version} cluster, you must estimate your total node count and use the suggested control plane node size during installation.
====

[NOTE]

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * applications/projects/configuring-project-creation.adoc
// * post_installation_configuration/network-configuration.adoc

[id="modifying-template-for-new-projects_{context}"]
= Modifying the template for new projects

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
// * post_installation_configuration/node-tasks.adoc

[id="supported-tuned-daemon-plug-ins_{context}"]
= Supported Tuned daemon plug-ins

@@ -1,7 +1,8 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
// * operators/operator-reference.adoc
// * post_installation_configuration/node-tasks.adoc

ifeval::["{context}" == "red-hat-operators"]
:operators:

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-disabling-features.adoc
// * nodes/nodes-cluster-enabling-features.adoc
// * post_installation_configuration/cluster-tasks.adoc

[id="nodes-cluster-enabling-features-cluster_{context}"]
= Enabling Technology Preview features using FeatureGates

@@ -1,13 +1,11 @@
// Module included in the following assemblies:
//
// * nodes/clusters/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-node-overcommit_{context}"]
= Node-level overcommit

You can use various ways to control overcommit on specific nodes, such as quality of service (QOS)
guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes
and specific projects.

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-configure-nodes_{context}"]

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-node-disable_{context}"]
= Disabling overcommitment for a node
@@ -15,4 +16,3 @@ To disable overcommitment in a node run the following command on that node:
----
$ sysctl -w vm.overcommit_memory=0
----

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-node-enforcing_{context}"]

@@ -1,18 +1,19 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-node-resources_{context}"]
= Reserving resources for system processes

To provide more reliable scheduling and minimize node resource overcommitment,
each node can reserve a portion of its resources for use by system daemons
that are required to run on your node for your cluster to function.
In particular, it is recommended that you reserve resources for incompressible resources such as memory.

.Procedure

To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources
available for scheduling.
For more details, see Allocating Resources for Nodes.
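As a hedged sketch of one way such a reservation can be expressed, assuming a `KubeletConfig` with `systemReserved` is used and with placeholder names, labels, and values:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: reserve-system-resources # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: reserved # placeholder label on the target MachineConfigPool
  kubeletConfig:
    systemReserved: # resources withheld from scheduling for system daemons
      cpu: 500m
      memory: 1Gi
----
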
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-project-disable_{context}"]
= Disabling overcommitment for a project

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-qos-about_{context}"]
= Understanding overcommitment and quality of service classes

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-resource-requests_{context}"]
= Resource requests and overcommitment

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-overcommit-reserving-memory_{context}"]
= Understanding compute resources and containers

@@ -1,9 +1,10 @@
// Module included in the following assemblies:
//
// * nodes/clusters/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-project-overcommit_{context}"]
= Project-level limits

To help control overcommit, you can set per-project resource limit ranges,
specifying memory and CPU limits and defaults for a project that overcommit
@@ -11,6 +12,4 @@ cannot exceed.

For information on project-level resource limits, see Additional Resources.

Alternatively, you can disable overcommitment for specific projects.

@@ -1,21 +1,22 @@
// Module included in the following assemblies:
//
// * nodes/clusters/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-resource-configure_{context}"]
= Configuring cluster-level overcommit

The Cluster Resource Override Operator requires a `ClusterResourceOverride` custom resource (CR)
and a label for each project where you want the Operator to control overcommit.

.Prerequisites

* The Cluster Resource Override Operator has no effect if limits have not
been set on containers. You must specify default limits for a project using a LimitRange
object or configure limits in Pod specs in order for the overrides to apply.

.Procedure

To modify cluster-level overcommit:

@@ -36,7 +37,7 @@ spec:
<2> Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25.
<3> Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200.
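For orientation, a hedged sketch of the `ClusterResourceOverride` CR that these callouts annotate; the values are illustrative:

[source,yaml]
----
apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50 # request overridden to this percent of the limit
      cpuRequestToLimitPercent: 25    # callout <2>
      limitCPUToMemoryPercent: 200    # callout <3>
----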

. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit:
+
[source,yaml]
----
@@ -53,4 +54,3 @@ metadata:

----
<1> Add this label to each project.

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/clusters/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-resource-override-deploy-cli_{context}"]
= Installing the Cluster Resource Override Operator using the CLI

@@ -1,16 +1,17 @@
// Module included in the following assemblies:
//
// * nodes/clusters/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-resource-override-deploy-console_{context}"]
= Installing the Cluster Resource Override Operator using the web console

You can use the {product-title} web console to install the Cluster Resource Override Operator to help control overcommit in your cluster.

.Prerequisites

* The Cluster Resource Override Operator has no effect if limits have not
been set on containers. You must specify default limits for a project using a LimitRange
object or configure limits in Pod specs in order for the overrides to apply.

.Procedure
@@ -29,7 +30,7 @@ To install the Cluster Resource Override Operator using the {product-title} web

.. Choose *ClusterResourceOverride Operator* from the list of available Operators and click *Install*.

.. On the *Create Operator Subscription* page, make sure *A specific Namespace on the cluster* is selected for *Installation Mode*.

.. Make sure *clusterresourceoverride-operator* is selected for *Installed Namespace*.

@@ -123,6 +124,6 @@ metadata:

  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io: enabled <1>
----
<1> Add the `clusterresourceoverrides.admission.autoscaling.openshift.io: enabled` label to the Namespace.
////

@@ -1,14 +1,15 @@
// Module included in the following assemblies:
//
// * nodes/clusters/nodes-cluster-overcommit.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-cluster-resource-override_{context}"]
= Cluster-level overcommit using the Cluster Resource Override Operator

The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage
container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits.

You must install the Cluster Resource Override Operator using the {product-title} console or CLI as shown in the following sections.
During the installation, you create a `ClusterResourceOverride` custom resource (CR), where you set the level of overcommit, as shown in the
following example:

@@ -31,12 +32,12 @@ spec:
[NOTE]
====
The Cluster Resource Override Operator overrides have no effect if limits have not
been set on containers. Create a LimitRange object with default limits per individual project
or configure limits in Pod specs in order for the overrides to apply.
====

When configured, overrides can be enabled per-project by applying the following
label to the Namespace object for each project:

[source,yaml]
----
@@ -53,4 +54,4 @@ metadata:

----

The Operator watches for the `ClusterResourceOverride` CR and ensures that the `ClusterResourceOverride` admission webhook is installed into the same namespace as the Operator.

@@ -2,6 +2,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-nodes-garbage-collection.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-nodes-garbage-collection-configuring_{context}"]
= Configuring garbage collection for containers and images

@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * nodes/nodes-nodes-garbage-collection.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-nodes-garbage-collection-containers_{context}"]
= Understanding how terminated containers are removed through garbage collection

@@ -1,12 +1,13 @@
// Module included in the following assemblies:
//
// * nodes/nodes-nodes-garbage-collection.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-nodes-garbage-collection-images_{context}"]
= Understanding how images are removed through garbage collection

Image garbage collection relies on disk usage as reported by *cAdvisor* on the
node to decide which images to remove from the node.

The policy for image garbage collection is based on two conditions:

@@ -16,7 +17,7 @@ garbage collection. The default is *85*.
* The percent of disk usage (expressed as an integer) to which image garbage
collection attempts to free. Default is *80*.

For image garbage collection, you can modify any of the following variables using
a Custom Resource.

.Variables for configuring image garbage collection
@@ -51,4 +52,3 @@ stamp.

Once the collection starts, the oldest images get deleted first until the
stopping criterion is met.
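A hedged sketch of tuning the two thresholds described above through a Custom Resource, assuming a `KubeletConfig` is used; the name and pool label are placeholders:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-image-gc # placeholder name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: image-gc # placeholder label
  kubeletConfig:
    imageGCHighThresholdPercent: 85 # disk usage that triggers image garbage collection
    imageGCLowThresholdPercent: 80  # disk usage that collection attempts to free to
----
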
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-nodes-managing-max-pods.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-nodes-managing-max-pods-about_{context}"]
= Configuring the maximum number of Pods per Node

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-pods-plugin.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-pods-plugins-about_{context}"]
= Understanding device plug-ins
@@ -71,4 +72,3 @@ file system, as well as socket creation, they must be run in a privileged
security context.
* More specific details regarding deployment steps can be found with each device
plug-in implementation.

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-pods-plugins.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-pods-plugins-device-mgr_{context}"]
= Understanding the Device Manager

@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-pods-plugins.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-pods-plugins-install_{context}"]
= Enabling Device Manager
@@ -32,7 +33,7 @@ For example:
[source,terminal]
----
Name: 00-worker
Namespace:
Labels: machineconfiguration.openshift.io/role=worker <1>
----
<1> Label required for the device manager.

@@ -2,14 +2,15 @@
//
// * nodes/nodes-pods-configuring.adoc
// * nodes/nodes-cluster-pods-configuring
// * post_installation_configuration/cluster-tasks.adoc

[id="nodes-pods-configuring-pod-distruption-about_{context}"]
= Understanding how to use Pod disruption budgets to specify the number of Pods that must be up

A _pod disruption budget_ is part of the
link:http://kubernetes.io/docs/admin/disruptions/[Kubernetes] API, which can be
managed with `oc` commands like other object types. They
allow the specification of safety constraints on Pods during operations, such as
draining a node for maintenance.

`PodDisruptionBudget` is an API object that specifies the minimum number or
@@ -20,19 +21,19 @@ upgrade) and is only honored on voluntary evictions (not on node failures).
A `PodDisruptionBudget` object's configuration consists of the following key
parts:

* A label selector, which is a label query over a set of Pods.
* An availability level, which specifies the minimum number of Pods that must be
available simultaneously, either:
** `minAvailable` is the number of Pods that must always be available, even during a disruption.
** `maxUnavailable` is the number of Pods that can be unavailable during a disruption (see the sketch below).

[NOTE]
====
A `maxUnavailable` of `0%` or `0` or a `minAvailable` of `100%` or equal to the number of replicas
is permitted but can block nodes from being drained.
====
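As a hedged illustration of these parts, a minimal `PodDisruptionBudget` might look like the following; names and values are placeholders, and the `policy/v1beta1` API group matches what the module notes further below:

[source,yaml]
----
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-pdb # placeholder name
spec:
  minAvailable: 2 # availability level
  selector:       # label selector over a set of Pods
    matchLabels:
      foo: bar
----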

You can check for Pod disruption budgets across all projects with the following:

[source,terminal]
----
@@ -48,10 +49,10 @@ test-project my-pdb 2 foo=bar
----

The `PodDisruptionBudget` is considered healthy when there are at least
`minAvailable` Pods running in the system. Every Pod above that limit can be evicted.

[NOTE]
====
Depending on your Pod priority and preemption settings,
lower-priority Pods might be removed despite their Pod disruption budget requirements.
====

@@ -2,6 +2,7 @@
//
// * nodes/nodes-pods-configuring.adoc
// * nodes/nodes-cluster-pods-configuring
// * post_installation_configuration/cluster-tasks.adoc

[id="nodes-pods-pod-disruption-configuring_{context}"]
= Specifying the number of pods that must be up with pod disruption budgets
@@ -44,7 +45,7 @@ metadata:
spec:
  maxUnavailable: 25% <2>
  selector: <3>
    matchLabels:
      foo: bar
----
<1> `PodDisruptionBudget` is part of the `policy/v1beta1` API group.

@@ -1,6 +1,9 @@
|
||||
// Module included in the following assemblies:
|
||||
//
|
||||
// * nodes/scheduling/nodes-scheduler-taints-tolerations.adoc
|
||||
// * nodes/nodes-scheduler-taints-tolerations.adoc
|
||||
// * post_installation_configuration/node-tasks.adoc
|
||||
|
||||
|
||||
[id="nodes-scheduler-taints-tolerations-about_{context}"]
|
||||
= Understanding taints and tolerations
|
||||
@@ -76,9 +79,9 @@ The following taints are built into kubernetes:
|
||||
* `node.cloudprovider.kubernetes.io/uninitialized`: When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
|
||||
|
||||
[id="nodes-scheduler-taints-tolerations-about-seconds_{context}"]
|
||||
== Understanding how to use toleration seconds to delay pod evictions
|
||||
== Understanding how to use toleration seconds to delay Pod evictions
|
||||
|
||||
You can specify how long a Pod can remain bound to a node before being evicted by specifying the `tolerationSeconds` parameter in the Pod specification. If a taint with the `NoExecute` effect is added to a node, any Pods that do not tolerate the taint are evicted immediately (Pods that do tolerate the taint are not evicted). However, if a Pod that to be evicted has the `tolerationSeconds` parameter, the Pod is not evicted until that time period expires.
|
||||
You can specify how long a Pod can remain bound to a node before being evicted by specifying the `tolerationSeconds` parameter in the Pod specification. If a taint with the `NoExecute` effect is added to a node, any Pods that do not tolerate the taint are evicted immediately. Pods that do tolerate the taint are not evicted. However, if a Pod that does tolerate the taint has the `tolerationSeconds` parameter, the Pod is not evicted until that time period expires.
|
||||
|
||||
.Example output
|
||||
[source,yaml]
|
||||
@@ -149,26 +152,26 @@ In this case, the Pod cannot be scheduled onto the node, because there is no tol
|
||||
one of the three that is not tolerated by the Pod.
|
||||
|
||||
[id="nodes-scheduler-taints-tolerations-about-prevent_{context}"]
|
||||
== Preventing pod eviction for node problems
|
||||
== Preventing Pod eviction for node problems
|
||||
|
||||
The Taint-Based Evictions feature, enabled by default, adds a taint with the `NoExecute` effect to nodes that are not ready or are unreachable. This allows you to specify how long a Pod should remain bound to a node that becomes unreachable or not ready, rather than using the default of five minutes. For example, you might want to allow a Pod on an unreachable node if the workload is safe to remain running while a networking issue resolves.
|
||||
|
||||
If a node enters a not ready state, the node controller adds the `node.kubernetes.io/not-ready:NoExecute` taint to the node. If a node enters an unreachable state, the the node controller adds the `node.kubernetes.io/unreachable:NoExecute` taint to the node.
|
||||
If a node enters a not ready state, the node controller adds the `node.kubernetes.io/not-ready:NoExecute` taint to the node. If a node enters an unreachable state, the node controller adds the `node.kubernetes.io/unreachable:NoExecute` taint to the node.
|
||||
|
||||
The `NoExecute` taint affects Pods that are already running on the node as follows:
|
||||
The `NoExecute` taint affects Pods that are already running on the node in the following ways:
|
||||
|
||||
* Pods that do not tolerate the taint are evicted immediately.
|
||||
* Pods that tolerate the taint without specifying `tolerationSeconds` in their toleration specification remain bound forever.
|
||||
* Pods that tolerate the taint with a specified `tolerationSeconds` remain bound for the specified amount of time.

[id="nodes-scheduler-taints-tolerations-about-taintNodesByCondition_{context}"]
== Understanding Pod scheduling and node conditions (Taint Nodes By Condition)

{product-title} automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the `NoSchedule` effect, which means no Pod can be scheduled on the node unless the Pod has a matching toleration. This feature, *Taint Nodes By Condition*, is enabled by default.

The scheduler checks for these taints on nodes before scheduling Pods. If the taint is present, the Pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate Pod tolerations.
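For example, a Pod could opt out of the memory-pressure condition check with a toleration along these lines. This is a sketch only; the Pod name and image are hypothetical, and whether tolerating this condition is appropriate depends on the workload:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-besteffort-pod # hypothetical Pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest # hypothetical image
  tolerations:
  - key: node.kubernetes.io/memory-pressure
    operator: Exists
    effect: NoSchedule # tolerate the taint that backs the memory-pressure condition
----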

To ensure backward compatibility, the DaemonSet controller automatically adds the following tolerations to all daemons:

* node.kubernetes.io/memory-pressure
* node.kubernetes.io/disk-pressure
@@ -181,7 +184,7 @@ You can also add arbitrary tolerations to DaemonSets.
[id="nodes-scheduler-taints-tolerations-about-taintBasedEvictions_{context}"]
== Understanding evicting Pods by condition (Taint-Based Evictions)

The Taint-Based Evictions feature, enabled by default, evicts Pods from a node that experiences specific conditions, such as `not-ready` and `unreachable`.
When a node experiences one of these conditions, {product-title} automatically adds taints to the node, and starts evicting and rescheduling the Pods on different nodes.

Taint-Based Evictions has a `NoExecute` effect, where any Pod that does not tolerate the taint will be evicted immediately and any Pod that does tolerate the taint will never be evicted, unless the Pod uses the `tolerationSeconds` parameter.
@@ -191,7 +194,7 @@ Taint Based Evictions has a `NoExecute` effect, where any Pod that does not tole
{product-title} evicts Pods in a rate-limited way to prevent massive Pod evictions in scenarios such as the master becoming partitioned from the nodes.
====

This feature, in combination with `tolerationSeconds`, allows you to specify how long a Pod stays bound to a node that has a node condition. If the condition still exists after the `tolerationSeconds` period, the taint remains on the node and the Pods are evicted in a rate-limited manner. If the condition clears before the `tolerationSeconds` period, Pods are not removed.

{product-title} automatically adds a toleration for `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` with `tolerationSeconds=300`, unless the Pod configuration specifies either toleration.
@@ -209,9 +212,9 @@ spec
tolerationSeconds: 300
----

These tolerations ensure that the default Pod behavior is to remain bound for five minutes after one of these node conditions is detected.

You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the Pods bound to the node for a longer time in the event of a network partition, allowing for the partition to recover and avoiding Pod eviction.
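A sketch of such a toleration, using an arbitrary one-hour value in place of the five-minute default:

[source,yaml]
----
tolerations:
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 3600 # keep the Pod bound for up to 1 hour during a partition
----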

DaemonSet Pods are created with `NoExecute` tolerations for the following taints with no `tolerationSeconds`:

* node.kubernetes.io/unreachable
* node.kubernetes.io/not-ready
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-scheduler-taints-tolerations.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-scheduler-taints-tolerations-adding_{context}"]
= Adding taints and tolerations
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-scheduler-taints-tolerations.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-scheduler-taints-tolerations-bindings_{context}"]
= Binding a user to a node using taints and tolerations
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-scheduler-taints-tolerations.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-scheduler-taints-tolerations_dedicating_{context}"]
= Dedicating a node for a user using taints and tolerations
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-scheduler-taints-tolerations.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-scheduler-taints-tolerations-removing_{context}"]
= Removing taints and tolerations
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * nodes/nodes-scheduler-taints-tolerations.adoc
// * post_installation_configuration/node-tasks.adoc

[id="nodes-scheduler-taints-tolerations-special_{context}"]
= Controlling nodes with special hardware using taints and tolerations
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * networking/network_policy/about-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-about_{context}"]
@@ -11,8 +13,10 @@ In {product-title} {product-version}, OpenShift SDN supports using NetworkPolicy

[NOTE]
====
The Kubernetes `v1` NetworkPolicy features are available in {product-title}
except for egress policy types and IPBlock. IPBlock is supported in
NetworkPolicy with limitations for OpenShift SDN: it supports IPBlock without
except clauses. If you create a policy with an IPBlock section that includes an
except clause, the SDN Pods log warnings and the entire IPBlock section of that
policy is ignored.
====
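For reference, a policy that uses an IPBlock section without an except clause, and therefore stays within the supported subset, might look like the following sketch. The policy name and CIDR are arbitrary examples:

[source,yaml]
----
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-cidr # hypothetical policy name
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16 # example CIDR; no "except" list, so OpenShift SDN honors it
----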

[WARNING]
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * networking/network_policy/creating-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-create_{context}"]
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * networking/network_policy/deleting-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-delete_{context}"]
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * networking/network_policy/multitenant-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-multitenant-isolation_{context}"]
= Configuring multitenant isolation using NetworkPolicy
@@ -3,6 +3,8 @@
// * networking/network_policy/creating-network-policy.adoc
// * networking/network_policy/viewing-network-policy.adoc
// * networking/network_policy/editing-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-object_{context}"]
@@ -21,7 +23,7 @@ spec:
    matchLabels:
      app: mongodb
  ingress:
  - from:
    - podSelector: <3>
        matchLabels:
          app: app
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * networking/network_policy/default-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-project-defaults_{context}"]
= Adding network policy objects to the new project template
@@ -1,6 +1,8 @@
// Module included in the following assemblies:
//
// * networking/network_policy/viewing-network-policy.adoc
// * networking/configuring-networkpolicy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-networkpolicy-view_{context}"]
@@ -6,6 +6,7 @@
// * installing/installing_bare_metal/installing-bare-metal-network-customizations.adoc
// * installing/installing_vsphere/installing-vsphere-network-customizations.adoc
// * installing/installing_gcp/installing-gcp-network-customizations.adoc
// * post_installation_configuration/network-configuration.adoc

// Installation assemblies need different details than the CNO operator does
ifeval::["{context}" == "cluster-network-operator"]
@@ -2,6 +2,7 @@
//
// * networking/configuring-a-custom-pki.adoc
// * networking/enable-cluster-wide-proxy.adoc
// * post_installation_configuration/network-configuration.adoc

[id="nw-proxy-configure-object_{context}"]
= Enabling the cluster-wide proxy
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * operators/olm-adding-operators-to-cluster.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="olm-installing-operator-from-operatorhub-using-cli_{context}"]
= Installing from OperatorHub using the CLI
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * operators/olm-adding-operators-to-cluster.adoc
// * post_installation_configuration/preparing-for-users.adoc

ifeval::["{context}" != "olm-adding-operators-to-a-cluster"]
:filter-type: jaeger
:filter-operator: Jaeger
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * operators/olm-adding-operators-to-cluster.adoc
// * post_installation_configuration/preparing-for-users.adoc

[id="olm-installing-operators-from-operatorhub_{context}"]
= Installing Operators from OperatorHub
@@ -14,7 +15,9 @@ endif::[]
ifdef::openshift-dedicated[]
web console. You can then subscribe the Operator to the default
`openshift-operators` namespace to make it available for developers on your
cluster. When you subscribe the Operator to all namespaces, the Operator is
installed in the `openshift-operators` namespace; this installation method is
not supported by all Operators.
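For reference, subscribing an Operator to all namespaces amounts to creating a Subscription object in the `openshift-operators` namespace. The following is a minimal sketch; the Operator name, channel, and source are hypothetical examples:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator # hypothetical Operator name
  namespace: openshift-operators # installs the Operator for all namespaces
spec:
  channel: stable # example channel
  name: example-operator # package name in the catalog (hypothetical)
  source: redhat-operators # example CatalogSource
  sourceNamespace: openshift-marketplace
----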

In {product-title} clusters, a curated list of Operators is made available for
installation from OperatorHub. Administrators can only install Operators to