:_mod-docs-content-type: ASSEMBLY
:context: post-install-cluster-tasks
[id="post-install-cluster-tasks"]
= Postinstallation cluster tasks
include::_attributes/common-attributes.adoc[]

toc::[]

After installing {product-title}, you can further expand and customize your cluster to your requirements.

[id="available_cluster_customizations"]
== Available cluster customizations

You complete most of the cluster configuration and customization after you deploy your {product-title} cluster. A number of _configuration resources_ are available.

[NOTE]
====
If you install your cluster on {ibm-z-name}, not all features and functions are available.
====

You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.

For current documentation of the settings that you control by using these resources, use the `oc explain` command, for example `oc explain builds --api-version=config.openshift.io/v1`.
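
For example, the following commands display the documented fields of the cluster-wide build configuration and then drill into its `spec` fields. The resource names are listed in the tables in the following sections; the commands assume that you are logged in with cluster administrator privileges:

[source,terminal]
----
$ oc explain builds --api-version=config.openshift.io/v1
$ oc explain builds.spec --api-version=config.openshift.io/v1
----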

[id="configuration-resources_{context}"]
=== Cluster configuration resources

All cluster configuration resources are globally scoped (not namespaced) and named `cluster`.
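
For example, you can review or modify the cluster-wide image configuration with standard `oc` commands. This is a minimal sketch; the available resource names are listed in the following table:

[source,terminal]
----
$ oc get image.config.openshift.io cluster -o yaml
$ oc edit image.config.openshift.io cluster
----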

////
Config changes should not require coordinated changes between config resources, if you find
yourself struggling to update these docs to explain coordinated changes, please reach out
to @api-approvers (github) or #forum-api-review (slack).
////

[cols="2a,8a",options="header"]
|===
|Resource name
|Description

|`apiserver.config.openshift.io`
|Provides API server configuration such as xref:../security/certificates/api-server.adoc#api-server-certificates[certificates and certificate authorities].

|`authentication.config.openshift.io`
|Controls the xref:../authentication/understanding-identity-provider.adoc#understanding-identity-provider[identity provider] and authentication configuration for the cluster.

|`build.config.openshift.io`
|Controls default and enforced xref:../cicd/builds/build-configuration.adoc#build-configuration[configuration] for all builds on the cluster.

|`console.config.openshift.io`
|Configures the behavior of the web console interface, including the xref:../web_console/configuring-web-console.adoc#configuring-web-console[logout behavior].

|`featuregate.config.openshift.io`
|Enables xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling-features[FeatureGates] so that you can use Tech Preview features.

|`image.config.openshift.io`
|Configures how specific xref:../openshift_images/image-configuration.adoc#image-configuration[image registries] should be treated (allowed, disallowed, insecure, CA details).

|`ingress.config.openshift.io`
|Configuration details related to xref:../networking/networking_operators/ingress-operator.adoc#nw-installation-ingress-config-asset_ingress-operator[routing] such as the default domain for routes.

|`oauth.config.openshift.io`
|Configures identity providers and other behavior related to xref:../authentication/configuring-internal-oauth.adoc#configuring-internal-oauth[internal OAuth server] flows.

|`project.config.openshift.io`
|Configures xref:../applications/projects/configuring-project-creation.adoc#configuring-project-creation[how projects are created] including the project template.

|`proxy.config.openshift.io`
|Defines proxies to be used by components needing external network access. Note: not all components currently consume this value.

|`scheduler.config.openshift.io`
|Configures xref:../nodes/scheduling/nodes-scheduler-profiles.adoc#nodes-scheduler-profiles[scheduler] behavior such as profiles and default node selectors.
|===

[id="operator-configuration-resources_{context}"]
=== Operator configuration resources

These configuration resources are cluster-scoped instances, named `cluster`, which control the behavior of a specific component that is owned by a particular Operator.

[cols="2a,8a",options="header"]
|===
|Resource name
|Description

|`consoles.operator.openshift.io`
|Controls console appearance such as branding customizations.

|`config.imageregistry.operator.openshift.io`
|Configures xref:../registry/configuring-registry-operator.adoc#registry-operator-configuration-resource-overview_configuring-registry-operator[{product-registry} settings] such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type.

|`config.samples.operator.openshift.io`
|Configures the xref:../openshift_images/configuring-samples-operator.adoc#configuring-samples-operator[Samples Operator] to control which example image streams and templates are installed on the cluster.
|===

[id="additional-configuration-resources_{context}"]
=== Additional configuration resources

These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.

[cols="2a,2a,2a,8a",options="header"]
|===
|Resource name
|Instance name
|Namespace
|Description

|`alertmanager.monitoring.coreos.com`
|`main`
|`openshift-monitoring`
|Controls the link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/configuring-alerts-and-notifications[Alertmanager] deployment parameters.

|`ingresscontroller.operator.openshift.io`
|`default`
|`openshift-ingress-operator`
|Configures xref:../networking/networking_operators/ingress-operator.adoc#configuring-ingress-controller[Ingress Operator] behavior such as domain, number of replicas, certificates, and controller placement.
|===
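
As a minimal illustration, the following commands retrieve the instances listed in the preceding table by their instance names and namespaces:

[source,terminal]
----
$ oc get alertmanager main -n openshift-monitoring -o yaml
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
----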

[id="informational-resources_{context}"]
=== Informational resources

You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.

[cols="2a,2a,8a",options="header"]
|===
|Resource name
|Instance name
|Description

|`clusterversion.config.openshift.io`
|`version`
|In {product-title} {product-version}, you must not customize the `ClusterVersion` resource for production clusters. Instead, follow the process to xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[update a cluster].

|`dns.config.openshift.io`
|`cluster`
|You cannot modify the DNS settings for your cluster. You can xref:../networking/networking_operators/dns-operator.adoc#nw-dns-operator-status_dns-operator[check the DNS Operator status].

|`infrastructure.config.openshift.io`
|`cluster`
|Configuration details allowing the cluster to interact with its cloud provider.

|`network.config.openshift.io`
|`cluster`
|You cannot modify your cluster networking after installation. To customize your network, follow the process to xref:../installing/installing_aws/ipi/installing-aws-customizations.adoc#installing-aws-customizations[customize networking during installation].
|===
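
For example, you can inspect these resources without modifying them. The following commands show the current cluster version and the configured cluster network; both are read-only checks:

[source,terminal]
----
$ oc get clusterversion version
$ oc get network.config.openshift.io cluster -o yaml
----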

[id="adding-worker-nodes_{context}"]
== Adding worker nodes

After you deploy your {product-title} cluster, you can add worker nodes to scale cluster resources. There are different ways you can add worker nodes depending on the installation method and the environment of your cluster.

[id="adding-nodes-iso_{context}"]
=== Adding worker nodes to an on-premise cluster

For on-premise clusters, you can add worker nodes by using the {product-title} CLI (`oc`) to generate an ISO image, which can then be used to boot one or more nodes in your target cluster. This process can be used regardless of how you installed your cluster.

You can add one or more nodes at a time while customizing each node with more complex configurations, such as static network configuration, or you can specify only the MAC address of each node. Any configurations that are not specified during ISO generation are retrieved from the target cluster and applied to the new nodes.

Preflight validation checks are also performed when booting the ISO image to inform you of failure-causing issues before you attempt to boot each node.
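
A minimal sketch of this workflow, assuming that your `oc` version provides the `oc adm node-image create` subcommand and that you specify only the MAC address of the new node:

[source,terminal]
----
$ oc adm node-image create --mac-address=00:11:22:33:44:55
----

The generated ISO can then be used to boot the target host, as described in the procedure linked below.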

xref:../nodes/nodes/nodes-nodes-adding-node-iso.adoc#adding-node-iso[Adding worker nodes to an on-premise cluster]

=== Adding worker nodes to installer-provisioned infrastructure clusters

For installer-provisioned infrastructure clusters, you can manually or automatically scale the `MachineSet` object to match the number of available bare-metal hosts.

To add a bare-metal host, you must configure all network prerequisites, configure an associated `baremetalhost` object, and then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.
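
For example, manually scaling a compute machine set to match newly added bare-metal hosts is a single command. The machine set name below is a placeholder:

[source,terminal]
----
$ oc scale machineset <machine_set_name> -n openshift-machine-api --replicas=3
----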

* xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#adding-bare-metal-host-to-cluster-using-web-console_managing-bare-metal-hosts[Adding worker nodes using the web console]
* xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#adding-bare-metal-host-to-cluster-using-yaml_managing-bare-metal-hosts[Adding worker nodes using YAML in the web console]
* xref:../installing/installing_bare_metal/bare-metal-expanding-the-cluster.adoc#preparing-the-bare-metal-node_bare-metal-expanding[Manually adding a worker node to an installer-provisioned infrastructure cluster]

=== Adding worker nodes to user-provisioned infrastructure clusters

For user-provisioned infrastructure clusters, you can add worker nodes by using a {op-system-base} or {op-system} ISO image and connecting it to your cluster by using cluster Ignition config files. For RHEL worker nodes, the linked procedure uses Ansible playbooks to add worker nodes to the cluster. For RHCOS worker nodes, the following example uses an ISO image and network booting to add worker nodes to the cluster.
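
As a brief sketch, the new node needs the cluster's worker Ignition config at boot time. Assuming network access to the internal API endpoint, you can download it from the machine config server; the host names are placeholders:

[source,terminal]
----
$ curl -k -H "Accept: application/vnd.coreos.ignition+json; version=3.2.0" \
  https://api-int.<cluster_name>.<base_domain>:22623/config/worker > worker.ign
----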

* xref:../post_installation_configuration/node-tasks.adoc#post-install-config-adding-fcos-compute[Adding RHCOS worker nodes to a user-provisioned infrastructure cluster]

=== Adding worker nodes to clusters managed by the Assisted Installer

For clusters managed by the Assisted Installer, you can add worker nodes by using the {cluster-manager-first} console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.

* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#sno-adding-worker-nodes-to-sno-clusters_add-workers[Adding worker nodes using the OpenShift Cluster Manager]
* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#adding-worker-nodes-using-the-assisted-installer-api[Adding worker nodes using the Assisted Installer REST API]
* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#sno-adding-worker-nodes-to-single-node-clusters-manually_add-workers[Manually adding worker nodes to a single-node OpenShift cluster]

=== Adding worker nodes to clusters managed by the multicluster engine for Kubernetes

For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.

* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#on-prem-creating-your-cluster-with-the-console[Creating your cluster with the console]

[id="post-install-adjust-worker-nodes"]
== Adjusting worker nodes

If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scaling them up, and then scaling down the original compute machine set before removing it.
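
A minimal sketch of the scale-up and scale-down steps, with placeholder machine set names:

[source,terminal]
----
$ oc scale machineset <new_machine_set> -n openshift-machine-api --replicas=3
$ oc scale machineset <original_machine_set> -n openshift-machine-api --replicas=0
----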

include::modules/differences-between-machinesets-and-machineconfigpool.adoc[leveloffset=+2]

include::modules/machineset-manually-scaling.adoc[leveloffset=+2]

include::modules/machineset-delete-policy.adoc[leveloffset=+2]

include::modules/nodes-scheduler-node-selectors-cluster.adoc[leveloffset=+2]

[id="post-worker-latency-profiles"]
== Improving cluster stability in high latency environments using worker latency profiles

include::snippets/worker-latency-profile-intro.adoc[]

include::modules/nodes-cluster-worker-latency-profiles-about.adoc[leveloffset=+2]

include::modules/nodes-cluster-worker-latency-profiles-using.adoc[leveloffset=+2]

[id="post-install-cpms-setup"]
== Managing control plane machines

xref:../machine_management/control_plane_machine_management/cpmso-about.adoc#cpmso-about[Control plane machine sets] provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of {product-title} that you installed. For more information, see xref:../machine_management/control_plane_machine_management/cpmso-getting-started.adoc#cpmso-getting-started[Getting started with control plane machine sets].
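
As a quick, read-only check, you can see whether a control plane machine set exists on your cluster and review its current state:

[source,terminal]
----
$ oc get controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api
----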

// Adding a control plane node to your cluster
include::modules/creating-control-plane-node.adoc[leveloffset=+2]

[id="post-install-creating-infrastructure-machinesets-production"]
== Creating infrastructure machine sets for production environments

You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

For information on infrastructure nodes and which components can run on infrastructure nodes, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets].

To create an infrastructure node, you can xref:../post_installation_configuration/cluster-tasks.adoc#machineset-creating_post-install-cluster-tasks[use a machine set], xref:../post_installation_configuration/cluster-tasks.adoc#creating-an-infra-node_post-install-cluster-tasks[assign a label to the nodes], or xref:../post_installation_configuration/cluster-tasks.adoc#creating-infra-machines_post-install-cluster-tasks[use a machine config pool].

For sample machine sets that you can use with these procedures, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets-clouds[Creating machine sets for different clouds].

Applying a specific node selector to all infrastructure components causes {product-title} to xref:../post_installation_configuration/cluster-tasks.adoc#moving-resources-to-infrastructure-machinesets[schedule those workloads on nodes with that label].
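
For example, assigning the `infra` label to an existing node is a single command. The node name is a placeholder, and the label shown assumes the `node-role.kubernetes.io/infra` role convention described in the linked procedures:

[source,terminal]
----
$ oc label node <node_name> node-role.kubernetes.io/infra=""
----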

include::modules/machineset-creating.adoc[leveloffset=+2]

include::modules/creating-an-infra-node.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see xref:../nodes/scheduling/nodes-scheduler-node-selectors.adoc#project-node-selectors_nodes-scheduler-node-selectors[Project node selectors].

include::modules/creating-infra-machines.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* See xref:../architecture/control-plane.adoc#architecture-machine-config-pools_control-plane[Node configuration management with machine config pools] for more information on grouping infra machines in a custom pool.

[id="assigning-machine-set-resources-to-infra-nodes"]
== Assigning machine set resources to infrastructure nodes

After creating an infrastructure machine set, the `worker` and `infra` roles are applied to new infra nodes. Nodes with the `infra` role are not counted toward the total number of subscriptions that are required to run the environment, even when the `worker` role is also applied.

However, when an infra node is assigned the `worker` role, there is a chance that user workloads can be inadvertently assigned to the infra node. To avoid this, you can apply a taint to the infra node and tolerations to the pods that you want to control.
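
A brief sketch of this pattern, assuming the `node-role.kubernetes.io/infra` taint key used in the linked procedure; the node name is a placeholder:

[source,terminal]
----
$ oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
----

Pods that must run on the infra node then declare a matching toleration, for example:

[source,yaml]
----
tolerations:
- key: node-role.kubernetes.io/infra
  value: reserved
  effect: NoSchedule
----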

include::modules/binding-infra-node-workloads-using-taints-tolerations.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* See xref:../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[Controlling pod placement using the scheduler] for general information on scheduling a pod to a node.

[id="moving-resources-to-infrastructure-machinesets"]
== Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.
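
For example, moving the default router is done by adding a node placement constraint to its `IngressController` resource. This is a minimal sketch that assumes your infra nodes carry the `node-role.kubernetes.io/infra` label; the full procedures follow:

[source,yaml]
----
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
----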

include::modules/infrastructure-moving-router.adoc[leveloffset=+2]

include::modules/infrastructure-moving-registry.adoc[leveloffset=+2]

include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2]

include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]

include::modules/cluster-autoscaler-cr.adoc[leveloffset=+2]

:FeatureName: cluster autoscaler
:FeatureResourceName: ClusterAutoscaler
include::modules/deploying-resource.adoc[leveloffset=+2]

[id="custer-tasks-applying-autoscaling"]
== Applying autoscaling to your cluster

Applying autoscaling to an {product-title} cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster.

For more information, see xref:../machine_management/applying-autoscaling.adoc#applying-autoscaling[Applying autoscaling to an {product-title} cluster].
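
As a minimal sketch, a machine autoscaler targets one compute machine set and defines its replica range. The machine set name is a placeholder, and the full resource definitions are covered in the linked documentation:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <machine_set_name>
----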

[id="post-install-tp-tasks"]
== Enabling Technology Preview features using FeatureGates

You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the `FeatureGate` custom resource (CR).
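
A minimal sketch of the `FeatureGate` CR. Enabling the `TechPreviewNoUpgrade` feature set cannot be undone, so treat this only as an illustration of the resource shape:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
----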

include::modules/nodes-cluster-enabling-features-about.adoc[leveloffset=+2]

include::modules/nodes-cluster-enabling-features-console.adoc[leveloffset=+2]

include::modules/nodes-cluster-enabling-features-cli.adoc[leveloffset=+2]

[id="post-install-etcd-tasks"]
== etcd tasks

Back up etcd, enable or disable etcd encryption, or defragment etcd data.

[NOTE]
====
If you deployed a bare-metal cluster, you can scale the cluster up to 5 nodes as part of your post-installation tasks. For more information, see xref:../etcd/etcd-performance.adoc#etcd-node-scaling_etcd-performance[Node scaling for etcd].
====
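
For example, a single etcd backup can be taken by running the backup script on a control plane node. This is a sketch only; the node name is a placeholder and the full prerequisites are in the backup procedure included later in this section:

[source,terminal]
----
$ oc debug node/<control_plane_node>
sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----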

include::modules/about-etcd-encryption.adoc[leveloffset=+2]

include::modules/etcd-encryption-types.adoc[leveloffset=+2]

include::modules/enabling-etcd-encryption.adoc[leveloffset=+2]

include::modules/disabling-etcd-encryption.adoc[leveloffset=+2]

include::modules/backup-etcd.adoc[leveloffset=+2]

include::modules/etcd-defrag.adoc[leveloffset=+2]

include::modules/dr-restoring-cluster-state.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* xref:../etcd/etcd-practices.adoc#recommended-etcd-practices[Recommended etcd practices]
* xref:../installing/installing_bare_metal/upi/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]
* xref:../installing/overview/index.adoc#replacing-a-bare-metal-control-plane-node_bare-metal-expanding[Replacing a bare-metal control plane node]

include::modules/dr-scenario-cluster-state-issues.adoc[leveloffset=+2]

[id="post-install-pod-disruption-budgets"]
== Pod disruption budgets

Understand and configure pod disruption budgets.
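
A minimal sketch of a `PodDisruptionBudget` object that keeps at least two replicas of a labeled application available during voluntary disruptions; the name, namespace, and label are placeholders:

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
  namespace: my-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
----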

include::modules/nodes-pods-pod-disruption-about.adoc[leveloffset=+2]

include::modules/nodes-pods-pod-disruption-configuring.adoc[leveloffset=+2]

include::modules/pod-disruption-eviction-policy.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling-features[Enabling features using feature gates]
* link:https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy[Unhealthy Pod Eviction Policy] in the Kubernetes documentation