:_mod-docs-content-type: ASSEMBLY
:context: post-install-cluster-tasks
[id="post-install-cluster-tasks"]
= Postinstallation cluster tasks
include::_attributes/common-attributes.adoc[]

toc::[]

After installing {product-title}, you can further expand and customize your cluster to your requirements.

[id="available_cluster_customizations"]
== Available cluster customizations

You complete most of the cluster configuration and customization after you deploy your {product-title} cluster. A number of _configuration resources_ are available.

[NOTE]
====
If you install your cluster on {ibm-z-name}, not all features and functions are available.
====

You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.

For current documentation of the settings that you control by using these resources, use the `oc explain` command, for example, `oc explain builds --api-version=config.openshift.io/v1`.
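
For example, the following commands display the field documentation for the `Build` and `Proxy` configuration resources. The `--recursive` flag prints the full nested schema:

[source,terminal]
----
$ oc explain builds --api-version=config.openshift.io/v1
$ oc explain proxies --api-version=config.openshift.io/v1 --recursive
----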

[id="configuration-resources_{context}"]
=== Cluster configuration resources

All cluster configuration resources are globally scoped (not namespaced) and named `cluster`.
////
Config changes should not require coordinated changes between config resources. If you find
yourself struggling to update these docs to explain coordinated changes, please reach out
to @api-approvers (GitHub) or #forum-api-review (Slack).
////
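
Because each resource is cluster-scoped and named `cluster`, you can retrieve any of them with a command of the following form, shown here for the API server configuration as an illustration:

[source,terminal]
----
$ oc get apiserver.config.openshift.io cluster -o yaml
----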

[cols="2a,8a",options="header"]
|===
|Resource name
|Description

|`apiserver.config.openshift.io`
|Provides API server configuration such as xref:../security/certificates/api-server.adoc#api-server-certificates[certificates and certificate authorities].

|`authentication.config.openshift.io`
|Controls the xref:../authentication/understanding-identity-provider.adoc#understanding-identity-provider[identity provider] and authentication configuration for the cluster.

|`build.config.openshift.io`
|Controls default and enforced xref:../cicd/builds/build-configuration.adoc#build-configuration[configuration] for all builds on the cluster.

|`console.config.openshift.io`
|Configures the behavior of the web console interface, including the xref:../web_console/configuring-web-console.adoc#configuring-web-console[logout behavior].

|`featuregate.config.openshift.io`
|Enables xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[FeatureGates] so that you can use Technology Preview features.

|`image.config.openshift.io`
|Configures how specific xref:../openshift_images/image-configuration.adoc#image-configuration[image registries] should be treated (allowed, disallowed, insecure, CA details).

|`ingress.config.openshift.io`
|Configuration details related to xref:../networking/ingress-operator.adoc#nw-installation-ingress-config-asset_configuring-ingress[routing] such as the default domain for routes.

|`oauth.config.openshift.io`
|Configures identity providers and other behavior related to xref:../authentication/configuring-internal-oauth.adoc#configuring-internal-oauth[internal OAuth server] flows.

|`project.config.openshift.io`
|Configures xref:../applications/projects/configuring-project-creation.adoc#configuring-project-creation[how projects are created], including the project template.

|`proxy.config.openshift.io`
|Defines proxies to be used by components that need external network access. Note: not all components currently consume this value.

|`scheduler.config.openshift.io`
|Configures xref:../nodes/scheduling/nodes-scheduler-profiles.adoc#nodes-scheduler-profiles[scheduler] behavior such as profiles and default node selectors.
|===
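
For example, to change how the cluster treats a specific registry, edit the `Image` resource directly. The following sketch shows the general shape of the relevant `spec` stanza; the registry host name is a placeholder:

[source,terminal]
----
$ oc edit image.config.openshift.io/cluster
----

[source,yaml]
----
spec:
  registrySources:
    insecureRegistries:
    - insecure-registry.example.com <1>
----
<1> Placeholder host name for an internal registry that does not use a trusted TLS certificate.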

[id="operator-configuration-resources_{context}"]
=== Operator configuration resources

These configuration resources are cluster-scoped instances, named `cluster`, that control the behavior of a specific component that is owned by a particular Operator.

[cols="2a,8a",options="header"]
|===
|Resource name
|Description

|`consoles.operator.openshift.io`
|Controls console appearance such as branding customizations.

|`config.imageregistry.operator.openshift.io`
|Configures xref:../registry/configuring-registry-operator.adoc#registry-operator-configuration-resource-overview_configuring-registry-operator[{product-registry} settings] such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type.

|`config.samples.operator.openshift.io`
|Configures the xref:../openshift_images/configuring-samples-operator.adoc#configuring-samples-operator[Samples Operator] to control which example image streams and templates are installed on the cluster.
|===
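
For example, the following command, shown as an illustration rather than a recommended setting, scales the image registry by patching its Operator configuration resource:

[source,terminal]
----
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"replicas":2}}'
----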

[id="additional-configuration-resources_{context}"]
=== Additional configuration resources

These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.

[cols="2a,2a,2a,8a",options="header"]
|===
|Resource name
|Instance name
|Namespace
|Description

|`alertmanager.monitoring.coreos.com`
|`main`
|`openshift-monitoring`
|Controls the xref:../observability/monitoring/managing-alerts.adoc#managing-alerts[Alertmanager] deployment parameters.

|`ingresscontroller.operator.openshift.io`
|`default`
|`openshift-ingress-operator`
|Configures xref:../networking/ingress-operator.adoc#configuring-ingress[Ingress Operator] behavior such as domain, number of replicas, certificates, and controller placement.
|===
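
For example, you can inspect these instances by name and namespace:

[source,terminal]
----
$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
$ oc get alertmanager main -n openshift-monitoring -o yaml
----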

[id="informational-resources_{context}"]
=== Informational resources

You use these resources to retrieve information about the cluster. Some configurations might require you to edit these resources directly.

[cols="2a,2a,8a",options="header"]
|===
|Resource name|Instance name|Description

|`clusterversion.config.openshift.io`
|`version`
|In {product-title} {product-version}, you must not customize the `ClusterVersion` resource for production clusters. Instead, follow the process to xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[update a cluster].

|`dns.config.openshift.io`
|`cluster`
|You cannot modify the DNS settings for your cluster. You can xref:../networking/dns-operator.adoc#dns-operator[view the DNS Operator status].

|`infrastructure.config.openshift.io`
|`cluster`
|Configuration details that allow the cluster to interact with its cloud provider.

|`network.config.openshift.io`
|`cluster`
|You cannot modify your cluster networking after installation. To customize your network, follow the process to xref:../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#installing-aws-network-customizations[customize networking during installation].
|===
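
For example, the following commands retrieve the current cluster version and the infrastructure details for the cluster:

[source,terminal]
----
$ oc get clusterversion version -o yaml
$ oc get infrastructure cluster -o yaml
----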

include::modules/images-update-global-pull-secret.adoc[leveloffset=+1]

[id="adding-worker-nodes_{context}"]
== Adding worker nodes

After you deploy your {product-title} cluster, you can add worker nodes to scale cluster resources. The way that you add worker nodes depends on the installation method and the environment of your cluster.

=== Adding worker nodes to installer-provisioned infrastructure clusters

For installer-provisioned infrastructure clusters, you can manually or automatically scale the `MachineSet` object to match the number of available bare-metal hosts.
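
For example, a manual scale-out might look like the following, where `<machineset_name>` is a placeholder for one of your compute machine sets:

[source,terminal]
----
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
$ oc scale machineset <machineset_name> --replicas=2 -n openshift-machine-api
----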

To add a bare-metal host, you must configure all network prerequisites, configure an associated `baremetalhost` object, and then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.

* xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#adding-bare-metal-host-to-cluster-using-web-console_managing-bare-metal-hosts[Adding worker nodes using the web console]

* xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#adding-bare-metal-host-to-cluster-using-yaml_managing-bare-metal-hosts[Adding worker nodes using YAML in the web console]

* xref:../installing/installing_bare_metal_ipi/ipi-install-expanding-the-cluster.adoc#preparing-the-bare-metal-node_ipi-install-expanding[Manually adding a worker node to an installer-provisioned infrastructure cluster]

=== Adding worker nodes to user-provisioned infrastructure clusters

For user-provisioned infrastructure clusters, you can add worker nodes by using a {op-system-base} or {op-system} ISO image and connecting it to your cluster by using cluster Ignition config files. For RHEL worker nodes, the following procedure uses Ansible playbooks to add worker nodes to the cluster. For RHCOS worker nodes, the following procedure uses an ISO image and network booting to add worker nodes to the cluster.

* xref:../post_installation_configuration/node-tasks.adoc#post-install-config-adding-fcos-compute[Adding RHCOS worker nodes to a user-provisioned infrastructure cluster]

* xref:../post_installation_configuration/node-tasks.adoc#post-install-config-adding-rhel-compute[Adding RHEL worker nodes to a user-provisioned infrastructure cluster]

=== Adding worker nodes to clusters managed by the Assisted Installer

For clusters managed by the Assisted Installer, you can add worker nodes by using the {cluster-manager-first} console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.

* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#sno-adding-worker-nodes-to-sno-clusters_add-workers[Adding worker nodes using the OpenShift Cluster Manager]

* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#adding-worker-nodes-using-the-assisted-installer-api[Adding worker nodes using the Assisted Installer REST API]

* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#sno-adding-worker-nodes-to-single-node-clusters-manually_add-workers[Manually adding worker nodes to a single-node OpenShift cluster]

=== Adding worker nodes to clusters managed by the multicluster engine for Kubernetes

For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.

* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#on-prem-creating-your-cluster-with-the-console[Creating your cluster with the console]

////
[id="default-crds_{context}"]
== Custom resources

A number of Custom Resource Definitions (CRDs) are available for you to use to further tune your {product-title} deployment. You can deploy Custom Resources that are based on many of these CRDs to add more functionality to your {product-title} cluster.

.Default CRDs
[cols="2a,2a,8a,2a,2a",options="header"]
|===
|Name
|Group
|Description
|Namespaced
|Can deploy CR

|Alertmanager
|monitoring.coreos.com
|
|Namespaced
|

|Authentication
|config.openshift.io
|
|Global
|

|Build
|config.openshift.io
|
|Global
|

|CatalogSourceConfig
|operators.coreos.com
|
|Namespaced
|

|CatalogSource
|operators.coreos.com
|
|Namespaced
|

|ClusterAutoscaler
|autoscaling.openshift.io
|
|Global
|Yes

|ClusterDNS
|dns.openshift.io
|
|Global
|

|IngressController
|operator.openshift.io
|
|Namespaced
|

|ClusterNetwork
|network.openshift.io
|
|Global
|

|ClusterOperator
|config.openshift.io
|
|Global
|

|ClusterOperator
|operatorstatus.openshift.io
|
|Namespaced
|

|Cluster
|machine.openshift.io
|
|Namespaced
|

|ClusterServiceVersion
|operators.coreos.com
|
|Namespaced
|

|ClusterVersion
|config.openshift.io
|
|Global
|

|Config
|imageregistry.operator.openshift.io
|
|Global
|

|Config
|samples.operator.openshift.io
|
|Global
|

|Console
|console.config.openshift.io
|The top-level configuration for the web console.
|Namespaced
|The console CR is created by default with more or less empty values. It honors new values. If it is deleted, it recreates automatically.

|ControllerConfig
|machineconfiguration.openshift.io
|
|Global
|

|CredentialsRequest
|cloudcredential.openshift.io
|
|Namespaced
|

|DNS
|config.openshift.io
|
|Global
|

|EgressNetworkPolicy
|network.openshift.io
|
|Namespaced
|

|HostSubnet
|network.openshift.io
|
|Global
|

|Image
|config.openshift.io
|
|Global
|

|Infrastructure
|config.openshift.io
|
|Global
|

|Ingress
|config.openshift.io
|
|Global
|

|InstallPlan
|operators.coreos.com
|
|Namespaced
|

|KubeControllerManager
|operator.openshift.io
|
|Global
|

|KubeletConfig
|machineconfiguration.openshift.io
|
|Global
|

|MachineAutoscaler
|autoscaling.openshift.io
|
|Namespaced
|Yes

|MachineClass
|machine.openshift.io
|
|Namespaced
|

|MachineConfigPool
|machineconfiguration.openshift.io
|
|Global
|

|MachineConfig
|machineconfiguration.openshift.io
|
|Global
|

|MachineDeployment
|machine.openshift.io
|
|Namespaced
|

|MachineHealthCheck
|healthchecking.openshift.io
|
|Namespaced
|

|Machine
|machine.openshift.io
|
|Namespaced
|

|MachineSet
|machine.openshift.io
|
|Namespaced
|

|MCOConfig
|machineconfiguration.openshift.io
|
|Global
|

|NetNamespace
|network.openshift.io
|
|Global
|

|NetworkAttachmentDefinition
|k8s.cni.cncf.io
|
|Namespaced
|

|NetworkConfig
|networkoperator.openshift.io
|
|Global
|

|Network
|config.openshift.io
|
|Global
|

|OAuth
|config.openshift.io
|
|Global
|

|OpenShiftAPIServer
|operator.openshift.io
|
|Global
|

|OpenShiftControllerManagerOperatorConfig
|openshiftcontrollermanager.operator.openshift.io
|
|Global
|

|OperatorGroup
|operators.coreos.com
|
|Namespaced
|

|Project
|config.openshift.io
|
|Global
|

|Prometheus
|monitoring.coreos.com
|
|Namespaced
|

|PrometheusRule
|monitoring.coreos.com
|
|Namespaced
|

|ServiceCertSignerOperatorConfig
|servicecertsigner.config.openshift.io
|
|Global
|

|ServiceMonitor
|monitoring.coreos.com
|
|Namespaced
|

|Subscription
|operators.coreos.com
|
|Namespaced
|
|===
////
[id="post-install-adjust-worker-nodes"]
|
|
== Adjust worker nodes
|
|
If you incorrectly sized the worker nodes during deployment, adjust them by creating one or more new compute machine sets, scale them up, then scale the original compute machine set down before removing them.
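
A sketch of that sequence follows; the machine set names are placeholders, and you should wait for the new nodes to become `Ready` before scaling the original machine set down:

[source,terminal]
----
$ oc get machinesets.machine.openshift.io -n openshift-machine-api
$ oc scale machineset <new_machineset> --replicas=3 -n openshift-machine-api
$ oc scale machineset <original_machineset> --replicas=0 -n openshift-machine-api
----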

include::modules/differences-between-machinesets-and-machineconfigpool.adoc[leveloffset=+2]

include::modules/machineset-manually-scaling.adoc[leveloffset=+2]

include::modules/machineset-delete-policy.adoc[leveloffset=+2]

include::modules/nodes-scheduler-node-selectors-cluster.adoc[leveloffset=+2]

[id="post-worker-latency-profiles"]
== Improving cluster stability in high latency environments using worker latency profiles

include::snippets/worker-latency-profile-intro.adoc[]
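
As an illustration, you can switch the cluster to a different profile by patching the `nodes.config.openshift.io` resource. The profile value shown here is one of the supported options that the following modules describe:

[source,terminal]
----
$ oc patch nodes.config.openshift.io cluster --type merge \
  --patch '{"spec":{"workerLatencyProfile":"MediumUpdateAverageReaction"}}'
----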

include::modules/nodes-cluster-worker-latency-profiles-about.adoc[leveloffset=+2]

include::modules/nodes-cluster-worker-latency-profiles-using.adoc[leveloffset=+2]

[id="post-install-cpms-setup"]
== Managing control plane machines

xref:../machine_management/control_plane_machine_management/cpmso-about.adoc#cpmso-about[Control plane machine sets] provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. The availability and initial status of control plane machine sets on your cluster depend on your cloud provider and the version of {product-title} that you installed. For more information, see xref:../machine_management/control_plane_machine_management/cpmso-getting-started.adoc#cpmso-getting-started[Getting started with control plane machine sets].
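
For example, the following command queries the control plane machine set for your cluster; on clusters that do not have one, the command reports that the resource is not found:

[source,terminal]
----
$ oc get controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api
----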

[id="post-install-creating-infrastructure-machinesets-production"]
== Creating infrastructure machine sets for production environments

You can create a compute machine set to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.

For information on infrastructure nodes and which components can run on infrastructure nodes, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets].

To create an infrastructure node, you can xref:../post_installation_configuration/cluster-tasks.adoc#machineset-creating_post-install-cluster-tasks[use a machine set], xref:../post_installation_configuration/cluster-tasks.adoc#creating-an-infra-node_post-install-cluster-tasks[assign a label to the nodes], or xref:../post_installation_configuration/cluster-tasks.adoc#creating-infra-machines_post-install-cluster-tasks[use a machine config pool].

For sample machine sets that you can use with these procedures, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets-clouds[Creating machine sets for different clouds].

Applying a specific node selector to all infrastructure components causes {product-title} to xref:../post_installation_configuration/cluster-tasks.adoc#moving-resources-to-infrastructure-machinesets[schedule those workloads on nodes with that label].
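
As a sketch of the pattern, the following stanza, using the default `IngressController` as one example, pins a component to nodes that carry the `infra` role label. The modules that follow give the complete procedures for each component:

[source,yaml]
----
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
----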

include::modules/machineset-creating.adoc[leveloffset=+2]

include::modules/creating-an-infra-node.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* For information on how to configure project node selectors to avoid cluster-wide node selector key conflicts, see xref:../nodes/scheduling/nodes-scheduler-node-selectors.adoc#project-node-selectors_nodes-scheduler-node-selectors[Project node selectors].

include::modules/creating-infra-machines.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* See xref:../architecture/control-plane.adoc#architecture-machine-config-pools_control-plane[Node configuration management with machine config pools] for more information on grouping infra machines in a custom pool.

[id="assigning-machine-set-resources-to-infra-nodes"]
== Assigning machine set resources to infrastructure nodes

After creating an infrastructure machine set, the `worker` and `infra` roles are applied to new infra nodes. Nodes with the `infra` role are not counted toward the total number of subscriptions that are required to run the environment, even when the `worker` role is also applied.

However, when an infra node is assigned the `worker` role, user workloads can inadvertently be assigned to the infra node. To avoid this, you can apply a taint to the infra node and tolerations for the pods that you want to control.
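
A minimal sketch of the pattern, with a placeholder node name and `reserved` used only as an example taint value:

[source,terminal]
----
$ oc adm taint nodes <infra_node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
----

Pods that must run on the tainted node then need a matching toleration:

[source,yaml]
----
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/infra
  value: reserved
----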

include::modules/binding-infra-node-workloads-using-taints-tolerations.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* See xref:../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[Controlling pod placement using the scheduler] for general information on scheduling a pod to a node.

[id="moving-resources-to-infrastructure-machinesets"]
== Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.

include::modules/infrastructure-moving-router.adoc[leveloffset=+2]

include::modules/infrastructure-moving-registry.adoc[leveloffset=+2]

include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2]

[id="custer-tasks-moving-logging-resources"]
=== Moving {logging} resources

For information about moving {logging} resources, see:

* xref:../logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources]
* xref:../logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement]

include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]
include::modules/cluster-autoscaler-cr.adoc[leveloffset=+2]
:FeatureName: cluster autoscaler
:FeatureResourceName: ClusterAutoscaler
include::modules/deploying-resource.adoc[leveloffset=+2]

include::modules/machine-autoscaler-about.adoc[leveloffset=+1]
include::modules/machine-autoscaler-cr.adoc[leveloffset=+2]
:FeatureName: machine autoscaler
:FeatureResourceName: MachineAutoscaler
include::modules/deploying-resource.adoc[leveloffset=+2]

include::modules/nodes-clusters-cgroups-2.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../nodes/clusters/nodes-cluster-cgroups-2.adoc#nodes-cluster-cgroups-2[Configuring the Linux cgroup version on your nodes]

[id="post-install-tp-tasks"]
== Enabling Technology Preview features using FeatureGates

You can turn on a subset of the current Technology Preview features for all nodes in the cluster by editing the `FeatureGate` custom resource (CR).
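
A minimal sketch of the CR follows. Note that the resource must be named `cluster`, and enabling a Technology Preview feature set cannot be undone:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade
----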

include::modules/nodes-cluster-enabling-features-about.adoc[leveloffset=+2]
include::modules/nodes-cluster-enabling-features-console.adoc[leveloffset=+2]
include::modules/nodes-cluster-enabling-features-cli.adoc[leveloffset=+2]

[id="post-install-etcd-tasks"]
== etcd tasks

Back up etcd, enable or disable etcd encryption, or defragment etcd data.
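
For orientation, an etcd backup is taken from a control plane node; the following abbreviated sketch shows the shape of that procedure, and the backup module later in this section gives the full steps:

[source,terminal]
----
$ oc debug --as-root node/<control_plane_node>
sh-4.4# chroot /host
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup
----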

include::modules/about-etcd-encryption.adoc[leveloffset=+2]
include::modules/etcd-encryption-types.adoc[leveloffset=+2]
include::modules/enabling-etcd-encryption.adoc[leveloffset=+2]
include::modules/disabling-etcd-encryption.adoc[leveloffset=+2]
include::modules/backup-etcd.adoc[leveloffset=+2]
include::modules/etcd-defrag.adoc[leveloffset=+2]
include::modules/dr-restoring-cluster-state.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[Installing a user-provisioned cluster on bare metal]
* xref:../installing/index.adoc#replacing-a-bare-metal-control-plane-node_ipi-install-expanding[Replacing a bare-metal control plane node]

include::modules/dr-scenario-cluster-state-issues.adoc[leveloffset=+2]

[id="post-install-pod-disruption-budgets"]
== Pod disruption budgets

Understand and configure pod disruption budgets.
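
As a quick orientation, a pod disruption budget sets the number or percentage of replicas of an application that must stay available during voluntary disruptions, such as a node drain. A minimal sketch with placeholder names:

[source,yaml]
----
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-pdb <1>
  namespace: my-project <1>
spec:
  minAvailable: 2 <2>
  selector:
    matchLabels:
      app: my-app <1>
----
<1> Placeholder names for illustration.
<2> At least two matching pods must remain available during a voluntary disruption.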

include::modules/nodes-pods-pod-disruption-about.adoc[leveloffset=+2]
include::modules/nodes-pods-pod-disruption-configuring.adoc[leveloffset=+2]
include::modules/pod-disruption-eviction-policy.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* xref:../nodes/clusters/nodes-cluster-enabling-features.adoc#nodes-cluster-enabling[Enabling features using feature gates]
* link:https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy[Unhealthy Pod Eviction Policy] in the Kubernetes documentation

[id="post-install-rotate-remove-cloud-creds"]
== Rotating or removing cloud provider credentials

After installing {product-title}, some organizations require the rotation or removal of the cloud provider credentials that were used during the initial installation.

To allow the cluster to use the new credentials, you must update the secrets that the xref:../operators/operator-reference.adoc#cloud-credential-operator_cluster-operators-ref[Cloud Credential Operator (CCO)] uses to manage cloud provider credentials.
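
Before rotating credentials, it can help to confirm which credentials mode the CCO is using. As an illustration, the following command prints the configured mode; an empty value means that the CCO determines the mode dynamically:

[source,terminal]
----
$ oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}'
----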

[id="ccoctl-rotate-remove-cloud-creds"]
=== Rotating cloud provider credentials with the Cloud Credential Operator utility

// Right now only IBM can do this, but it makes sense to set this up so that other clouds can be added.
The Cloud Credential Operator (CCO) utility `ccoctl` supports updating secrets for clusters installed on {ibm-cloud-name}.

//Rotating IBM Cloud credentials with ccoctl
include::modules/refreshing-service-ids-ibm-cloud.adoc[leveloffset=+3]

//Rotating cloud provider credentials manually
include::modules/manually-rotating-cloud-creds.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* xref:../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc[vSphere CSI Driver Operator]

//Removing cloud provider credentials manually
include::modules/manually-removing-cloud-creds.adoc[leveloffset=+2]

//These additional resources are for the "Rotating or removing cloud provider credentials" section, do not separate them from that content.
[role="_additional-resources"]
.Additional resources

* xref:../authentication/managing_cloud_provider_credentials/about-cloud-credential-operator.adoc#about-cloud-credential-operator[About the Cloud Credential Operator]
* xref:../authentication/managing_cloud_provider_credentials/cco-mode-passthrough.adoc#admin-credentials-root-secret-formats_cco-mode-passthrough[Admin credentials root secret format]

[id="post-install-must-gather-disconnected"]
== Configuring image streams for a disconnected cluster

After installing {product-title} in a disconnected environment, configure the image streams for the Cluster Samples Operator and the `must-gather` image stream.
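
For example, after you mirror the sample images, you point the Samples Operator at the mirror by setting the `samplesRegistry` field; the host name shown is a placeholder:

[source,terminal]
----
$ oc edit configs.samples.operator.openshift.io/cluster
----

[source,yaml]
----
spec:
  samplesRegistry: mirror.example.com:5000 <1>
----
<1> Placeholder for your mirror registry host.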

include::modules/installation-images-samples-disconnected-mirroring-assist.adoc[leveloffset=+2]

include::modules/installation-restricted-network-samples.adoc[leveloffset=+2]

include::modules/installation-preparing-restricted-cluster-to-gather-support-data.adoc[leveloffset=+2]

include::modules/images-cluster-sample-imagestream-import.adoc[leveloffset=+1]