OSDOCS#15974: Removing unused assemblies (Control plane pod)
@@ -1,19 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="configuring-ldap-failover"]
= Configuring LDAP failover
include::_attributes/common-attributes.adoc[]
:context: sssd-ldap-failover

toc::[]

include::modules/ldap-failover-overview.adoc[]

include::modules/ldap-failover-prereqs.adoc[leveloffset=+1]

include::modules/ldap-failover-generate-certs.adoc[leveloffset=+1]

include::modules/ldap-failover-configure-sssd.adoc[leveloffset=+1]

include::modules/ldap-failover-configure-apache.adoc[leveloffset=+1]

include::modules/ldap-failover-configure-openshift.adoc[leveloffset=+1]
@@ -1,39 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="understanding-identity-provider"]
= Understanding identity provider configuration
include::_attributes/common-attributes.adoc[]
:context: understanding-identity-provider

toc::[]

include::modules/identity-provider-parameters.adoc[leveloffset=+1]

[id="supported-identity-providers"]
== Supported identity providers

You can configure the following types of identity providers:

[cols="2a,8a",options="header"]
|===

|Identity provider
|Description

|xref:../authentication/identity_providers/configuring-ldap-identity-provider.adoc#configuring-ldap-identity-provider[LDAP]
|Configure the `ldap` identity provider to validate user names and passwords
against an LDAPv3 server, using simple bind authentication.

|xref:../authentication/identity_providers/configuring-github-identity-provider.adoc#configuring-github-identity-provider[GitHub or GitHub Enterprise]
|Configure a `github` identity provider to validate user names and passwords
against GitHub or GitHub Enterprise's OAuth authentication server.

|xref:../authentication/identity_providers/configuring-google-identity-provider.adoc#configuring-google-identity-provider[Google]
|Configure a `google` identity provider using
link:https://developers.google.com/identity/protocols/OpenIDConnect[Google's OpenID Connect integration].

|xref:../authentication/identity_providers/configuring-oidc-identity-provider.adoc#configuring-oidc-identity-provider[OpenID Connect]
|Configure an `oidc` identity provider to integrate with an OpenID Connect
identity provider using an
link:http://openid.net/specs/openid-connect-core-1_0.html#CodeFlowAuth[Authorization Code Flow].

|===
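For reference, identity providers are configured through the cluster `OAuth` resource. The following is a minimal, hedged sketch of adding an `ldap` entry to that resource; the provider name, secret and config map names, bind settings, and LDAP URL are placeholder assumptions, not values from this documentation set.

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my-ldap-provider          # display name shown on the login page (example)
    mappingMethod: claim
    type: LDAP
    ldap:
      url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"  # placeholder LDAP URL
      bindDN: "cn=reader,dc=example,dc=com"                           # placeholder bind DN
      bindPassword:
        name: ldap-bind-password    # assumed secret in the openshift-config namespace
      ca:
        name: ldap-ca-configmap     # assumed config map with the CA bundle
      insecure: false
      attributes:
        id: ["dn"]
        preferredUsername: ["uid"]
        name: ["cn"]
        email: ["mail"]
EOF
----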
@@ -1,27 +0,0 @@
////
:_mod-docs-content-type: ASSEMBLY
[id='configuring-the-odo-cli']
= Configuring the odo CLI
include::_attributes/common-attributes.adoc[]
:context: configuring-the-odo-cli

toc::[]

// Comment out per https://issues.redhat.com/browse/RHDEVDOCS-3594
// include::modules/developer-cli-odo-using-command-completion.adoc[leveloffset=+1]

You can find the global settings for `odo` in the `preference.yaml` file, which is located by default in your `$HOME/.odo` directory.

You can set a different location for the `preference.yaml` file by exporting the `GLOBALODOCONFIG` variable.
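As a brief illustration (the file path below is an arbitrary example, not a required location), you could point `odo` at an alternative preferences file like this:

[source,terminal]
----
# Use a preference file outside the default $HOME/.odo directory (example path).
$ export GLOBALODOCONFIG=/tmp/odo/preference.yaml

# Subsequent odo commands read and write global settings from that file.
$ odo preference view
----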
// view config
include::modules/developer-cli-odo-view-config.adoc[leveloffset=+1]
// set key
include::modules/developer-cli-odo-set-config.adoc[leveloffset=+1]
// unset key
include::modules/developer-cli-odo-unset-config.adoc[leveloffset=+1]
// preference ref table
include::modules/developer-cli-odo-preference-table.adoc[leveloffset=+1]

include::modules/developer-cli-odo-ignoring-files-or-patterns.adoc[leveloffset=+1]
////
@@ -1,22 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id=creating-instances-of-services-managed-by-operators]
= Creating instances of services managed by Operators
include::_attributes/common-attributes.adoc[]
:context: creating-instances-of-services-managed-by-operators

toc::[]

Operators are a method of packaging, deploying, and managing Kubernetes services. With `{odo-title}`, you can create instances of services from the custom resource definitions (CRDs) provided by the Operators. You can then use these instances in your projects and link them to your components.

To create services from an Operator, you must ensure that the Operator has valid values defined in its `metadata` to start the requested service. `{odo-title}` uses the YAML in the `metadata.annotations.alm-examples` field of an Operator to start the service. If this YAML has placeholder or sample values, the service cannot start. You can modify the YAML file and start the service with the modified values. To learn how to modify YAML files and start services from them, see xref:../../cli_reference/developer_cli_odo/creating-instances-of-services-managed-by-operators.adoc#creating-services-from-yaml-files_creating-instances-of-services-managed-by-operators[Creating services from YAML files].
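As a quick, hedged illustration of this workflow, the Operator and custom resource names below are examples, and the `--dry-run` and `--from-file` flags reflect the `odo` 2.x service commands as the editor recalls them:

[source,terminal]
----
# List Operator-backed services available in the current project.
$ odo catalog list services

# Write the alm-examples manifest for a service to a file so you can
# replace any placeholder values (Operator and CR names are examples only).
$ odo service create etcdoperator.v0.9.4/EtcdCluster --dry-run > my-etcd-cluster.yaml

# Start the service from the edited manifest.
$ odo service create --from-file my-etcd-cluster.yaml
----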
== Prerequisites
* Install the `oc` CLI and log in to the cluster.
** Note that the configuration of the cluster determines the services available to you. To access the Operator services, a cluster administrator must install the respective Operator on the cluster first. To learn more, see xref:../../operators/admin/olm-adding-operators-to-cluster.adoc#olm-installing-operators-from-operatorhub_olm-adding-operators-to-a-cluster[Adding Operators to the cluster].
* Install the `{odo-title}` CLI.

include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+1]
include::modules/developer-cli-odo-listing-available-services-from-the-operators-installed-on-the-cluster.adoc[leveloffset=+1]
include::modules/developer-cli-odo-creating-a-service-from-an-operator.adoc[leveloffset=+1]
include::modules/developer-cli-odo-creating-services-from-yaml-files.adoc[leveloffset=+1]
@@ -1,23 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id=creating-a-java-application-with-a-database]
= Creating a Java application with a database
include::_attributes/common-attributes.adoc[]
:context: creating-a-java-application-with-a-database
toc::[]

This example describes how to deploy a Java application by using a devfile and connect it to a database service.

.Prerequisites
* A running cluster.
* `{odo-title}` is installed.
* The Service Binding Operator is installed in your cluster. To learn how to install Operators, contact your cluster administrator or see xref:../../../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-from-operatorhub_olm-installing-operators-in-namespace[Installing Operators from OperatorHub].
* The Dev4Devs PostgreSQL Operator is installed in your cluster. To learn how to install Operators, contact your cluster administrator or see xref:../../../operators/user/olm-installing-operators-in-namespace.adoc#olm-installing-operators-from-operatorhub_olm-installing-operators-in-namespace[Installing Operators from OperatorHub].

include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+1]

include::modules/developer-cli-odo-creating-a-java-microservice-jpa-application.adoc[leveloffset=+1]

include::modules/developer-cli-odo-creating-a-database-with-odo.adoc[leveloffset=+1]

include::modules/developer-cli-odo-connecting-a-java-application-to-mysql-database.adoc[leveloffset=+1]
@@ -1,31 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id='creating-a-multicomponent-application-with-odo']
= Creating a multicomponent application with `{odo-title}`
:context: creating-a-multicomponent-application-with-odo

toc::[]

`{odo-title}` allows you to create a multicomponent application, modify it, and link its components in an easy and automated way.

This example describes how to deploy a multicomponent application: a shooter game. The application consists of a front-end Node.js component and a back-end Java component.

.Prerequisites
* `{odo-title}` is installed.
* You have a running cluster. Developers can use link:https://access.redhat.com/documentation/en-us/red_hat_openshift_local/[{openshift-local-productname}] to deploy a local cluster quickly.
* Maven is installed.

include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deploying-the-back-end-component.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deploying-the-front-end-component.adoc[leveloffset=+1]

include::modules/developer-cli-odo-linking-both-components.adoc[leveloffset=+1]

include::modules/developer-cli-odo-exposing-the-components-to-the-public.adoc[leveloffset=+1]

include::modules/developer-cli-odo-modifying-the-running-application.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deleting-an-application.adoc[leveloffset=+1]
@@ -1,29 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id='creating-a-single-component-application-with-odo']
= Creating a single-component application with {odo-title}

:context: creating-a-single-component-application-with-odo

toc::[]

With `{odo-title}`, you can create and deploy applications on clusters.

.Prerequisites
* `{odo-title}` is installed.
* You have a running cluster. You can use link:https://access.redhat.com/documentation/en-us/red_hat_openshift_local/[{openshift-local-productname}] to deploy a local cluster quickly.

include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+1]

include::modules/developer-cli-odo-creating-and-deploying-a-nodejs-application-with-odo.adoc[leveloffset=+1]

include::modules/developer-cli-odo-modifying-your-application-code.adoc[leveloffset=+1]

include::modules/developer-cli-odo-adding-storage-to-the-application-components.adoc[leveloffset=+1]

include::modules/developer-cli-odo-adding-a-custom-builder-to-specify-a-build-image.adoc[leveloffset=+1]

include::modules/developer-cli-odo-connecting-your-application-to-multiple-services-using-openshift-service-catalog.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deleting-an-application.adoc[leveloffset=+1]
@@ -1,31 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id=creating-an-application-with-a-database]
= Creating an application with a database
include::_attributes/common-attributes.adoc[]
:context: creating-an-application-with-a-database

toc::[]

This example describes how to deploy and connect a database to a front-end application.

.Prerequisites
* `{odo-title}` is installed.
* The `oc` client is installed.
* You have a running cluster. Developers can use link:https://access.redhat.com/documentation/en-us/red_hat_openshift_local/[{openshift-local-productname}] to deploy a local cluster quickly.
* The Service Catalog is installed and enabled on your cluster.
+
[NOTE]
====
Service Catalog is deprecated in {product-title} 4 and later.
====

include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deploying-the-front-end-component.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deploying-a-database-in-interactive-mode.adoc[leveloffset=+1]

include::modules/developer-cli-odo-deploying-a-database-manually.adoc[leveloffset=+1]

include::modules/developer-cli-odo-connecting-the-database.adoc[leveloffset=+1]
@@ -1,14 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id='debugging-applications-in-odo']
= Debugging applications in `{odo-title}`
:context: debugging-applications-in-odo

toc::[]

With `{odo-title}`, you can attach a debugger to remotely debug your application. This feature is supported only for Node.js and Java components.

Components created with `{odo-title}` run in debug mode by default. A debugger agent runs on the component, on a specific port. To start debugging your application, you must start port forwarding and attach the local debugger bundled in your integrated development environment (IDE).
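For context, a minimal sketch of this flow with the `odo` 2.x debug commands; the local port is an arbitrary example:

[source,terminal]
----
# Forward the component's debug port to a local port (the port number is an example).
$ odo debug port-forward --local-port 5858

# In a second terminal, check that the debug session is running.
$ odo debug info
----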
include::modules/developer-cli-odo-debugging-an-application.adoc[leveloffset=+1]
include::modules/developer-cli-odo-configuring-debugging-parameters.adoc[leveloffset=+1]
@@ -1,11 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id='deleting-applications']
= Deleting applications
include::_attributes/common-attributes.adoc[]
:context: deleting-applications

toc::[]

You can delete applications and all components associated with the application in your project.

include::modules/developer-cli-odo-deleting-an-application.adoc[leveloffset=+1]
@@ -1,37 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="sample-applications"]
= Sample applications
include::_attributes/common-attributes.adoc[]
:context: using-sample-applications

toc::[]

`{odo-title}` offers partial compatibility with any language or runtime listed within the {product-title} catalog of component types. For example:

[source,terminal]
----
NAME       PROJECT       TAGS
dotnet     openshift     3.1,latest
httpd      openshift     2.4,latest
java       openshift     8,latest
nginx      openshift     1.10,1.12,1.8,latest
nodejs     openshift     0.10,4,6,8,latest
perl       openshift     5.16,5.20,5.24,latest
php        openshift     5.5,5.6,7.0,7.1,latest
python     openshift     2.7,3.3,3.4,3.5,3.6,latest
ruby       openshift     2.0,2.2,2.3,2.4,latest
wildfly    openshift     10.0,10.1,8.1,9.0,latest
----

[NOTE]
====
For `{odo-title}`, Java and Node.js are the officially supported component types.
Run `odo catalog list components` to verify the officially supported component types.
====

To access the component over the web, create a URL using `odo url create`.
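A short, hedged example of that step; the component URL name and port are placeholders:

[source,terminal]
----
# Expose the component on port 8080 under a named URL (name and port are examples).
$ odo url create myapp-url --port 8080

# Push the change so the URL is created on the cluster, then list it.
$ odo push
$ odo url list
----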
include::modules/developer-cli-odo-sample-applications-git.adoc[leveloffset=+1]
include::modules/developer-cli-odo-sample-applications-binary.adoc[leveloffset=+1]
@@ -1,29 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="using-devfiles-in-odo"]
= Using devfiles in {odo-title}
include::_attributes/common-attributes.adoc[]
:context: creating-applications-by-using-devfiles

toc::[]

include::modules/developer-cli-odo-about-devfiles-in-odo.adoc[leveloffset=+1]

== Creating a Java application by using a devfile

.Prerequisites
* You have installed `{odo-title}`.
* You must know the ingress domain name of your cluster. Contact your cluster administrator if you do not know it. For example, `apps-crc.testing` is the cluster domain name for https://access.redhat.com/documentation/en-us/red_hat_openshift_local/[{openshift-local-productname}].

[NOTE]
====
Currently, `odo` does not support creating devfile components with the `--git` or `--binary` flags. You can create only S2I components when using these flags.
====

include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+2]

include::modules/developer-cli-odo-listing-available-devfile-components.adoc[leveloffset=+2]

include::modules/developer-cli-odo-deploying-a-java-application-using-a-devfile.adoc[leveloffset=+2]

include::modules/developer-cli-odo-converting-an-s2i-component-into-a-devfile-component.adoc[leveloffset=+1]
@@ -1,11 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="working-with-projects"]
= Working with projects
include::_attributes/common-attributes.adoc[]
:context: working-with-projects

toc::[]

A project keeps your source code, tests, and libraries organized in a single, separate unit.
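As a minimal illustration (the project name is an example):

[source,terminal]
----
# Create a project to hold your components (the name is an example).
$ odo project create myproject

# List the projects you have access to.
$ odo project list
----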
include::modules/developer-cli-odo-creating-a-project.adoc[leveloffset=+1]
@@ -1,21 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id='working-with-storage']
= Working with storage
include::_attributes/common-attributes.adoc[]
:context: working-with-storage

toc::[]

Persistent storage keeps data available between restarts of `{odo-title}`.

include::modules/developer-cli-odo-adding-storage-to-the-application-components.adoc[leveloffset=+1]

include::modules/developer-cli-odo-adding-storage-to-a-specific-container.adoc[leveloffset=+1]

include::modules/developer-cli-odo-switching-between-ephemeral-and-persistent-storage.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
* xref:../../../storage/understanding-ephemeral-storage.adoc#storage-ephemeral-storage-overview_understanding-ephemeral-storage[Understanding ephemeral storage]
* xref:../../../storage/understanding-persistent-storage.adoc#persistent-storage-overview_understanding-persistent-storage[Understanding persistent storage]
@@ -1,30 +0,0 @@
////
:_mod-docs-content-type: ASSEMBLY
[id='installing-odo']
= Installing odo
include::_attributes/common-attributes.adoc[]
:context: installing-odo

toc::[]

// The following section describes how to install `{odo-title}` on different platforms using the CLI or the Visual Studio Code (VS Code) IDE.

You can install the `{odo-title}` CLI on Linux, Windows, or macOS by downloading a binary. You can also install the OpenShift VS Code extension, which uses both the `{odo-title}` and the `oc` binaries to interact with your OpenShift Container Platform cluster. For {op-system-base-full}, you can install the `{odo-title}` CLI as an RPM.
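A minimal sketch of the binary-download route on Linux, with the download URL left as a placeholder because the exact mirror location is not given here:

[source,terminal]
----
# Download the odo binary (replace <odo_download_URL> with the URL for your platform).
$ curl -L <odo_download_URL> -o odo

# Make it executable and move it onto your PATH.
$ chmod +x odo
$ sudo mv odo /usr/local/bin/

# Verify the installation.
$ odo version
----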
[NOTE]
====
Currently, `{odo-title}` does not support installation in a restricted network environment.
====

// You can also find the URL to the latest binaries from the {product-title} web console by clicking the *?* icon in the upper-right corner and selecting *Command Line Tools*

include::modules/developer-cli-odo-installing-odo-on-linux.adoc[leveloffset=+1]

include::modules/developer-cli-odo-installing-odo-on-windows.adoc[leveloffset=+1]

include::modules/developer-cli-odo-installing-odo-on-macos.adoc[leveloffset=+1]

include::modules/developer-cli-odo-installing-odo-on-vs-code.adoc[leveloffset=+1]

include::modules/developer-cli-odo-installing-odo-on-linux-rpm.adoc[leveloffset=+1]
////
@@ -1,11 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id='managing-environment-variables']
= Managing environment variables
include::_attributes/common-attributes.adoc[]
:context: managing-environment-variables

toc::[]

`{odo-title}` stores component-specific configurations and environment variables in the `config` file. You can use the `odo config` command to set, unset, and list environment variables for components without the need to modify the `config` file.
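For orientation, a brief sketch of those subcommands; the variable name and value are examples:

[source,terminal]
----
# Set an environment variable on the current component (name and value are examples).
$ odo config set --env DB_HOST=db.example.com

# List the component configuration, including environment variables.
$ odo config view

# Remove the variable again.
$ odo config unset --env DB_HOST
----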
include::modules/developer-cli-odo-setting-and-unsetting-environment-variables.adoc[leveloffset=+1]
@@ -1,17 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="odo-architecture"]
= odo architecture
include::_attributes/common-attributes.adoc[]
:context: odo-architecture

toc::[]

This section describes `{odo-title}` architecture and how `{odo-title}` manages resources on a cluster.

include::modules/developer-cli-odo-developer-setup.adoc[leveloffset=+1]
include::modules/developer-cli-odo-openshift-source-to-image.adoc[leveloffset=+1]
include::modules/developer-cli-odo-openshift-cluster-objects.adoc[leveloffset=+1]
include::modules/developer-cli-odo-push-workflow.adoc[leveloffset=+1]

// [role="_additional-resources"]
// == Additional resources
@@ -1,21 +0,0 @@
////
:_mod-docs-content-type: ASSEMBLY
[id='odo-cli-reference']
= odo CLI reference
include::_attributes/common-attributes.adoc[]
:context: odo-cli-reference

toc::[]

include::modules/developer-cli-odo-ref-build-images.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-catalog.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-create.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-delete.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-deploy.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-link.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-registry.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-service.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-storage.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-flags.adoc[leveloffset=+1]
include::modules/developer-cli-odo-ref-json-output.adoc[leveloffset=+1]
////
@@ -1,20 +0,0 @@
////
:_mod-docs-content-type: ASSEMBLY
[id="understanding-odo"]
= Understanding odo
include::_attributes/common-attributes.adoc[]
:context: understanding-odo

toc::[]

Red Hat OpenShift Developer CLI (`odo`) is a tool for creating applications on {product-title} and Kubernetes. With `{odo-title}`, you can develop, test, debug, and deploy microservices-based applications on a Kubernetes cluster without having a deep understanding of the platform.

`{odo-title}` follows a _create and push_ workflow. As a user, when you _create_, the information (or manifest) is stored in a configuration file. When you _push_, the corresponding resources are created on the Kubernetes cluster. All of this configuration is stored in the Kubernetes API for seamless accessibility and functionality.

`{odo-title}` uses _service_ and _link_ commands to link components and services together. `{odo-title}` achieves this by creating and deploying services based on Kubernetes Operators in the cluster. Services can be created using any of the Operators available on OperatorHub. After linking a service, `odo` injects the service configuration into the component. Your application can then use this configuration to communicate with the Operator-backed service.

include::modules/odo-key-features.adoc[leveloffset=+1]
include::modules/odo-core-concepts.adoc[leveloffset=+1]
include::modules/odo-listing-components.adoc[leveloffset=+1]
include::modules/odo-telemetry.adoc[leveloffset=+1]
////
@@ -1,23 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="about-odo-in-a-restricted-environment"]
= About {odo-title} in a restricted environment
:context: about-odo-in-a-restricted-environment

toc::[]

To run `{odo-title}` in a disconnected cluster or a cluster provisioned in a restricted environment, you must ensure that a cluster administrator has created a cluster with a mirrored registry.

To start working in a disconnected cluster, you must first xref:../../../cli_reference/developer_cli_odo/using_odo_in_a_restricted_environment/pushing-the-odo-init-image-to-the-restricted-cluster-registry.adoc#pushing-the-odo-init-image-to-a-mirror-registry_pushing-the-odo-init-image-to-the-restricted-cluster-registry[push the `odo` init image to the registry of the cluster] and then overwrite the `odo` init image path using the `ODO_BOOTSTRAPPER_IMAGE` environment variable.
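A one-line sketch of that override; the registry host, repository, and tag are placeholders, not values from this documentation:

[source,terminal]
----
# Point odo at the init image that you pushed to your mirror registry (placeholder path).
$ export ODO_BOOTSTRAPPER_IMAGE=<mirror_registry_host>/openshiftdo/odo-init-image:<tag>
----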
After you push the `odo` init image, you must xref:../../../cli_reference/developer_cli_odo/using_odo_in_a_restricted_environment/creating-and-deploying-a-component-to-the-disconnected-cluster.adoc#mirroring-a-supported-builder-image_creating-and-deploying-a-component-to-the-disconnected-cluster[mirror a supported builder image] from the registry, xref:../../../cli_reference/developer_cli_odo/using_odo_in_a_restricted_environment/creating-and-deploying-a-component-to-the-disconnected-cluster.adoc#overwriting-the-mirror-registry_creating-and-deploying-a-component-to-the-disconnected-cluster[overwrite a mirror registry] and then xref:../../../cli_reference/developer_cli_odo/using_odo_in_a_restricted_environment/creating-and-deploying-a-component-to-the-disconnected-cluster.adoc#creating-a-nodejs-application-with-odo_creating-and-deploying-a-component-to-the-disconnected-cluster[create your application]. A builder image is necessary to configure a runtime environment for your application and also contains the build tool needed to build your application, for example npm for Node.js or Maven for Java. A mirror registry contains all the necessary dependencies for your application.

[role="_additional-resources"]
.Additional resources
ifdef::openshift-enterprise,openshift-webscale[]
* xref:../../../disconnected/mirroring/installing-mirroring-installation-images.adoc#installation-about-mirror-registry_installing-mirroring-installation-images[Mirroring images for a disconnected installation]
endif::[]
* xref:../../../registry/accessing-the-registry.adoc#registry-accessing-directly_accessing-the-registry[Accessing the registry]
@@ -1,20 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="creating-and-deploying-a-component-to-the-disconnected-cluster"]
= Creating and deploying a component to the disconnected cluster
include::_attributes/common-attributes.adoc[]
:context: creating-and-deploying-a-component-to-the-disconnected-cluster

toc::[]

After you push the `init` image to a cluster with a mirrored registry, you must mirror a supported builder image for your application with the `oc` tool, overwrite the mirror registry using the environment variable, and then create your component.

== Prerequisites

* Install `oc` on the client operating system.
* xref:../../../cli_reference/developer_cli_odo/installing-odo.adoc#installing-odo-on-linux_installing-odo[Install `{odo-title}`] on the client operating system.
* Access to a restricted cluster with a configured {product-registry} or a mirror registry.
* xref:../../../cli_reference/developer_cli_odo/using_odo_in_a_restricted_environment/pushing-the-odo-init-image-to-the-restricted-cluster-registry.adoc#pushing-the-odo-init-image-to-a-mirror-registry_pushing-the-odo-init-image-to-the-restricted-cluster-registry[Push the `odo` init image to your cluster registry].

include::modules/developer-cli-odo-mirroring-a-supported-builder-image.adoc[leveloffset=+1]
include::modules/developer-cli-odo-overwriting-a-mirror-registry.adoc[leveloffset=+1]
include::modules/developer-cli-odo-creating-and-deploying-a-nodejs-application-with-odo.adoc[leveloffset=+1]
@@ -1,11 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="creating-and-deploying-devfile-components-to-the-disconnected-cluster"]
= Creating and deploying devfile components to the disconnected cluster
include::_attributes/common-attributes.adoc[]
:context: creating-and-deploying-a-component-to-the-disconnected-cluster

toc::[]

include::modules/developer-cli-odo-creating-a-nodejs-application-by-using-a-devfile-in-a-disconnected-cluster.adoc[leveloffset=+1]

include::modules/developer-cli-odo-creating-a-java-application-by-using-a-devfile-in-a-disconnected-cluster.adoc[leveloffset=+1]
@@ -1,18 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="pushing-the-odo-init-image-to-the-restricted-cluster-registry"]
include::_attributes/common-attributes.adoc[]
= Pushing the {odo-title} init image to the restricted cluster registry
:context: pushing-the-odo-init-image-to-the-restricted-cluster-registry

toc::[]

Depending on the configuration of your cluster and your operating system, you can either push the `odo` init image to a mirror registry or directly to an {product-registry}.

== Prerequisites

* Install `oc` on the client operating system.
* xref:../../../cli_reference/developer_cli_odo/installing-odo.adoc#installing-odo-on-linux_installing-odo[Install `{odo-title}`] on the client operating system.
* Access to a restricted cluster with a configured {product-registry} or a mirror registry.

include::modules/developer-cli-odo-pushing-the-odo-init-image-to-a-mirror-registry.adoc[leveloffset=+1]
include::modules/developer-cli-odo-pushing-the-odo-init-image-to-an-internal-registry-directly.adoc[leveloffset=+1]
@@ -1,7 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="control-plane-overview"]
include::_attributes/common-attributes.adoc[]
= Control plane overview
:context: control-plane-overview

// This assembly will contain modules to provide an overview of the OCP control plane and its key components, including Cluster Version Operator, Cluster operators, MCO, and etcd.
@@ -1,7 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="etcd-4-5-node"]
include::_attributes/common-attributes.adoc[]
= Scaling a cluster to 4 or 5 control-plane nodes
:context: etcd-4-5-node

// This assembly will contain modules to provide information about scaling a cluster to 4 or 5 control plane nodes.
@@ -1,7 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="etcd-fault-tolerant"]
include::_attributes/common-attributes.adoc[]
= Setting up fault-tolerant control planes that span data centers
:context: etcd-fault-tolerant

// This assembly will contain modules to provide information about setting up fault-tolerant control planes that span multiple data centers.
@@ -1,171 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-backup-restore-dr"]
include::_attributes/common-attributes.adoc[]
= Backup, restore, and disaster recovery for {hcp}
:context: hcp-backup-restore-dr

toc::[]

If you need to back up and restore etcd on a hosted cluster or provide disaster recovery for a hosted cluster, see the following procedures.

[id="hosted-etcd-non-disruptive-recovery"]
== Recovering etcd pods for hosted clusters

In hosted clusters, etcd pods run as part of a stateful set. The stateful set relies on persistent storage to store etcd data for each member. In a highly available control plane, the size of the stateful set is three pods, and each member has its own persistent volume claim.
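For orientation, a hedged sketch of inspecting that stateful set. The namespace is a placeholder, and it assumes the hosted control plane namespace follows the `<hosted_cluster_namespace>-<hosted_cluster_name>` pattern and that the stateful set is named `etcd`:

[source,terminal]
----
# Check the etcd stateful set and its pods in the hosted control plane namespace (placeholder).
$ oc get statefulset etcd -n clusters-my-hosted-cluster
$ oc get pods -n clusters-my-hosted-cluster | grep etcd
----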
include::modules/hosted-cluster-etcd-status.adoc[leveloffset=+2]
include::modules/hosted-cluster-single-node-recovery.adoc[leveloffset=+2]

[id="hcp-backup-restore"]
== Backing up and restoring etcd on a hosted cluster on {aws-short}

If you use {hcp} for {product-title}, the process to back up and restore etcd is different from xref:../backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.adoc#backing-up-etcd-data_backup-etcd[the usual etcd backup process].

The following procedures are specific to {hcp} on {aws-short}.

:FeatureName: {hcp-capital} on the {aws-short} platform
include::snippets/technology-preview.adoc[]

// Backing up etcd on a hosted cluster
include::modules/backup-etcd-hosted-cluster.adoc[leveloffset=+2]

// Restoring an etcd snapshot on a hosted cluster
include::modules/restoring-etcd-snapshot-hosted-cluster.adoc[leveloffset=+2]

// Backing up and restoring etcd on a hosted cluster in an on-premise environment
include::modules/hosted-cluster-etcd-backup-restore-on-prem.adoc[leveloffset=+1]

[id="hcp-dr-aws"]
== Disaster recovery for a hosted cluster within an {aws-short} region

In a situation where you need disaster recovery (DR) for a hosted cluster, you can recover a hosted cluster to the same region within {aws-short}. For example, you need DR when the upgrade of a management cluster fails and the hosted cluster is in a read-only state.

:FeatureName: {hcp-capital}
include::snippets/technology-preview.adoc[]

The DR process involves three main steps:

. Backing up the hosted cluster on the source management cluster
. Restoring the hosted cluster on a destination management cluster
. Deleting the hosted cluster from the source management cluster

Your workloads remain running during the process. The Cluster API might be unavailable for a period, but that will not affect the services that are running on the worker nodes.

[IMPORTANT]
====
Both the source management cluster and the destination management cluster must have the `--external-dns` flags to maintain the API server URL, as shown in this example:

.Example: External DNS flags
[source,terminal]
----
--external-dns-provider=aws \
--external-dns-credentials=<AWS Credentials location> \
--external-dns-domain-filter=<DNS Base Domain>
----

That way, the server URL ends with `https://api-sample-hosted.sample-hosted.aws.openshift.com`.

If you do not include the `--external-dns` flags to maintain the API server URL, the hosted cluster cannot be migrated.
====

[id="dr-hosted-cluster-env-context"]
=== Example environment and context

Consider a scenario where you have three clusters to restore. Two are management clusters, and one is a hosted cluster. You can restore either the control plane only or the control plane and the nodes. Before you begin, you need the following information:

* Source MGMT Namespace: The source management namespace
* Source MGMT ClusterName: The source management cluster name
* Source MGMT Kubeconfig: The source management `kubeconfig` file
* Destination MGMT Kubeconfig: The destination management `kubeconfig` file
* HC Kubeconfig: The hosted cluster `kubeconfig` file
* SSH key file: The SSH public key
* Pull secret: The pull secret file to access the release images
* {aws-short} credentials
* {aws-short} region
* Base domain: The DNS base domain to use as an external DNS
* S3 bucket name: The bucket in the {aws-short} region where you plan to upload the etcd backup

This information is shown in the following example environment variables.

.Example environment variables
[source,terminal]
----
SSH_KEY_FILE=${HOME}/.ssh/id_rsa.pub
BASE_PATH=${HOME}/hypershift
BASE_DOMAIN="aws.sample.com"
PULL_SECRET_FILE="${HOME}/pull_secret.json"
AWS_CREDS="${HOME}/.aws/credentials"
AWS_ZONE_ID="Z02718293M33QHDEQBROL"

CONTROL_PLANE_AVAILABILITY_POLICY=SingleReplica
HYPERSHIFT_PATH=${BASE_PATH}/src/hypershift
HYPERSHIFT_CLI=${HYPERSHIFT_PATH}/bin/hypershift
HYPERSHIFT_IMAGE=${HYPERSHIFT_IMAGE:-"quay.io/${USER}/hypershift:latest"}
NODE_POOL_REPLICAS=${NODE_POOL_REPLICAS:-2}

# MGMT Context
MGMT_REGION=us-west-1
MGMT_CLUSTER_NAME="${USER}-dev"
MGMT_CLUSTER_NS=${USER}
MGMT_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${MGMT_CLUSTER_NS}-${MGMT_CLUSTER_NAME}"
MGMT_KUBECONFIG="${MGMT_CLUSTER_DIR}/kubeconfig"

# MGMT2 Context
MGMT2_CLUSTER_NAME="${USER}-dest"
MGMT2_CLUSTER_NS=${USER}
MGMT2_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${MGMT2_CLUSTER_NS}-${MGMT2_CLUSTER_NAME}"
MGMT2_KUBECONFIG="${MGMT2_CLUSTER_DIR}/kubeconfig"

# Hosted Cluster Context
HC_CLUSTER_NS=clusters
HC_REGION=us-west-1
HC_CLUSTER_NAME="${USER}-hosted"
HC_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}"
HC_KUBECONFIG="${HC_CLUSTER_DIR}/kubeconfig"
BACKUP_DIR=${HC_CLUSTER_DIR}/backup

BUCKET_NAME="${USER}-hosted-${MGMT_REGION}"

# DNS
AWS_ZONE_ID="Z07342811SH9AA102K1AC"
EXTERNAL_DNS_DOMAIN="hc.jpdv.aws.kerbeross.com"
----

[id="dr-hosted-cluster-process"]
=== Overview of the backup and restore process

The backup and restore process works as follows:

. On management cluster 1, which you can think of as the source management cluster, the control plane and workers interact by using the external DNS API. The external DNS API is accessible, and a load balancer sits between the management clusters.
+
image::298_OpenShift_Backup_Restore_0123_00.png[Diagram that shows the workers accessing the external DNS API and the external DNS API pointing to the control plane through a load balancer]

. You take a snapshot of the hosted cluster, which includes etcd, the control plane, and the worker nodes. During this process, the worker nodes continue to try to access the external DNS API even if it is not accessible, the workloads are running, the control plane is saved in a local manifest file, and etcd is backed up to an S3 bucket. The data plane is active and the control plane is paused.
+
image::298_OpenShift_Backup_Restore_0123_01.png[]

. On management cluster 2, which you can think of as the destination management cluster, you restore etcd from the S3 bucket and restore the control plane from the local manifest file. During this process, the external DNS API is stopped, the hosted cluster API becomes inaccessible, and any workers that use the API are unable to update their manifest files, but the workloads are still running.
+
image::298_OpenShift_Backup_Restore_0123_02.png[]

. The external DNS API is accessible again, and the worker nodes use it to move to management cluster 2. The external DNS API can access the load balancer that points to the control plane.
+
image::298_OpenShift_Backup_Restore_0123_03.png[]

. On management cluster 2, the control plane and worker nodes interact by using the external DNS API. The resources are deleted from management cluster 1, except for the S3 backup of etcd. If you try to set up the hosted cluster again on management cluster 1, it will not work.
+
image::298_OpenShift_Backup_Restore_0123_04.png[]

You can manually back up and restore your hosted cluster, or you can run a script to complete the process. For more information about the script, see "Running a script to back up and restore a hosted cluster".

// Backing up the hosted cluster
include::modules/dr-hosted-cluster-within-aws-region-backup.adoc[leveloffset=+2]

// Restoring the hosted cluster
include::modules/dr-hosted-cluster-within-aws-region-restore.adoc[leveloffset=+2]

// Deleting the hosted cluster
include::modules/dr-hosted-cluster-within-aws-region-delete.adoc[leveloffset=+2]

//Helper script
include::modules/dr-hosted-cluster-within-aws-region-script.adoc[leveloffset=+2]
@@ -1,30 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: nodes-cma-autoscaling-custom-about
[id="nodes-cma-autoscaling-custom-understanding"]
= Understanding the custom metrics autoscaler
include::_attributes/common-attributes.adoc[]

toc::[]

The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure _triggers_, also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that {product-title} can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling.

To use the custom metrics autoscaler, you create a `ScaledObject` or `ScaledJob` object, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed.

[NOTE]
====
You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.
====

The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the `minReplicaCount` value in the custom metrics autoscaler CR to `0`, the custom metrics autoscaler scales the workload down from 1 to 0 replicas or up from 0 replicas to 1. This is known as the _activation phase_. After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the _scaling phase_.

Some triggers allow you to change the number of replicas that are scaled by the custom metrics autoscaler. In all cases, the parameter to configure the activation phase always uses the same phrase, prefixed with _activation_. For example, if the `threshold` parameter configures scaling, `activationThreshold` would configure activation. Configuring the activation and scaling phases allows you more flexibility with your scaling policies. For example, you could configure a higher activation phase to prevent scaling up or down if the metric is particularly low.

The activation value has more priority than the scaling value in case of different decisions for each. For example, if `threshold` is set to `10` and `activationThreshold` is set to `50`, and the metric reports `40`, the scaler is not active and the pods are scaled to zero even if the HPA requires 4 instances.
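The following is a minimal, hedged sketch of such a CR, based on the generic KEDA `ScaledObject` schema. The deployment name, namespace, Prometheus address, query, and threshold values are placeholders rather than values from this documentation:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject          # example name
  namespace: my-project               # placeholder namespace
spec:
  scaleTargetRef:
    name: example-deployment          # the Deployment to scale (placeholder)
  minReplicaCount: 0                  # 0 enables scale-to-zero (activation phase)
  maxReplicaCount: 10
  triggers:
  - type: prometheus                  # example trigger type
    metadata:
      serverAddress: http://prometheus.example.com:9090   # placeholder endpoint
      query: sum(rate(http_requests_total[2m]))           # placeholder query
      threshold: "10"                 # scaling value
      activationThreshold: "50"       # activation value, as discussed above
EOF
----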
////
[NOTE]
====
You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. If you want to scale based on a custom trigger and CPU/Memory, you can create multiple triggers in the scaled object or scaled job.
====
////
@@ -1,89 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="nodes-about-autoscaling-nodes"]
= About autoscaling nodes on a cluster
:context: nodes-about-autoscaling-nodes
toc::[]

// ifdef::openshift-dedicated[]
// [IMPORTANT]
// ====
// Autoscaling is available only on clusters that were purchased through the Red Hat Marketplace.
// ====
// endif::[]

The autoscaler option can be configured to automatically scale the number of machines in a cluster.

The cluster autoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.

Additionally, the cluster autoscaler decreases the size of the cluster when some nodes are consistently not needed for a significant period, such as when a node has low resource use and all of its important pods can fit on other nodes.

When you enable autoscaling, you must also set a minimum and maximum number of worker nodes.

[NOTE]
====
Only cluster owners and organization admins can scale or delete a cluster.
====

[id="nodes-enabling-autoscaling-nodes"]
== Enabling autoscaling nodes on a cluster

You can enable autoscaling on worker nodes to increase or decrease the number of nodes available by editing the machine pool definition for an existing cluster.
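For ROSA clusters, a brief, hedged sketch of what that edit looks like with the `rosa` CLI; the cluster name, machine pool name, and replica limits are examples:

[source,terminal]
----
# Enable autoscaling on an existing machine pool (names and limits are examples).
$ rosa edit machinepool --cluster=mycluster \
    --enable-autoscaling --min-replicas=2 --max-replicas=5 worker
----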
[discrete]
include::modules/ocm-enabling-autoscaling-nodes.adoc[leveloffset=+2]

ifdef::openshift-rosa[]
[NOTE]
====
Additionally, you can configure autoscaling on the default machine pool when you xref:../rosa_getting_started/rosa-creating-cluster.adoc#rosa-creating-cluster[create the cluster using interactive mode].
====

[discrete]
include::modules/rosa-enabling-autoscaling-nodes.adoc[leveloffset=+2]
endif::[]

[id="nodes-disabling-autoscaling-nodes"]
== Disabling autoscaling nodes on a cluster

You can disable autoscaling on worker nodes to increase or decrease the number of nodes available by editing the machine pool definition for an existing cluster.

ifdef::openshift-dedicated[]
You can disable autoscaling on a cluster using {cluster-manager} console.
endif::[]

ifdef::openshift-rosa[]
You can disable autoscaling on a cluster using {cluster-manager} console or the {product-title} CLI.

[NOTE]
====
Additionally, you can configure autoscaling on the default machine pool when you xref:../rosa_getting_started/rosa-creating-cluster.adoc#rosa-creating-cluster[create the cluster using interactive mode].
====
endif::[]

[discrete]
include::modules/ocm-disabling-autoscaling-nodes.adoc[leveloffset=+2]

ifdef::openshift-rosa[]

[discrete]
include::modules/rosa-disabling-autoscaling-nodes.adoc[leveloffset=+2]
endif::[]

Applying autoscaling to an {product-title} cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster.

[IMPORTANT]
====
You can configure the cluster autoscaler only in clusters where the Machine API is operational.
====
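Under the hood, those two pieces correspond to `ClusterAutoscaler` and `MachineAutoscaler` custom resources. The following is a minimal, hedged sketch in which the machine set name and the replica limits are illustrative placeholders:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 24            # example cluster-wide node cap
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a        # example name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-worker-us-east-1a   # placeholder machine set name
EOF
----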
include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="nodes-about-autoscaling-nodes-additional-resources"]
== Additional resources
* xref:../nodes/nodes-machinepools-about.adoc#machinepools-about[About machine pools]
ifdef::openshift-rosa[]
* xref:../nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing worker nodes]
* xref:../rosa_cli/rosa-manage-objects-cli.adoc#rosa-managing-objects-cli[Managing objects with the rosa CLI]
endif::[]
@@ -1,42 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/attributes-openshift-dedicated.adoc[]
[id="machinepools-about"]
= About machine pools
:context: machine-pools-about
toc::[]

{product-title} uses machine pools as an elastic, dynamic provisioning method on top of your cloud infrastructure.

The primary resources are machines, machine sets, and machine pools.

== Machines
A machine is a fundamental unit that describes the host for a worker node.

== Machine sets
`MachineSet` resources are groups of compute machines. If you need more machines or must scale them down, change the number of replicas in the machine pool to which the compute machine sets belong.

ifdef::openshift-rosa[]
Machine sets are not directly modifiable in ROSA.
endif::[]

== Machine pools
Machine pools are a higher-level construct than compute machine sets.

A machine pool creates compute machine sets that are all clones of the same configuration across availability zones. Machine pools perform all of the host node provisioning management actions on a worker node. If you need more machines or must scale them down, change the number of replicas in the machine pool to meet your compute needs. You can manually configure scaling or set autoscaling.

By default, a cluster is created with one machine pool. You can add additional machine pools to an existing cluster, modify the default machine pool, and delete machine pools.
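As a brief, hedged illustration with the `rosa` CLI (the cluster name, pool name, replica count, and instance type are examples):

[source,terminal]
----
# List the machine pools on a cluster.
$ rosa list machinepools --cluster=mycluster

# Add a machine pool (name, replica count, and instance type are examples).
$ rosa create machinepool --cluster=mycluster --name=gpu-pool \
    --replicas=3 --instance-type=m5.xlarge
----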
Multiple machine pools can exist on a single cluster, and they can each have different types or different size nodes.

== Machine pools in multiple zone clusters
When you create a machine pool in a multiple availability zone (Multi-AZ) cluster, that one machine pool has 3 zones. The machine pool, in turn, creates a total of 3 compute machine sets, one for each zone in the cluster. Each of those compute machine sets manages one or more machines in its respective availability zone.

If you create a new Multi-AZ cluster, the machine pools are replicated to those zones automatically. If you add a machine pool to an existing Multi-AZ cluster, the new pool is automatically created in those zones. Similarly, deleting a machine pool will delete it from all zones.
Due to this multiplicative effect, using machine pools in a Multi-AZ cluster can consume more of your project's quota for a specific region when creating machine pools.

[role="_additional-resources"]
== Additional resources
ifdef::openshift-rosa[]
* xref:../nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing worker nodes]
endif::[]
* xref:../nodes/nodes-about-autoscaling-nodes.adoc#nodes-about-autoscaling-nodes[About autoscaling]
@@ -1,39 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="about-remediation-fencing-maintenance"]
= About node remediation, fencing, and maintenance
include::_attributes/common-attributes.adoc[]
:context: about-node-remediation-fencing-maintenance

toc::[]

Hardware is imperfect and software contains bugs. When node-level failures occur, such as kernel hangs or network interface controller (NIC) failures, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. However, some workloads, such as ReadWriteOnce (RWO) volumes and StatefulSets, might require at-most-one semantics.

Failures affecting these workloads risk data loss, corruption, or both. It is important to ensure that the node reaches a safe state, known as `fencing`, before initiating recovery of the workload, known as `remediation`, and, ideally, recovery of the node as well.

It is not always practical to depend on administrator intervention to confirm the true status of the nodes and workloads. To facilitate such intervention, {product-title} provides multiple components for the automation of failure detection, fencing, and remediation.

[id="about-remediation-fencing-maintenance-snr"]
== Self Node Remediation

The Self Node Remediation Operator is an {product-title} add-on Operator that implements an external system of fencing and remediation that reboots unhealthy nodes and deletes resources, such as pods and `VolumeAttachment` objects. The reboot ensures that the workloads are fenced, and the resource deletion accelerates the rescheduling of affected workloads. Unlike other external systems, Self Node Remediation does not require any management interface, such as Intelligent Platform Management Interface (IPMI) or an API for node provisioning.

Self Node Remediation can be used by failure detection systems, such as Machine Health Check or Node Health Check.

[id="about-remediation-fencing-maintenance-mhc"]
== Machine Health Check

Machine Health Check uses an {product-title} built-in failure detection, fencing, and remediation system, which monitors the status of machines and the conditions of nodes. Machine Health Checks can be configured to trigger external fencing and remediation systems, such as Self Node Remediation.

[id="about-remediation-fencing-maintenance-nhc"]
== Node Health Check

The Node Health Check Operator is an {product-title} add-on Operator that implements a failure detection system that monitors node conditions. It does not have a built-in fencing or remediation system, so it must be configured with an external system that provides such features. By default, it is configured to use the Self Node Remediation system.

[id="about-remediation-fencing-maintenance-node"]
== Node Maintenance

Administrators face situations where they need to interrupt the cluster, for example, to replace a drive, RAM, or a NIC.

In advance of this maintenance, affected nodes should be cordoned and drained. When a node is cordoned, new workloads cannot be scheduled on that node. When a node is drained, to avoid or minimize downtime, workloads on the affected node are transferred to other nodes.
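For reference, the imperative version of that preparation with standard `oc adm` commands (the node name is a placeholder):

[source,terminal]
----
# Mark the node unschedulable so no new workloads land on it.
$ oc adm cordon <node_name>

# Evict the existing workloads so they are rescheduled elsewhere.
$ oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data

# After maintenance, make the node schedulable again.
$ oc adm uncordon <node_name>
----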
While this maintenance can be achieved using command-line tools, the Node Maintenance Operator offers a declarative approach to achieve this by using a custom resource. When such a resource exists for a node, the operator cordons and drains the node until the resource is deleted.
@@ -1,13 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="machine-health-checks"]
= Remediating nodes with Machine Health Checks
include::_attributes/common-attributes.adoc[]
:context: machine-health-checks

toc::[]

Machine health checks automatically repair unhealthy machines in a particular machine pool.

include::modules/machine-health-checks-about.adoc[leveloffset=+1]

include::modules/eco-configuring-machine-health-check-with-self-node-remediation.adoc[leveloffset=+1]
@@ -1,33 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="node-health-check-operator"]
= Remediating nodes with Node Health Checks
include::_attributes/common-attributes.adoc[]
:context: node-health-check-operator

toc::[]

You can use the Node Health Check Operator to identify unhealthy nodes. The Operator uses the Self Node Remediation Operator to remediate the unhealthy nodes.
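The following is a hedged sketch of a `NodeHealthCheck` CR based on the editor's recollection of the medik8s upstream schema; the selector, condition duration, template name, and namespace are assumptions, not values taken from this documentation:

[source,terminal]
----
$ oc apply -f - <<EOF
apiVersion: remediation.medik8s.io/v1alpha1
kind: NodeHealthCheck
metadata:
  name: nodehealthcheck-sample               # example name
spec:
  minHealthy: 51%                             # stop remediating if too many nodes are unhealthy
  selector:
    matchExpressions:
    - key: node-role.kubernetes.io/worker     # watch worker nodes only (example selector)
      operator: Exists
  unhealthyConditions:
  - type: Ready
    status: "False"
    duration: 300s                            # how long the condition must persist
  remediationTemplate:                        # hand off to Self Node Remediation (assumed names)
    apiVersion: self-node-remediation.medik8s.io/v1alpha1
    kind: SelfNodeRemediationTemplate
    name: self-node-remediation-resource-deletion-template
    namespace: <snr_operator_namespace>       # placeholder namespace
EOF
----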
[role="_additional-resources"]
|
||||
.Additional resources
|
||||
|
||||
xref:../../../nodes/nodes/ecosystems/eco-self-node-remediation-operator.adoc#self-node-remediation-operator-remediate-nodes[Remediating nodes with the Self Node Remediation Operator]
|
||||
|
||||
include::modules/eco-node-health-check-operator-about.adoc[leveloffset=+1]
|
||||
|
||||
include::modules/eco-node-health-check-operator-control-plane-fencing.adoc[leveloffset=+1]
|
||||
|
||||
include::modules/eco-node-health-check-operator-installation-web-console.adoc[leveloffset=+1]
|
||||
|
||||
include::modules/eco-node-health-check-operator-installation-cli.adoc[leveloffset=+1]
|
||||
|
||||
include::modules/eco-node-health-check-operator-creating-node-health-check.adoc[leveloffset=+1]
|
||||
|
||||
[id="gather-data-nhc"]
|
||||
== Gathering data about the Node Health Check Operator
|
||||
To collect debugging information about the Node Health Check Operator, use the `must-gather` tool. For information about the `must-gather` image for the Node Health Check Operator, see xref:../../../support/gathering-cluster-data.adoc#gathering-data-specific-features_gathering-cluster-data[Gathering data about specific features].
|
||||
|
||||
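
The general form of the command is sketched below; substitute the `must-gather` image that is documented for your Operator version for the placeholder:

[source,terminal]
----
$ oc adm must-gather --image=<node-health-check-must-gather-image>
----
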
[id="additional-resources-nhc-operator-installation"]
|
||||
== Additional resources
|
||||
* xref:../../../operators/admin/olm-upgrading-operators.adoc#olm-changing-update-channel_olm-upgrading-operators[Changing the update channel for an Operator]
|
||||
* xref:../../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments].
|
||||
@@ -1,70 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="node-maintenance-operator"]
= Placing nodes in maintenance mode with Node Maintenance Operator
include::_attributes/common-attributes.adoc[]
:context: node-maintenance-operator

toc::[]

You can use the Node Maintenance Operator to place nodes in maintenance mode by using the `oc adm` utility or `NodeMaintenance` custom resources (CRs).

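
As a sketch, a `NodeMaintenance` CR names the node to cordon and drain and records a reason. The API version shown here reflects the standalone Operator and is an assumption; verify it against the CRD installed in your cluster.

[source,yaml]
----
apiVersion: nodemaintenance.medik8s.io/v1beta1
kind: NodeMaintenance
metadata:
  name: worker-1-maintenance # illustrative name
spec:
  nodeName: worker-1 # node to cordon and drain
  reason: "Replacing a failed NIC" # free-form explanation for the maintenance window
----
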
include::modules/eco-about-node-maintenance-standalone.adoc[leveloffset=+1]

[id="installing-standalone-nmo"]
== Installing the Node Maintenance Operator
You can install the Node Maintenance Operator by using the web console or the OpenShift CLI (`oc`).

[NOTE]
====
If {VirtProductName} version 4.10 or earlier is installed in your cluster, it includes an outdated version of the Node Maintenance Operator.
====

include::modules/eco-node-maintenance-operator-installation-web-console.adoc[leveloffset=+2]

include::modules/eco-node-maintenance-operator-installation-cli.adoc[leveloffset=+2]

The Node Maintenance Operator is supported in a restricted network environment. For more information, see xref:../../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments].

[id="setting-node-in-maintenance-mode"]
== Setting a node to maintenance mode
You can place a node into maintenance mode from the web console or from the CLI by using a `NodeMaintenance` CR.

include::modules/eco-setting-node-maintenance-cr-web-console.adoc[leveloffset=+2]

include::modules/eco-setting-node-maintenance-cr-cli.adoc[leveloffset=+2]

include::modules/eco-checking_status_of_node_maintenance_cr_tasks.adoc[leveloffset=+2]

[id="resuming-node-from-maintenance-mode"]
== Resuming a node from maintenance mode
You can resume a node from maintenance mode from the web console or from the CLI by using a `NodeMaintenance` CR. Resuming a node makes it schedulable again.

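
From the CLI, resuming amounts to deleting the CR that placed the node in maintenance mode, for example, assuming the illustrative CR name used earlier:

[source,terminal]
----
$ oc delete nodemaintenance worker-1-maintenance
----
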
include::modules/eco-resuming-node-maintenance-cr-web-console.adoc[leveloffset=+2]

include::modules/eco-resuming-node-maintenance-cr-cli.adoc[leveloffset=+2]

[id="working-with-bare-metal-nodes"]
== Working with bare-metal nodes
For clusters with bare-metal nodes, you can place a node into maintenance mode, and resume a node from maintenance mode, by using the web console *Actions* control.

[NOTE]
====
Clusters with bare-metal nodes can also place a node into maintenance mode, and resume a node from maintenance mode, by using the web console and CLI methods described earlier. However, the methods that use the web console *Actions* control apply to bare-metal clusters only.
====

include::modules/eco-maintaining-bare-metal-nodes.adoc[leveloffset=+2]

include::modules/eco-setting-node-maintenance-actions-web-console.adoc[leveloffset=+2]

include::modules/eco-resuming-node-maintenance-actions-web-console.adoc[leveloffset=+2]

[id="gather-data-nmo"]
== Gathering data about the Node Maintenance Operator
To collect debugging information about the Node Maintenance Operator, use the `must-gather` tool. For information about the `must-gather` image for the Node Maintenance Operator, see xref:../../../support/gathering-cluster-data.adoc#gathering-data-specific-features_gathering-cluster-data[Gathering data about specific features].

[role="_additional-resources"]
[id="additional-resources-node-maintenance-operator-installation"]
== Additional resources
* xref:../../../support/gathering-cluster-data.adoc#gathering-cluster-data[Gathering data about your cluster]
* xref:../../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-evacuating_nodes-nodes-working[Understanding how to evacuate pods on nodes]
* xref:../../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-marking_nodes-nodes-working[Understanding how to mark nodes as unschedulable or schedulable]
@@ -1,37 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="self-node-remediation-operator-remediate-nodes"]
= Using Self Node Remediation
include::_attributes/common-attributes.adoc[]
:context: self-node-remediation-operator-remediate-nodes

toc::[]

You can use the Self Node Remediation Operator to automatically reboot unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur.

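
In the Self Node Remediation system, remediation of a node is typically represented by a per-node `SelfNodeRemediation` object created from the configured template. As a hedged sketch, you might check for active remediations as follows; the exact resource and namespace layout depends on the CRDs that the installed Operator version provides:

[source,terminal]
----
$ oc get selfnoderemediation --all-namespaces
----
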
include::modules/eco-self-node-remediation-operator-about.adoc[leveloffset=+1]

include::modules/eco-self-node-remediation-about-watchdog.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

xref:../../../virt/support/monitoring/virt-monitoring-vm-health.adoc#watchdog_virt-monitoring-vm-health[Configuring a watchdog]

include::modules/eco-self-node-remediation-operator-control-plane-fencing.adoc[leveloffset=+1]

include::modules/eco-self-node-remediation-operator-installation-web-console.adoc[leveloffset=+1]

include::modules/eco-self-node-remediation-operator-installation-cli.adoc[leveloffset=+1]

include::modules/eco-self-node-remediation-operator-configuring.adoc[leveloffset=+1]

include::modules/eco-self-node-remediation-operator-troubleshooting.adoc[leveloffset=+1]

[id="gather-data-self-node-remediation"]
== Gathering data about the Self Node Remediation Operator
To collect debugging information about the Self Node Remediation Operator, use the `must-gather` tool. For information about the `must-gather` image for the Self Node Remediation Operator, see xref:../../../support/gathering-cluster-data.adoc#gathering-data-specific-features_gathering-cluster-data[Gathering data about specific features].

[id="additional-resources-self-node-remediation-operator-installation"]
== Additional resources
* xref:../../../disconnected/using-olm.adoc#olm-restricted-networks[Using Operator Lifecycle Manager in disconnected environments]
* xref:../../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster]
@@ -1,20 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="nodes-nodes-graceful-shutdown"]
= Managing graceful node shutdown
include::_attributes/common-attributes.adoc[]
:context: nodes-nodes-graceful-shutdown

toc::[]

Graceful node shutdown enables the kubelet to delay the forcible eviction of pods during a node shutdown. When you configure graceful node shutdown, you can define a time period for pods to complete their running workloads before the node shuts down. This grace period minimizes interruption to critical workloads during unexpected node shutdown events. By using priority classes, you can also specify the order in which pods shut down.

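
The kubelet settings behind this feature are `shutdownGracePeriod` and `shutdownGracePeriodCriticalPods`. A sketch of a `KubeletConfig` resource that passes them through might look like the following; the pool selector and the durations are illustrative assumptions:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: graceful-shutdown # illustrative name
spec:
  machineConfigPoolSelector: # machine config pool that receives the settings
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    shutdownGracePeriod: 5m # total delay before pods are forcibly evicted
    shutdownGracePeriodCriticalPods: 2m # portion of the delay reserved for critical pods
----
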
// concept topic: how it works
include::modules/nodes-nodes-cluster-timeout-graceful-shutdown.adoc[leveloffset=+1]

// procedure topic: configuring Graceful node shutdown
include::modules/nodes-nodes-configuring-graceful-shutdown.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../nodes/pods/nodes-pods-priority.adoc#nodes-pods-priority[Understanding pod priority]
@@ -1,17 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
[id="nodes-nodes-modifying"]
= Modifying existing nodes in your {product-title} cluster
include::_attributes/common-attributes.adoc[]
:context: nodes-nodes-modifying

toc::[]

As an administrator, you can modify existing nodes by using the node group configuration map or by applying labels to the nodes.

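
For example, node labels and scheduling state can be changed directly from the CLI; `<node-name>`, `<key>`, and `<value>` are placeholders:

[source,terminal]
----
# Add or update a label on a node
$ oc label node <node-name> <key>=<value> --overwrite

# Mark a node as unschedulable, then schedulable again
$ oc adm cordon <node-name>
$ oc adm uncordon <node-name>
----
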
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/nodes-nodes-working-marking.adoc[leveloffset=+1]
@@ -1,29 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: nodes-scheduler-node-names
[id="nodes-scheduler-node-names"]
= Placing a pod on a specific node by name
include::_attributes/common-attributes.adoc[]

toc::[]

Use the Pod Node Constraints admission controller to ensure a pod is deployed onto only a specified node host by assigning it a label and specifying this in the `nodeName` setting in a pod configuration.

The Pod Node Constraints admission controller ensures that pods are deployed onto only specified node hosts using labels, and prevents users without a specific role from using the `nodeSelector` field to schedule pods.

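
A pod specification that pins a pod to a single node host by name might look like the following sketch; the pod name, image, and node name are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  nodeName: ip-10-0-1-23.example.internal # schedule onto this node host only, bypassing the scheduler's node selection
  containers:
  - name: app
    image: registry.example.com/app:latest
----
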
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/nodes-scheduler-node-names-configuring.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="nodes-scheduler-node-names-addtl-resources_{context}"]
== Additional resources

* xref:../../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-updating_nodes-nodes-working[Understanding how to update labels on nodes]
@@ -1,21 +0,0 @@
:_mod-docs-content-type: ASSEMBLY
:context: nodes-scheduler-node-project
[id="nodes-scheduler-node-project"]
= Placing a pod in a specific project
include::_attributes/common-attributes.adoc[]

toc::[]

The Pod Node Selector admission controller allows you to force pods onto nodes associated with a specific project and to prevent pods in that project from being scheduled onto other nodes.

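
As an illustration, a project node selector is expressed as an annotation on the underlying namespace; the project name and label in this sketch are placeholders:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: example-project
  annotations:
    openshift.io/node-selector: "region=east" # pods in this project are scheduled only onto matching nodes
----
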
// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/nodes-scheduler-node-projects-about.adoc[leveloffset=+1]

include::modules/nodes-scheduler-node-projects-configuring.adoc[leveloffset=+2]