mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 12:46:18 +01:00

RHV-16816 squashing master 4-13 06:10

This commit is contained in:
Rolfe Dlugy-Hegwer
2020-03-29 10:11:00 -04:00
committed by Vikram Goyal
parent 0be3e28b4e
commit e4d929f235
51 changed files with 1046 additions and 69 deletions

View File

@@ -163,11 +163,6 @@ Topics:
File: installing-restricted-networks-gcp
- Name: Uninstalling a cluster on GCP
File: uninstalling-cluster-gcp
- Name: Installing on RHV
Dir: installing_rhv
Topics:
- Name: Installing a cluster on Red Hat Virtualization
File: installing-rhv
- Name: Installing on bare metal
Dir: installing_bare_metal
Topics:
@@ -215,6 +210,17 @@ Topics:
File: uninstalling-cluster-openstack
- Name: Uninstalling a cluster on OpenStack from your own infrastructure
File: uninstalling-openstack-user
- Name: Installing on RHV
Dir: installing_rhv
Topics:
- Name: Creating a custom virtual machine template for RHV
File: installing-rhv-creating-custom-vm
- Name: Installing a cluster quickly on RHV
File: installing-rhv-default
- Name: Installing a cluster on RHV with customizations
File: installing-rhv-customizations
- Name: Uninstalling a cluster on RHV
File: uninstalling-cluster-rhv
- Name: Installing on vSphere
Dir: installing_vsphere
Topics:
@@ -224,8 +230,8 @@ Topics:
File: installing-vsphere-network-customizations
- Name: Restricted network vSphere installation
File: installing-restricted-networks-vsphere
- Name: Gathering installation logs
File: installing-gather-logs
- Name: Troubleshooting installation issues
File: installing-troubleshooting
- Name: Support for FIPS cryptography
File: installing-fips
- Name: Installation configuration

View File

@@ -1,21 +1,21 @@
[id="installing-gather-logs"]
= Gathering installation logs
[id="installing-troubleshooting"]
= Troubleshooting installation issues
include::modules/common-attributes.adoc[]
:context: installing-troubleshooting
toc::[]
To assist in troubleshooting a failed {product-title} installation, you can
gather logs from the bootstrap and control plane, or master, machines.
gather logs from the bootstrap and control plane, or master, machines. You can also get debug information from the installation program.
.Prerequisites
* You attempted to install an {product-title} cluster, and installation failed.
* You provided an SSH key to the installation program, and that key is in your
running `ssh-agent` process.
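Before gathering logs, you can confirm the `ssh-agent` prerequisite locally. This is a quick sketch, not part of the official procedure:

```shell
# Returns 0 (success) only when an ssh-agent is running and
# has at least one key loaded.
check_ssh_agent() {
    [ -n "${SSH_AUTH_SOCK:-}" ] && ssh-add -l >/dev/null 2>&1
}

if check_ssh_agent; then
    echo 'ssh-agent is running with keys loaded'
else
    echo 'no usable ssh-agent; start one and load your key with ssh-add'
fi
```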
include::modules/installation-bootstrap-gather.adoc[leveloffset=+1]
include::modules/manually-gathering-logs-with-ssh.adoc[leveloffset=+1]
include::modules/manually-gathering-logs-without-ssh.adoc[leveloffset=+1]
include::modules/installation-getting-debug-information.adoc[leveloffset=+1]

View File

@@ -0,0 +1,53 @@
[id="installing-rhv-creating-custom-vm"]
= Creating a custom virtual machine template on {rh-virtualization}
include::modules/common-attributes.adoc[]
:context: installing-rhv-creating-custom-vm
toc::[]
Before installing a {product-title} cluster on your {rh-virtualization-first} environment, you must create a custom virtual machine template and configure your environment so that the installation program uses the template.
[IMPORTANT]
====
Creating a custom virtual machine template for {rh-virtualization} is a workaround for a known issue (link:https://bugzilla.redhat.com/show_bug.cgi?id=1818577[*BZ#1818577*]). If you do not create a custom virtual machine template, the {product-title} cluster you install will fail.
====
.Prerequisites
* Review details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* If you use a firewall,
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure it to allow the sites] that your cluster requires access to.
include::modules/installing-rhv-downloading-rhcos-image.adoc[leveloffset=+1]
include::modules/installing-rhv-using-ansible-playbook.adoc[leveloffset=+1]
== Determining the resources available to customize your virtual machine template
If you plan to xref:../../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[install a cluster quickly on {rh-virtualization} using the default configuration options], skip to the next task, xref:installing-rhv-attaching-virt-disk-to-vm_installing-rhv-creating-custom-vm[Attaching the virtual disk with the RHCOS image to the virtual machine].
Otherwise, if you plan to xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[install a cluster on {rh-virtualization} with customizations], consider the following information.
You can customize the virtual machine template that the installation program uses to create the control plane and compute machines in your {product-title} cluster. You can customize the template so that these machines have more than the default virtual CPU, memory, and storage resources. Your customizations apply equally to control plane and compute machines; they cannot be customized differently.
Over-allocating resources can cause the installation or operation of your cluster to fail. To avoid over-allocating resources, you must determine how much of each resource is available for each virtual machine: Inspect the resources in your {rh-virtualization-first} cluster and divide them by the number of virtual machines. The resulting quotients are the maximum values you can allocate for each resource.
.Procedure
. Inspect the available resources in your {rh-virtualization} cluster.
.. Go to xref:../../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-verifying-the-rhv-environment_installing-rhv-default[Verifying the requirements for the {rh-virtualization} environment].
.. Record the values of the storage *Free Space*, *Logical CPU Cores*, and *Max free Memory for scheduling new VMs*.
. Record the number of virtual machines in your {product-title} cluster.
** By default, your cluster contains seven machines.
** You can also customize the number of machines, as described in xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-installation-configuration-parameters_installing-rhv-customizations[Installation configuration parameters for {rh-virtualization}]. To account for the bootstrap machine, add one machine to the number in the configuration file.
. Divide each resource by the number of virtual machines.
. Record these values for use later on.
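The division step can be sketched in shell. The capacity values below are hypothetical; substitute the numbers you recorded from your own environment:

```shell
# Hypothetical values recorded from the RHV Administration Portal
FREE_STORAGE_GIB=1100     # storage Free Space
LOGICAL_CPU_CORES=64      # Logical CPU Cores
MAX_FREE_MEMORY_GIB=256   # Max free Memory for scheduling new VMs

# Seven default machines plus one bootstrap machine
VM_COUNT=8

# The integer quotients are the maximum values you can allocate per VM
echo "Max storage per VM: $(( FREE_STORAGE_GIB / VM_COUNT )) GiB"
echo "Max vCPUs per VM:   $(( LOGICAL_CPU_CORES / VM_COUNT ))"
echo "Max memory per VM:  $(( MAX_FREE_MEMORY_GIB / VM_COUNT )) GiB"
```

With these sample numbers, each virtual machine could be allocated at most 137 GiB of storage, 8 vCPUs, and 32 GiB of memory.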
include::modules/installing-rhv-attaching-virt-disk-to-vm.adoc[leveloffset=+1]
include::modules/installing-rhv-configuring-and-creating-the-vm.adoc[leveloffset=+1]
include::modules/installing-rhv-creating-vm-template.adoc[leveloffset=+1]
include::modules/installing-rhv-export-some-environment-variables.adoc[leveloffset=+1]

View File

@@ -0,0 +1,94 @@
[id="installing-rhv-customizations"]
= Installing a cluster on {rh-virtualization} with customizations
include::modules/common-attributes.adoc[]
:context: installing-rhv-customizations
toc::[]
You can customize and install a cluster on {rh-virtualization-first}. The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster.
To install a customized cluster, you prepare the environment and perform the following steps:
. Create an installation configuration file, the `install-config.yaml` file, by running the installation program and answering its prompts.
. Inspect and modify parameters in the `install-config.yaml` file.
. Make a working copy of the `install-config.yaml` file.
. Run the installation program with a copy of the `install-config.yaml` file.
Then, the installation program creates the {product-title} cluster.
For an alternative to installing a customized cluster, see xref:../../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[Installing a default cluster].
[NOTE]
====
This installation program is available for Linux and macOS only.
====
.Prerequisites
* Review details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* If you use a firewall,
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure it to allow the sites] that your cluster requires access to.
* xref:../../installing/installing_rhv/installing-rhv-creating-custom-vm.adoc#installing-rhv-creating-custom-vm[Create a custom virtual machine template on {rh-virtualization}].
[IMPORTANT]
====
Creating a custom virtual machine template for {rh-virtualization} is a workaround for a known issue (link:https://bugzilla.redhat.com/show_bug.cgi?id=1818577[*BZ#1818577*]). If you do not create a custom virtual machine template, the {product-title} cluster you install will fail.
====
include::modules/cluster-entitlements.adoc[leveloffset=+1]
include::modules/installing-rhv-requirements.adoc[leveloffset=+1]
include::modules/installing-rhv-verifying-the-rhv-environment.adoc[leveloffset=+1]
include::modules/installing-rhv-preparing-the-network-environment.adoc[leveloffset=+1]
include::modules/installing-rhv-setting-up-ca-certificate.adoc[leveloffset=+1]
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
include::modules/installation-initializing.adoc[leveloffset=+1]
// The RHV installer for 4.4 GA has not been implemented or tested support for most configuration parameters. This will be fixed in an upcoming z-stream. When that happens, (1) uncomment the following line, (2) update the contents of the included file to display RHV-supported parameters, and (3) remove this comment.
// include::modules/installation-configuration-parameters.adoc[leveloffset=+1]
include::modules/installing-rhv-config-yaml.adoc[leveloffset=+1]
include::modules/installation-launching-installer.adoc[leveloffset=+1]
[IMPORTANT]
====
You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation.
====
include::modules/cli-installing-cli.adoc[leveloffset=+1]
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
To learn more, see xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the CLI].
include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]
.Troubleshooting
If the installation fails, the installation program times out and displays an error message. To learn more, see
xref:../../installing/installing-troubleshooting.adoc#installing-troubleshooting[Troubleshooting installation issues].
include::modules/installing-rhv-accessing-the-ocp-web-console.adoc[leveloffset=+1]
include::modules/installation-common-issues.adoc[leveloffset=+1]
== Post-installation tasks
After the {product-title} cluster initializes, you can perform the following tasks.
* Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in {product-title}.
* Optional: Remove the `kubeadmin` user. Instead, use the authentication provider to create a user with cluster-admin privileges.
.Next steps
* xref:../../installing/install_config/customizations.adoc#customizations[Customize your cluster].
* If necessary, you can
xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].

View File

@@ -0,0 +1,76 @@
[id="installing-rhv-default"]
= Installing a cluster quickly on {rh-virtualization}
include::modules/common-attributes.adoc[]
:context: installing-rhv-default
toc::[]
You can quickly install a default, non-customized cluster on {rh-virtualization-first}. The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster.
To install a default cluster, you prepare the environment, run the installation program, and answer its prompts. Then, the installation program creates the {product-title} cluster.
For an alternative to installing a default cluster, see xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[Installing a cluster with customizations].
[NOTE]
====
This installation program is available for Linux and macOS only.
====
.Prerequisites
* Review details about the
xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update]
processes.
* If you use a firewall,
xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configure it to allow the sites] that your cluster requires access to.
* xref:../../installing/installing_rhv/installing-rhv-creating-custom-vm.adoc#installing-rhv-creating-custom-vm[Create a custom virtual machine template on {rh-virtualization}].
[IMPORTANT]
====
Creating a custom virtual machine template for {rh-virtualization} is a workaround for a known issue (link:https://bugzilla.redhat.com/show_bug.cgi?id=1818577[*BZ#1818577*]). If you do not create a custom virtual machine template, the {product-title} cluster you install will fail.
====
include::modules/cluster-entitlements.adoc[leveloffset=+1]
include::modules/installing-rhv-requirements.adoc[leveloffset=+1]
include::modules/installing-rhv-verifying-the-rhv-environment.adoc[leveloffset=+1]
include::modules/installing-rhv-preparing-the-network-environment.adoc[leveloffset=+1]
include::modules/installing-rhv-setting-up-ca-certificate.adoc[leveloffset=+1]
include::modules/ssh-agent-using.adoc[leveloffset=+1]
include::modules/installation-obtaining-installer.adoc[leveloffset=+1]
include::modules/installation-initializing.adoc[leveloffset=+1]
include::modules/installation-launching-installer.adoc[leveloffset=+1]
[IMPORTANT]
====
You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation.
====
include::modules/cli-installing-cli.adoc[leveloffset=+1]
To learn more, see xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the CLI].
include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]
.Troubleshooting
If the installation fails, the installation program times out and displays an error message. To learn more, see
xref:../../installing/installing-troubleshooting.adoc#installing-troubleshooting[Troubleshooting installation issues].
include::modules/installing-rhv-accessing-the-ocp-web-console.adoc[leveloffset=+1]
include::modules/installation-common-issues.adoc[leveloffset=+1]
== Post-installation tasks
After the {product-title} cluster initializes, you can perform the following tasks.
* Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in {product-title}.
* Optional: Remove the `kubeadmin` user. Instead, use the authentication provider to create a user with cluster-admin privileges.

View File

@@ -1,13 +0,0 @@
[id="installing-rhv"]
= Installing a cluster on Red Hat Virtualization
include::modules/common-attributes.adoc[]
:context: installing-rhv
toc::[]
In {product-title} version {product-version}, you can install a cluster
in a Red Hat Virtualization (RHV) environment.
The installer that deploys a cluster on RHV automates the process by using installer-provisioned infrastructure. This installer is available for Linux and macOS only.
To learn more about installing a {product-title} cluster on RHV, read link:https://access.redhat.com/articles/4903411[Quickstart Guide: Installing {product-title} on Red Hat Virtualization].

View File

@@ -0,0 +1,10 @@
[id="uninstalling-cluster-rhv"]
= Uninstalling a cluster on {rh-virtualization}
include::modules/common-attributes.adoc[]
:context: uninstalling-cluster-rhv
toc::[]
You can remove an {product-title} cluster from {rh-virtualization-first}.
include::modules/installation-uninstall-clouds.adoc[leveloffset=+1]

View File

@@ -63,7 +63,7 @@ spec:
<1> A reference to an existing service account.
<2> The path relative to the mount point of the file to project the token into.
<3> Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.
<4> Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server.
<4> Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server.
.. Create the Pod:
+

View File

@@ -22,6 +22,8 @@
// * installing/installing_vsphere/installing-vsphere.adoc
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * openshift_images/samples-operator-alt-registry.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
//
// AMQ docs link to this; do not change anchor

View File

@@ -27,9 +27,12 @@
// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc
// * installing/installing_vsphere/installing-vsphere.adoc
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
[id="cli-logging-in-kubeadmin_{context}"]
= Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster `kubeconfig` file.
The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

View File

@@ -5,7 +5,7 @@
[id="cluster-logging-deploy-memory_{context}"]
= Configure memory for Elasticsearch instances
By default the amount of RAM allocated to each ES instance is 16GB. You can change this value as needed.
By default, the amount of RAM allocated to each ES instance is 16GB. You can change this value as needed.
Keep in mind that *half* of this value is passed to the individual Elasticsearch pods' Java processes.

View File

@@ -14,3 +14,5 @@
:rh-openstack-first: Red Hat OpenStack Platform (RHOSP)
:rh-openstack: RHOSP
:cloud-redhat-com: Red Hat OpenShift Cluster Manager
:rh-virtualization-first: Red Hat Virtualization (RHV)
:rh-virtualization: RHV

View File

@@ -5,7 +5,7 @@
[id="understanding-default-ingress_{context}"]
= Understanding the default ingress certificate
By default {product-title} uses the Ingress Operator to
By default, {product-title} uses the Ingress Operator to
create an internal CA and issue a wildcard certificate that is valid for
applications under the `.apps` sub-domain. Both the web console and CLI
use this certificate as well.

View File

@@ -189,7 +189,7 @@ AllNodesAtLatestRevision
3 nodes are at revision 3
----
.. Update the the `kubescheduler`:
.. Update the `kubescheduler`:
+
----
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

View File

@@ -13,7 +13,7 @@ its heap. This value can be modified by the `CONTAINER_HEAP_PERCENT`
environment variable. It can also be capped at an upper limit or overridden
entirely.
By default any other processes run in the Jenkins agent container, such as
By default, any other processes run in the Jenkins agent container, such as
shell scripts or `oc` commands run from pipelines, cannot use more
than the remaining 50% memory limit without provoking an OOM kill.

View File

@@ -145,7 +145,7 @@ endif::vpc[]
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifdef::private[]
<9> How to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`.

View File

@@ -122,7 +122,7 @@ endif::vnet[]
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifdef::private[]
<14> How to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`.

View File

@@ -122,7 +122,7 @@ endif::restricted[]
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifdef::restricted[]
<14> Provide the contents of the certificate file that you used for your mirror

View File

@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// *installing/installing-gather-logs.adoc
// *installing/installing-troubleshooting.adoc
[id="installation-bootstrap-gather_{context}"]
= Gathering logs from a failed installation
@@ -34,9 +34,9 @@ the bootstrap and control plane machines:
** If you used installer-provisioned infrastructure, run the following command:
+
----
$ ./openshift-install gather bootstrap --dir=<directory> <1>
$ ./openshift-install gather bootstrap --dir=<installation_directory> <1>
----
<1> `installation_directory` is the directory you stored the {product-title}
<1> `installation_directory` is the directory you specified when you ran `./openshift-install create cluster`. This directory contains the {product-title}
definition files that the installation program creates.
+
For installer-provisioned infrastructure, the installation program stores
@@ -47,13 +47,13 @@ addresses
command:
+
----
$ ./openshift-install gather bootstrap --dir=<directory> \ <1>
$ ./openshift-install gather bootstrap --dir=<installation_directory> \ <1>
--bootstrap <bootstrap_address> \ <2>
--master <master_1_address> \ <3>
--master <master_2_address> \ <3>
--master <master_3_address> <3>
----
<1> `installation_directory` is the directory you stored the {product-title}
<1> For `installation_directory`, specify the same directory you specified when you ran `./openshift-install create cluster`. This directory contains the {product-title}
definition files that the installation program creates.
<2> `<bootstrap_address>` is the fully-qualified domain name or IP address of
the cluster's bootstrap machine.
@@ -69,7 +69,7 @@ The command output resembles the following example:
+
----
INFO Pulling debug logs from the bootstrap machine
INFO Bootstrap gather logs captured here "<directory>/log-bundle-<timestamp>.tar.gz"
INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz"
----
+
If you open a Red Hat support case about your installation failure, include

View File

@@ -0,0 +1,53 @@
// Module included in the following assemblies:
//
// * installing/installing-troubleshooting.adoc
[id="installation-common-issues_{context}"]
= Troubleshooting common issues with installing on {rh-virtualization-first}
Here are some common issues you might encounter, along with proposed causes and solutions.
[id="cpu-load-increases-and-nodes-go-into-a-not-ready-state_{context}"]
== CPU load increases and nodes go into a Not Ready state
* *Symptom*: CPU load increases significantly and nodes start going into a Not Ready state.
* *Cause*: The storage domain latency might be too high, especially for master nodes.
* *Solution*:
+
Make the nodes Ready again by restarting the `kubelet` service on each affected node. Enter:
+
----
$ systemctl restart kubelet
----
+
Inspect the {product-title} metrics service, which automatically gathers and reports on some valuable data such as the etcd disk sync duration. If the cluster is operational, use this data to help determine whether storage latency or throughput is the root issue. If so, consider using a storage resource that has lower latency and higher throughput.
+
To get the raw metrics, enter the following command as `kubeadmin` or a user with `cluster-admin` privileges:
+
----
$ oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics
----
+
To learn more, see link:https://access.redhat.com/articles/3793621[Exploring Application Endpoints for the purposes of Debugging with OpenShift 4.x].
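For example, after saving the raw metrics output to a file, you can filter for the etcd disk metrics. The metric names below follow etcd's Prometheus instrumentation, and the sample values are fabricated for illustration:

```shell
# Sample of Prometheus-format output captured from the /metrics endpoint
cat > /tmp/metrics.txt <<'EOF'
etcd_disk_wal_fsync_duration_seconds_sum 12.5
etcd_disk_wal_fsync_duration_seconds_count 2500
etcd_disk_backend_commit_duration_seconds_sum 30.2
apiserver_request_count 12345
EOF

# Keep only the etcd disk metrics relevant to storage latency
grep '^etcd_disk_' /tmp/metrics.txt
```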
[id="trouble-connecting-the-rhv-cluster-api_{context}"]
== Trouble connecting to the {product-title} cluster API
* *Symptom*: The installation program completes, but the {product-title} cluster API is not available. The bootstrap virtual machine remains up after the bootstrap process completes. When you enter the following command, the response times out.
+
----
$ oc login -u kubeadmin -p *** <apiurl>
----
* *Cause*: The bootstrap VM was not deleted by the installation program and has not released the cluster's API IP address.
* *Solution*: Use the `wait-for` subcommand to be notified when the bootstrap process is complete:
+
----
$ ./openshift-install wait-for bootstrap-complete
----
+
When the bootstrap process is complete, delete the bootstrap virtual machine:
+
----
$ ./openshift-install destroy bootstrap
----

View File

@@ -14,6 +14,8 @@
// * installing/installing_gcp/installing-gcp-vpc.adoc
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
ifeval::["{context}" == "installing-aws-customizations"]
:aws:
@@ -144,7 +146,7 @@ container images for {product-title} components.
|The SSH key to use to access your cluster machines.
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
|A valid, local public SSH key that you added to the `ssh-agent` process.

View File

@@ -113,7 +113,7 @@ endif::vpc[]
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifdef::private[]
<10> How to publish the user-facing endpoints of your cluster. Set `publish` to `Internal` to deploy a private cluster, which cannot be accessed from the internet. The default value is `External`.

View File

@@ -46,7 +46,7 @@ when copying installation files from an earlier {product-title} version.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
... Select *aws* as the platform to target.
... If you do not have an AWS profile stored on your computer, enter the AWS

View File

@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * installing/installing-troubleshooting.adoc
[id="installation-getting-debug-information_{context}"]
= Getting debug information from the installation program
You can use either of the following actions to get debug information from the installation program.
* Look at debug messages from a past installation in the hidden `.openshift_install.log` file. For example, enter:
+
----
$ cat ~/<installation_directory>/.openshift_install.log <1>
----
<1> For `installation_directory`, specify the same directory you specified when you ran `./openshift-install create cluster`.
* Re-run the installation program with `--log-level=debug`:
+
----
$ ./openshift-install create cluster --dir=<installation_directory> --log-level=debug <1>
----
<1> For `installation_directory`, specify the same directory you specified when you ran `./openshift-install create cluster`.
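If the log is long, you can narrow it to only the debug-level entries. The log line format shown here is an assumption for illustration; adjust the pattern to match your actual log:

```shell
# Create a small sample log in the format the installation program writes
cat > /tmp/.openshift_install.log <<'EOF'
time="2020-03-29T10:00:00-04:00" level=debug msg="OpenShift Installer v4.4"
time="2020-03-29T10:00:01-04:00" level=info msg="Consuming Install Config"
time="2020-03-29T10:00:02-04:00" level=debug msg="Fetching Bootstrap Ignition Config"
EOF

# Show only the debug-level messages
grep 'level=debug' /tmp/.openshift_install.log
```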

View File

@@ -1,4 +1,4 @@
// Module included in the following assemblies:
//
// * installing/installing_aws/installing-aws-customizations.adoc
// * installing/installing_aws/installing-aws-network-customizations.adoc
@@ -18,6 +18,8 @@
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// * installing/installing_openstack/installing-openstack-installer-user.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// Consider also adding the installation-configuration-parameters.adoc module.
//YOU MUST SET AN IFEVAL FOR EACH NEW MODULE
@@ -71,11 +73,17 @@ ifeval::["{context}" == "installing-openstack-user-kuryr"]
:osp:
:osp-user:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:rhv:
endif::[]
[id="installation-initializing_{context}"]
= Creating the installation configuration file
You can customize your installation of {product-title} on
You can customize the {product-title} cluster you install on
ifdef::aws[]
Amazon Web Services (AWS).
endif::aws[]
@@ -88,19 +96,32 @@ endif::gcp[]
ifdef::osp[]
OpenStack.
endif::osp[]
ifdef::rhv[]
{rh-virtualization-first}.
endif::rhv[]
.Prerequisites
* Obtain the {product-title} installation program and the pull secret for your cluster.
* Download the {product-title} installation program and the pull secret for your cluster.
.Procedure
. Create the `install-config.yaml` file.
+
ifndef::rhv[]
.. Run the following command:
+
----
$ ./openshift-install create install-config --dir=<installation_directory> <1>
----
endif::rhv[]
ifdef::rhv[]
.. For {rh-virtualization-first}, run the installation program with `sudo`:
+
----
$ sudo ./openshift-install create install-config --dir=<installation_directory> <1>
----
endif::rhv[]
<1> For `<installation_directory>`, specify the directory name to store the
files that the installation program creates.
+
@@ -113,13 +134,15 @@ cluster installation, you can copy them into your directory. However, the file
names for the installation assets might change between releases. Use caution
when copying installation files from an earlier {product-title} version.
====
ifndef::rhv[]
.. At the prompts, provide the configuration details for your cloud:
... Optional: Select an SSH key to use to access your cluster machines.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
endif::rhv[]
ifdef::aws[]
... Select *AWS* as the platform to target.
... If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS
@@ -165,7 +188,9 @@ and compute nodes.
sub-domains of this base and will also include the cluster name.
endif::osp[]
ifndef::osp[]
ifndef::rhv[]
... Enter a descriptive name for your cluster.
endif::rhv[]
endif::osp[]
ifdef::osp[]
... Enter a name for your cluster. The name must be 14 or fewer characters long.
@@ -186,9 +211,59 @@ If you provide a name that is longer
than 6 characters, only the first 6 characters will be used in the infrastructure
ID that is generated from the cluster name.
endif::gcp[]
ifdef::rhv[]
.. Respond to the installation program prompts.
... For `SSH Public Key`, select a password-less public key, such as `~/.ssh/id_rsa.pub`. This key authenticates connections with the new {product-title} cluster.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your `ssh-agent` process uses.
====
... For `Platform`, select `ovirt`.
... For `Enter oVirt's API endpoint URL`, enter the URL of the {rh-virtualization} API using this format:
+
----
https://<engine-fqdn>/ovirt-engine/api <1>
----
<1> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} environment.
+
For example:
+
----
https://rhv-env.virtlab.example.com/ovirt-engine/api
----
+
... For `Is the installed oVirt certificate trusted?`, enter `Yes` since you have already set up a CA certificate. Otherwise, enter `No`.
... For `oVirt's CA bundle`, if you entered `Yes` for the preceding question, copy the certificate content from `/etc/pki/ca-trust/source/anchors/ca.pem` and paste it here. Then, press `Enter` twice. Otherwise, if you entered `No` for the preceding question, this question does not appear.
... For `Enter the oVirt engine username`, enter the username and profile of the {rh-virtualization} administrator using this format:
+
----
<username>@<profile> <1>
----
<1> For `<username>`, specify the username of an {rh-virtualization} administrator. For `<profile>`, specify the login profile, which you can get by going to the {rh-virtualization} Administration Portal login page and reviewing the *Profile* dropdown list. Together, the username and profile should look similar to this example:
+
----
admin@internal
----
+
... For `Enter password`, enter the {rh-virtualization} admin password.
... For `Select the oVirt cluster`, select the cluster for installing {product-title}.
... For `Select the oVirt storage domain`, select the storage domain for installing {product-title}.
... For `Select the oVirt network`, select a virtual network that has access to the {rh-virtualization} Manager REST API.
... For `Enter the internal API Virtual IP`, enter the static IP address you set aside for the cluster's REST API.
... For `Enter the internal DNS Virtual IP`, enter the static IP address you set aside for the cluster's internal DNS service.
... For `Enter the ingress IP`, enter the static IP address you reserved for the wildcard apps domain.
... For `Base domain`, enter the base domain of the {product-title} cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: `virtlab.example.com`
... For `Cluster name`, enter the name of the cluster. For example, `my-cluster`. Use the cluster name from the externally registered/resolvable DNS entries you created for the {product-title} REST API and apps domain names. The installation program also gives this name to the cluster in the {rh-virtualization} environment.
... For `Pull secret`, copy the pull secret from the `pull-secret.txt` file you downloaded earlier and paste it here. You can also get a copy of the same pull secret from the link:https://cloud.redhat.com/openshift/install/pull-secret[Pull Secret] page on the {cloud-redhat-com} site.
endif::rhv[]
ifndef::rhv[]
... Paste the pull secret that you obtained from the
link:https://cloud.redhat.com/openshift/install/pull-secret[Pull Secret] page on the {cloud-redhat-com} site.
ifdef::openshift-origin[]
This field is optional.
endif::[]
endif::rhv[]
ifeval::["{context}" == "installing-gcp-user-infra"]
.. Optional: If you do not want the cluster to provision compute machines, empty
the compute pool by editing the resulting `install-config.yaml` file to set
@@ -265,4 +340,10 @@ endif::[]
ifeval::["{context}" == "installing-openstack-user-kuryr"]
:!osp:
:!osp-user:
endif::[]
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:!rhv:
endif::[]
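The API endpoint URL format the prompts ask for can be composed mechanically from the engine FQDN. A minimal shell sketch, using the example FQDN from above as a placeholder:

```shell
# Sketch: build the oVirt API endpoint URL from an engine FQDN.
# rhv-env.virtlab.example.com is a placeholder, not a real host.
engine_fqdn="rhv-env.virtlab.example.com"
api_url="https://${engine_fqdn}/ovirt-engine/api"
echo "$api_url"
```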

View File

@@ -16,6 +16,8 @@
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// * installing/installing_openstack/installing-openstack-installer.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// If you use this module in any other assembly, you must update the ifeval
// statements.
@@ -74,6 +76,14 @@ endif::[]
ifeval::["{context}" == "installing-openstack-installer"]
:osp:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:custom-config:
:rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:no-config:
:rhv:
endif::[]
[id="installation-launching-installer_{context}"]
= Deploy the cluster
@@ -87,7 +97,7 @@ You can run the `create cluster` command of the installation program only once,
.Prerequisites
ifndef::osp[* Configure an account with the cloud platform that hosts your cluster.]
ifndef::osp,rhv[* Configure an account with the cloud platform that hosts your cluster.]
* Obtain the {product-title} installation program and the pull secret for your
cluster.
@@ -106,6 +116,7 @@ endif::gcp[]
. Run the installation program:
+
ifndef::rhv[]
----
$ ./openshift-install create cluster --dir=<installation_directory> \ <1>
--log-level=info <2>
@@ -119,6 +130,22 @@ directory name to store the files that the installation program creates.
endif::no-config[]
<2> To view different installation details, specify `warn`, `debug`, or
`error` instead of `info`.
endif::rhv[]
ifdef::rhv[]
----
$ sudo ./openshift-install create cluster --dir=<installation_directory> \ <1>
--log-level=info <2>
----
<1> For `<installation_directory>`, specify the
ifdef::custom-config[]
location of your customized `install-config.yaml` file.
endif::custom-config[]
ifdef::no-config[]
directory name to store the files that the installation program creates.
endif::no-config[]
<2> To view different installation details, specify `warn`, `debug`, or
`error` instead of `info`.
endif::rhv[]
ifdef::no-config[]
+
[IMPORTANT]
@@ -132,13 +159,14 @@ when copying installation files from an earlier {product-title} version.
====
+
--
ifndef::rhv[]
Provide values at the prompts:
.. Optional: Select an SSH key to use to access your cluster machines.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifdef::aws[]
.. Select *aws* as the platform to target.
@@ -203,6 +231,59 @@ ID that is generated from the cluster name.
endif::gcp[]
.. Paste the pull secret that you obtained from the
link:https://cloud.redhat.com/openshift/install/pull-secret[Pull Secret] page on the {cloud-redhat-com} site.
ifdef::openshift-origin[]
This field is optional.
endif::[]
endif::rhv[]
ifdef::rhv[]
Respond to the installation program prompts.
.. Optional: For `SSH Public Key`, select a password-less public key, such as `~/.ssh/id_rsa.pub`. This key authenticates connections with the new {product-title} cluster.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your `ssh-agent` process uses.
====
.. For `Platform`, select `ovirt`.
.. For `Enter oVirt's API endpoint URL`, enter the URL of the {rh-virtualization} API using this format:
+
----
https://<engine-fqdn>/ovirt-engine/api <1>
----
+
<1> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} environment.
+
For example:
+
----
https://rhv-env.virtlab.example.com/ovirt-engine/api
----
+
.. For `Is the installed oVirt certificate trusted?`, enter `Yes` since you have already set up a CA certificate. Otherwise, enter `No`.
.. For `oVirt's CA bundle`, if you entered `Yes` for the preceding question, copy the certificate content from `/etc/pki/ca-trust/source/anchors/ca.pem` and paste it here. Then, press `Enter` twice. Otherwise, if you entered `No` for the preceding question, this question does not appear.
.. For `Enter the oVirt engine username`, enter the username and profile of the {rh-virtualization} administrator using this format:
+
----
<username>@<profile> <1>
----
+
<1> For `<username>`, specify the username of an {rh-virtualization} administrator. For `<profile>`, specify the login profile, which you can get by going to the {rh-virtualization} Administration Portal login page and reviewing the *Profile* dropdown list. Together, the username and profile should look similar to this example:
+
----
admin@internal
----
+
.. For `Enter password`, enter the {rh-virtualization} admin password.
.. For `Select the oVirt cluster`, select the cluster for installing {product-title}.
.. For `Select the oVirt storage domain`, select the storage domain for installing {product-title}.
.. For `Select the oVirt network`, select a virtual network that has access to the {rh-virtualization} Manager REST API.
.. For `Enter the internal API Virtual IP`, enter the static IP address you set aside for the cluster's REST API.
.. For `Enter the internal DNS Virtual IP`, enter the static IP address you set aside for the cluster's internal DNS service.
.. For `Enter the ingress IP`, enter the static IP address you reserved for the wildcard apps domain.
.. For `Base domain`, enter the base domain of the {product-title} cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: `virtlab.example.com`
.. For `Cluster name`, enter the name of the cluster. For example, `my-cluster`. Use the cluster name from the externally registered/resolvable DNS entries you created for the {product-title} REST API and apps domain names. The installation program also gives this name to the cluster in the {rh-virtualization} environment.
.. For `Pull secret`, copy the pull secret from the `pull-secret.txt` file you downloaded earlier and paste it here. You can also get a copy of the same pull secret from the link:https://cloud.redhat.com/openshift/install/pull-secret[Pull Secret] page on the {cloud-redhat-com} site.
endif::rhv[]
--
endif::no-config[]
+
@@ -240,6 +321,7 @@ ifdef::gcp[]
you can remove it.
endif::gcp[]
ifeval::["{context}" == "installing-aws-customizations"]
:!custom-config:
:!aws:
@@ -295,3 +377,11 @@ endif::[]
ifeval::["{context}" == "installing-openstack-installer"]
:!osp:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!custom-config:
:!rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:!no-config:
:!rhv:
endif::[]

View File

@@ -21,6 +21,8 @@
// * installing/installing_openstack/installing-openstack-installer.adoc
// * installing/installing_vsphere/installing-vsphere.adoc
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
ifeval::["{context}" == "installing-ibm-z"]
:ibm-z:
@@ -31,19 +33,17 @@ endif::[]
Before you install {product-title}, download the installation file on
ifdef::restricted[]
the bastion host.
the bastion host.
endif::restricted[]
ifndef::restricted[]
ifdef::ibm-z[your provisioning machine.]
ifndef::ibm-z[a local computer.]
ifdef::ibm-z[ your provisioning machine.]
ifndef::ibm-z[ a local computer.]
endif::restricted[]
.Prerequisites
ifdef::ibm-z[* You must install the cluster from a machine that runs Linux, for example Red Hat Enterprise Linux 8.]
ifndef::ibm-z[* You must install the cluster from a computer that uses Linux or macOS.]
* You need 500 MB of local disk space to download the installation program.
ifdef::ibm-z[* A machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space]
ifndef::ibm-z[* A computer that runs Linux or macOS, with 500 MB of local disk space]
.Procedure

View File

@@ -3,11 +3,13 @@
// * installing/installing_openstack/installing-openstack-installer.adoc
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
[id="installation-osp-verifying-cluster-status_{context}"]
= Verifying cluster status
To verify your {product-title} cluster's status during or after installation:
You can verify your {product-title} cluster's status during or after installation.
.Procedure

View File

@@ -58,7 +58,7 @@ installer-provisioned infrastructure on the following platforms:
* Microsoft Azure
* {rh-openstack-first} version 13 and 16
** The latest {product-title} release supports both the latest {rh-openstack} long-life release and intermediate release. For complete {rh-openstack} release compatibility, see the link:https://access.redhat.com/articles/4679401[{product-title} on {rh-openstack} support matrix].
* Red Hat Virtualization (RHV)
* {rh-virtualization-first}
For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat.

View File

@@ -8,7 +8,7 @@
During {product-title} installation, you can enable disk encryption on all master and worker nodes.
This feature:
* Is available for installer provisioned infrastructure
* Is available for installer-provisioned infrastructure
and user provisioned infrastructure deployments
* Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only
* Sets up disk encryption during the manifest installation phase so all data written to disk, from first boot forward, is encrypted

View File

@@ -4,6 +4,7 @@
// * installing/installing_azure/uninstalling-cluster-azure.adoc
// * installing/installing_gcp/uninstalling-cluster-gcp.adoc
// * installing/installing_osp/uninstalling-cluster-openstack.adoc
// * installing/installing_rhv/uninstalling-cluster-rhv.adoc
[id="installation-uninstall-clouds_{context}"]
= Removing a cluster that uses installer-provisioned infrastructure

View File

@@ -116,7 +116,7 @@ endif::restricted[]
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery on, specify an SSH key that your `ssh-agent` process uses.
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
ifdef::restricted[]
<15> Provide the contents of the certificate file that you used for your mirror

View File

@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-custom.adoc
[id="installing-rhv-accessing-the-ocp-web-console_{context}"]
= Accessing the {product-title} web console on {rh-virtualization}
After the {product-title} cluster initializes, you can log in to the {product-title} web console.
.Procedure
. Optional: In the {rh-virtualization-first} Administration Portal, open *Compute* -> *Cluster*.
. Verify that the installation program creates the virtual machines.
. Return to the command line where the installation program is running. When the installation program finishes, it displays the user name and temporary password for logging into the {product-title} web console.
. In a browser, open the URL of the {product-title} web console. The URL uses this format:
+
----
console-openshift-console.apps.<clustername>.<basedomain> <1>
----
<1> For `<clustername>.<basedomain>`, specify the cluster name and base domain.
+
For example:
+
----
console-openshift-console.apps.my-cluster.virtlab.example.com
----
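The console URL above is a fixed pattern. Assembled in shell, using the example cluster name and base domain as placeholder values:

```shell
# Sketch: assemble the web console URL from the cluster name and base
# domain used during installation. Values below are placeholders.
cluster_name="my-cluster"
base_domain="virtlab.example.com"
console_url="console-openshift-console.apps.${cluster_name}.${base_domain}"
echo "$console_url"
```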

View File

@@ -0,0 +1,22 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-creating-custom-vm.adoc
[id="installing-rhv-attaching-virt-disk-to-vm_{context}"]
= Attaching the virtual disk with the {op-system} image to a virtual machine
Attach the virtual disk with the {op-system-first} image to a virtual machine.
.Procedure
. Go to the *Compute* -> *Virtual Machines* page and click btn:[New].
. In the *New Virtual Machine* -> *General* window that opens, use *Cluster* to select the {rh-virtualization} cluster where you plan to create the {product-title} cluster.
. Leave *Template* unchanged, with *Blank* selected.
. Set *Operating System* to *Red Hat Enterprise Linux CoreOS*.
. Leave *Optimized for* unchanged, with *Desktop* selected.
. For *Name*, specify the name of the virtual machine. For example, `vmname`.
. For *Instance Images*, click btn:[Attach].
. In the *Attach Virtual Disks* window that opens, find the disk you specified in the playbook. For example, `rhcos_44-81_img-diskname`.
. Click the radio button for this disk.
. Select the *OS* checkbox for this disk. This makes the disk bootable.
. Click btn:[OK] to save and close the *Attach Virtual Disks* window.

View File

@@ -0,0 +1,68 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-custom.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
[id="installing-rhv-installation-configuration-parameters_{context}"]
= Installation configuration parameters for {rh-virtualization}
Before you deploy an {product-title} cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the `install-config.yaml` installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the `install-config.yaml` file to provide more details about the platform.
The following example is specific to installing {product-title} on {rh-virtualization}. It uses numbered callouts to show which parameter values you can edit. Do not modify parameter values that do not have callouts.
This file is located in the `<installation_directory>` you specified when you ran the following command.
----
$ sudo ./openshift-install create install-config --dir=<installation_directory>
----
[NOTE]
====
* Do not copy the following example. Instead, run the installation program to create one.
* You cannot modify these parameters in the `install-config.yaml` file after installation.
====
IMPORTANT: If you make customizations that require additional resources, such as adding control plane and compute machines, verify that your {rh-virtualization} environment has enough resources. Otherwise, these customizations might cause the installation to fail.
.Example `install-config.yaml` configuration file
[source,yaml]
----
apiVersion: v1
baseDomain: <virtlab.example.com> <1>
compute:
- hyperthreading: Enabled
name: worker
platform: {}
replicas: 3 <2>
controlPlane:
hyperthreading: Enabled
name: master
platform: {}
replicas: 3 <3>
metadata:
name: <my-cluster> <4>
platform:
ovirt:
api_vip: <ip-address> <5>
dns_vip: <ip-address> <6>
ingress_vip: <ip-address> <7>
ovirt_cluster_id: <rhv-cluster-id> <8>
ovirt_storage_domain_id: <rhv-storage-domain-id> <9>
publish: External
pullSecret: |
<pull-secret> <10>
sshKey: |
<ssh-public-key> <11>
----
<1> For `<virtlab.example.com>`, specify the base domain of the {product-title} cluster.
<2> Specify `3` or more compute machines. The default value is `3`.
<3> Specify `3` or more control plane machines. The default value is `3`.
<4> For `<my-cluster>`, specify the name of the new {product-title} cluster.
<5> For `<ip-address>`, specify the static IP address of the API for which you created the `api.` DNS entry.
<6> For `<ip-address>`, specify the static IP address of the internal DNS of the {product-title} cluster.
<7> For `<ip-address>`, specify the static IP address of the cluster applications for which you created the `*.apps.` DNS entry.
<8> For `<rhv-cluster-id>`, specify an {rh-virtualization} cluster ID.
<9> For `<rhv-storage-domain-id>`, specify an {rh-virtualization} storage domain ID.
<10> For `<pull-secret>`, specify your pull secret in JSON format.
<11> For `<ssh-public-key>`, specify your public SSH key.
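As an informal pre-flight check, you can confirm that both `replicas` values in a generated `install-config.yaml` meet the three-machine minimum noted in callouts 2 and 3. This is a sketch against a trimmed sample file, not part of the installation program:

```shell
# Sketch: count replicas entries below the 3-machine minimum in a
# trimmed install-config.yaml sample (sample data, not a full file).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
EOF
low=$(awk '$1 == "replicas:" && $2 < 3 { n++ } END { print n + 0 }' "$cfg")
rm -f "$cfg"
echo "replica entries below minimum: $low"
```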

View File

@@ -0,0 +1,21 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-creating-custom-vm.adoc
[id="installing-rhv-configuring-and-creating-the-vm_{context}"]
= Configuring and creating the virtual machine
Continue configuring the virtual machine and then create it.
.Procedure
. In the *New Virtual Machine* -> *General* window, for each NIC under *Instantiate VM network interfaces by picking a vNIC profile*, select a vNIC profile.
. Go to the *New Virtual Machine* -> *System* window.
. Set *Memory Size* to *16384 MB* or more. This default value is equivalent to 16 GiB. You can adjust this value according to the workload you expect on the compute machines and the amount of memory resources that are available.
. Due to a known issue (link:https://bugzilla.redhat.com/show_bug.cgi?id=1821215[*bz#1821215*]), set *Physical Memory Guaranteed* to *8192 MB*. This value is equivalent to 8 GiB.
. Set *Total Virtual CPUs* to *4* or more. You can adjust this value according to the workload you expect on the compute machines and the resources that are available.
. Expand *Advanced Parameters*.
. Adjust *Cores per Virtual Socket* to *4* or more so it matches the *Total Virtual CPUs* setting.
. Press btn:[OK] to save and close the *New Virtual Machine*.
+
The new virtual machine appears in the *Compute* -> *Virtual Machines* window. Because the virtual machine does not start, it appears there quickly.

View File

@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-creating-custom-vm.adoc
[id="installing-rhv-creating-vm-template_{context}"]
= Creating a virtual machine template
Create the virtual machine template.
.Procedure
. Select the new virtual machine.
. In the upper-right corner of the Administration Portal window, click the *More actions* menu, which looks like icon:ellipsis-v[], and select *Make Template*.
. In the *New Template* window, specify a value for the *Name*. For example, `vm-tmpltname`.
. Verify that *Target* and the other parameter values are correct and reflect your previous choices.
. Verify that *Seal Template (Linux only)* is *not* selected.
. Click btn:[OK].
. Go to the *Compute* -> *Templates* page and wait for the template to be created.

View File

@@ -0,0 +1,43 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-creating-custom-vm.adoc
[id="installing-rhv-downloading-rhcos-image_{context}"]
= Downloading a specific {op-system} image to the {rh-virtualization} Manager machine
Download a specific {op-system-first} image to the {rh-virtualization-first} Manager machine.
IMPORTANT: Do not substitute the {op-system} image specified by these instructions with another image.
.Procedure
. Open a terminal session with the {rh-virtualization} Manager machine. For example, if you run the Manager as a self-hosted engine:
.. Go to the {rh-virtualization} Administration Portal and go to the *Compute* -> *Virtual Machines* page.
.. Select the *HostedEngine* virtual machine and click btn:[Console].
. On the Manager's command line, create a working directory and change to it.
+
----
$ mkdir rhcos
$ cd rhcos
----
. In a browser, go to link:https://github.com/openshift/installer/blob/release-4.4/data/data/rhcos.json[].
. In `rhcos.json`, find `baseURI` and copy its value.
. On the Manager, type a `wget` command and paste the value of `baseURI`, but *do not press kbd:[Enter]*. For example:
+
----
$ wget https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.4/44.81.202003062006-0/x86_64/
----
. In the `rhcos.json` document, find `openstack` and copy the value of `path`.
. On the Manager, paste the value of `path`. For example:
+
----
$ wget https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.4/44.81.202003062006-0/x86_64/rhcos-44.81.202003062006-0-openstack.x86_64.qcow2.gz
----
+
. Press kbd:[Enter] and wait for `wget` to finish downloading the {op-system} image.
. Unzip the {op-system} image. For example:
+
----
$ gunzip rhcos*
----
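The two manual lookups in `rhcos.json` can also be scripted. A rough sketch against a local sample file; the URLs below are placeholders, not the real image locations:

```shell
# Sketch: pull "baseURI" and the openstack image "path" out of a local
# rhcos.json copy and join them into a download URL. Sample data only.
cat > rhcos-sample.json <<'EOF'
{
  "baseURI": "https://example.com/releases/rhcos-4.4/x86_64/",
  "images": {
    "openstack": { "path": "rhcos-44.81-openstack.x86_64.qcow2.gz" }
  }
}
EOF
base=$(sed -n 's/.*"baseURI": "\([^"]*\)".*/\1/p' rhcos-sample.json)
path=$(sed -n 's/.*"path": "\([^"]*\)".*/\1/p' rhcos-sample.json)
image_url="${base}${path}"
echo "$image_url"
rm -f rhcos-sample.json
```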

View File

@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-creating-custom-vm.adoc
[id="installing-rhv-exporting-some-environment-variables_{context}"]
= Exporting environment variables
. Determine the values you specified for *Memory* and *Total Virtual CPUs* in the virtual machine settings.
. Set the following environment variables on the command line of your installation machine.
----
$ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=<vm-tmpltname> <1>
$ export TF_VAR_ovirt_template_mem=<mem-value> <2>
$ export TF_VAR_ovirt_template_cpu=<cpu-value> <3>
$ export TF_VAR_ovirt_master_mem=<mem-value> <2>
$ export TF_VAR_ovirt_master_cpu=<cpu-value> <3>
----
<1> For `<vm-tmpltname>`, specify the name of the template that you created.
<2> For `<mem-value>`, specify the value of *Memory* in the virtual machine. For example, `16384`.
<3> For `<cpu-value>`, specify the value of *Total Virtual CPUs* in the virtual machine. For example, `4`.
+
For example:
+
----
$ export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=vm-tmpltname
$ export TF_VAR_ovirt_template_mem=16384
$ export TF_VAR_ovirt_template_cpu=4
$ export TF_VAR_ovirt_master_mem=16384
$ export TF_VAR_ovirt_master_cpu=4
----

View File

@@ -0,0 +1,44 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-custom.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
[id="installing-rhv-preparing-the-network-environment_{context}"]
= Preparing the network environment on {rh-virtualization}
Configure three static IP addresses for the {product-title} cluster and create DNS entries using two of these addresses.
.Procedure
. Reserve three static IP addresses:
.. On the network where you plan to install {product-title}, identify three static IP addresses that are outside the DHCP lease pool.
.. Connect to a host on this network and verify that each of the IP addresses is not in use. For example, use `arp` to check that none of the IP addresses have entries:
+
----
$ arp 10.35.1.19
10.35.1.19 (10.35.1.19) -- no entry
----
.. Reserve three static IP addresses following the standard practices for your network environment.
.. Record these IP addresses for future reference.
. Create DNS entries for the {product-title} REST API and apps domain names using this format:
+
----
api.<cluster-name>.<base-domain> <ip-address> <1>
*.apps.<cluster-name>.<base-domain> <ip-address> <2>
----
<1> For `<cluster-name>`, `<base-domain>`, and `<ip-address>`, specify the cluster name, base domain, and static IP address of your {product-title} API.
<2> Specify the cluster name, base domain, and static IP address of your {product-title} apps (ingress/load balancer).
+
For example:
+
----
api.my-cluster.virtlab.example.com 10.35.1.19
*.apps.my-cluster.virtlab.example.com 10.35.1.20
----
+
[NOTE]
====
The third static IP address does not require a DNS entry. The {product-title} cluster uses that address for its internal DNS service.
====
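The DNS entry format above can be expressed as a small shell sketch; the cluster name, base domain, and addresses are the example values from this procedure, not real reservations:

```shell
# Sketch: compose the two required DNS records from a cluster name,
# base domain, and the reserved static IPs (example values only).
cluster="my-cluster"
base_domain="virtlab.example.com"
api_ip="10.35.1.19"
apps_ip="10.35.1.20"
api_record="api.${cluster}.${base_domain} ${api_ip}"
apps_record="*.apps.${cluster}.${base_domain} ${apps_ip}"
printf '%s\n%s\n' "$api_record" "$apps_record"
```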

View File

@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-custom.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
[id="installing-rhv-requirements_{context}"]
= Requirements for the {rh-virtualization} environment
To install and run an {product-title} cluster, the {rh-virtualization} environment must meet the following requirements. Not meeting these requirements can cause failures.
[IMPORTANT]
====
These requirements are based on the resource allocations in the virtual machine template the installation program uses to create control plane and compute machines. These resources include virtual CPUs, memory, and storage. If you change the virtual machine template or increase the number of {product-title} machines, adjust these requirements.
====
.Requirements
* The {rh-virtualization} version is 4.3.9 or later.
* The {rh-virtualization} environment has one data center whose state is *Up*.
* The {rh-virtualization} data center contains an {rh-virtualization} cluster.
* The {rh-virtualization} cluster has the following resources exclusively for the {product-title} cluster:
** Minimum 28 vCPUs, which is 4 vCPUs for each of the seven virtual machines created during installation.
** 112 GiB RAM or more, including:
*** 16 GiB or more for the bootstrap machine, which provides the temporary control plane.
*** 16 GiB or more for each of the three control plane machines, which provide the control plane.
*** 16 GiB or more for each of the three compute machines, which run the application workloads.
* The {rh-virtualization} storage domain for the {product-title} cluster must have 230 GiB or more and must meet link:https://access.redhat.com/solutions/4770281[these etcd backend performance requirements].
* The {rh-virtualization} cluster must have internet access to download images from the Red Hat Ecosystem Catalog during installation and updates, and for the Telemetry service to simplify the subscription and entitlement process.
* The {rh-virtualization} cluster has a virtual network with access to the REST API on the {rh-virtualization} Manager.
[NOTE]
====
* Altogether, the hosts must have the required memory and CPU resources **in addition to and aside from** what they use to operate or provide to non-{product-title} operations.
* The release cycles of {product-title} and
{rh-virtualization} are different and versions tested might vary in the future
depending on the release dates of both products.
* The bootstrap machine provides a temporary control plane while the installation program creates the {product-title} cluster. After it creates the cluster, the installation program removes the bootstrap machine and releases its resources.
====
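The minimums above follow directly from seven machines (one bootstrap, three control plane, three compute), each sized at 4 vCPUs and 16 GiB. The arithmetic as a shell sketch:

```shell
# Sketch: derive the minimum vCPU and RAM figures quoted above from
# the per-machine template resources (4 vCPUs, 16 GiB each).
machines=7   # 1 bootstrap + 3 control plane + 3 compute
vcpus_per_machine=4
gib_per_machine=16
total_vcpus=$((machines * vcpus_per_machine))
total_gib=$((machines * gib_per_machine))
echo "minimum: ${total_vcpus} vCPUs, ${total_gib} GiB RAM"
```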

View File

@@ -0,0 +1,42 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-custom.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
[id="installing-rhv-setting-up-ca-certificate_{context}"]
= Setting up the CA certificate for {rh-virtualization}
Download the CA certificate from the {rh-virtualization-first} Manager and set it up on the installation machine.
You can download the certificate from a webpage on the {rh-virtualization} Manager or by using a `curl` command.
Later, you provide the certificate to the installation program.
.Procedure
. Download the CA certificate using either of these methods:
** Go to the Manager's webpage and click the *CA Certificate* link. The URL of the Manager's webpage uses this format: `https://<engine-fqdn>/ovirt-engine/`.
** Run the following command:
+
----
$ curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem <1>
----
<1> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} Manager, such as `rhv-env.virtlab.example.com`.
. Add the certificate to the computer using the appropriate method for your operating system:
** For Linux, copy the CA certificate to the directory for server certificates, and update the CA trust. For example:
+
----
$ sudo cp /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem
$ sudo update-ca-trust
----
+
** For macOS, double-click the certificate file and use the *Keychain Access* utility to add the file to the *System* keychain.
[NOTE]
====
If you use your own certificate authority, make sure the system trusts it.
====
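Optionally, you can confirm that the download produced a valid certificate rather than an HTML error page by inspecting it with `openssl`. This check is a suggestion, not part of the official procedure; it assumes OpenSSL is installed and uses the `/tmp/ca.pem` path from the earlier `curl` example:

```shell
# Print the certificate subject and validity dates; an error here usually
# means the download returned something other than a PEM certificate.
openssl x509 -in /tmp/ca.pem -noout -subject -dates
```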
.Additional Resources
To learn more, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/rest_api_guide/documents-002_authentication_and_security[Authentication and Security] in the {rh-virtualization} documentation.


@@ -0,0 +1,66 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-creating-custom-vm.adoc
[id="installing-rhv-using-ansible-playbook_{context}"]
= Using an Ansible playbook to upload an {op-system} image to a data storage domain
Use an Ansible playbook to upload an {op-system-first} image to a data storage domain.
.Prerequisites
* link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/chap-automating_rhv_configuration_using_ansible[Red Hat Ansible Engine, version 2.9 or later]
* link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/python_sdk_guide/index[Python oVirt Engine SDK, version 4.3 or later]
.Procedure
. Create an Ansible playbook file `upload_rhcos_disk.yaml` on the {rh-virtualization-first} Manager. For example:
+
----
$ vi upload_rhcos_disk.yaml
----
. Go to link:https://access.redhat.com/sites/default/files/attachments/upload_rhcos_disk.yml[this file on the Red Hat Customer Portal].
. Paste its contents into your playbook.
. In your playbook, update the parameter values that have callouts.
+
[source,yaml,subs="attributes+"]
----
- hosts: localhost
gather_facts: no
tasks:
- name: Authenticate with engine
ovirt_auth:
url: https://<virtlab.example.com>/ovirt-engine/api <1>
username: <username@profile> <2>
password: <password> <3>
insecure: yes
- name: Upload {op-system} image
ovirt_disk:
auth: "{{ ovirt_auth }}"
name: <rhcos_44-81_img-diskname> <4>
size: 32GiB <5>
interface: virtio_scsi
storage_domain: <SSD_RAID_10> <6>
bootable: yes
timeout: 3600 <7>
upload_image_path: </custom/rhcos-44.81.202003110027-0-openstack.x86_64.qcow2> <8>
wait: yes
----
<1> For `<virtlab.example.com>`, specify the fully qualified domain name (FQDN) of your {rh-virtualization} Manager.
<2> For `<username@profile>`, specify the admin user name and profile.
<3> For `<password>`, specify the admin password.
<4> For `<rhcos_44-81_img-diskname>`, specify a disk name.
<5> Specify `32GiB`, the default value.
<6> For `<SSD_RAID_10>`, specify a data storage domain name.
<7> Specify a timeout period, in seconds, that gives your {rh-virtualization} environment enough time to upload the 2.4 GB image to storage. The default value, `3600` seconds, is one hour.
<8> For `</custom/rhcos-44.81.202003110027-0-openstack.x86_64.qcow2>`, specify the image path and file name.
+
. On the Manager, run the Ansible playbook. For example:
+
----
$ ansible-playbook upload_rhcos_disk.yaml -vvv
----
. In the Administration Portal, go to the *Storage* -> *Disks* page and find the disk name you specified in the playbook. For example, `rhcos_44-81_img-diskname`.
. When the status of the disk is *OK*, the playbook has finished uploading the {op-system} image to the storage domain.
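When choosing a different `timeout` value, a rough estimate based on the image size and your sustained upload bandwidth can help. The following arithmetic is only an illustration with assumed numbers, not part of the playbook:

```shell
# Rough upload-time estimate: image size divided by sustained bandwidth.
image_mb=2400            # approximate size of the RHCOS image, in MB
bandwidth_mb_per_s=5     # assumed sustained upload rate, in MB/s
echo "Estimated upload time: $(( image_mb / bandwidth_mb_per_s )) seconds"
# Prints: Estimated upload time: 480 seconds
```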


@@ -0,0 +1,60 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-custom.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
[id="installing-rhv-verifying-the-rhv-environment_{context}"]
= Verifying the requirements for the {rh-virtualization} environment
Verify that the {rh-virtualization} environment meets the requirements to install and run an {product-title} cluster. Not meeting these requirements can cause failures.
[IMPORTANT]
====
These requirements are based on the resource allocations in the virtual machine template the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change the virtual machine template or increase the number of {product-title} machines, adjust these requirements.
====
.Procedure
. Check the {rh-virtualization} version.
.. In the {rh-virtualization} Administration Portal, click the *?* help icon in the upper-right corner and select *About*.
.. In the window that opens, confirm that the **{rh-virtualization} Software Version** is **4.3.9** or higher.
. Inspect the data center, cluster, and storage.
.. In the {rh-virtualization} Administration Portal, click *Compute* -> *Data Centers*.
.. Confirm that the data center where you plan to install {product-title} displays a green up arrow, which means it is "Up".
.. Click the name of that data center.
.. In the data center details, on the *Storage* tab, confirm the storage domain where you plan to install {product-title} is *Active*.
.. Record the *Domain Name* for use later on.
.. Confirm that the *Free Space* value is at least 230 GiB.
.. Confirm that the storage domain meets link:https://access.redhat.com/solutions/4770281[these etcd backend performance requirements], which can be link:https://access.redhat.com/solutions/3780861[measured using the fio performance benchmarking tool].
.. In the data center details, click the *Clusters* tab.
.. Find the {rh-virtualization} cluster where you plan to install {product-title}. Record the cluster name for use later on.
. Inspect the {rh-virtualization} host resources.
.. In the {rh-virtualization} Administration Portal, click *Compute > Clusters*.
.. Click the cluster where you plan to install {product-title}.
.. In the cluster details, click the *Hosts* tab.
.. Inspect the hosts and confirm they have a combined total of at least 28 *Logical CPU Cores* available _exclusively_ for the {product-title} cluster.
.. Record the number of available *Logical CPU Cores* for use later on.
.. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores.
.. Confirm that, all together, the hosts have 112 GiB of *Max free Memory for scheduling new VMs* distributed to meet the requirements for each of the following {product-title} machines:
** 16 GiB required for the bootstrap machine
** 16 GiB required for each of the three control plane machines
** 16 GiB for each of the three compute machines
.. Record the amount of *Max free Memory for scheduling new VMs* for use later on.
+
. Verify that the virtual network for installing {product-title} has access to the {rh-virtualization} Manager's REST API. From a virtual machine on this network, run a `curl` command against the {rh-virtualization} Manager's REST API in the following format:
+
----
curl -k -u <username>@<profile>:<password> \ <1>
https://<engine-fqdn>/ovirt-engine/api <2>
----
<1> For `<username>`, specify the user name of an {rh-virtualization} administrator. For `<profile>`, specify the login profile, which you can get by going to the {rh-virtualization} Administration Portal login page and reviewing the *Profile* dropdown list. For `<password>`, specify the admin password.
<2> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} environment.
+
For example:
+
----
$ curl -k -u rhvadmin@internal:pw123 \
https://rhv-env.virtlab.example.com/ovirt-engine/api
----
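The capacity figures in this procedure follow from the default installation footprint of seven virtual machines (one bootstrap, three control plane, and three compute), each with four vCPUs and 16 GiB of memory. A quick sketch of the arithmetic:

```shell
# Default installation footprint: 7 machines x 4 vCPUs and 7 machines x 16 GiB.
machines=7
vcpus_per_machine=4
memory_gib_per_machine=16
echo "Required logical CPU cores: $(( machines * vcpus_per_machine ))"          # 28
echo "Required free memory: $(( machines * memory_gib_per_machine )) GiB"       # 112 GiB
```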


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// *installing/installing-gather-logs.adoc
// *installing/installing-troubleshooting.adoc
[id="installation-manually-gathering-logs-with-SSH_{context}"]
= Manually gathering logs with SSH access to your host(s)


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// *installing/installing-gather-logs.adoc
// *installing/installing-troubleshooting.adoc
[id="installation-manually-gathering-logs-without-SSH_{context}"]
= Manually gathering logs without SSH access to your host(s)


@@ -5,7 +5,7 @@
[id="odc-editing-application-configuration-using-developer-perspective_{context}"]
= Editing the application configuration using the Developer perspective
You can use the the *Topology* view in the *Developer* perspective to edit the configuration of your application.
You can use the *Topology* view in the *Developer* perspective to edit the configuration of your application.
[NOTE]
====


@@ -5,7 +5,7 @@
[id="odc-editing-source-code-using-developer-perspective_{context}"]
= Editing the source code of an application using the Developer perspective
You can use the the *Topology* view in the *Developer* perspective to edit the source code of your application.
You can use the *Topology* view in the *Developer* perspective to edit the source code of your application.
.Procedure


@@ -20,7 +20,7 @@ Platform. A bundle is a container image that stores Kubernetes manifests and
metadata associated with an Operator and is meant to present a specific version
of an Operator.
The following layout shows the the directory structure of an Operator bundle
The following layout shows the directory structure of an Operator bundle
inside a bundle image:
----


@@ -58,6 +58,12 @@ endif::[]
ifeval::["{context}" == "installing-ibm-z"]
:ibm-z:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:rhv:
endif::[]
[id="ssh-agent-using_{context}"]
= Generating an SSH private key and adding it to the agent
@@ -79,7 +85,7 @@ You can use this key to SSH into the master nodes as the user `core`. When you
deploy the cluster, the key is added to the `core` user's
`~/.ssh/authorized_keys` list.
ifndef::osp,ibm-z[]
ifndef::osp,ibm-z,rhv[]
[NOTE]
====
You must use a local key, not one that you configured with platform-specific
@@ -104,6 +110,11 @@ $ ssh-keygen -t rsa -b 4096 -N '' \
Running this command generates an SSH key that does not require a password in
the location that you specified.
[IMPORTANT]
====
If you create a new SSH key pair, avoid overwriting existing SSH keys.
====
. Start the `ssh-agent` process as a background task:
+
----
@@ -162,3 +173,9 @@ endif::[]
ifeval::["{context}" == "installing-ibm-z"]
:!ibm-z:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!rhv:
endif::[]


@@ -74,7 +74,7 @@ xref:../installing/installing_azure/installing-azure-private.adoc#installing-aws
xref:../installing/installing_gcp/installing-gcp-private.adoc#installing-gcp-private[GCP].
Internet access is still required to access the cloud APIs and installation media.
- **xref:../installing/installing-gather-logs.adoc#installing-gather-logs[Check installation logs]**: Access installation logs to evaluate issues that occur during {product-title} {product-version} installation.
- **xref:../installing/installing-troubleshooting.adoc#installing-troubleshooting[Check installation logs]**: Access installation logs to evaluate issues that occur during {product-title} {product-version} installation.
- **xref:../web_console/web-console.adoc#web-console[Access {product-title}]**: Use credentials output at the end of the installation process to log in to the {product-title} cluster from the command line or web console.