Mirror of https://github.com/openshift/openshift-docs.git (synced 2026-02-05 12:46:18 +01:00)

removal of RHV content from OpenShift Docs

commit eba2438ec9 (parent 5ecbb5e260), committed by Kathryn Alexander
@@ -444,38 +444,6 @@ Topics:
    File: uninstalling-cluster-openstack
  - Name: Uninstalling a cluster on OpenStack from your own infrastructure
    File: uninstalling-openstack-user
- Name: Installing on RHV
  Dir: installing_rhv
  Distros: openshift-enterprise
  Topics:
  - Name: Preparing to install on RHV
    File: preparing-to-install-on-rhv
  - Name: Installing a cluster quickly on RHV
    File: installing-rhv-default
  - Name: Installing a cluster on RHV with customizations
    File: installing-rhv-customizations
  - Name: Installing a cluster on RHV with user-provisioned infrastructure
    File: installing-rhv-user-infra
  - Name: Installing a cluster on RHV in a restricted network
    File: installing-rhv-restricted-network
  - Name: Uninstalling a cluster on RHV
    File: uninstalling-cluster-rhv
- Name: Installing on oVirt
  Dir: installing_rhv
  Distros: openshift-origin
  Topics:
  - Name: Preparing to install on RHV
    File: preparing-to-install-on-rhv
  - Name: Installing a cluster quickly on oVirt
    File: installing-rhv-default
  - Name: Installing a cluster on oVirt with customizations
    File: installing-rhv-customizations
  - Name: Installing a cluster on oVirt with user-provisioned infrastructure
    File: installing-rhv-user-infra
  - Name: Installing a cluster on RHV in a restricted network
    File: installing-rhv-restricted-network
  - Name: Uninstalling a cluster on oVirt
    File: uninstalling-cluster-rhv
- Name: Installing on vSphere
  Dir: installing_vsphere
  Distros: openshift-origin,openshift-enterprise
@@ -1591,8 +1559,6 @@ Topics:
    File: persistent-storage-csi-cinder
  - Name: OpenStack Manila CSI Driver Operator
    File: persistent-storage-csi-manila
  - Name: Red Hat Virtualization CSI Driver Operator
    File: persistent-storage-csi-ovirt
  - Name: VMware vSphere CSI Driver Operator
    File: persistent-storage-csi-vsphere
  - Name: Generic ephemeral volumes

@@ -2168,12 +2134,6 @@ Topics:
    File: creating-machineset-nutanix
  - Name: Creating a compute machine set on OpenStack
    File: creating-machineset-osp
  - Name: Creating a compute machine set on RHV
    File: creating-machineset-rhv
    Distros: openshift-enterprise
  - Name: Creating a compute machine set on oVirt
    File: creating-machineset-rhv
    Distros: openshift-origin
  - Name: Creating a compute machine set on vSphere
    File: creating-machineset-vsphere
  - Name: Creating a compute machine set on bare metal

@@ -2203,8 +2163,6 @@ Topics:
    File: adding-aws-compute-user-infra
  - Name: Adding compute machines to vSphere manually
    File: adding-vsphere-compute-user-infra
  - Name: Adding compute machines to a cluster on RHV
    File: adding-rhv-compute-user-infra
  - Name: Adding compute machines to bare metal
    File: adding-bare-metal-compute-user-infra
  - Name: Managing machines with the Cluster API
@@ -1,30 +0,0 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging-exported-fields.adoc

[id="cluster-logging-exported-fields-ovirt_{context}"]
= oVirt exported fields

These are the oVirt fields exported by OpenShift Logging available for searching
from Elasticsearch and Kibana.

Namespace for oVirt metadata.

[cols="3,7",options="header"]
|===
|Parameter
|Description

| `ovirt.entity`
|The type of the data source: hosts, VMs, and engine.

| `ovirt.host_id`
|The oVirt host UUID.
|===

[discrete]
[id="exported-fields-ovirt.engine_{context}"]
=== `ovirt.engine` Fields

Namespace for metadata related to the {rh-virtualization-engine-name}. The FQDN of the {rh-virtualization-engine-name} is
`ovirt.engine.fqdn`
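
For orientation only, a hedged sketch of how a log record carrying these fields might look once exported (the field names come from the tables above; every value is invented for illustration):

[source,json]
----
{
  "ovirt": {
    "entity": "hosts",
    "host_id": "b4e805cb-1e54-4f44-9dd9-7f6a23cd24e5",
    "engine": {
      "fqdn": "engine.example.com"
    }
  }
}
----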

@@ -66,11 +66,6 @@ By setting different values for the `credentialsMode` parameter in the `install-
|X
|

|{rh-virtualization-first}
|
|X
|

|VMware vSphere
|
|X

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, {rh-virtualization-first}, and VMware vSphere.
Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, and VMware vSphere.

In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode.
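
For reference, selecting this mode is a single field in the installation configuration. A minimal sketch (only `credentialsMode: Passthrough` is the point here; the other values are placeholders):

[source,yaml]
----
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
credentialsMode: Passthrough
----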

@@ -41,17 +41,6 @@ To locate the `CredentialsRequest` CRs that are required, see xref:../../install
=== {rh-openstack-first} permissions
To install an {product-title} cluster on {rh-openstack}, the CCO requires a credential with the permissions of a `member` user role.

[id="passthrough-mode-permissions-rhv"]
=== {rh-virtualization-first} permissions
To install an {product-title} cluster on {rh-virtualization}, the CCO requires a credential with the following privileges:

* `DiskOperator`
* `DiskCreator`
* `UserTemplateBasedVm`
* `TemplateOwner`
* `TemplateCreator`
* `ClusterAdmin` on the specific cluster that is targeted for {product-title} deployment

[id="passthrough-mode-permissions-vsware"]
=== VMware vSphere permissions
To install an {product-title} cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges:
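
For the removed {rh-virtualization} section above, a credential with those privileges was supplied to the installation program in a credentials file. A rough sketch only, with assumed location and key names (this diff does not confirm them):

[source,yaml]
----
# ~/.ovirt/ovirt-config.yaml (assumed path and key names)
ovirt_url: https://engine.example.com/ovirt-engine/api
ovirt_username: admin@internal
ovirt_password: <password>
ovirt_insecure: false
----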

@@ -27,7 +27,6 @@ endif::openshift-origin[]
* Microsoft Azure Stack Hub
* Google Cloud Platform (GCP)
* {rh-openstack-first}
* {rh-virtualization-first}
* IBM Cloud VPC
* {ibmzProductName} or {linuxoneProductName}
* {ibmzProductName} or {linuxoneProductName} for {op-system-base-full} KVM

@@ -60,12 +59,15 @@ If you need to perform basic configuration for your installer-provisioned infras

For installer-provisioned infrastructure installations, you can use an existing xref:../installing/installing_aws/installing-aws-vpc.adoc#installing-aws-vpc[VPC in AWS], xref:../installing/installing_azure/installing-azure-vnet.adoc#installing-azure-vnet[vNet in Azure], or xref:../installing/installing_gcp/installing-gcp-vpc.adoc#installing-gcp-vpc[VPC in GCP]. You can also reuse part of your networking infrastructure so that your cluster in xref:../installing/installing_aws/installing-aws-network-customizations.adoc#installing-aws-network-customizations[AWS], xref:../installing/installing_azure/installing-azure-network-customizations.adoc#installing-azure-network-customizations[Azure], xref:../installing/installing_gcp/installing-gcp-network-customizations.adoc#installing-gcp-network-customizations[GCP] can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install {product-title} clusters on them.

You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#installing-openstack-installer-custom[{rh-openstack}], xref:../installing/installing_openstack/installing-openstack-installer-kuryr.adoc#installing-openstack-installer-kuryr[{rh-openstack} with Kuryr], xref:../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[{rh-virtualization}], xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc#installing-vsphere-installer-provisioned[vSphere], and xref:../installing/installing_bare_metal_ipi/ipi-install-overview#ipi-install-overview[bare metal]. Additionally, for xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc#installing-vsphere-installer-provisioned-network-customizations[vSphere], you can also customize additional network parameters during installation.

You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#installing-openstack-installer-custom[{rh-openstack}], xref:../installing/installing_openstack/installing-openstack-installer-kuryr.adoc#installing-openstack-installer-kuryr[{rh-openstack} with Kuryr], xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc#installing-vsphere-installer-provisioned[vSphere], and xref:../installing/installing_bare_metal_ipi/ipi-install-overview#ipi-install-overview[bare metal]. Additionally, for xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc#installing-vsphere-installer-provisioned-network-customizations[vSphere], you can also customize additional network parameters during installation.

If you want to reuse extensive cloud infrastructure, you can complete a _user-provisioned infrastructure_ installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on xref:../installing/installing_aws/installing-aws-user-infra.adoc#installing-aws-user-infra[AWS], xref:../installing/installing_azure/installing-azure-user-infra.adoc#installing-azure-user-infra[Azure], xref:../installing/installing_azure_stack_hub/installing-azure-stack-hub-user-infra.adoc#installing-azure-stack-hub-user-infra[Azure Stack Hub], you can use the provided templates to help you stand up all of the required components. You can also reuse a shared xref:../installing/installing_gcp/installing-gcp-user-infra-vpc.adoc#installing-gcp-user-infra-vpc[VPC on GCP]. Otherwise, you can use the xref:../installing/installing_platform_agnostic/installing-platform-agnostic.adoc#installing-platform-agnostic[provider-agnostic] installation method to deploy a cluster into other clouds.

You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use xref:../installing/installing_openstack/installing-openstack-user.adoc#installing-openstack-user[{rh-openstack}], xref:../installing/installing_rhv/installing-rhv-user-infra.adoc#installing-rhv-user-infra[{rh-virtualization}], xref:../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[{ibmzProductName} or {linuxoneProductName}], xref:../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[{ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM], xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power[IBM Power], or xref:../installing/installing_vsphere/installing-vsphere.adoc#installing-vsphere[vSphere], use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[bare metal installation] procedure. For some of these platforms, such as xref:../installing/installing_openstack/installing-openstack-user-kuryr.adoc#installing-openstack-user-kuryr[{rh-openstack}], xref:../installing/installing_vsphere/installing-vsphere-network-customizations.adoc#installing-vsphere-network-customizations[vSphere], and xref:../installing/installing_bare_metal/installing-bare-metal-network-customizations.adoc#installing-bare-metal-network-customizations[bare metal], you can also customize additional network parameters during installation.
You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use xref:../installing/installing_openstack/installing-openstack-user.adoc#installing-openstack-user[{rh-openstack}], xref:../installing/installing_ibm_z/installing-ibm-z.adoc#installing-ibm-z[{ibmzProductName} or {linuxoneProductName}], xref:../installing/installing_ibm_z/installing-ibm-z-kvm.adoc#installing-ibm-z-kvm[{ibmzProductName} and {linuxoneProductName} with {op-system-base} KVM], xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power[IBM Power], or xref:../installing/installing_vsphere/installing-vsphere.adoc#installing-vsphere[vSphere], use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[bare metal installation] procedure. For some of these platforms, such as xref:../installing/installing_openstack/installing-openstack-user-kuryr.adoc#installing-openstack-user-kuryr[{rh-openstack}], xref:../installing/installing_vsphere/installing-vsphere-network-customizations.adoc#installing-vsphere-network-customizations[vSphere], and xref:../installing/installing_bare_metal/installing-bare-metal-network-customizations.adoc#installing-bare-metal-network-customizations[bare metal], you can also customize additional network parameters during installation.

[id="installing-preparing-security"]
=== Do you need extra security for your cluster?

@@ -74,7 +76,8 @@ If you use a user-provisioned installation method, you can configure a proxy for

If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on xref:../installing/installing_aws/installing-aws-private.adoc#installing-aws-private[AWS], xref:../installing/installing_azure/installing-azure-private.adoc#installing-azure-private[Azure], or xref:../installing/installing_gcp/installing-gcp-private.adoc#installing-gcp-private[GCP].

If you need to install your cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can xref:../installing/disconnected_install/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[mirror the installation packages] and install the cluster from them. Follow detailed instructions for user provisioned infrastructure installations into restricted networks for xref:../installing/installing_aws/installing-restricted-networks-aws.adoc#installing-restricted-networks-aws[AWS], xref:../installing/installing_gcp/installing-restricted-networks-gcp.adoc#installing-restricted-networks-gcp[GCP], xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc#installing-restricted-networks-ibm-z[{ibmzProductName} or {linuxoneProductName}], xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc#installing-restricted-networks-ibm-z-kvm[{ibmzProductName} or {linuxoneProductName} with {op-system-base} KVM], xref:../installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc#installing-restricted-networks-ibm-power[IBM Power], xref:../installing/installing_vsphere/installing-restricted-networks-vsphere.adoc#installing-restricted-networks-vsphere[vSphere], or xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[bare metal]. You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for xref:../installing/installing_aws/installing-restricted-networks-aws-installer-provisioned.adoc#installing-restricted-networks-aws-installer-provisioned[AWS], xref:../installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.adoc#installing-restricted-networks-gcp-installer-provisioned[GCP], xref:../installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc#installing-restricted-networks-nutanix-installer-provisioned[Nutanix], xref:../installing/installing_openstack/installing-openstack-installer-restricted.adoc#installing-openstack-installer-restricted[{rh-openstack}], xref:../installing/installing_rhv/installing-rhv-restricted-network.adoc#installing-rhv-restricted-network[{rh-virtualization}], and xref:../installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc#installing-restricted-networks-installer-provisioned-vsphere[vSphere].
If you need to install your cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can xref:../installing/disconnected_install/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[mirror the installation packages] and install the cluster from them. Follow detailed instructions for user provisioned infrastructure installations into restricted networks for xref:../installing/installing_aws/installing-restricted-networks-aws.adoc#installing-restricted-networks-aws[AWS], xref:../installing/installing_gcp/installing-restricted-networks-gcp.adoc#installing-restricted-networks-gcp[GCP], xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc#installing-restricted-networks-ibm-z[{ibmzProductName} or {linuxoneProductName}], xref:../installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc#installing-restricted-networks-ibm-z-kvm[{ibmzProductName} or {linuxoneProductName} with {op-system-base} KVM], xref:../installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc#installing-restricted-networks-ibm-power[IBM Power], xref:../installing/installing_vsphere/installing-restricted-networks-vsphere.adoc#installing-restricted-networks-vsphere[vSphere], or xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[bare metal]. You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for xref:../installing/installing_aws/installing-restricted-networks-aws-installer-provisioned.adoc#installing-restricted-networks-aws-installer-provisioned[AWS], xref:../installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.adoc#installing-restricted-networks-gcp-installer-provisioned[GCP], xref:../installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc#installing-restricted-networks-nutanix-installer-provisioned[Nutanix], xref:../installing/installing_openstack/installing-openstack-installer-restricted.adoc#installing-openstack-installer-restricted[{rh-openstack}], and xref:../installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc#installing-restricted-networks-installer-provisioned-vsphere[vSphere].
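
As a hedged sketch of the mirroring step that both paragraphs reference, `oc adm release mirror` copies the release payload into a local registry (host names, repository paths, and the version are placeholders):

[source,terminal]
----
$ oc adm release mirror \
    -a pull-secret.json \
    --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
    --to=mirror.example.com:5000/ocp/release \
    --to-release-image=mirror.example.com:5000/ocp/release:<version>
----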

If you need to deploy your cluster to an xref:../installing/installing_aws/installing-aws-government-region.adoc#installing-aws-government-region[AWS GovCloud region], xref:../installing/installing_aws/installing-aws-china.adoc#installing-aws-china-region[AWS China region], or xref:../installing/installing_azure/installing-azure-government-region.adoc#installing-azure-government-region[Azure government region], you can configure those custom regions during an installer-provisioned infrastructure installation.

@@ -129,7 +132,7 @@ Not all installation options are supported for all platforms, as shown in the fo
//This table is for all flavors of OpenShift, except OKD. A separate table is required because OKD does not support multiple AWS architecture types. Trying to maintain one table using conditions, while convenient, is very fragile and prone to publishing errors.
ifndef::openshift-origin[]
|===
||Alibaba |AWS (64-bit x86) |AWS (64-bit ARM) |Azure (64-bit x86) |Azure (64-bit ARM)|Azure Stack Hub |GCP |Nutanix |{rh-openstack} |RHV |Bare metal (64-bit x86) |Bare metal (64-bit ARM) |vSphere |IBM Cloud VPC |{ibmzProductName} |{ibmpowerProductName} |{ibmpowerProductName} Virtual Server
||Alibaba |AWS (x86_64) |AWS (arm64) |Azure (x86_64) |Azure (arm64)|Azure Stack Hub |GCP |Nutanix |{rh-openstack} |Bare metal (x86_64) |Bare metal (arm64) |vSphere |VMC |IBM Cloud VPC |{ibmzProductName} |{ibmpowerProductName} |{ibmpowerProductName} Virtual Server

|Default
|xref:../installing/installing_alibaba/installing-alibaba-default.adoc#installing-alibaba-default[✓]

@@ -141,7 +144,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_gcp/installing-gcp-default.adoc#installing-gcp-default[✓]
|xref:../installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc#installing-nutanix-installer-provisioned[✓]
|
|xref:../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[✓]
|xref:../installing/installing_bare_metal_ipi/ipi-install-overview.adoc#ipi-install-overview[✓]
|xref:../installing/installing_bare_metal_ipi/ipi-install-overview.adoc#ipi-install-overview[✓]
|xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc#installing-vsphere-installer-provisioned[✓]

@@ -160,7 +162,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_gcp/installing-gcp-customizations.adoc#installing-gcp-customizations[✓]
|xref:../installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc#installing-nutanix-installer-provisioned[✓]
|xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#installing-openstack-installer-custom[✓]
|xref:../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[✓]
|
|
|xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc#installing-vsphere-installer-provisioned-customizations[✓]

@@ -182,7 +183,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_openstack/installing-openstack-installer-kuryr.adoc#installing-openstack-installer-kuryr[✓]
|
|
|
|xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc#installing-vsphere-installer-provisioned-network-customizations[✓]
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-network-customizations.adoc#installing-ibm-cloud-network-customizations[✓]
|

@@ -199,7 +199,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.adoc#installing-restricted-networks-gcp-installer-provisioned[✓]
|xref:../installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc#installing-restricted-networks-nutanix-installer-provisioned[✓]
|xref:../installing/installing_openstack/installing-openstack-installer-restricted.adoc#installing-openstack-installer-restricted[✓]
|xref:../installing/installing_rhv/installing-rhv-restricted-network.adoc#installing-rhv-restricted-network[✓]
|xref:../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc#ipi-install-installation-workflow[✓]
|xref:../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc#ipi-install-installation-workflow[✓]
|xref:../installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc#installing-restricted-networks-installer-provisioned-vsphere[✓]

@@ -222,7 +221,6 @@ ifndef::openshift-origin[]
|
|
|
|
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-private.adoc#installing-ibm-cloud-private[✓]
|
|

@@ -242,7 +240,6 @@ ifndef::openshift-origin[]
|
|
|
|
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-vpc.adoc#installing-ibm-cloud-vpc[✓]
|
|

@@ -266,7 +263,6 @@ ifndef::openshift-origin[]
|
|
|
|

|Secret regions
|

@@ -286,7 +282,6 @@ ifndef::openshift-origin[]
|
|
|
|

|China regions
|

@@ -306,14 +301,14 @@ ifndef::openshift-origin[]
|
|
|
|
|===
endif::openshift-origin[]

//This table is for OKD only. A separate table is required because OKD does not support multiple AWS architecture types. Trying to maintain one table using conditions, while convenient, is very fragile and prone to publishing errors.
ifdef::openshift-origin[]
|===
||Alibaba |AWS |Azure |Azure Stack Hub |GCP |Nutanix |{rh-openstack} |oVirt |Bare metal |vSphere |IBM Cloud VPC |{ibmzProductName} |{ibmpowerProductName}
||Alibaba |AWS |Azure |Azure Stack Hub |GCP |Nutanix |{rh-openstack} |Bare metal |vSphere |VMC |IBM Cloud VPC |{ibmzProductName} |{ibmpowerProductName}

|Default
|xref:../installing/installing_alibaba/installing-alibaba-default.adoc#installing-alibaba-default[✓]

@@ -323,7 +318,6 @@ ifdef::openshift-origin[]
|xref:../installing/installing_gcp/installing-gcp-default.adoc#installing-gcp-default[✓]
|xref:../installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc#installing-nutanix-installer-provisioned[✓]
|
|xref:../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[✓]
|xref:../installing/installing_bare_metal_ipi/ipi-install-overview.adoc#ipi-install-overview[✓]
|xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc#installing-vsphere-installer-provisioned[✓]
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-customizations.adoc#installing-ibm-cloud-customizations[✓]

@@ -338,7 +332,6 @@ ifdef::openshift-origin[]
|xref:../installing/installing_gcp/installing-gcp-customizations.adoc#installing-gcp-customizations[✓]
|xref:../installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc#installing-nutanix-installer-provisioned[✓]
|xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#installing-openstack-installer-custom[✓]
|xref:../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[✓]
|
|xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc#installing-vsphere-installer-provisioned-customizations[✓]
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-customizations.adoc#installing-ibm-cloud-customizations[✓]

@@ -354,7 +347,6 @@ ifdef::openshift-origin[]
|
|xref:../installing/installing_openstack/installing-openstack-installer-kuryr.adoc#installing-openstack-installer-kuryr[✓]
|
|
|xref:../installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc#installing-vsphere-installer-provisioned-network-customizations[✓]
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-network-customizations.adoc#installing-ibm-cloud-network-customizations[✓]
|

@@ -368,7 +360,6 @@ ifdef::openshift-origin[]
|xref:../installing/installing_gcp/installing-restricted-networks-gcp-installer-provisioned.adoc#installing-restricted-networks-gcp-installer-provisioned[✓]
|xref:../installing/installing_nutanix/installing-restricted-networks-nutanix-installer-provisioned.adoc#installing-restricted-networks-nutanix-installer-provisioned[✓]
|xref:../installing/installing_openstack/installing-openstack-installer-restricted.adoc#installing-openstack-installer-restricted[✓]
|xref:../installing/installing_rhv/installing-rhv-restricted-network.adoc#installing-rhv-restricted-network[✓]
|
|xref:../installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc#installing-restricted-networks-installer-provisioned-vsphere[✓]
|

@@ -386,7 +377,6 @@ ifdef::openshift-origin[]
|
|
|
|
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-private.adoc#installing-ibm-cloud-private[✓]
|
|

@@ -402,7 +392,6 @@ ifdef::openshift-origin[]
|
|
|
|
|xref:../installing/installing_ibm_cloud_public/installing-ibm-cloud-vpc.adoc#installing-ibm-cloud-vpc[✓]
|
|

@@ -421,7 +410,6 @@ ifdef::openshift-origin[]
|
|
|
|

|Secret regions
|

@@ -437,7 +425,6 @@ ifdef::openshift-origin[]
|
|
|
|

|China regions
|

@@ -453,7 +440,6 @@ ifdef::openshift-origin[]
|
|
|
|
|===
endif::openshift-origin[]

@@ -461,7 +447,8 @@ endif::openshift-origin[]
//This table is for all flavors of OpenShift, except OKD. A separate table is required because OKD does not support multiple AWS architecture types. Trying to maintain one table using conditions, while convenient, is very fragile and prone to publishing errors.
ifndef::openshift-origin[]
|===
||Alibaba |AWS (64-bit x86) |AWS (64-bit ARM) |Azure (64-bit x86) |Azure (64-bit ARM) |Azure Stack Hub |GCP |Nutanix |{rh-openstack} |RHV |Bare metal (64-bit x86) |Bare metal (64-bit ARM) |vSphere |IBM Cloud VPC |{ibmzProductName} |{ibmzProductName} with {op-system-base} KVM |{ibmpowerProductName} |Platform agnostic
||Alibaba |AWS (x86_64) |AWS (arm64) |Azure (x86_64) |Azure (arm64) |Azure Stack Hub |GCP |Nutanix |{rh-openstack} |Bare metal (x86_64) |Bare metal (arm64) |vSphere |VMC |IBM Cloud VPC |{ibmzProductName} |{ibmzProductName} with {op-system-base} KVM |{ibmpowerProductName} |Platform agnostic

|Custom
|

@@ -473,7 +460,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_gcp/installing-gcp-user-infra.adoc#installing-gcp-user-infra[✓]
|
|xref:../installing/installing_openstack/installing-openstack-user.adoc#installing-openstack-user[✓]
|xref:../installing/installing_rhv/installing-rhv-user-infra.adoc#installing-rhv-user-infra[✓]
|xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[✓]
|xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[✓]
|xref:../installing/installing_vsphere/installing-vsphere.adoc#installing-vsphere[✓]

@@ -483,8 +469,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power[✓]
|xref:../installing/installing_platform_agnostic/installing-platform-agnostic.adoc#installing-platform-agnostic[✓]

// Add RHV UPI link when docs are available: https://github.com/openshift/openshift-docs/pull/26484

|Network customization
|

@@ -496,7 +480,6 @@ ifndef::openshift-origin[]
|
|
|xref:../installing/installing_openstack/installing-openstack-user-kuryr.adoc#installing-openstack-user-kuryr[✓]
|
|xref:../installing/installing_bare_metal/installing-bare-metal-network-customizations.adoc#installing-bare-metal-network-customizations[✓]
|xref:../installing/installing_bare_metal/installing-bare-metal-network-customizations.adoc#installing-bare-metal-network-customizations[✓]
|xref:../installing/installing_vsphere/installing-vsphere-network-customizations.adoc#installing-vsphere-network-customizations[✓]

@@ -516,7 +499,6 @@ ifndef::openshift-origin[]
|xref:../installing/installing_gcp/installing-restricted-networks-gcp.adoc#installing-restricted-networks-gcp[✓]
|
|
|xref:../installing/installing_rhv/installing-rhv-restricted-network.adoc#installing-rhv-restricted-network[✓]
|xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[✓]
|xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[✓]
|xref:../installing/installing_vsphere/installing-restricted-networks-vsphere.adoc#installing-restricted-networks-vsphere[✓]

@@ -545,14 +527,14 @@ ifndef::openshift-origin[]
|
|
|
|
|===
endif::openshift-origin[]

//This table is for OKD only. A separate table is required because OKD does not support multiple AWS architecture types. Trying to maintain one table using conditions, while convenient, is very fragile and prone to publishing errors.
ifdef::openshift-origin[]
|===
||Alibaba |AWS |Azure |Azure Stack Hub |GCP |Nutanix |{rh-openstack} |oVirt |Bare metal |vSphere |IBM Cloud VPC |{ibmzProductName} |{ibmzProductName} with {op-system-base} KVM |{ibmpowerProductName} |Platform agnostic
||Alibaba |AWS |Azure |Azure Stack Hub |GCP |Nutanix |{rh-openstack}|Bare metal |vSphere |VMC |IBM Cloud VPC |{ibmzProductName} |{ibmzProductName} with {op-system-base} KVM |{ibmpowerProductName} |Platform agnostic

|Custom
|

@@ -562,7 +544,6 @@ ifdef::openshift-origin[]
|xref:../installing/installing_gcp/installing-gcp-user-infra.adoc#installing-gcp-user-infra[✓]
|
|xref:../installing/installing_openstack/installing-openstack-user.adoc#installing-openstack-user[✓]
|xref:../installing/installing_rhv/installing-rhv-user-infra.adoc#installing-rhv-user-infra[✓]
|xref:../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[✓]
|xref:../installing/installing_vsphere/installing-vsphere.adoc#installing-vsphere[✓]
|

@@ -571,7 +552,6 @@ ifdef::openshift-origin[]
|xref:../installing/installing_ibm_power/installing-ibm-power.adoc#installing-ibm-power[✓]
|xref:../installing/installing_platform_agnostic/installing-platform-agnostic.adoc#installing-platform-agnostic[✓]

// Add RHV UPI link when docs are available: https://github.com/openshift/openshift-docs/pull/26484

|Network customization
|

@@ -581,7 +561,6 @@ ifdef::openshift-origin[]
|
|
|xref:../installing/installing_openstack/installing-openstack-user-kuryr.adoc#installing-openstack-user-kuryr[✓]
|
|xref:../installing/installing_bare_metal/installing-bare-metal-network-customizations.adoc#installing-bare-metal-network-customizations[✓]
|xref:../installing/installing_vsphere/installing-vsphere-network-customizations.adoc#installing-vsphere-network-customizations[✓]
|

@@ -598,7 +577,6 @@ ifdef::openshift-origin[]
|xref:../installing/installing_gcp/installing-restricted-networks-gcp.adoc#installing-restricted-networks-gcp[✓]
|
|
|
|xref:../installing/installing_bare_metal/installing-restricted-networks-bare-metal.adoc#installing-restricted-networks-bare-metal[✓]
|xref:../installing/installing_vsphere/installing-restricted-networks-vsphere.adoc#installing-restricted-networks-vsphere[✓]
|

@@ -623,7 +601,6 @@ ifdef::openshift-origin[]
|
|
|
|
|===
endif::openshift-origin[]

@@ -1,106 +0,0 @@
:_content-type: ASSEMBLY
[id="installing-rhv-customizations"]
= Installing a cluster on {rh-virtualization} with customizations
include::_attributes/common-attributes.adoc[]
:context: installing-rhv-customizations

toc::[]

You can customize and install an {product-title} cluster on {rh-virtualization-first}, similar to the one shown in the following diagram.

ifndef::openshift-origin[]
image::92_OpenShift_Cluster_Install_RHV_0520.png[Diagram of an {product-title} cluster on a {rh-virtualization} cluster]
endif::openshift-origin[]
ifdef::openshift-origin[]
image::193_OpenShift_Cluster_Install_updates_1021_oVirt.png[Diagram of an {product-title} cluster on a {rh-virtualization} cluster]
endif::openshift-origin[]

The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster.

To install a customized cluster, you prepare the environment and perform the following steps:

. Create an installation configuration file, the `install-config.yaml` file, by running the installation program and answering its prompts.
. Inspect and modify parameters in the `install-config.yaml` file.
. Make a working copy of the `install-config.yaml` file.
. Run the installation program with a copy of the `install-config.yaml` file.

Then, the installation program creates the {product-title} cluster.
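
As a minimal sketch, the numbered steps above map to roughly the following commands (`<installation_directory>` is a placeholder):

[source,terminal]
----
$ openshift-install create install-config --dir <installation_directory>
# Inspect and edit <installation_directory>/install-config.yaml, and keep a
# backup copy: the installation program consumes the file in the next step.
$ openshift-install create cluster --dir <installation_directory>
----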

For an alternative to installing a customized cluster, see xref:../../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[Installing a default cluster].

[NOTE]
====
This installation program is available for Linux and macOS only.
====

== Prerequisites

* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You have a supported combination of versions in the link:https://access.redhat.com/articles/5485861[Support Matrix for {product-title} on {rh-virtualization-first}].
* You read the documentation on xref:../../installing/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* If you use a firewall, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.

include::modules/cluster-entitlements.adoc[leveloffset=+1]

include::modules/installing-rhv-requirements.adoc[leveloffset=+1]

include::modules/installing-rhv-verifying-rhv-environment.adoc[leveloffset=+1]

include::modules/installing-rhv-preparing-network-environment.adoc[leveloffset=+1]

include::modules/installing-rhv-insecure-mode.adoc[leveloffset=+1]

// include::modules/installing-rhv-setting-up-ca-certificate.adoc[leveloffset=+1]

include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

include::modules/installation-initializing.adoc[leveloffset=+1]

include::modules/installing-rhv-example-install-config-yaml.adoc[leveloffset=+2]

include::modules/installation-configuration-parameters.adoc[leveloffset=+2]

include::modules/installation-launching-installer.adoc[leveloffset=+1]

[IMPORTANT]
====
You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation.
====

include::modules/cli-installing-cli.adoc[leveloffset=+1]

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

To learn more, see xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI].

include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]

.Troubleshooting
If the installation fails, the installation program times out and displays an error message. To learn more, see
xref:../../installing/installing-troubleshooting.adoc#installing-troubleshooting[Troubleshooting installation issues].

include::modules/installing-rhv-accessing-ocp-web-console.adoc[leveloffset=+1]

include::modules/cluster-telemetry.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service

include::modules/installation-common-issues.adoc[leveloffset=+1]

== Post-installation tasks

After the {product-title} cluster initializes, you can perform the following tasks.

* Optional: After deployment, add or replace SSH keys by using the Machine Config Operator (MCO) in {product-title}, as in the sketch after this list.
* Optional: Remove the `kubeadmin` user. Instead, use the authentication provider to create a user with cluster-admin privileges.
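
A hedged sketch of the first task: with the MCO, an updated SSH key is rolled out through a `MachineConfig` object similar to the following (the name, role, and key value are illustrative):

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-ssh
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    passwd:
      users:
      - name: core
        sshAuthorizedKeys:
        - ssh-ed25519 AAAA... user@example.com
----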

== Next steps

* xref:../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
* If necessary, you can
xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].

@@ -1,96 +0,0 @@
:_content-type: ASSEMBLY
[id="installing-rhv-default"]
= Installing a cluster quickly on {rh-virtualization}
include::_attributes/common-attributes.adoc[]
:context: installing-rhv-default

toc::[]

You can quickly install a default, non-customized {product-title} cluster on a {rh-virtualization-first} cluster, similar to the one shown in the following diagram.

ifndef::openshift-origin[]
image::92_OpenShift_Cluster_Install_RHV_0520.png[Diagram of an {product-title} cluster on a {rh-virtualization} cluster]
endif::openshift-origin[]
ifdef::openshift-origin[]
image::193_OpenShift_Cluster_Install_updates_1021_oVirt.png[Diagram of an {product-title} cluster on a {rh-virtualization} cluster]
endif::openshift-origin[]

The installation program uses installer-provisioned infrastructure to automate creating and deploying the cluster.

To install a default cluster, you prepare the environment, run the installation program, and answer its prompts. Then, the installation program creates the {product-title} cluster.
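
In practice, that amounts to roughly one interactive command, sketched here with a placeholder directory:

[source,terminal]
----
$ openshift-install create cluster --dir <installation_directory> --log-level=info
----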

For an alternative to installing a default cluster, see xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[Installing a cluster with customizations].

[NOTE]
====
This installation program is available for Linux and macOS only.
====

== Prerequisites

* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You have a supported combination of versions in the link:https://access.redhat.com/articles/5485861[Support Matrix for {product-title} on {rh-virtualization-first}].
* You read the documentation on xref:../../installing/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* If you use a firewall, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.

include::modules/cluster-entitlements.adoc[leveloffset=+1]

include::modules/installing-rhv-requirements.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-example-install-config-yaml_installing-rhv-customizations[Example: Removing all affinity groups for a non-production lab setup].

include::modules/installing-rhv-verifying-rhv-environment.adoc[leveloffset=+1]

include::modules/installing-rhv-preparing-network-environment.adoc[leveloffset=+1]

include::modules/installing-rhv-insecure-mode.adoc[leveloffset=+1]

// include::modules/installing-rhv-setting-up-ca-certificate.adoc[leveloffset=+1]

include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

include::modules/installation-launching-installer.adoc[leveloffset=+1]

[IMPORTANT]
====
You have completed the steps required to install the cluster. The remaining steps show you how to verify the cluster and troubleshoot the installation.
====

include::modules/cli-installing-cli.adoc[leveloffset=+1]

To learn more, see xref:../../cli_reference/openshift_cli/getting-started-cli.adoc#getting-started-cli[Getting started with the OpenShift CLI].

include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* See xref:../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.

include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]

.Troubleshooting
If the installation fails, the installation program times out and displays an error message. To learn more, see
xref:../../installing/installing-troubleshooting.adoc#installing-troubleshooting[Troubleshooting installation issues].

include::modules/installing-rhv-accessing-ocp-web-console.adoc[leveloffset=+1]

include::modules/cluster-telemetry.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service

include::modules/installation-common-issues.adoc[leveloffset=+1]

== Post-installation tasks
After the {product-title} cluster initializes, you can perform the following tasks.

* Optional: After deployment, add or replace SSH keys using the Machine Config Operator (MCO) in {product-title}.
* Optional: Remove the `kubeadmin` user. Instead, use the authentication provider to create a user with cluster-admin privileges.

@@ -1,93 +0,0 @@
:_content-type: ASSEMBLY
[id="installing-rhv-restricted-network"]
= Installing a cluster on {rh-virtualization} in a restricted network
include::_attributes/common-attributes.adoc[]
:context: installing-rhv-restricted-network

toc::[]

In {product-title} version {product-version}, you can install a
customized {product-title} cluster on {rh-virtualization-first} in a restricted network by creating an internal mirror of the installation release content.

== Prerequisites

The following items are required to install an {product-title} cluster on a {rh-virtualization} environment.

* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You read the documentation on xref:../../installing/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* You have a supported combination of versions in the link:https://access.redhat.com/articles/5485861[Support Matrix for {product-title} on {rh-virtualization}].
* You xref:../../installing/disconnected_install/installing-mirroring-installation-images.adoc#installing-mirroring-installation-images[created a registry on your mirror host] and obtained the `imageContentSources` data for your version of {product-title}. The sketch after this list shows the shape of that data.
+
[IMPORTANT]
====
Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
====
+
* You provisioned xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[persistent storage] for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
* If you use a firewall and plan to use the Telemetry service, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured the firewall to allow the sites] that your cluster requires access to.
+
[NOTE]
====
Be sure to also review this site list if you are configuring a proxy.
====
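
A minimal sketch of the shape of the `imageContentSources` data referenced in the list above, as it appears in `install-config.yaml` (host names and repository paths are placeholders):

[source,yaml]
----
imageContentSources:
- mirrors:
  - mirror.example.com:5000/ocp/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com:5000/ocp/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
----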

include::modules/installation-about-restricted-network.adoc[leveloffset=+1]

include::modules/cluster-entitlements.adoc[leveloffset=+1]

include::modules/installing-rhv-requirements.adoc[leveloffset=+1]

include::modules/installing-rhv-verifying-rhv-environment.adoc[leveloffset=+1]

include::modules/installation-network-user-infra.adoc[leveloffset=+1]

include::modules/installation-dns-user-infra.adoc[leveloffset=+1]

include::modules/installation-load-balancing-user-infra.adoc[leveloffset=+2]

include::modules/installing-rhv-setting-up-installation-machine.adoc[leveloffset=+1]

include::modules/installing-rhv-setting-up-ca-certificate.adoc[leveloffset=+1]

include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-rhv-downloading-ansible-playbooks.adoc[leveloffset=+1]

include::modules/installation-rhv-about-inventory-yml.adoc[leveloffset=+1]

include::modules/installation-rhv-specifying-rhcos-image-settings.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-install-config-file.adoc[leveloffset=+1]

include::modules/installation-bare-metal-config-yaml.adoc[leveloffset=+1]

include::modules/installation-configure-proxy.adoc[leveloffset=+2]

include::modules/installation-rhv-customizing-install-config-yaml.adoc[leveloffset=+1]

include::modules/installation-rhv-editing-manifests.adoc[leveloffset=+1]

include::modules/installation-rhv-making-control-plane-nodes-non-schedulable.adoc[leveloffset=+1]

include::modules/installation-rhv-building-ignition-files.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-templates-virtual-machines.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-bootstrap-machine.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-control-plane-nodes.adoc[leveloffset=+1]

include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]

include::modules/installation-rhv-removing-bootstrap-machine.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-worker-nodes-completing-installation.adoc[leveloffset=+1]

include::modules/cluster-telemetry.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service

include::modules/olm-restricted-networks-configuring-operatorhub.adoc[leveloffset=+1]
@@ -1,84 +0,0 @@
|
||||
:_content-type: ASSEMBLY
|
||||
[id="installing-rhv-user-infra"]
|
||||
= Installing a cluster on {rh-virtualization} with user-provisioned infrastructure
|
||||
include::_attributes/common-attributes.adoc[]
|
||||
:context: installing-rhv-user-infra
|
||||
|
||||
toc::[]
|
||||
In {product-title} version {product-version}, you can install a customized {product-title} cluster on {rh-virtualization-first} and other infrastructure that you provide. The {product-title} documentation uses the term _user-provisioned infrastructure_ to refer to this infrastructure type.

The following diagram shows an example of a potential {product-title} cluster running on a {rh-virtualization} cluster.

ifndef::openshift-origin[]
image::92_OpenShift_Cluster_Install_RHV_0520.png[Diagram of an {product-title} cluster on a {rh-virtualization} cluster]
endif::openshift-origin[]
ifdef::openshift-origin[]
image::193_OpenShift_Cluster_Install_updates_1021_oVirt.png[Diagram of an {product-title} cluster on a {rh-virtualization} cluster]
endif::openshift-origin[]

The {rh-virtualization} hosts run virtual machines that contain both control plane and compute pods. One of the hosts also runs a {rh-virtualization-engine-name} virtual machine and a bootstrap virtual machine that contains a temporary control plane pod.

== Prerequisites

The following items are required to install an {product-title} cluster on a {rh-virtualization} environment.

* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You read the documentation on xref:../../installing/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* You have a supported combination of versions in the link:https://access.redhat.com/articles/5485861[Support Matrix for {product-title} on {rh-virtualization-first}].

include::modules/cluster-entitlements.adoc[leveloffset=+1]

include::modules/installing-rhv-requirements.adoc[leveloffset=+1]

include::modules/installing-rhv-verifying-rhv-environment.adoc[leveloffset=+1]

//include::modules/installing-rhv-network-infrastructure-configuration-upi.adoc[leveloffset=+1]

include::modules/installation-network-user-infra.adoc[leveloffset=+1]

include::modules/installing-rhv-setting-up-installation-machine.adoc[leveloffset=+1]

include::modules/installing-rhv-insecure-mode.adoc[leveloffset=+1]

// include::modules/installing-rhv-setting-up-ca-certificate.adoc[leveloffset=+1]

include::modules/ssh-agent-using.adoc[leveloffset=+1]

include::modules/installation-obtaining-installer.adoc[leveloffset=+1]

include::modules/installation-rhv-downloading-ansible-playbooks.adoc[leveloffset=+1]

include::modules/installation-rhv-about-inventory-yml.adoc[leveloffset=+1]

include::modules/installation-rhv-specifying-rhcos-image-settings.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-install-config-file.adoc[leveloffset=+1]

include::modules/installation-rhv-customizing-install-config-yaml.adoc[leveloffset=+1]

include::modules/installation-rhv-editing-manifests.adoc[leveloffset=+1]

include::modules/installation-rhv-making-control-plane-nodes-non-schedulable.adoc[leveloffset=+1]

include::modules/installation-rhv-building-ignition-files.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-templates-virtual-machines.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-bootstrap-machine.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-control-plane-nodes.adoc[leveloffset=+1]

include::modules/installation-osp-verifying-cluster-status.adoc[leveloffset=+1]

include::modules/installation-rhv-removing-bootstrap-machine.adoc[leveloffset=+1]

include::modules/installation-rhv-creating-worker-nodes-completing-installation.adoc[leveloffset=+1]

include::modules/cluster-telemetry.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
@@ -1,39 +0,0 @@
:_content-type: ASSEMBLY
[id="preparing-to-install-on-rhv"]
= Preparing to install on {rh-virtualization-first}
include::_attributes/common-attributes.adoc[]
:context: preparing-to-install-on-rhv

toc::[]

[id="preparing-to-install-on-rhv-prerequisites"]
== Prerequisites

* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You have a supported combination of versions in the link:https://access.redhat.com/articles/5485861[Support Matrix for {product-title} on {rh-virtualization-first}].
* You read the documentation on xref:../../installing/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].

[id="choosing-an-method-to-install-ocp-on-rhv"]
== Choosing a method to install {product-title} on {rh-virtualization}

You can install {product-title} on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install {product-title} on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.

See xref:../../architecture/architecture-installation.adoc#installation-process_architecture-installation[Installation process] for more information about installer-provisioned and user-provisioned installation processes.

[id="choosing-an-method-to-install-ocp-on-rhv-installer-provisioned"]
=== Installing a cluster on installer-provisioned infrastructure

You can install a cluster on {rh-virtualization-first} virtual machines that are provisioned by the {product-title} installation program, by using one of the following methods:

* **xref:../../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[Installing a cluster quickly on {rh-virtualization}]**: You can quickly install {product-title} on {rh-virtualization} virtual machines that the {product-title} installation program provisions.

* **xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[Installing a cluster on {rh-virtualization} with customizations]**: You can install a customized {product-title} cluster on installer-provisioned guests on {rh-virtualization}. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available xref:../../post_installation_configuration/cluster-tasks.adoc#post-install-cluster-tasks[post-installation].

[id="choosing-an-method-to-install-ocp-on-rhv-user-provisioned"]
=== Installing a cluster on user-provisioned infrastructure

You can install a cluster on {rh-virtualization} virtual machines that you provision, by using one of the following methods:

* **xref:../../installing/installing_rhv/installing-rhv-user-infra.adoc#installing-rhv-user-infra[Installing a cluster on {rh-virtualization} with user-provisioned infrastructure]**: You can install {product-title} on {rh-virtualization} virtual machines that you provision. You can use the provided Ansible playbooks to assist with the installation.

* **xref:../../installing/installing_rhv/installing-rhv-restricted-network.adoc#installing-rhv-restricted-network[Installing a cluster on {rh-virtualization} in a restricted network]**: You can install {product-title} on {rh-virtualization} in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a user-provisioned cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
@@ -1,13 +0,0 @@
:_content-type: ASSEMBLY
[id="uninstalling-cluster-rhv"]
= Uninstalling a cluster on {rh-virtualization}
include::_attributes/common-attributes.adoc[]
:context: uninstalling-cluster-rhv

toc::[]

You can remove an {product-title} cluster from {rh-virtualization-first}.

include::modules/installation-uninstall-clouds.adoc[leveloffset=+1]

include::modules/installation-rhv-removing-cluster-upi.adoc[leveloffset=+1]
@@ -17,7 +17,7 @@ include::modules/infrastructure-components.adoc[leveloffset=+1]

For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the link:https://www.redhat.com/en/resources/openshift-subscription-sizing-guide[OpenShift sizing and subscription guide for enterprise Kubernetes] document.

To create an infrastructure node, you can xref:../machine_management/creating-infrastructure-machinesets.adoc#machineset-creating_creating-infrastructure-machinesets[use a machine set], xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-an-infra-node_creating-infrastructure-machinesets[label the node], or xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infra-machines_creating-infrastructure-machinesets[use a machine config pool].

[id="creating-infrastructure-machinesets-production"]
== Creating infrastructure machine sets for production environments
@@ -66,8 +66,6 @@ include::modules/machineset-yaml-nutanix.adoc[leveloffset=+3]

include::modules/machineset-yaml-osp.adoc[leveloffset=+3]

include::modules/machineset-yaml-rhv.adoc[leveloffset=+3]

include::modules/machineset-yaml-vsphere.adoc[leveloffset=+3]

include::modules/machineset-creating.adoc[leveloffset=+2]
@@ -104,7 +102,7 @@ include::modules/binding-infra-node-workloads-using-taints-tolerations.adoc[leve

[id="moving-resources-to-infrastructure-machinesets"]
== Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown:

[source,yaml]
----
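# The hunk is truncated here; the following body is a minimal sketch,
# assuming the conventional infra role label used elsewhere in these
# docs, rather than a verbatim excerpt from the commit.
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
----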
@@ -1,15 +0,0 @@
:_content-type: ASSEMBLY
[id="creating-machineset-rhv"]
= Creating a compute machine set on {rh-virtualization}
include::_attributes/common-attributes.adoc[]
:context: creating-machineset-rhv

toc::[]

You can create a different compute machine set to serve a specific purpose in your {product-title} cluster on {rh-virtualization-first}. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

include::modules/machine-user-provisioned-limitations.adoc[leveloffset=+1]

include::modules/machineset-yaml-rhv.adoc[leveloffset=+1]

include::modules/machineset-creating.adoc[leveloffset=+1]
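For orientation, a compute machine set on {rh-virtualization} has roughly the following shape. This is a hedged sketch, not the module's authoritative sample: the `providerSpec` field names (`cluster_id`, `template_name`) follow the oVirt machine provider and the placeholder values are hypothetical; see the `machineset-yaml-rhv` module above for the real template.

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-infra            # hypothetical machine set name
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""  # label new nodes for infra workloads
      providerSpec:
        value:
          apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
          kind: OvirtMachineProviderSpec
          cluster_id: <ovirt_cluster_id>     # assumed field, per the oVirt provider
          template_name: <rhcos_template>    # assumed field, per the oVirt provider
----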
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

You can use machine management to flexibly work with underlying infrastructure such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, {rh-virtualization-first}, and VMware vSphere to manage the {product-title} cluster.
You can use machine management to flexibly work with underlying infrastructure such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, and VMware vSphere to manage the {product-title} cluster.
You can control the cluster and perform auto-scaling, such as scaling up and down the cluster based on specific workload policies.

It is important to have a cluster that adapts to changing workloads. The {product-title} cluster can horizontally scale up and down when the load increases or decreases.
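As an illustration of such a workload policy, a `MachineAutoscaler` resource bounds how far a given compute machine set may scale. A minimal sketch, assuming a hypothetical machine set named `worker-us-east-1a`:

[source,yaml]
----
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a          # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1                   # never scale below one machine
  maxReplicas: 12                  # cap horizontal growth
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a        # the machine set to scale
----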
@@ -39,8 +39,6 @@ As a cluster administrator, you can perform the following actions:

** xref:../machine_management/creating_machinesets/creating-machineset-osp.adoc#creating-machineset-osp[{rh-openstack}]

** xref:../machine_management/creating_machinesets/creating-machineset-rhv.adoc#creating-machineset-rhv[{rh-virtualization}]

** xref:../machine_management/creating_machinesets/creating-machineset-vsphere.adoc#creating-machineset-vsphere[vSphere]

* Create a machine set for a bare metal deployment: xref:../machine_management/creating_machinesets/creating-machineset-bare-metal.adoc#creating-machineset-bare-metal[Creating a compute machine set on bare metal]
@@ -8,8 +8,6 @@ toc::[]

You can modify a compute machine set, such as adding labels, changing the instance type, or changing block storage.

On {rh-virtualization-first}, you can also change a compute machine set to provision new nodes on a different storage domain.

[NOTE]
====
If you need to scale a compute machine set without making other changes, see xref:../machine_management/manually-scaling-machineset.adoc#manually-scaling-machineset[Manually scaling a compute machine set].
@@ -21,17 +19,7 @@ include::modules/machineset-modifying.adoc[leveloffset=+1]
.Additional resources
* xref:../machine_management/deleting-machine.adoc#machine-lifecycle-hook-deletion_deleting-machine[Lifecycle hooks for the machine deletion phase]

[id="migrating-nodes-to-a-different-storage-domain-rhv_{context}"]
== Migrating nodes to a different storage domain on {rh-virtualization}

You can migrate the {product-title} control plane and compute nodes to a different storage domain in a {rh-virtualization-first} cluster.

include::modules/machineset-migrating-compute-nodes-to-diff-sd-rhv.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* xref:../machine_management/creating_machinesets/creating-machineset-rhv.adoc#machineset-creating_creating-machineset-rhv[Creating a compute machine set]
* xref:../machine_management/manually-scaling-machineset.adoc#machineset-manually-scaling_manually-scaling-machineset[Scaling a compute machine set manually]
* xref:../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[Controlling pod placement using the scheduler]

include::modules/machineset-migrating-control-plane-nodes-to-diff-sd-rhv.adoc[leveloffset=+2]
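The migration modules above boil down to pointing the machine set at a template that lives in the target storage domain. A heavily hedged sketch of the relevant `providerSpec` fragment — the field name `template_name` is an assumption drawn from the oVirt machine provider, not taken from this commit:

[source,yaml]
----
providerSpec:
  value:
    template_name: <template_in_new_storage_domain>  # assumed field; the template must exist in the target domain
----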
@@ -35,10 +35,6 @@ You can xref:../../machine_management/creating_machinesets/creating-machineset-v

To manually add more compute machines to your cluster, see xref:../../machine_management/user_infra/adding-vsphere-compute-user-infra.adoc#adding-vsphere-compute-user-infra[Adding compute machines to vSphere manually].

[id="upi-adding-compute-rhv"]
== Adding compute machines to {rh-virtualization}

To add more compute machines to your {product-title} cluster on {rh-virtualization}, see xref:../../machine_management/user_infra/adding-rhv-compute-user-infra.adoc#adding-rhv-compute-user-infra[Adding compute machines to {rh-virtualization}].

[id="upi-adding-compute-bare-metal"]
== Adding compute machines to bare metal
@@ -1,18 +0,0 @@
// Assembly included in the following assemblies:
// * machine_management/user_infra/adding-compute-user-infra-general.adoc

:_content-type: ASSEMBLY
[id="adding-rhv-compute-user-infra"]
= Adding compute machines to a cluster on {rh-virtualization}
include::_attributes/common-attributes.adoc[]
:context: adding-rhv-compute-user-infra

toc::[]

In {product-title} version {product-version}, you can add more compute machines to a user-provisioned {product-title} cluster on {rh-virtualization}.

.Prerequisites

* You installed a cluster on {rh-virtualization} with user-provisioned infrastructure.

include::modules/machine-user-provisioned-rhv.adoc[leveloffset=+1]
@@ -29,7 +29,7 @@ For platforms that support using the CCO in multiple modes, you must determine w
.Credentials update requirements by platform type
image::334_OpenShift_cluster_updating_and_CCO_workflows_0523_4.11_B.png[Decision tree showing the possible update paths for your cluster depending on the configured CCO credentials mode.]

{rh-openstack-first}, {rh-virtualization-first}, and VMware vSphere::
{rh-openstack-first} and VMware vSphere::
These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the `upgradeable-to` annotation.
+
Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process.
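For contrast, on platforms that do require it, the `upgradeable-to` annotation lives on the cluster `CloudCredential` resource. A hedged sketch, with an illustrative target version:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: CloudCredential
metadata:
  name: cluster
  annotations:
    cloudcredential.openshift.io/upgradeable-to: "4.12"  # the minor version you intend to update to (example value)
----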
@@ -108,23 +108,6 @@ data:
  clouds.conf: <base64-encoded_cloud_creds_init>
----

.{rh-virtualization-first} secret format

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: ovirt-credentials
data:
  ovirt_url: <base64-encoded_url>
  ovirt_username: <base64-encoded_username>
  ovirt_password: <base64-encoded_password>
  ovirt_insecure: <base64-encoded_insecure>
  ovirt_ca_bundle: <base64-encoded_ca_bundle>
----

.VMware vSphere secret format

[source,yaml]
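----
# The hunk is truncated here; this body is a hedged sketch of the
# vSphere credentials secret. The key names derive from your vCenter
# server name, so the ones below are illustrative placeholders.
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: vsphere-creds
data:
  <vcenter_server>.username: <base64-encoded_username>
  <vcenter_server>.password: <base64-encoded_password>
----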
@@ -146,4 +129,4 @@ ifeval::["{context}" == "cco-mode-mint"]
endif::[]
ifeval::["{context}" == "cco-mode-passthrough"]
:!passthrough:
endif::[]
endif::[]
@@ -41,8 +41,6 @@
// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * openshift_images/samples-operator-alt-registry.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * updating/updating-restricted-network-cluster/mirroring-image-repository.adoc
// * microshift_cli_ref/microshift-oc-cli-install.adoc
// * updating/updating-restricted-network-cluster.adoc
@@ -50,8 +50,6 @@
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc
// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc


:_content-type: PROCEDURE
@@ -30,10 +30,6 @@
// * installing/installing_azure/installing-azure-government-region.adoc
// * installing/installing_azure/installing-azure-customizations.adoc
// * installing/installing_azure/installing-azure-private.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_aws/installing-aws-network-customizations.adoc
// * installing/installing_aws/installing-aws-user-infra.adoc
// * installing/installing_aws/installing-restricted-networks-aws.adoc
@@ -91,9 +87,6 @@ endif::[]
ifeval::["{context}" == "installing-restricted-networks-aws"]
:restricted:
endif::[]
ifeval::["{context}" == "installing-rhv-restricted-network"]
:restricted:
endif::[]
ifeval::["{context}" == "installing-openstack-installer-restricted"]
:restricted:
endif::[]
@@ -162,9 +155,6 @@ endif::[]
ifeval::["{context}" == "installing-restricted-networks-aws"]
:!restricted:
endif::[]
ifeval::["{context}" == "installing-rhv-restricted-network"]
:!restricted:
endif::[]
ifeval::["{context}" == "installing-openstack-installer-restricted"]
:!restricted:
endif::[]

@@ -26,10 +26,6 @@
// * installing/installing_azure/installing-azure-government-region.adoc
// * installing/installing_azure/installing-azure-customizations.adoc
// * installing/installing_azure/installing-azure-private.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_aws/installing-aws-network-customizations.adoc
// * installing/installing_aws/installing-aws-user-infra.adoc
// * installing/installing_aws/installing-restricted-networks-aws.adoc

@@ -12,7 +12,7 @@ Installing {product-title} on a single node alleviates some of the requirements

* *CPU Architecture:* Installing {product-title} on a single node supports `x86_64` and `arm64` CPU architectures.

* *Supported platforms:* Installing {product-title} on a single node is supported on bare metal, vSphere, AWS, Red Hat OpenStack, and Red Hat Virtualization platforms. In all cases, you must specify the `platform.none: {}` parameter in the `install-config.yaml` configuration file.
* *Supported platforms:* Installing {product-title} on a single node is supported on bare metal, vSphere, AWS, and Red Hat OpenStack platforms. In all cases, you must specify the `platform.none: {}` parameter in the `install-config.yaml` configuration file.

* *Production-grade server:* Installing {product-title} on a single node requires a server with sufficient resources to run {product-title} services and a production workload.
+

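For reference, the `platform.none: {}` requirement called out above sits in an `install-config.yaml` that, for single-node installs, conventionally pins one control plane replica and zero workers. A hedged sketch, not taken from this commit:

[source,yaml]
----
compute:
- name: worker
  replicas: 0        # single node: no separate compute machines
controlPlane:
  name: master
  replicas: 1        # the single node runs the control plane
platform:
  none: {}           # required for single-node installs on these platforms
----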
@@ -10,7 +10,6 @@
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
// * installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc
// * installing/installing_ibm_powervs/installing-restricted-networks-ibm-power-vs.adoc
// * installing/installing-rhv-restricted-network.adoc
// * installing/installing-restricted-networks-nutanix-installer-provisioned.adoc

ifeval::["{context}" == "installing-ibm-power"]
@@ -28,9 +27,6 @@ endif::[]
ifeval::["{context}" == "installing-openstack-installer-restricted"]
:ipi:
endif::[]
ifeval::["{context}" == "installing-rhv-restricted-network"]
:ipi:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-installer-provisioned-vsphere"]
:ipi:
endif::[]
@@ -97,9 +93,6 @@ endif::[]
ifeval::["{context}" == "installing-openstack-installer-restricted"]
:!ipi:
endif::[]
ifeval::["{context}" == "installing-rhv-restricted-network"]
:!ipi:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-installer-provisioned-vsphere"]
:!ipi:
endif::[]

@@ -10,7 +10,7 @@
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc
// * installing/installing_platform_agnostic/installing-platform-agnostic.adoc
// * installing/installing-rhv-restricted-network.adoc


ifeval::["{context}" == "installing-restricted-networks-bare-metal"]
:restricted:
@@ -39,16 +39,13 @@ endif::[]
ifeval::["{context}" == "installing-platform-agnostic"]
:agnostic:
endif::[]
ifeval::["{context}" == "installing-rhv-restricted-network"]
:rhv:
endif::[]

:_content-type: CONCEPT
// Assumption is that attribute once outside ifdef works for several level one headings.
[id="installation-bare-metal-config-yaml_{context}"]
ifndef::ibm-z,ibm-z-kvm,ibm-power,agnostic,rhv[]
ifndef::ibm-z,ibm-z-kvm,ibm-power,agnostic[]
= Sample install-config.yaml file for bare metal
endif::ibm-z,ibm-z-kvm,ibm-power,agnostic,rhv[]
endif::ibm-z,ibm-z-kvm,ibm-power,agnostic[]
ifdef::ibm-z,ibm-z-kvm[]
= Sample install-config.yaml file for {ibmzProductName}
endif::ibm-z,ibm-z-kvm[]
@@ -58,9 +55,6 @@ endif::ibm-power[]
ifdef::agnostic[]
= Sample install-config.yaml file for other platforms
endif::agnostic[]
ifdef::rhv[]
= Sample install-config.yaml file for RHV
endif::rhv[]

You can customize the `install-config.yaml` file to specify more details about your {product-title} cluster's platform or modify the values of the required parameters.

@@ -248,10 +242,9 @@ Class E CIDR range is reserved for a future use. To use the Class E CIDR range,
<9> The cluster network plugin to install. The supported values are `OVNKubernetes` and `OpenShiftSDN`. The default value is `OVNKubernetes`.
<10> The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic.
<11> You must set the platform to `none`. You cannot provide additional platform configuration variables for
ifndef::ibm-z,ibm-z-kvm,ibm-power,rhv[your platform.]
ifndef::ibm-z,ibm-z-kvm,ibm-power[your platform.]
ifdef::ibm-z,ibm-z-kvm[{ibmzProductName} infrastructure.]
ifdef::ibm-power[{ibmpowerProductName} infrastructure.]
ifdef::rhv[RHV infrastructure.]
+
[IMPORTANT]
====
@@ -342,6 +335,3 @@ endif::[]
ifeval::["{context}" == "installing-platform-agnostic"]
:!agnostic:
endif::[]
ifeval::["{context}" == "installing-rhv-restricted-network"]
:!rhv:
endif::[]

@@ -1,58 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing/installing-troubleshooting.adoc

[id="installation-common-issues_{context}"]
= Troubleshooting common issues with installing on {rh-virtualization-first}

Here are some common issues you might encounter, along with proposed causes and solutions.

[id="cpu-load-increases-and-nodes-go-into-a-not-ready-state_{context}"]
== CPU load increases and nodes go into a `Not Ready` state

* *Symptom*: CPU load increases significantly and nodes start going into a `Not Ready` state.
* *Cause*: The storage domain latency might be too high, especially for control plane nodes.
* *Solution*:
+
Make the nodes ready again by restarting the kubelet service:
+
[source,terminal]
----
$ systemctl restart kubelet
----
+
Inspect the {product-title} metrics service, which automatically gathers and reports on some valuable data such as the etcd disk sync duration. If the cluster is operational, use this data to help determine whether storage latency or throughput is the root issue. If so, consider using a storage resource that has lower latency and higher throughput.
+
To get raw metrics, enter the following command as kubeadmin or a user with cluster-admin privileges:
+
[source,terminal]
----
$ oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics
----
+
To learn more, see https://access.redhat.com/articles/3793621[Exploring Application Endpoints for the purposes of Debugging with OpenShift 4.x].

[id="trouble-connecting-the-rhv-cluster-api_{context}"]
== Trouble connecting the {product-title} cluster API

* *Symptom*: The installation program completes but the {product-title} cluster API is not available. The bootstrap virtual machine remains up after the bootstrap process is complete. When you enter the following command, the response will time out.
+
[source,terminal]
----
$ oc login -u kubeadmin -p *** <apiurl>
----

* *Cause*: The bootstrap VM was not deleted by the installation program and has not released the cluster's API IP address.
* *Solution*: Use the `wait-for` subcommand to be notified when the bootstrap process is complete:
+
[source,terminal]
----
$ ./openshift-install wait-for bootstrap-complete
----
+
When the bootstrap process is complete, delete the bootstrap virtual machine:
+
[source,terminal]
----
$ ./openshift-install destroy bootstrap
----
@@ -49,7 +49,15 @@
// * installing/installing_openstack/installing-openstack-user-sr-iov-kuryr.adoc
// * installing/installing_openstack/installing-openstack-user-sr-iov.adoc
// * installing/installing_openstack/installing-openstack-user.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc.adoc
// * installing/installing_vmc/installing-vmc-customizations.adoc
// * installing/installing_vmc/installing-vmc-network-customizations.adoc
// * installing/installing_vmc/installing-vmc-user-infra.adoc
// * installing/installing_vmc/installing-vmc-network-customizations-user-infra.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc-user-infra.adoc
// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc
// * installing/installing_azure_stack_hub/installing-azure-stack-hub-default.adoc
// * installing/installing_azure_stack_hub/installing-azure-stack-hub-customizations.adoc
// * installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc
@@ -171,8 +179,29 @@ ifeval::["{context}" == "installing-openstack-user-sr-iov-kuryr"]
:osp:
:osp-kuryr:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:rhv:
ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"]
:vsphere:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-customizations"]
:vsphere:
endif::[]
ifeval::["{context}" == "installing-vmc-customizations"]
:vmc:
endif::[]
ifeval::["{context}" == "installing-vmc-network-customizations"]
:vmc:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-vmc"]
:vmc:
endif::[]
ifeval::["{context}" == "installing-vmc-user-infra"]
:vmc:
endif::[]
ifeval::["{context}" == "installing-vmc-network-customizations-user-infra"]
:vmc:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-vmc-user-infra"]
:vmc:
endif::[]
ifeval::["{context}" == "installing-openstack-installer-restricted"]
:osp:
@@ -308,7 +337,7 @@ The string must be 14 characters or fewer long.
endif::osp[]

|`platform`
|The configuration for the specific platform upon which to perform the installation: `alibabacloud`, `aws`, `baremetal`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `ovirt`, `powervs`, `vsphere`, or `{}`. For additional information about `platform.<platform>` parameters, consult the table for your specific platform that follows.
|The configuration for the specific platform upon which to perform the installation: `alibabacloud`, `aws`, `baremetal`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `powervs`, `vsphere`, or `{}`. For additional information about `platform.<platform>` parameters, consult the table for your specific platform that follows.
|Object

ifndef::openshift-origin[]
@@ -595,9 +624,6 @@ Optional installation configuration parameters are described in the following ta
|`compute`
|The configuration for the machines that comprise the compute nodes.
|Array of `MachinePool` objects.
ifdef::rhv[]
For details, see the "Additional RHV parameters for machine pools" table.
endif::rhv[]

ifndef::openshift-origin[]

@@ -650,7 +676,7 @@ accounts for the dramatically decreased machine performance.

|`compute.platform`
|Required if you use `compute`. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the `controlPlane.platform` parameter value.
|`alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `ovirt`, `powervs`, `vsphere`, or `{}`
|`alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `powervs`, `vsphere`, or `{}`

|`compute.replicas`
|The number of compute machines, which are also known as worker machines, to provision.
@@ -663,9 +689,6 @@ accounts for the dramatically decreased machine performance.
|`controlPlane`
|The configuration for the machines that comprise the control plane.
|Array of `MachinePool` objects.
ifdef::rhv[]
For details, see the "Additional RHV parameters for machine pools" table.
endif::rhv[]

ifndef::openshift-origin[]
ifndef::aws,bare,ibm-z,ibm-power,azure,ibm-power-vs[]
@@ -717,7 +740,7 @@ accounts for the dramatically decreased machine performance.

|`controlPlane.platform`
|Required if you use `controlPlane`. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the `compute.platform` parameter value.
|`alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `ovirt`, `powervs`, `vsphere`, or `{}`
|`alibabacloud`, `aws`, `azure`, `gcp`, `ibmcloud`, `nutanix`, `openstack`, `powervs`, `vsphere`, or `{}`

|`controlPlane.replicas`
|The number of control plane machines to provision.
@@ -1724,166 +1747,8 @@ Additional {ibmpowerProductName} Virtual Server configuration parameters are des
--
endif::ibm-power-vs[]

ifdef::rhv[]
[id="installation-configuration-parameters-additional-rhv_{context}"]
== Additional {rh-virtualization-first} configuration parameters

Additional {rh-virtualization} configuration parameters are described in the following table:

[id="additional-virt-parameters-for-clusters_{context}"]
.Additional {rh-virtualization-first} parameters for clusters
[cols=".^2,.^3a,.^3a",options="header"]
|====
|Parameter|Description|Values

|`platform.ovirt.ovirt_cluster_id`
|Required. The cluster where the VMs will be created.
|String. For example: `68833f9f-e89c-4891-b768-e2ba0815b76b`

|`platform.ovirt.ovirt_storage_domain_id`
|Required. The storage domain ID where the VM disks will be created.
|String. For example: `ed7b0f4e-0e96-492a-8fff-279213ee1468`

|`platform.ovirt.ovirt_network_name`
|Required. The network name where the VM NICs will be created.
|String. For example: `ocpcluster`

|`platform.ovirt.vnicProfileID`
|Required. The vNIC profile ID of the VM network interfaces. This can be inferred if the cluster network has a single profile.
|String. For example: `3fa86930-0be5-4052-b667-b79f0a729692`

|`platform.ovirt.api_vips`
|Required. An IP address on the machine network that will be assigned to the API virtual IP (VIP). You can access the OpenShift API at this endpoint. For dual-stack networks, assign up to two IP addresses. The primary IP address must be from the IPv4 network.

[NOTE]
====
In {product-title} 4.12 and later, the `api_vip` configuration setting is deprecated. Instead, use a list format to enter a value in the `api_vips` configuration setting. The order of the list indicates the primary and secondary VIP address for each service.
====

|String. Example: `10.46.8.230`

|`platform.ovirt.ingress_vips`
|Required. An IP address on the machine network that will be assigned to the Ingress virtual IP (VIP). For dual-stack networks, assign up to two IP addresses. The primary IP address must be from the IPv4 network.

[NOTE]
====
In {product-title} 4.12 and later, the `ingress_vip` configuration setting is deprecated. Instead, use a list format to enter a value in the `ingress_vips` configuration setting. The order of the list indicates the primary and secondary VIP address for each service.
====

|String. Example: `10.46.8.232`

|`platform.ovirt.affinityGroups`
|Optional. A list of affinity groups to create during the installation process.
|List of objects.

|`platform.ovirt.affinityGroups.description`
|Required if you include `platform.ovirt.affinityGroups`. A description of the affinity group.
|String. Example: `AffinityGroup for spreading each compute machine to a different host`

|`platform.ovirt.affinityGroups.enforcing`
|Required if you include `platform.ovirt.affinityGroups`. When set to `true`, {rh-virtualization} does not provision any machines if not enough hardware nodes are available. When set to `false`, {rh-virtualization} does provision machines even if not enough hardware nodes are available, resulting in multiple virtual machines being hosted on the same physical machine.

|String. Example: `true`

|`platform.ovirt.affinityGroups.name`
|Required if you include `platform.ovirt.affinityGroups`. The name of the affinity group.
|String. Example: `compute`

|`platform.ovirt.affinityGroups.priority`
|Required if you include `platform.ovirt.affinityGroups`. The priority given to an affinity group when `platform.ovirt.affinityGroups.enforcing = false`. {rh-virtualization} applies affinity groups in the order of priority, where a greater number takes precedence over a lesser one. If multiple affinity groups have the same priority, the order in which they are applied is not guaranteed.
|Integer. Example: `3`
|====
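Taken together, the cluster-scope parameters in this table correspond to a `platform.ovirt` stanza along these lines. The values reuse the table's own examples; treat the fragment as an illustrative sketch rather than a verbatim excerpt from the module:

[source,yaml]
----
platform:
  ovirt:
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ocpcluster
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
    api_vips:
    - 10.46.8.230          # primary API VIP (list form; api_vip is deprecated)
    ingress_vips:
    - 10.46.8.232          # primary Ingress VIP (list form; ingress_vip is deprecated)
----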

[id="installation-configuration-parameters-additional-machine_{context}"]
== Additional {rh-virtualization} parameters for machine pools

Additional {rh-virtualization} configuration parameters for machine pools are described in the following table:

.Additional {rh-virtualization} parameters for machine pools
[cols=".^2,.^3a,.^3a",options="header"]
|====
|Parameter|Description|Values

|`<machine-pool>.platform.ovirt.cpu`
|Optional. Defines the CPU of the VM.
|Object

|`<machine-pool>.platform.ovirt.cpu.cores`
|Required if you use `<machine-pool>.platform.ovirt.cpu`. The number of cores. The total number of virtual CPUs (vCPUs) is cores * sockets.
|Integer

|`<machine-pool>.platform.ovirt.cpu.sockets`
|Required if you use `<machine-pool>.platform.ovirt.cpu`. The number of sockets per core. The total number of virtual CPUs (vCPUs) is cores * sockets.
|Integer

|`<machine-pool>.platform.ovirt.memoryMB`
|Optional. Memory of the VM in MiB.
|Integer

|`<machine-pool>.platform.ovirt.osDisk`
|Optional. Defines the first and bootable disk of the VM.
|String

|`<machine-pool>.platform.ovirt.osDisk.sizeGB`
|Required if you use `<machine-pool>.platform.ovirt.osDisk`. Size of the disk in GiB.
|Number

|`<machine-pool>.platform.ovirt.vmType`
|Optional. The VM workload type, such as `high-performance`, `server`, or `desktop`. By default, control plane nodes use `high-performance`, and worker nodes use `server`. For details, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Virtual_Machine_General_settings_explained[Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows] and link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools[Configuring High Performance Virtual Machines, Templates, and Pools] in the _Virtual Machine Management Guide_.
[NOTE]
====
`high_performance` improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools[Configuring High Performance Virtual Machines, Templates, and Pools] in the _Virtual Machine Management Guide_.
====
|String

|`<machine-pool>.platform.ovirt.affinityGroupsNames`
|Optional. A list of affinity group names that should be applied to the virtual machines. The affinity groups must exist in {rh-virtualization}, or be created during installation as described in _Additional {rh-virtualization} parameters for clusters_ in this topic. This entry can be empty.
// xref:../../installing/installing_rhv/installing-rhv-customizations.adoc#additional-virt-parameters-for-clusters[Additional {rh-virtualization} parameters for clusters]. This entry can be empty.
//xref:../../additional-virt-parameters-for-clusters[Additional {rh-virtualization} parameters for clusters]. This entry can be empty.

.Example with two affinity groups

This example defines two affinity groups, named `compute` and `clusterWideNonEnforcing`:

[source,yaml]
----
<machine-pool>:
  platform:
    ovirt:
      affinityGroupNames:
      - compute
      - clusterWideNonEnforcing
----

This example defines no affinity groups:

[source,yaml]
----
<machine-pool>:
  platform:
    ovirt:
      affinityGroupNames: []
----
|String

|`<machine-pool>.platform.ovirt.AutoPinningPolicy`
|Optional. AutoPinningPolicy defines the policy to automatically set the CPU and NUMA settings, including pinning to the host for the instance. When the field is omitted, the default is `none`. Supported values: `none`, `resize_and_pin`. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Setting_NUMA_Nodes[Setting NUMA Nodes] in the _Virtual Machine Management Guide_.
|String

|`<machine-pool>.platform.ovirt.hugepages`
|Optional. Hugepages is the size in KiB for defining hugepages in a VM. Supported values: `2048` or `1048576`. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Configuring_Huge_Pages[Configuring Huge Pages] in the _Virtual Machine Management Guide_.
|Integer
|====

[NOTE]
====
You can replace `<machine-pool>` with `controlPlane` or `compute`.
====
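Combining several of the machine-pool parameters above, a `controlPlane` pool might be shaped like the following sketch. The numbers are illustrative, not recommendations:

[source,yaml]
----
controlPlane:
  platform:
    ovirt:
      cpu:
        cores: 4
        sockets: 1          # 4 vCPUs total: cores * sockets
      memoryMB: 16384       # 16 GiB
      osDisk:
        sizeGB: 120         # first, bootable disk
      vmType: high_performance
----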

endif::rhv[]

ifdef::vsphere[]

[id="installation-configuration-parameters-additional-vsphere_{context}"]
== Additional VMware vSphere configuration parameters

@@ -2454,8 +2319,20 @@ ifeval::["{context}" == "installing-openstack-user-sr-iov-kuryr"]
:!osp:
:!osp-kuryr:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!rhv:
ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"]
:!vsphere:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned-network-customizations"]
:!vsphere:
endif::[]
ifeval::["{context}" == "installing-vmc-customizations"]
:!vmc:
endif::[]
ifeval::["{context}" == "installing-vmc-network-customizations"]
:!vmc:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-vmc"]
:!vmc:
endif::[]
ifeval::["{context}" == "installing-openstack-installer-restricted"]
:!osp:

@@ -54,7 +54,6 @@
// * installing/installing_ibm_powervs/installing-ibm-powervs-vpc.adoc
// * installing/installing_platform_agnostic/installing-platform-agnostic.adoc
// * networking/configuring-a-custom-pki.adoc
// * installing/installing-rhv-restricted-network.adoc
// * installing/installing-nutanix-installer-provisioned.adoc
// * installing/installing-restricted-networks-nutanix-installer-provisioned.adoc


@@ -10,7 +10,9 @@
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
// * installing/installing_platform_agnostic/installing-platform-agnostic.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc-user-infra.adoc
// * installing/installing_vmc/installing-vmc-user-infra.adoc
// * installing/installing_vmc/installing-vmc-network-customizations-user-infra.adoc
// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc
// * installing/installing_vsphere/installing-vsphere-network-customizations.adoc
// * installing/installing_vsphere/installing-vsphere.adoc

@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-troubleshooting.adoc

[id="installing-getting-debug-information_{context}"]
= Getting debug information from the installation program

@@ -32,7 +32,9 @@
// * installing/installing_openstack/installing-openstack-installer-restricted.adoc
// * installing/installing_openstack/installing-openstack-user-kuryr.adoc
// * installing/installing_openstack/installing-openstack-user.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_vmc/installing-vmc-customizations.adoc
// * installing/installing_vmc/installing-vmc-network-customizations.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc
// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
@@ -153,12 +155,6 @@ ifeval::["{context}" == "installing-openstack-user-sr-iov-kuryr"]
:osp:
:osp-user:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:rhv:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"]
:vsphere:
:three-node-cluster:
@@ -208,9 +204,6 @@ endif::osp[]
ifdef::vsphere[]
VMware vSphere.
endif::vsphere[]
ifdef::rhv[]
{rh-virtualization-first}.
endif::rhv[]
ifdef::nutanix[]
Nutanix.
endif::nutanix[]
@@ -256,15 +249,6 @@ When specifying the directory:
* Verify that the directory has the `execute` permission. This permission is required to run Terraform binaries under the installation directory.
* Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier {product-title} version.

ifndef::rhv[]
.. At the prompts, provide the configuration details for your cloud:
... Optional: Select an SSH key to use to access your cluster machines.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.
====
endif::rhv[]
ifdef::alibabacloud-default,alibabacloud-custom,alibabacloud-vpc[]
... Select *alibabacloud* as the platform to target.
... Select the region to deploy the cluster to.
@@ -362,12 +346,13 @@ The installation program connects to Prism Central.
... Enter the base domain. This base domain must be the same one that you configured in the DNS records.
endif::nutanix[]
ifndef::osp[]
ifndef::rhv,alibabacloud-default,alibabacloud-custom,alibabacloud-vpc[]
ifndef::alibabacloud-default,alibabacloud-custom,alibabacloud-vpc[]
... Enter a descriptive name for your cluster.
ifdef::vsphere,nutanix[]
The cluster name you enter must match the cluster name you specified when configuring the DNS records.

endif::vsphere,nutanix[]
endif::rhv,alibabacloud-default,alibabacloud-custom,alibabacloud-vpc[]
endif::alibabacloud-default,alibabacloud-custom,alibabacloud-vpc[]
endif::osp[]
ifdef::osp[]
... Enter a name for your cluster. The name must be 14 or fewer characters long.
@@ -383,80 +368,6 @@ link:https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-mana
in the Azure documentation.
====
endif::azure[]
ifdef::rhv[]
.. Respond to the installation program prompts.
... For `SSH Public Key`, select a password-less public key, such as `~/.ssh/id_rsa.pub`. This key authenticates connections with the new {product-title} cluster.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your `ssh-agent` process uses.
====
... For `Platform`, select `ovirt`.
... For `Enter oVirt's API endpoint URL`, enter the URL of the {rh-virtualization} API using this format:
+
[source,terminal]
----
https://<engine-fqdn>/ovirt-engine/api <1>
----
<1> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} environment.
+
For example:
+
ifndef::openshift-origin[]
[source,terminal]
----
$ curl -k -u ocpadmin@internal:pw123 \
https://rhv-env.virtlab.example.com/ovirt-engine/api
----
endif::openshift-origin[]
ifdef::openshift-origin[]
[source,terminal]
----
$ curl -k -u admin@internal:pw123 \
https://ovirtlab.example.com/ovirt-engine/api
----
endif::openshift-origin[]
+
... For `Is the oVirt CA trusted locally?`, enter `Yes`, because you have already set up a CA certificate. Otherwise, enter `No`.

... For `oVirt's CA bundle`, if you entered `Yes` for the preceding question, copy the certificate content from `/etc/pki/ca-trust/source/anchors/ca.pem` and paste it here. Then, press `Enter` twice. Otherwise, if you entered `No` for the preceding question, this question does not appear.
... For `oVirt engine username`, enter the user name and profile of the {rh-virtualization} administrator using this format:
+
[source,terminal]
----
<username>@<profile> <1>
----
<1> For `<username>`, specify the user name of an {rh-virtualization} administrator. For `<profile>`, specify the login profile, which you can get by going to the {rh-virtualization} Administration Portal login page and reviewing the *Profile* dropdown list. Together, the user name and profile should look similar to this example:
+
ifndef::openshift-origin[]
[source,terminal]
----
ocpadmin@internal
----
endif::openshift-origin[]
ifdef::openshift-origin[]
[source,terminal]
----
admin@internal
----
endif::openshift-origin[]
+
... For `oVirt engine password`, enter the {rh-virtualization} admin password.
... For `oVirt cluster`, select the cluster for installing {product-title}.
... For `oVirt storage domain`, select the storage domain for installing {product-title}.
... For `oVirt network`, select a virtual network that has access to the {rh-virtualization} {rh-virtualization-engine-name} REST API.
... For `Internal API Virtual IP`, enter the static IP address you set aside for the cluster's REST API.
... For `Ingress virtual IP`, enter the static IP address you reserved for the wildcard apps domain.
... For `Base Domain`, enter the base domain of the {product-title} cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: `virtlab.example.com`
... For `Cluster Name`, enter the name of the cluster. For example, `my-cluster`. Use the cluster name from the externally registered/resolvable DNS entries you created for the {product-title} REST API and apps domain names. The installation program also gives this name to the cluster in the {rh-virtualization} environment.
... For `Pull Secret`, copy the pull secret from the `pull-secret.txt` file you downloaded earlier and paste it here. You can also get a copy of the same {cluster-manager-url-pull}.
endif::rhv[]
ifndef::rhv[]
... Paste the {cluster-manager-url-pull}.
ifdef::openshift-origin[]
This field is optional.
endif::[]
endif::rhv[]

ifdef::aws-outposts[]
. Modify the `install-config.yaml` file. The AWS Outposts installation has the following limitations which require manual modification of the `install-config.yaml` file:
@@ -501,41 +412,6 @@ ifdef::alibabacloud-custom,alibabacloud-vpc[]
the available parameters in the "Installation configuration parameters" section.
endif::alibabacloud-custom,alibabacloud-vpc[]

ifndef::restricted[]

ifdef::rhv[]
+
[NOTE]
====
If you have any intermediate CA certificates on the {rh-virtualization-engine-name}, verify that the certificates appear in the `ovirt-config.yaml` file and the `install-config.yaml` file. If they do not appear, add them as follows:

. In the `~/.ovirt/ovirt-config.yaml` file:
+
[source,yaml]
----
[ovirt_ca_bundle]: |
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <INTERMEDIATE_CA>
    -----END CERTIFICATE-----
----
. In the `install-config.yaml` file:
+
[source,yaml]
----
[additionalTrustBundle]: |
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <INTERMEDIATE_CA>
    -----END CERTIFICATE-----
----
====
endif::rhv[]
endif::restricted[]

ifdef::osp+restricted[]
. In the `install-config.yaml` file, set the value of `platform.openstack.clusterOSImage` to the image location or name. For example:
@@ -787,12 +663,6 @@ ifeval::["{context}" == "installing-openstack-user-sr-iov-kuryr"]
:!osp:
:!osp-user:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned-customizations"]
:!vsphere:
:!three-node-cluster:

@@ -30,8 +30,10 @@
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// * installing/installing_openstack/installing-openstack-installer-restricted.adoc
// * installing/installing_openstack/installing-openstack-installer.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_vmc/installing-vmc.adoc
// * installing/installing_vmc/installing-vmc-network-customizations.adoc
// * installing/installing_vmc/installing-vmc-customizations.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.adoc
@@ -190,15 +192,6 @@ endif::[]
ifeval::["{context}" == "installing-openstack-installer"]
:osp:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:custom-config:
:rhv:
:single-step:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:no-config:
:rhv:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned"]
:no-config:
:vsphere:
@@ -278,9 +271,7 @@ You can run the `create cluster` command of the installation program only once,
|
||||
|
||||
.Prerequisites
|
||||
|
||||
ifndef::osp,rhv,vsphere,nutanix[* Configure an account with the cloud platform that hosts your cluster.]
|
||||
|
||||
ifdef::rhv[* Open the `ovirt-imageio` port to the {rh-virtualization-engine-name} from the machine running the installer. By default, the port is `54322`.]
|
||||
ifndef::osp,vsphere,nutanix[* Configure an account with the cloud platform that hosts your cluster.]
|
||||
|
||||
* Obtain the {product-title} installation program and the pull secret for your
|
||||
cluster.
|
||||
@@ -326,7 +317,6 @@ When specifying the directory:
|
||||
* Verify that the directory has the `execute` permission. This permission is required to run Terraform binaries under the installation directory.
|
||||
* Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier {product-title} version.
|
||||
|
||||
ifndef::rhv[]
|
||||
. Provide values at the prompts:
|
||||
|
||||
.. Optional: Select an SSH key to use to access your cluster machines.
|
||||
@@ -443,56 +433,6 @@ ifdef::openshift-origin[]
* If you do not have a {cluster-manager-url-pull}, you can paste the pull secret for another private registry.
* If you do not need the cluster to pull images from a private registry, you can paste `{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}` as the pull secret.
endif::openshift-origin[]
endif::rhv[]
ifdef::rhv[]
. Respond to the installation program prompts.

.. Optional: For `SSH Public Key`, select a password-less public key, such as `~/.ssh/id_rsa.pub`. This key authenticates connections with the new {product-title} cluster.
+
[NOTE]
====
For production {product-title} clusters on which you want to perform installation debugging or disaster recovery, select an SSH key that your `ssh-agent` process uses.
====
.. For `Platform`, select `ovirt`.
.. For `Engine FQDN[:PORT]`, enter the fully qualified domain name (FQDN) of the {rh-virtualization} environment.
+
For example:
+
ifndef::openshift-origin[]
[source,terminal]
----
rhv-env.virtlab.example.com:443
----
endif::openshift-origin[]
ifdef::openshift-origin[]
[source,terminal]
----
$ curl -k -u admin@internal:pw123 \
    https://ovirtlab.example.com/ovirt-engine/api
----
endif::openshift-origin[]
+
.. The installation program automatically generates a CA certificate. For `Would you like to use the above certificate to connect to the {rh-virtualization-engine-name}?`, answer `y` or `N`. If you answer `N`, you must install {product-title} in insecure mode.
//TODO: Add this sentence with xref after it's OK to add xrefs: For information about insecure mode, see xref:installing-rhv-insecure-mode_installing-rhv-default[].
.. For `Engine username`, enter the user name and profile of the {rh-virtualization} administrator using this format:
+
[source,terminal]
----
<username>@<profile> <1>
----
+
<1> For `<username>`, specify the user name of an {rh-virtualization} administrator. For `<profile>`, specify the login profile, which you can get by going to the {rh-virtualization} Administration Portal login page and reviewing the *Profile* dropdown list. For example: `admin@internal`.
+
.. For `Engine password`, enter the {rh-virtualization} admin password.
.. For `Cluster`, select the {rh-virtualization} cluster for installing {product-title}.
.. For `Storage domain`, select the storage domain for installing {product-title}.
.. For `Network`, select a virtual network that has access to the {rh-virtualization} {rh-virtualization-engine-name} REST API.
.. For `Internal API Virtual IP`, enter the static IP address you set aside for the cluster's REST API.
.. For `Ingress virtual IP`, enter the static IP address you reserved for the wildcard apps domain.
.. For `Base Domain`, enter the base domain of the {product-title} cluster. If this cluster is exposed to the outside world, this must be a valid domain recognized by DNS infrastructure. For example, enter: `virtlab.example.com`
.. For `Cluster Name`, enter the name of the cluster. For example, `my-cluster`. Use the cluster name from the externally registered and resolvable DNS entries you created for the {product-title} REST API and apps domain names. The installation program also gives this name to the cluster in the {rh-virtualization} environment.
.. For `Pull Secret`, copy the pull secret from the `pull-secret.txt` file you downloaded earlier and paste it here. You can also get a copy of the same {cluster-manager-url-pull}.
endif::rhv[]

endif::no-config[]
@@ -687,15 +627,6 @@ endif::[]
ifeval::["{context}" == "installing-openstack-installer"]
:!osp:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!custom-config:
:!rhv:
:!single-step:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:!no-config:
:!rhv:
endif::[]
ifeval::["{context}" == "installing-vsphere-installer-provisioned"]
:!no-config:
:!vsphere:

@@ -12,7 +12,6 @@
// * installing/installing_ibm_z/installing-ibm-z-kvm.adoc
// * installing/installing_ibm_z/installing-ibm-power.adoc
// * installing/installing_ibm_z/installing-restricted-networks-ibm-power.adoc
// * installing/installing-rhv-restricted-network.adoc
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc

@@ -288,4 +287,4 @@ ifeval::["{context}" == "installing-openstack-installer-custom"]
endif::[]
ifeval::["{context}" == "installing-openstack-installer-kuryr"]
:!user-managed-lb:
endif::[]
endif::[]

@@ -2,7 +2,6 @@
//
// * installing/install_config/installing-restricted-networks-preparations.adoc
// * openshift_images/samples-operator-alt-registry.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-mirror-repository_{context}"]

@@ -18,8 +18,7 @@
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc
// * installing/installing_ibm_z/installing-ibm-power.adoc
// * installing/installing_ibm_z/installing-restricted-networks-ibm-power.adoc
// * installing/installing-rhv-restricted-network.adoc
// * installing/installing-rhv-user-infra.adoc

ifeval::["{context}" == "installing-vsphere"]
|
||||
:vsphere:
|
||||
@@ -65,12 +64,6 @@ ifeval::["{context}" == "installing-restricted-networks-gcp"]
|
||||
:gcp:
|
||||
:restricted:
|
||||
endif::[]
|
||||
ifeval::["{context}" == "installing-rhv-user-infra"]
|
||||
:rhv:
|
||||
endif::[]
|
||||
ifeval::["{context}" == "installing-rhv-restricted-network"]
|
||||
:rhv:
|
||||
endif::[]
|
||||
|
||||
|
||||
:_content-type: CONCEPT
|
||||
@@ -105,41 +98,6 @@ node names. Another supported approach is to always refer to hosts by their
|
||||
fully-qualified domain names in both the node objects and all DNS requests.
|
||||
endif::azure,gcp[]
|
||||
|
||||
ifdef::rhv[]
|
||||
.Firewall
|
||||
|
||||
Configure your firewall so your cluster has access to required sites.
|
||||
|
||||
See also:
|
||||
|
||||
ifndef::openshift-origin[]
|
||||
* link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/planning_and_prerequisites_guide/index#RHV-manager-firewall-requirements_RHV_planning[Red Hat Virtualization Manager firewall requirements]
|
||||
* link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/planning_and_prerequisites_guide#host-firewall-requirements_RHV_planning[Host firewall requirements]
|
||||
endif::[]
|
||||
ifdef::openshift-origin[]
|
||||
* link:https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index.html#RHV-manager-firewall-requirements_SHE_cli_deploy[oVirt Engine firewall requirements]
|
||||
* link:https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index.html#host-firewall-requirements_SHE_cli_deploy[Host firewall requirements]
|
||||
endif::[]
|
||||
|
||||
ifeval::["{context}" == "installing-rhv-user-infra"]
|
||||
.Load balancers
|
||||
|
||||
Configure one or preferably two layer-4 load balancers:
|
||||
|
||||
* Provide load balancing for ports `6443` and `22623` on the control plane and bootstrap machines. Port `6443` provides access to the Kubernetes API server and must be reachable both internally and externally. Port `22623` must be accessible to nodes within the cluster.
|
||||
|
||||
* Provide load balancing for port `443` and `80` for machines that run the Ingress router, which are usually compute nodes in the default configuration. Both ports must be accessible from within and outside the cluster.
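
The following HAProxy fragment is a minimal, hypothetical sketch of such a layer-4 configuration; it is not part of the official installation content, and all host names and IP addresses are placeholders that must match your own environment:

[source,text]
----
# Kubernetes API server; port 22623 (machine config server) follows the same pattern.
frontend api
    bind *:6443
    mode tcp
    default_backend api-be

backend api-be
    mode tcp
    balance roundrobin
    server bootstrap 172.16.0.10:6443 check
    server master0   172.16.0.11:6443 check
    server master1   172.16.0.12:6443 check
    server master2   172.16.0.13:6443 check

# Ingress router; repeat with port 80 for plain HTTP traffic.
frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-be

backend ingress-https-be
    mode tcp
    balance roundrobin
    server worker0 172.16.0.21:443 check
    server worker1 172.16.0.22:443 check
----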
endif::[]

.DNS

Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address. A sketch of these records follows this list.

* Create DNS records for `api.<cluster_name>.<base_domain>` (internal and external resolution) and `api-int.<cluster_name>.<base_domain>` (internal resolution) that point to the load balancer for the control plane machines.

* Create a DNS record for `*.apps.<cluster_name>.<base_domain>` that points to the load balancer for the Ingress router. For example, ports `443` and `80` of the compute machines.
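
The following BIND zone file fragment is a hypothetical sketch of these records, assuming the cluster name `ocp4`, the base domain `example.org`, and two load balancer addresses; all values are placeholders:

[source,text]
----
api.ocp4.example.org.      IN  A  172.16.0.201
api-int.ocp4.example.org.  IN  A  172.16.0.201
*.apps.ocp4.example.org.   IN  A  172.16.0.202
----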
endif::rhv[]

ifndef::ibm-z,azure[]
[id="installation-host-names-dhcp-user-infra_{context}"]
== Setting the cluster node hostnames through DHCP
@@ -164,12 +122,7 @@ ifndef::restricted,origin[]
In connected {product-title} environments, all nodes are required to have internet access to pull images
for platform containers and provide telemetry data to Red Hat.
====
ifeval::["{context}" == "installing-rhv-restricted-network"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-user-infra"]
:!rhv:
endif::[]

endif::restricted,origin[]

ifdef::ibm-z-kvm[]

@@ -41,9 +41,6 @@
// * installing/installing_vsphere/installing-vsphere-installer-provisioned-network-customizations.adoc
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * installing/installing_ibm_z/installing-ibm-z-kvm.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc


@@ -4,14 +4,8 @@
// * installing/installing_openstack/installing-openstack-installer-custom.adoc
// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc
// * installing/installing_openstack/installing-openstack-installer-restricted.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

ifeval::["{context}" == "installing-rhv-user-infra"]
:rhv-user-infra:
endif::[]


:_content-type: PROCEDURE
[id="installation-osp-verifying-cluster-status_{context}"]
@@ -23,19 +17,11 @@ You can verify your {product-title} cluster's status during or after installatio

. In the cluster environment, export the administrator's kubeconfig file:
+
ifdef::rhv-user-infra[]
[source,terminal]
----
$ export KUBECONFIG=$ASSETS_DIR/auth/kubeconfig
----
endif::rhv-user-infra[]
ifndef::rhv-user-infra[]
[source,terminal]
----
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig <1>
----
<1> For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
endif::rhv-user-infra[]
+
The `kubeconfig` file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server.

@@ -66,7 +52,3 @@ $ oc get clusteroperator
----
$ oc get pods -A
----

ifeval::["{context}" == "installing-rhv-user-infra"]
:!rhv-user-infra:
endif::[]

@@ -1,187 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

[id="installation-rhv-about-inventory-yml_{context}"]
= The inventory.yml file

You use the `inventory.yml` file to define and create elements of the {product-title} cluster you are installing. This includes elements such as the {op-system-first} image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use `inventory.yml` to destroy the cluster.

The following `inventory.yml` example shows you the parameters and their default values. The quantities and numbers in these default values meet the requirements for running a production {product-title} cluster in a {rh-virtualization} environment.

.Example `inventory.yml` file
[source,yaml]
----
---
all:
  vars:

    ovirt_cluster: "Default"
    ocp:
      assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}"
      ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml"

    # ---
    # {op-system} section
    # ---
    rhcos:
      image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz"
      local_cmp_image_path: "/tmp/rhcos.qcow2.gz"
      local_image_path: "/tmp/rhcos.qcow2"

    # ---
    # Profiles section
    # ---
    control_plane:
      cluster: "{{ ovirt_cluster }}"
      memory: 16GiB
      sockets: 4
      cores: 1
      template: rhcos_tpl
      operating_system: "rhcos_x64"
      type: high_performance
      graphical_console:
        headless_mode: false
        protocol:
        - spice
        - vnc
      disks:
      - size: 120GiB
        name: os
        interface: virtio_scsi
        storage_domain: depot_nvme
      nics:
      - name: nic1
        network: lab
        profile: lab

    compute:
      cluster: "{{ ovirt_cluster }}"
      memory: 16GiB
      sockets: 4
      cores: 1
      template: worker_rhcos_tpl
      operating_system: "rhcos_x64"
      type: high_performance
      graphical_console:
        headless_mode: false
        protocol:
        - spice
        - vnc
      disks:
      - size: 120GiB
        name: os
        interface: virtio_scsi
        storage_domain: depot_nvme
      nics:
      - name: nic1
        network: lab
        profile: lab

    # ---
    # Virtual machines section
    # ---
    vms:
    - name: "{{ metadata.infraID }}-bootstrap"
      ocp_type: bootstrap
      profile: "{{ control_plane }}"
      type: server
    - name: "{{ metadata.infraID }}-master0"
      ocp_type: master
      profile: "{{ control_plane }}"
    - name: "{{ metadata.infraID }}-master1"
      ocp_type: master
      profile: "{{ control_plane }}"
    - name: "{{ metadata.infraID }}-master2"
      ocp_type: master
      profile: "{{ control_plane }}"
    - name: "{{ metadata.infraID }}-worker0"
      ocp_type: worker
      profile: "{{ compute }}"
    - name: "{{ metadata.infraID }}-worker1"
      ocp_type: worker
      profile: "{{ compute }}"
    - name: "{{ metadata.infraID }}-worker2"
      ocp_type: worker
      profile: "{{ compute }}"
----

[IMPORTANT]
====
Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value.
====

.General section

* `ovirt_cluster`: Enter the name of an existing {rh-virtualization} cluster in which to install the {product-title} cluster.
* `ocp.assets_dir`: The path of a directory the `openshift-install` installation program creates to store the files that it generates.
* `ocp.ovirt_config_path`: The path of the `ovirt-config.yaml` file the installation program generates, for example, `$HOME/.ovirt/ovirt-config.yaml`. This file contains the credentials required to interact with the REST API of the {rh-virtualization-engine-name}. A hypothetical sketch of this file follows this list.
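
The following is a hypothetical sketch of the generated `ovirt-config.yaml` contents, shown only for orientation; the exact key set can vary by installer version, and all values are placeholders:

[source,yaml]
----
ovirt_url: https://rhv-env.virtlab.example.com/ovirt-engine/api
ovirt_username: admin@internal
ovirt_password: <password>
ovirt_insecure: false
ovirt_ca_bundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA>
  -----END CERTIFICATE-----
----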

.{op-system-first} section

* `image_url`: Enter the URL of the {op-system} image you specified for download.
* `local_cmp_image_path`: The path of a local download directory for the compressed {op-system} image.
* `local_image_path`: The path of a local directory for the extracted {op-system} image.

.Profiles section

This section consists of two profiles:

* `control_plane`: The profile of the bootstrap and control plane nodes.
* `compute`: The profile of worker nodes in the compute plane.

These profiles have the following parameters. The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements.

* `cluster`: The value gets the cluster name from `ovirt_cluster` in the General Section.
* `memory`: The amount of memory, in GiB, for the virtual machine.
* `sockets`: The number of sockets for the virtual machine.
* `cores`: The number of cores for the virtual machine.
* `template`: The name of the virtual machine template. If you plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster.
* `operating_system`: The type of guest operating system in the virtual machine. With oVirt/{rh-virtualization} version 4.4, this value must be `rhcos_x64` so the value of `Ignition script` can be passed to the VM.
* `type`: Enter `server` as the type of the virtual machine.
+
[IMPORTANT]
====
You must change the value of the `type` parameter from `high_performance` to `server`.
====
* `disks`: The disk specifications. The `control_plane` and `compute` nodes can have different storage domains.
* `size`: The minimum disk size.
* `name`: Enter the name of a disk connected to the target cluster in {rh-virtualization}.
* `interface`: Enter the interface type of the disk you specified.
* `storage_domain`: Enter the storage domain of the disk you specified.
* `nics`: Enter the `name` and `network` the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/{rh-virtualization} MAC pool.

.Virtual machines section

This final section, `vms`, defines the virtual machines you plan to create and deploy in the cluster. By default, it provides the minimum number of control plane and worker nodes for a production environment.

`vms` contains three required elements:

* `name`: The name of the virtual machine. In this case, `metadata.infraID` prepends the virtual machine name with the infrastructure ID from the `metadata.json` file.
* `ocp_type`: The role of the virtual machine in the {product-title} cluster. Possible values are `bootstrap`, `master`, `worker`.
* `profile`: The name of the profile from which each virtual machine inherits specifications. Possible values in this example are `control_plane` or `compute`.
+
You can override a value that a virtual machine inherits from its profile. To do this, you add the name of the profile attribute to the virtual machine in `inventory.yml` and assign it an overriding value, as the sketch after this paragraph shows. To see an example of this, examine the `name: "{{ metadata.infraID }}-bootstrap"` virtual machine in the preceding `inventory.yml` example: It has a `type` attribute whose value, `server`, overrides the value of the `type` attribute this virtual machine would otherwise inherit from the `control_plane` profile.
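+
For instance, a minimal sketch of such an override, repeating the bootstrap entry from the preceding example; the trailing comment marks the overriding attribute:
+
[source,yaml]
----
vms:
- name: "{{ metadata.infraID }}-bootstrap"
  ocp_type: bootstrap
  profile: "{{ control_plane }}"
  type: server  # Overrides the type value inherited from control_plane.
----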

// TBD https://issues.redhat.com/browse/OCPRHV-414
// Consider documenting *additional* optional attributes in https://github.com/oVirt/ovirt-ansible-vm-infra that aren't already covered here. Hypothetically, it seems like a user could add these attributes to a profile and then want to override them in the inventory.yml.

// TBD - Consider adding a topic related to the following: Configure DHCP to assign permanent IP addresses to the virtual machines, and consider using the `mac_address` attribute to assign a fixed MAC address to each virtual machine. However, avoid using the same MAC address if you are deploying more than one cluster. We should consider creating a new topic to document this/these scenario(s).

.Metadata variables

For virtual machines, `metadata.infraID` prepends the name of the virtual machine with the infrastructure ID from the `metadata.json` file you create when you build the Ignition files.

The playbooks use the following code to read `infraID` from the specific file located in the `ocp.assets_dir`.

[source,yaml]
----
---
- name: include metadata.json vars
  include_vars:
    file: "{{ ocp.assets_dir }}/metadata.json"
    name: metadata

...
----
@@ -1,46 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-building-ignition-files_{context}"]
= Building the Ignition files

To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a {op-system-first} machine, `initramfs`, which fetches the Ignition files and performs the configurations needed to create a node.

In addition to the Ignition files, the installation program generates the following:

* An `auth` directory that contains the admin credentials for connecting to the cluster with the `oc` and `kubectl` utilities.
* A `metadata.json` file that contains information such as the {product-title} cluster name, cluster ID, and infrastructure ID for the current installation.

The Ansible playbooks for this installation process use the value of `infraID` as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/{rh-virtualization} cluster.

[NOTE]
====
Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish.
====

.Procedure

. To build the Ignition files, enter:
+
[source,terminal]
----
$ openshift-install create ignition-configs --dir $ASSETS_DIR
----
+
.Example output
[source,terminal]
----
$ tree
.
└── wrk
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
----
@@ -1,40 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-creating-bootstrap-machine_{context}"]
= Creating the bootstrap machine

You create a bootstrap machine by running the `bootstrap.yml` playbook. This playbook starts the bootstrap virtual machine, and passes it the `bootstrap.ign` Ignition file from the assets directory. The bootstrap node configures itself so it can serve Ignition files to the control plane nodes.

To monitor the bootstrap process, you use the console in the {rh-virtualization} Administration Portal or connect to the virtual machine by using SSH.

.Procedure

. Create the bootstrap machine:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml bootstrap.yml
----

. Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace `<bootstrap_ip>` with the bootstrap node IP address. To use SSH, enter:
+
[source,terminal]
----
$ ssh core@<bootstrap_ip>
----

. Collect `bootkube.service` journald unit logs for the release image service from the bootstrap node:
+
[source,terminal]
----
[core@ocp4-lk6b4-bootstrap ~]$ journalctl -b -f -u release-image.service -u bootkube.service
----
+
[NOTE]
====
The `bootkube.service` log on the bootstrap node outputs etcd `connection refused` errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop.
====
@@ -1,41 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-creating-control-plane-nodes_{context}"]
= Creating the control plane nodes

You create the control plane nodes by running the `masters.yml` playbook. This playbook passes the `master.ign` Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane node to get the Ignition from a URL such as `https://api-int.ocp4.example.org:22623/config/master`. The port number in this URL is managed by the load balancer, and is accessible only inside the cluster.

.Procedure

. Create the control plane nodes:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml masters.yml
----

. While the playbook creates your control plane, monitor the bootstrapping process:
+
[source,terminal]
----
$ openshift-install wait-for bootstrap-complete --dir $ASSETS_DIR
----
+
.Example output
[source,terminal]
----
INFO API v1.26.0 up
INFO Waiting up to 40m0s for bootstrapping to complete...
----

. When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output.
+
.Example output
[source,terminal]
----
INFO It is now safe to remove the bootstrap resources
----
@@ -1,76 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-creating-install-config-file_{context}"]
= Creating the install config file

You create an installation configuration file by running the installation program, `openshift-install`, and responding to its prompts with information you specified or gathered earlier.

When you finish responding to the prompts, the installation program creates an initial version of the `install-config.yaml` file in the assets directory you specified earlier, for example, `./wrk/install-config.yaml`.

The installation program also creates a file, `$HOME/.ovirt/ovirt-config.yaml`, that contains all the connection parameters that are required to reach the {rh-virtualization-engine-name} and use its REST API.

[NOTE]
====
The installation process does not use values you supply for some parameters, such as `Internal API virtual IP` and `Ingress virtual IP`, because you have already configured them in your infrastructure DNS.

The installation process also uses the values you supply for parameters in `inventory.yml`, such as the ones for `oVirt cluster`, `oVirt storage`, and `oVirt network`, and applies a script that removes or replaces these same values in `install-config.yaml` with the virtual IPs mentioned previously.
====
//For details, see xref:set-platform-to-none[].

.Procedure

. Run the installation program:
+
[source,terminal]
----
$ openshift-install create install-config --dir $ASSETS_DIR
----

. Respond to the installation program's prompts with information about your system.
+
ifndef::openshift-origin[]
.Example output
[source,terminal]
----
? SSH Public Key /home/user/.ssh/id_dsa.pub
? Platform <ovirt>
? Engine FQDN[:PORT] [? for help] <engine.fqdn>
? Enter ovirt-engine username <ocpadmin@internal>
? Enter password <******>
? oVirt cluster <cluster>
? oVirt storage <storage>
? oVirt network <net>
? Internal API virtual IP <172.16.0.252>
? Ingress virtual IP <172.16.0.251>
? Base Domain <example.org>
? Cluster Name <ocp4>
? Pull Secret [? for help] <********>
----
endif::openshift-origin[]
ifdef::openshift-origin[]
.Example output
[source,terminal]
----
? SSH Public Key /home/user/.ssh/id_dsa.pub
? Platform <ovirt>
? Engine FQDN[:PORT] [? for help] <engine.fqdn>
? Enter ovirt-engine username <ocpadmin@internal>
? Enter password <******>
? oVirt cluster <cluster>
? oVirt storage <storage>
? oVirt network <net>
? Internal API virtual IP <172.16.0.252>
? Ingress virtual IP <172.16.0.251>
? Base Domain <example.org>
? Cluster Name <ocp4>
? Pull Secret [? for help] <********>
----
endif::openshift-origin[]

For `Internal API virtual IP` and `Ingress virtual IP`, supply the IP addresses you specified when you configured the DNS service.

Together, the values you enter for the `Cluster Name` and `Base Domain` prompts form the FQDN portion of URLs for the REST API and any applications you create, such as `\https://api.ocp4.example.org:6443/` and `\https://console-openshift-console.apps.ocp4.example.org`.

You can get the {cluster-manager-url-pull}.
@@ -1,44 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-creating-templates-virtual-machines_{context}"]
= Creating templates and virtual machines

After confirming the variables in the `inventory.yml` file, you run the first Ansible provisioning playbook, `create-templates-and-vms.yml`.

This playbook uses the connection parameters for the {rh-virtualization} {rh-virtualization-engine-name} from `$HOME/.ovirt/ovirt-config.yaml` and reads `metadata.json` in the assets directory.

If a local {op-system-first} image is not already present, the playbook downloads one from the URL you specified for `image_url` in `inventory.yml`. It extracts the image and uploads it to {rh-virtualization} to create templates.

The playbook creates a template based on the `control_plane` and `compute` profiles in the `inventory.yml` file. If these profiles have different names, it creates two templates.

When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines.
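
For example, the following Python fragment is a hypothetical sketch, not part of the official playbooks, of reading those MAC addresses with the `ovirt-engine-sdk-python` package (`ovirtsdk4`); the connection values and the infrastructure ID prefix are placeholders that must match your own environment:

[source,python]
----
import ovirtsdk4 as sdk

# Placeholder connection details; take the real values from ~/.ovirt/ovirt-config.yaml.
connection = sdk.Connection(
    url='https://rhv-env.virtlab.example.com/ovirt-engine/api',
    username='admin@internal',
    password='<password>',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
# Assumes the infrastructure ID prefix from metadata.json, for example "ocp4-lk6b4".
for vm in vms_service.list(search='name=ocp4-lk6b4-*'):
    nics = vms_service.vm_service(vm.id).nics_service().list()
    for nic in nics:
        # nic.mac.address is the value to pin in your DHCP configuration.
        print(vm.name, nic.mac.address)

connection.close()
----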

.Procedure

. In `inventory.yml`, under the `control_plane` and `compute` variables, change both instances of `type: high_performance` to `type: server`.

. Optional: If you plan to perform multiple installations to the same cluster, create different templates for each {product-title} installation. In the `inventory.yml` file, prepend the value of `template` with `infraID`. For example:
+
[source,yaml]
----
control_plane:
  cluster: "{{ ovirt_cluster }}"
  memory: 16GiB
  sockets: 4
  cores: 1
  template: "{{ metadata.infraID }}-rhcos_tpl"
  operating_system: "rhcos_x64"
  ...
----

. Create the templates and virtual machines:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml create-templates-and-vms.yml
----
@@ -1,106 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-creating-worker-nodes-completing-installation_{context}"]
= Creating the worker nodes and completing the installation

Creating worker nodes is similar to creating control plane nodes. However, worker nodes do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests).

After approving the first requests, you continue approving CSRs until all of the worker nodes are approved. When you complete this process, the worker nodes become `Ready` and can have pods scheduled to run on them.
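
After you have verified the pending requests, a one-liner such as the following hypothetical sketch approves everything that is still pending in a single pass; use it only when you are sure that every pending CSR belongs to your cluster:

[source,terminal]
----
$ oc get csr -o name | xargs oc adm certificate approve
----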

Finally, monitor the command line to see when the installation process completes.

.Procedure

. Create the worker nodes:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml workers.yml
----

. To list all of the CSRs, enter:
+
[source,terminal]
----
$ oc get csr -A
----
+
Eventually, this command displays one CSR per node. For example:
+
.Example output
[source,terminal]
----
NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-2lnxd   63m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master0.ocp4.example.org                             Approved,Issued
csr-hff4q   64m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-hsn96   60m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master2.ocp4.example.org                             Approved,Issued
csr-m724n   6m2s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-p4dz2   60m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-t9vfj   60m    kubernetes.io/kubelet-serving                 system:node:ocp4-lk6b4-master1.ocp4.example.org                             Approved,Issued
csr-tggtr   61m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-wcbrf   7m6s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
----

. To filter the list and see only pending CSRs, enter:
+
[source,terminal]
----
$ watch "oc get csr -A | grep pending -i"
----
+
This command refreshes the output every two seconds and displays only pending CSRs. For example:
+
.Example output
[source,terminal]
----
Every 2.0s: oc get csr -A | grep pending -i

csr-m724n   10m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-wcbrf   11m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
----

. Inspect each pending request. For example:
+
[source,terminal]
----
$ oc describe csr csr-m724n
----
+
.Example output
[source,terminal]
----
Name:               csr-m724n
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Sun, 19 Jul 2020 15:59:37 +0200
Requesting User:    system:serviceaccount:openshift-machine-config-operator:node-bootstrapper
Signer:             kubernetes.io/kube-apiserver-client-kubelet
Status:             Pending
Subject:
  Common Name:    system:node:ocp4-lk6b4-worker1.ocp4.example.org
  Serial Number:
  Organization:   system:nodes
Events:  <none>
----

. If the CSR information is correct, approve the request:
+
[source,terminal]
----
$ oc adm certificate approve csr-m724n
----

. Wait for the installation process to finish:
+
[source,terminal]
----
$ openshift-install wait-for install-complete --dir $ASSETS_DIR --log-level debug
----
+
When the installation completes, the command line displays the URL of the {product-title} web console and the administrator user name and password.
@@ -1,73 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-customizing-install-config-yaml_{context}"]
= Customizing install-config.yaml

Here, you use three Python scripts to override some of the installation program's default behaviors:

* By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes.

* By default, the installation program sets the IP range of the machine network for nodes. To override this default behavior, you set the IP range to match your infrastructure.

* By default, the installation program sets the platform to `ovirt`. However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from `install-config.yaml` and change the platform to `none`. Instead, you use `inventory.yml` to specify all of the required settings.

[NOTE]
====
These snippets work with Python 3 and Python 2.
====

// TBD - https://issues.redhat.com/browse/OCPRHV-414
// Please discuss with engineering whether these three scripts can/should be combined into a single script.
// Also consider combining this topic with other customization topics.

.Procedure
//TBD - Should we combine these into one script?

. Set the number of compute nodes to zero replicas:
+
[source,python]
----
$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
conf["compute"][0]["replicas"] = 0
open(path, "w").write(yaml.dump(conf, default_flow_style=False))'
----

. Set the IP range of the machine network. For example, to set the range to `172.16.0.0/16`, enter:
+
[source,python]
----
$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16"
open(path, "w").write(yaml.dump(conf, default_flow_style=False))'
----

. Remove the `ovirt` section and change the platform to `none`:
+
[source,python]
----
$ python3 -c 'import os, yaml
path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"]
conf = yaml.safe_load(open(path))
platform = conf["platform"]
del platform["ovirt"]
platform["none"] = {}
open(path, "w").write(yaml.dump(conf, default_flow_style=False))'
----
+
[WARNING]
====
Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to `none`, allowing {product-title} to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as xref:../../installing/installing_platform_agnostic/installing-platform-agnostic.adoc#installing-platform-agnostic[installing a cluster on any platform], and has the following limitations:

. There will be no cluster provider, so you must manually add each machine, and there will be no node scaling capabilities.
. The oVirt CSI driver will not be installed, and there will be no CSI capabilities.
====
@@ -1,25 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc

[id="installation-rhv-destroying-cluster_{context}"]
= Destroying the cluster

When you are finished using the cluster, you can destroy it and remove related configurations from your infrastructure.

.Prerequisites
* You preserved the original files you used to create the cluster.

.Procedure

. Optional: To remove the cluster, enter:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml \
    retire-bootstrap.yml \
    retire-masters.yml \
    retire-workers.yml
----

. Remove any previous configurations you added to DNS, load balancers, and any other infrastructure.
@@ -1,43 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc

:_content-type: PROCEDURE
[id="installation-rhv-downloading-ansible-playbooks_{context}"]
= Downloading the Ansible playbooks

Download the Ansible playbooks for installing {product-title} version {product-version} on {rh-virtualization}.

.Procedure

* On your installation machine, run the following commands:
+
[source,terminal,subs=attributes+]
----
$ mkdir playbooks
----
+
[source,terminal,subs=attributes+]
----
$ cd playbooks
----
+
[source,terminal,subs=attributes+]
----
$ xargs -n 1 curl -O <<< '
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/bootstrap.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/common-auth.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/create-templates-and-vms.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/inventory.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/masters.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/retire-bootstrap.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/retire-masters.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/retire-workers.yml
        https://raw.githubusercontent.com/openshift/installer/release-{product-version}/upi/ovirt/workers.yml'
----

.Next steps

* After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the `inventory.yml` file before you create an installation configuration file by running the installation program. A sketch of the environment variable step follows this list.
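
For example, a minimal sketch, assuming you choose `./wrk` inside the playbooks directory as the assets directory; the directory name is a placeholder:

[source,terminal]
----
$ mkdir wrk
$ export ASSETS_DIR=./wrk
----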
@@ -1,89 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-editing-mantifests_{context}"]
= Generating manifest files

Use the installation program to generate a set of manifest files in the assets directory.

The command to generate the manifest files displays a warning message before it consumes the `install-config.yaml` file.

If you plan to reuse the `install-config.yaml` file, create a backup copy of it before you generate the manifest files.

// TBD There isn't a clear reason to generate the manifest files. Is this step necessary? It seems like normally the user only does this if they need to edit the files to customize something. Unfortunately, the lead developer on this project has left the organization. Looking at similar commands/topics in the openshift-docs, it seems like this step is only taken when the user needs to perform a specific customization.

.Procedure

. Optional: Create a backup copy of the `install-config.yaml` file:
+
[source,terminal]
----
$ cp install-config.yaml install-config.yaml.backup
----

. Generate a set of manifests in your assets directory:
+
[source,terminal]
----
$ openshift-install create manifests --dir $ASSETS_DIR
----
+
This command displays the following messages.
+
.Example output
[source,terminal]
----
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
----
+
The command generates the following manifest files:
+
.Example output
[source,terminal]
----
$ tree
.
└── wrk
    ├── manifests
    │   ├── 04-openshift-machine-config-operator.yaml
    │   ├── cluster-config.yaml
    │   ├── cluster-dns-02-config.yml
    │   ├── cluster-infrastructure-02-config.yml
    │   ├── cluster-ingress-02-config.yml
    │   ├── cluster-network-01-crd.yml
    │   ├── cluster-network-02-config.yml
    │   ├── cluster-proxy-01-config.yaml
    │   ├── cluster-scheduler-02-config.yml
    │   ├── cvo-overrides.yaml
    │   ├── etcd-ca-bundle-configmap.yaml
    │   ├── etcd-client-secret.yaml
    │   ├── etcd-host-service-endpoints.yaml
    │   ├── etcd-host-service.yaml
    │   ├── etcd-metric-client-secret.yaml
    │   ├── etcd-metric-serving-ca-configmap.yaml
    │   ├── etcd-metric-signer-secret.yaml
    │   ├── etcd-namespace.yaml
    │   ├── etcd-service.yaml
    │   ├── etcd-serving-ca-configmap.yaml
    │   ├── etcd-signer-secret.yaml
    │   ├── kube-cloud-config.yaml
    │   ├── kube-system-configmap-root-ca.yaml
    │   ├── machine-config-server-tls-secret.yaml
    │   └── openshift-config-secret-pull-secret.yaml
    └── openshift
        ├── 99_kubeadmin-password-secret.yaml
        ├── 99_openshift-cluster-api_master-user-data-secret.yaml
        ├── 99_openshift-cluster-api_worker-user-data-secret.yaml
        ├── 99_openshift-machineconfig_99-master-ssh.yaml
        ├── 99_openshift-machineconfig_99-worker-ssh.yaml
        └── openshift-install-manifests.yaml
----

.Next steps

* Make control plane nodes non-schedulable.
@@ -1,30 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-making-control-plane-nodes-non-schedulable_{context}"]
= Making control-plane nodes non-schedulable

// TBD - https://issues.redhat.com/browse/OCPRHV-414
// Here's my version of the intro text from https://github.com/openshift/installer/blob/master/docs/user/ovirt/install_upi.md#set-control-plane-nodes-unschedulable . This information is confusing. Please discuss with engineering and provide a good concise explanation of why the user is doing this.

// "Earlier, when you set the compute `replicas` to zero, it also made control-plane nodes schedulable, which is something you do not want at this stage in the process."
//
// "NOTE: Router pods can also run on control-plane nodes, but there are some Kubernetes limitations that prevent the ingress load balancer from reaching those pods."

Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable.

.Procedure

. To make the control plane nodes non-schedulable, enter:
+
[source,terminal]
----
$ python3 -c 'import os, yaml
path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"]
data = yaml.safe_load(open(path))
data["spec"]["mastersSchedulable"] = False
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
----
@@ -1,22 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-removing-bootstrap-machine_{context}"]
= Removing the bootstrap machine

After the `wait-for` command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up compute, memory, and storage resources. Also, remove settings for the bootstrap machine from the load balancer directives.

.Procedure

. To remove the bootstrap machine from the cluster, enter:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml retire-bootstrap.yml
----

. Remove settings for the bootstrap machine from the load balancer directives, as sketched after this procedure.
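
For example, if your layer-4 load balancer is HAProxy, the following hypothetical fragment shows the bootstrap entry commented out after bootstrapping completes; the server names and addresses are placeholders for your own configuration:

[source,text]
----
backend api-be
    mode tcp
    balance roundrobin
    # Remove or comment out the bootstrap entry once bootstrapping completes:
    # server bootstrap 172.16.0.10:6443 check
    server master0 172.16.0.11:6443 check
    server master1 172.16.0.12:6443 check
    server master2 172.16.0.13:6443 check
----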
@@ -1,27 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc

:_content-type: PROCEDURE
[id="installation-rhv-removing-cluster-upi_{context}"]
= Removing a cluster that uses user-provisioned infrastructure

When you are finished using the cluster, you can remove a cluster that uses user-provisioned infrastructure from your cloud.

.Prerequisites

* Have the original playbook files, assets directory and files, and `$ASSETS_DIR` environment variable that you used to install the cluster. Typically, you can achieve this by using the same computer you used when you installed the cluster.

.Procedure

. To remove the cluster, enter:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml \
    retire-bootstrap.yml \
    retire-masters.yml \
    retire-workers.yml
----

. Remove any configurations you added to DNS, load balancers, and any other infrastructure for this cluster.
@@ -1,42 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installation-rhv-specifying-rhcos-image-settings_{context}"]
= Specifying the {op-system} image settings

Update the {op-system-first} image settings of the `inventory.yml` file. Later, when you run one of the playbooks, it downloads a compressed {op-system} image from the `image_url` URL to the `local_cmp_image_path` directory. The playbook then uncompresses the image to the `local_image_path` directory and uses it to create oVirt/{rh-virtualization} templates.

// TBD - https://issues.redhat.com/browse/OCPRHV-414
// Consider combining this topic with another one after we've resolved the issue of getting the files.

.Procedure

ifndef::openshift-origin[]
. Locate the {op-system} image download page for the version of {product-title} you are installing, such as link:https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/[Index of /pub/openshift-v4/dependencies/rhcos/latest/latest].

. From that download page, copy the URL of an OpenStack `qcow2` image, such as `\https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz`.

. Edit the `inventory.yml` playbook you downloaded earlier. In it, paste the URL as the value for `image_url`. For example:
+
[source,yaml]
----
rhcos:
  image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz"
----
endif::openshift-origin[]
ifdef::openshift-origin[]
. Locate the {op-system} image download page, such as link:https://getfedora.org/coreos/download?tab=cloud_operators&stream=stable[Download Fedora CoreOS].

. From that download page, copy the URL of an OpenStack `qcow2` image, such as `\https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/34.20210611.3.0/x86_64/fedora-coreos-34.20210611.3.0-openstack.x86_64.qcow2.xz`.

. Edit the `inventory.yml` playbook you downloaded earlier. In its `rhcos` stanza, paste the URL as the value for `image_url`. For example:
+
[source,yaml]
----
rhcos:
  image_url: "https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/34.20210611.3.0/x86_64/fedora-coreos-34.20210611.3.0-openstack.x86_64.qcow2.xz"
----
endif::openshift-origin[]
@@ -7,7 +7,7 @@
// * installing/installing_ibm_cloud_public/uninstalling-cluster-ibm-cloud.adoc
// * installing/installing_ibm_powervs/uninstalling-cluster-ibm-power-vs.adoc
// * installing/installing_osp/uninstalling-cluster-openstack.adoc
// * installing/installing_rhv/uninstalling-cluster-rhv.adoc
// * installing/installing_vmc/uninstalling-cluster-vmc.adoc
// * installing/installing_vsphere/uninstalling-cluster-vsphere-installer-provisioned.adoc
// * installing/installing_nutanix/uninstalling-cluster-nutanix.adoc

@@ -9,7 +9,9 @@
// * installing/installing_ibm_z/installing-ibm-z.adoc
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
// * installing/installing_platform_agnostic/installing-platform-agnostic.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc-user-infra.adoc
// * installing/installing_vmc/installing-vmc-network-customizations-user-infra.adoc
// * installing/installing_vmc/installing-vmc-user-infra.adoc
// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc
// * installing/installing_vsphere/installing-vsphere-network-customizations.adoc
// * installing/installing_vsphere/installing-vsphere.adoc

@@ -1,27 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-customizations.adoc

:_content-type: PROCEDURE
[id="installing-rhv-accessing-ocp-web-console_{context}"]
= Accessing the {product-title} web console on {rh-virtualization}

After the {product-title} cluster initializes, you can log in to the {product-title} web console.

.Procedure
. Optional: In the {rh-virtualization-first} Administration Portal, open *Compute* -> *Cluster*.
. Verify that the installation program creates the virtual machines.
. Return to the command line where the installation program is running. When the installation program finishes, it displays the user name and temporary password for logging in to the {product-title} web console.
. In a browser, open the URL of the {product-title} web console. The URL uses this format:
+
----
console-openshift-console.apps.<clustername>.<basedomain> <1>
----
<1> For `<clustername>.<basedomain>`, specify the cluster name and base domain.
+
For example:
+
----
console-openshift-console.apps.my-cluster.virtlab.example.com
----
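If you have the OpenShift CLI (`oc`) installed and are logged in to the cluster, you can also print the console URL directly. This is an optional convenience, not part of the original procedure:

[source,terminal]
----
$ oc whoami --show-console
----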
@@ -1,242 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc

[id="installing-rhv-example-install-config-yaml_{context}"]
= Example `install-config.yaml` files for {rh-virtualization-first}

You can customize the {product-title} cluster the installation program creates by changing the parameters and parameter values in the `install-config.yaml` file.

The following examples are specific to installing {product-title} on {rh-virtualization}.

The `install-config.yaml` file is located in `<installation_directory>`, which you specified when you ran the following command:

[source,terminal]
----
$ ./openshift-install create install-config --dir <installation_directory>
----

[NOTE]
====
* These example files are provided for reference only. You must obtain your `install-config.yaml` file by using the installation program.
* Changing the `install-config.yaml` file can increase the resources your cluster requires. Verify that your {rh-virtualization} environment has those additional resources. Otherwise, the installation or cluster will fail.
====
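After you edit the file, a quick syntax check can catch YAML indentation mistakes before you run the installation. This is an optional sketch and assumes Python 3 with the PyYAML module is available on the installation host:

[source,terminal]
----
$ python3 -c 'import yaml; yaml.safe_load(open("<installation_directory>/install-config.yaml"))' \
    && echo "valid YAML"
----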
[discrete]
== Example default `install-config.yaml` file

[source,yaml]
----
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      sparse: false <1>
      format: raw <2>
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    ovirt:
      sparse: false <1>
      format: raw <2>
  replicas: 3
metadata:
  creationTimestamp: null
  name: my-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes <3>
  serviceNetwork:
  - 172.30.0.0/16
platform:
  ovirt:
    api_vips:
    - 10.0.0.10
    ingress_vips:
    - 10.0.0.11
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ovirtmgmt
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
publish: External
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
----
<1> Setting this option to `false` enables preallocation of disks. The default is `true`. Setting `sparse` to `true` with `format` set to `raw` is not available for block storage domains. The `raw` format writes the entire virtual disk to the underlying physical disk.
+
[NOTE]
====
Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage.
====
<2> Can be set to `cow` or `raw`. The default is `cow`. The `cow` format is optimized for virtual machines.
<3> The cluster network plugin to install. The supported values are `OVNKubernetes` and `OpenShiftSDN`. The default value is `OVNKubernetes`.

[NOTE]
====
In {product-title} 4.12 and later, the `api_vip` and `ingress_vip` configuration settings are deprecated. Instead, use a list format to enter values in the `api_vips` and `ingress_vips` configuration settings.
====

[discrete]
== Example minimal `install-config.yaml` file

[source,yaml]
----
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  ovirt:
    api_vips:
    - 10.46.8.230
    ingress_vips:
    - 10.46.8.232
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ovirtmgmt
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
----

[NOTE]
====
In {product-title} 4.12 and later, the `api_vip` and `ingress_vip` configuration settings are deprecated. Instead, use a list format to enter values in the `api_vips` and `ingress_vips` configuration settings.
====

[discrete]
== Example custom machine pools in an `install-config.yaml` file

[source,yaml]
----
apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  platform:
    ovirt:
      cpu:
        cores: 4
        sockets: 2
      memoryMB: 65536
      osDisk:
        sizeGB: 100
      vmType: server
  replicas: 3
compute:
- name: worker
  platform:
    ovirt:
      cpu:
        cores: 4
        sockets: 4
      memoryMB: 65536
      osDisk:
        sizeGB: 200
      vmType: server
  replicas: 5
metadata:
  name: test-cluster
platform:
  ovirt:
    api_vips:
    - 10.46.8.230
    ingress_vips:
    - 10.46.8.232
    ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b
    ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468
    ovirt_network_name: ovirtmgmt
    vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
----

[NOTE]
====
In {product-title} 4.12 and later, the `api_vip` and `ingress_vip` configuration settings are deprecated. Instead, use a list format to enter values in the `api_vips` and `ingress_vips` configuration settings.
====

[discrete]
== Example non-enforcing affinity group

Adding a non-enforcing affinity group is recommended so that, where possible, the control plane and worker machines are distributed across as many hosts of the cluster as possible.

[source,yaml]
----
platform:
  ovirt:
    affinityGroups:
    - description: AffinityGroup to place each compute machine on a separate host
      enforcing: true
      name: compute
      priority: 3
    - description: AffinityGroup to place each control plane machine on a separate host
      enforcing: true
      name: controlplane
      priority: 5
    - description: AffinityGroup to place worker nodes and control plane nodes on separate hosts
      enforcing: false
      name: openshift
      priority: 5
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames:
      - compute
      - openshift
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    ovirt:
      affinityGroupsNames:
      - controlplane
      - openshift
  replicas: 3
----

[discrete]
== Example removing all affinity groups for a non-production lab setup

For non-production lab setups, you must remove all affinity groups to concentrate the {product-title} cluster on the few hosts you have.

[source,yaml]
----
platform:
  ovirt:
    affinityGroups: []
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames: []
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    ovirt:
      affinityGroupsNames: []
  replicas: 3
----
@@ -1,52 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-user-infra.adoc

:_content-type: PROCEDURE
[id="installing-rhv-insecure-mode_{context}"]
= Installing {product-title} on {rh-virtualization} in insecure mode

By default, the installer creates a CA certificate, prompts you for confirmation, and stores the certificate to use during installation. You do not need to create or install one manually.

Although it is not recommended, you can override this functionality and install {product-title} without verifying a certificate by installing {product-title} on {rh-virtualization} in *insecure* mode.

[WARNING]
====
Installing in *insecure* mode is not recommended, because it enables a potential attacker to perform a Man-in-the-Middle attack and capture sensitive credentials on the network.
====

.Procedure

. Create a file named `~/.ovirt/ovirt-config.yaml`.

. Add the following content to `ovirt-config.yaml`:
+
ifndef::openshift-origin[]
[source,yaml]
----
ovirt_url: https://ovirt.example.com/ovirt-engine/api <1>
ovirt_fqdn: ovirt.example.com <2>
ovirt_pem_url: ""
ovirt_username: ocpadmin@internal
ovirt_password: super-secret-password <3>
ovirt_insecure: true
----
endif::openshift-origin[]
ifdef::openshift-origin[]
[source,yaml]
----
ovirt_url: https://ovirt.example.com/ovirt-engine/api <1>
ovirt_fqdn: ovirt.example.com <2>
ovirt_pem_url: ""
ovirt_username: admin@internal
ovirt_password: super-secret-password <3>
ovirt_insecure: true
----
endif::openshift-origin[]
<1> Specify the hostname or address of your oVirt engine.
<2> Specify the fully qualified domain name of your oVirt engine.
<3> Specify the admin password for your oVirt engine.

. Run the installer.
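Before falling back to insecure mode, you can inspect the certificate that the engine actually presents; a certificate mismatch is often the underlying problem. This optional check assumes `openssl` is installed and uses the illustrative engine FQDN from the example above:

[source,terminal]
----
$ openssl s_client -connect ovirt.example.com:443 -showcerts </dev/null \
    | openssl x509 -noout -subject -issuer -enddate
----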
@@ -1,44 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc

[id="installing-rhv-network-infrastructure-configuration-upi_{context}"]
= Network infrastructure configuration for installing {product-title} on {rh-virtualization-first}

Before installing {product-title}, configure your network environment to meet the following requirements.

When they boot, the virtual machines must have IP addresses so that they can get the Ignition config files. Consider configuring DHCP to provide persistent IP addresses and hostnames to the cluster machines.
// TBD - Day 0 versus day 2? Alternatives?

.Firewall

Configure your firewall so your cluster has access to required sites.

.Network connectivity

// TBD - What do we mean by "machine" here? Where do we configure this? Can this be done at this stage in the process?
Configure your network to enable the following connections; a `firewalld` sketch follows the list:

* Grant every machine access to every other machine on ports `30000`-`32767`. This provides connectivity to {product-title} components.

* Grant every machine access to reserved ports `10250`-`10259` and `9000`-`9999`.

* Grant every machine access on ports `2379`-`2380`. This provides access to etcd, peers, and metrics on the control plane.

* Grant every machine access to the Kubernetes API on port `6443`.
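For reference, opening these port ranges with `firewalld` on a {op-system-base} host might look like the following. This is a hedged sketch, not part of the original procedure; your environment may use a different firewall entirely:

[source,terminal]
----
# firewall-cmd --permanent --add-port=30000-32767/tcp
# firewall-cmd --permanent --add-port=10250-10259/tcp
# firewall-cmd --permanent --add-port=9000-9999/tcp
# firewall-cmd --permanent --add-port=2379-2380/tcp
# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --reload
----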
.Load balancers

Configure one or two (preferred) layer-4 load balancers:

* Provide load balancing for ports `6443` and `22623` on the control plane and bootstrap machines. Port `6443` provides access to the Kubernetes API server and must be reachable both internally and externally. Port `22623` must be accessible to nodes within the cluster.

* Provide load balancing for ports `443` and `80` for machines that run the Ingress router, which are usually worker nodes in the default configuration. Both ports must be accessible from within and outside the cluster.

.DNS

Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address.

* Create DNS records for `api.<cluster_name>.<base_domain>` (internal and external resolution) and `api-int.<cluster_name>.<base_domain>` (internal resolution) that point to the load balancer for the control plane machines.

* Create a DNS record for `*.apps.<cluster_name>.<base_domain>` that points to the load balancer for the Ingress router, that is, to ports `443` and `80` of the compute machines.
@@ -1,48 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc

:_content-type: PROCEDURE
[id="installing-rhv-preparing-network-environment_{context}"]
= Preparing the network environment on {rh-virtualization}

Configure two static IP addresses for the {product-title} cluster and create DNS entries using these addresses.

.Procedure

. Reserve two static IP addresses:
.. On the network where you plan to install {product-title}, identify two static IP addresses that are outside the DHCP lease pool.
.. Connect to a host on this network and verify that each of the IP addresses is not in use. For example, use Address Resolution Protocol (ARP) to check that none of the IP addresses have entries:
+
[source,terminal]
----
$ arp 10.35.1.19
----
+
.Example output
[source,terminal]
----
10.35.1.19 (10.35.1.19) -- no entry
----

.. Reserve two static IP addresses following the standard practices for your network environment.
.. Record these IP addresses for future reference.

. Create DNS entries for the {product-title} REST API and apps domain names using this format:
+
[source,dns]
----
api.<cluster-name>.<base-domain>   <ip-address> <1>
*.apps.<cluster-name>.<base-domain>   <ip-address> <2>
----
<1> For `<cluster-name>`, `<base-domain>`, and `<ip-address>`, specify the cluster name, base domain, and static IP address of your {product-title} API.
<2> Specify the cluster name, base domain, and static IP address of your {product-title} apps for Ingress and the load balancer.
+
For example:
+
[source,dns]
----
api.my-cluster.virtlab.example.com 10.35.1.19
*.apps.my-cluster.virtlab.example.com 10.35.1.20
----
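After the DNS entries propagate, you can optionally confirm that both names resolve to the reserved addresses. This sketch assumes the `dig` utility is available and reuses the example values above; any hostname under the apps wildcard works for the second check:

[source,terminal]
----
$ dig +short api.my-cluster.virtlab.example.com
10.35.1.19
$ dig +short test.apps.my-cluster.virtlab.example.com
10.35.1.20
----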
@@ -1,65 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc

[id="installing-rhv-requirements_{context}"]
= Requirements for the {rh-virtualization} environment

To install and run an {product-title} version {product-version} cluster, the {rh-virtualization} environment must meet the following requirements.

Not meeting these requirements can cause the installation process to fail. Additionally, not meeting these requirements can cause the {product-title} cluster to fail days or weeks after installation.

The following requirements for CPU, memory, and storage resources are based on *default* values multiplied by the default number of virtual machines the installation program creates. These resources must be available *in addition to* what the {rh-virtualization} environment uses for non-{product-title} operations.

By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the {product-title} cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources.

If you increase the number of virtual machines in the {rh-virtualization} environment, you must increase the resources accordingly.

.Requirements

* The {rh-virtualization} version is 4.4.
* The {rh-virtualization} environment has one data center whose state is *Up*.
* The {rh-virtualization} data center contains an {rh-virtualization} cluster.
* The {rh-virtualization} cluster has the following resources exclusively for the {product-title} cluster:
** Minimum 28 vCPUs: four for each of the seven virtual machines created during installation.
** 112 GiB RAM or more, including:
*** 16 GiB or more for the bootstrap machine, which provides the temporary control plane.
*** 16 GiB or more for each of the three control plane machines, which provide the control plane.
*** 16 GiB or more for each of the three compute machines, which run the application workloads.
* The {rh-virtualization} storage domain must meet link:https://access.redhat.com/solutions/4770281[these etcd backend performance requirements].
ifeval::["{context}" == "installing-rhv-default"]
* For affinity group support: Three or more hosts in the {rh-virtualization} cluster. If necessary, you can disable affinity groups. For details, see _Example: Removing all affinity groups for a non-production lab setup_ in _Installing a cluster on {rh-virtualization} with customizations_.
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
* For affinity group support:
+
One physical machine per worker or control plane. Workers and control planes can be on the same physical machine. For example, if you have three workers and three control planes, you need three physical machines. If you have four workers and three control planes, you need four physical machines.

** For hard anti-affinity (default): A minimum of three physical machines. For more than three worker nodes, one physical machine per worker or control plane. Workers and control planes can be on the same physical machine.
** For custom affinity groups: Ensure that the resources are appropriate for the affinity group rules that you define.
////
** Production setup: For hard anti-affinity, you need a minimum of three physical machines. For more than three worker nodes, one physical machine per worker or control plane. Workers and control planes can be on the same physical machine. For example, if you have three workers and three control planes, you need three physical machines. If you have four workers and three control planes, you need four physical machines.
** Non-production setup, such as a lab: Remove all affinity groups to enable putting multiple workers or control planes on as few physical machines as possible. This setup does not guarantee redundancy so it is not appropriate for production.
////
endif::[]
* In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default {product-title} cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default {product-title} cluster.
* To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the {rh-virtualization} cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process.
// TBD - What about the disconnected installation alternative?
* The {rh-virtualization} cluster must have a virtual network with access to the REST API on the {rh-virtualization} {rh-virtualization-engine-name}. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP.
* A user account and group with the following least privileges for installing and managing an {product-title} cluster on the target {rh-virtualization} cluster:
** `DiskOperator`
** `DiskCreator`
** `UserTemplateBasedVm`
** `TemplateOwner`
** `TemplateCreator`
** `ClusterAdmin` on the target cluster

[WARNING]
====
Apply the principle of least privilege: Avoid using an administrator account with `SuperUser` privileges on {rh-virtualization} during the installation process. The installation program saves the credentials you provide to a temporary `ovirt-config.yaml` file that might be compromised.
====
@@ -1,57 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installing-rhv-setting-up-ca-certificate_{context}"]
= Setting up the CA certificate for {rh-virtualization}

Download the CA certificate from the {rh-virtualization-first} Manager and set it up on the installation machine.

You can download the certificate from a webpage on the {rh-virtualization} {rh-virtualization-engine-name} or by using a `curl` command.

Later, you provide the certificate to the installation program.

.Procedure

. Use either of these two methods to download the CA certificate:
** Go to the {rh-virtualization-engine-name}'s webpage, `\https://<engine-fqdn>/ovirt-engine/`. Then, under *Downloads*, click the *CA Certificate* link.
** Run the following command:
+
[source,terminal]
----
$ curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem <1>
----
<1> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} {rh-virtualization-engine-name}, such as `rhv-env.virtlab.example.com`.

. Configure the CA file to grant rootless user access to the {rh-virtualization-engine-name}. Set the CA file permissions to an octal value of `0644` (symbolic value: `-rw-r--r--`):
+
[source,terminal]
----
$ sudo chmod 0644 /tmp/ca.pem
----

. For Linux, copy the CA certificate to the directory for server certificates. Use `-p` to preserve the permissions:
+
[source,terminal]
----
$ sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem
----

. Add the certificate to the certificate manager for your operating system:
** For macOS, double-click the certificate file and use the *Keychain Access* utility to add the file to the *System* keychain.
** For Linux, update the CA trust:
+
[source,terminal]
----
$ sudo update-ca-trust
----
+
[NOTE]
====
If you use your own certificate authority, make sure the system trusts it.
====
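Optionally, you can confirm that the downloaded file is a valid certificate and check its expiry before adding it to the trust store. This check assumes `openssl` is available on the installation machine:

[source,terminal]
----
$ openssl x509 -in /tmp/ca.pem -noout -subject -enddate
----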
[role="_additional-resources"]
.Additional resources
* To learn more, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.0/html/rest_api_guide/documents-002_authentication_and_security[Authentication and Security] in the {rh-virtualization} documentation.
@@ -1,50 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-user-infra.adoc
// * installing/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installing-rhv-setting-up-installation-machine_{context}"]
= Setting up the installation machine

To run the binary `openshift-install` installation program and Ansible scripts, set up the {rh-virtualization} {rh-virtualization-engine-name} or an {op-system-base-full} computer with network access to the {rh-virtualization} environment and the REST API on the {rh-virtualization-engine-name}.

// The following steps include creating an `ASSETS_DIR` environment variable, which the installation program uses to create a directory of asset files. Later, the installation process reuses this variable to locate these asset files.

.Procedure

. Update or install Python3 and Ansible. For example:
+
[source,terminal]
----
# dnf update python3 ansible
----

. link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/python_sdk_guide/chap-overview#Installing_the_Software_Development_Kit[Install the `python3-ovirt-engine-sdk4` package] to get the Python Software Development Kit.

. Install the `ovirt.image-template` Ansible role. On the {rh-virtualization} {rh-virtualization-engine-name} and other {op-system-base-full} machines, this role is distributed as the `ovirt-ansible-image-template` package. For example, enter:
+
[source,terminal]
----
# dnf install ovirt-ansible-image-template
----

. Install the `ovirt.vm-infra` Ansible role. On the {rh-virtualization} {rh-virtualization-engine-name} and other {op-system-base} machines, this role is distributed as the `ovirt-ansible-vm-infra` package:
+
[source,terminal]
----
# dnf install ovirt-ansible-vm-infra
----

. Create an environment variable and assign an absolute or relative path to it. For example, enter:
+
[source,terminal]
----
$ export ASSETS_DIR=./wrk
----
+
[NOTE]
====
The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster.
====
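To confirm that the tooling is in place before you run any playbooks, you can run a quick sanity check. This is an optional sketch; the exact versions reported vary by environment:

[source,terminal]
----
$ ansible --version
$ python3 -c 'import ovirtsdk4' && echo "oVirt SDK OK"
----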
@@ -1,75 +0,0 @@
// Module included in the following assemblies:
//
// * installing/installing_rhv/installing-rhv-customizations.adoc
// * installing/installing_rhv/installing-rhv-default.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc

:_content-type: PROCEDURE
[id="installing-rhv-verifying-rhv-environment_{context}"]
= Verifying the requirements for the {rh-virtualization} environment

Verify that the {rh-virtualization} environment meets the requirements to install and run an {product-title} cluster. Not meeting these requirements can cause failures.

[IMPORTANT]
====
These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of {product-title} machines, adjust these requirements accordingly.
====

.Procedure

. Check that the {rh-virtualization} version supports installation of {product-title} version {product-version}.
.. In the {rh-virtualization} Administration Portal, click the *?* help icon in the upper-right corner and select *About*.
.. In the window that opens, make a note of the **{rh-virtualization} Software Version**.
.. Confirm that the {rh-virtualization} version is 4.4. For more information about supported version combinations, see link:https://access.redhat.com/articles/5485861[Support Matrix for {product-title} on {rh-virtualization}].

. Inspect the data center, cluster, and storage.
.. In the {rh-virtualization} Administration Portal, click *Compute* -> *Data Centers*.
.. Confirm that the data center where you plan to install {product-title} is accessible.
.. Click the name of that data center.
.. In the data center details, on the *Storage* tab, confirm the storage domain where you plan to install {product-title} is *Active*.
.. Record the *Domain Name* for use later on.
.. Confirm *Free Space* has at least 230 GiB.
.. Confirm that the storage domain meets link:https://access.redhat.com/solutions/4770281[these etcd backend performance requirements], which you link:https://access.redhat.com/solutions/3780861[can measure by using the fio performance benchmarking tool].
.. In the data center details, click the *Clusters* tab.
.. Find the {rh-virtualization} cluster where you plan to install {product-title}. Record the cluster name for use later on.

. Inspect the {rh-virtualization} host resources.
.. In the {rh-virtualization} Administration Portal, click *Compute* -> *Clusters*.
.. Click the cluster where you plan to install {product-title}.
.. In the cluster details, click the *Hosts* tab.
.. Inspect the hosts and confirm they have a combined total of at least 28 *Logical CPU Cores* available _exclusively_ for the {product-title} cluster.
.. Record the number of available *Logical CPU Cores* for use later on.
.. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores.
.. Confirm that, all together, the hosts have 112 GiB of *Max free Memory for scheduling new virtual machines* distributed to meet the requirements for each of the following {product-title} machines:
** 16 GiB required for the bootstrap machine
** 16 GiB required for each of the three control plane machines
** 16 GiB required for each of the three compute machines
.. Record the amount of *Max free Memory for scheduling new virtual machines* for use later on.

. Verify that the virtual network for installing {product-title} has access to the {rh-virtualization} {rh-virtualization-engine-name}'s REST API. From a virtual machine on this network, use `curl` to reach the {rh-virtualization} {rh-virtualization-engine-name}'s REST API:
+
[source,terminal]
----
$ curl -k -u <username>@<profile>:<password> \ <1>
https://<engine-fqdn>/ovirt-engine/api <2>
----
<1> For `<username>`, specify the user name of an {rh-virtualization} account with privileges to create and manage an {product-title} cluster on {rh-virtualization}. For `<profile>`, specify the login profile, which you can get by going to the {rh-virtualization} Administration Portal login page and reviewing the *Profile* dropdown list. For `<password>`, specify the password for that user name.
<2> For `<engine-fqdn>`, specify the fully qualified domain name of the {rh-virtualization} environment.
+
For example:
+
ifndef::openshift-origin[]
[source,terminal]
----
$ curl -k -u ocpadmin@internal:pw123 \
https://rhv-env.virtlab.example.com/ovirt-engine/api
----
endif::openshift-origin[]
ifdef::openshift-origin[]
[source,terminal]
----
$ curl -k -u admin@internal:pw123 \
https://ovirtlab.example.com/ovirt-engine/api
----
endif::openshift-origin[]
@@ -6,7 +6,6 @@
// * machine_management/creating_machinesets/creating-machineset-azure-stack-hub.adoc
// * machine_management/creating_machinesets/creating-machineset-gcp.adoc
// * machine_management/creating_machinesets/creating-machineset-osp.adoc
// * machine_management/creating_machinesets/creating-machineset-rhv.adoc
// * machine_management/creating_machinesets/creating-machineset-vsphere.adoc
// * machine_management/deploying-machine-health-checks.adoc
// * machine_management/manually-scaling-machinesets.adoc

@@ -1,29 +0,0 @@
// Module included in the following assemblies:
// * machine_management/user_infra/adding-rhv-compute-user-infra.adoc

:_content-type: PROCEDURE
[id="machine-user-provisioned-rhv_{context}"]
= Adding more compute machines to a cluster on {rh-virtualization}

.Procedure

. Modify the `inventory.yml` file to include the new workers.

. Run the `create-templates-and-vms` Ansible playbook to create the disks and virtual machines:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml create-templates-and-vms.yml
----

. Run the `workers.yml` Ansible playbook to start the virtual machines:
+
[source,terminal]
----
$ ansible-playbook -i inventory.yml workers.yml
----

. Approve the certificate signing requests (CSRs) for the new workers joining the cluster. An administrator must approve them. The following command approves all pending requests:
+
[source,terminal]
----
$ oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
----
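After approving the CSRs, you can optionally watch the new machines register. A second round of CSRs may appear as each node requests its kubelet serving certificate, so check again a few minutes later:

[source,terminal]
----
$ oc get csr
$ oc get nodes
----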
@@ -1,54 +0,0 @@
// Module included in the following assemblies:
//
// * machine_management/modifying-machineset.adoc

:_content-type: PROCEDURE
[id="machineset-migrating-compute-nodes-to-diff-sd-rhv_{context}"]
= Migrating compute nodes to a different storage domain in {rh-virtualization}

.Prerequisites

* You are logged in to the {rh-virtualization-engine-name}.
* You have the name of the target storage domain.

.Procedure

. Identify the virtual machine template by running the following command:
+
[source,terminal]
----
$ oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{"\n"}' machineset -A
----

. Create a new virtual machine in the {rh-virtualization-engine-name}, based on the template you identified. Leave all other settings unchanged. For details, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Creating_a_Virtual_Machine_Based_on_a_Template[Creating a Virtual Machine Based on a Template] in the Red Hat Virtualization _Virtual Machine Management Guide_.
+
[TIP]
====
You do not need to start the new virtual machine.
====

. Create a new template from the new virtual machine. Specify the target storage domain under *Target*. For details, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Creating_a_template_from_an_existing_virtual_machine[Creating a Template] in the Red Hat Virtualization _Virtual Machine Management Guide_.

. Add a new compute machine set to the {product-title} cluster with the new template.
.. Get the details of the current compute machine set by running the following command:
+
[source,terminal]
----
$ oc get machineset -o yaml
----
.. Use these details to create a compute machine set. For more information, see _Creating a compute machine set_.
+
Enter the new virtual machine template name in the *template_name* field. Use the same template name you used in the *New template* dialog in the {rh-virtualization-engine-name}.
.. Note the names of both the old and new compute machine sets. You need to refer to them in subsequent steps.

. Migrate the workloads.
.. Scale up the new compute machine set, as shown in the sketch after this procedure. For details on manually scaling compute machine sets, see _Scaling a compute machine set manually_.
+
{product-title} moves the pods to an available worker when the old machine is removed.
.. Scale down the old compute machine set.

. Remove the old compute machine set by running the following command:
+
[source,terminal]
----
$ oc delete machineset <machineset-name>
----
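For reference, the scale-up and scale-down steps might look like the following. This is a hedged sketch; `<new_machine_set>` and `<old_machine_set>` are placeholders for the names you recorded, and the replica counts depend on your cluster:

[source,terminal]
----
$ oc scale --replicas=3 machineset <new_machine_set> -n openshift-machine-api
$ oc scale --replicas=0 machineset <old_machine_set> -n openshift-machine-api
----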
@@ -1,42 +0,0 @@
// Module included in the following assemblies:
//
// * machine_management/modifying-machineset.adoc

:_content-type: PROCEDURE
[id="machineset-migrating-control-plane-nodes-to-diff-sd-rhv_{context}"]
= Migrating control plane nodes to a different storage domain on {rh-virtualization}

{product-title} does not manage control plane nodes, so they are easier to migrate than compute nodes. You can migrate them like any other virtual machine on {rh-virtualization-first}.

Perform this procedure for each node separately.

.Prerequisites

* You are logged in to the {rh-virtualization-engine-name}.
* You have identified the control plane nodes. They are labeled *master* in the {rh-virtualization-engine-name}.

.Procedure

. Select the virtual machine labeled *master*.

. Shut down the virtual machine.

. Click the *Disks* tab.

. Click the virtual machine's disk.

. Click *More Actions*{kebab} and select *Move*.

. Select the target storage domain and wait for the migration process to complete.

. Start the virtual machine.

. Verify that the {product-title} cluster is stable:
+
[source,terminal]
----
$ oc get nodes
----
+
The output should display the node with the status `Ready`.
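+
For example, the output might resemble the following. The node names, ages, and versions are illustrative:
+
.Example output
[source,terminal]
----
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   31d   v1.25.4
master-1   Ready    master   31d   v1.25.4
master-2   Ready    master   31d   v1.25.4
----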

. Repeat this procedure for each control plane node.
@@ -1,133 +0,0 @@
// Module included in the following assemblies:
//
// * machine_management/creating-infrastructure-machinesets.adoc
// * machine_management/creating_machinesets/creating-machineset-rhv.adoc

[id="machineset-yaml-rhv_{context}"]
= Sample YAML for a compute machine set custom resource on {rh-virtualization}

This sample YAML defines a compute machine set that runs on {rh-virtualization} and creates nodes that are labeled with `node-role.kubernetes.io/<node_role>: ""`.

In this sample, `<infrastructure_id>` is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and `<role>` is the node label to add.

[source,yaml,subs="+quotes"]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> <1>
    machine.openshift.io/cluster-api-machine-role: <role> <2>
    machine.openshift.io/cluster-api-machine-type: <role> <2>
  name: <infrastructure_id>-<role> <3>
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas> <4>
  selector: <5>
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> <1>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> <3>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> <1>
        machine.openshift.io/cluster-api-machine-role: <role> <2>
        machine.openshift.io/cluster-api-machine-type: <role> <2>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> <3>
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" <2>
      providerSpec:
        value:
          apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
          cluster_id: <ovirt_cluster_id> <6>
          template_name: <ovirt_template_name> <7>
          sparse: <boolean_value> <8>
          format: <raw_or_cow> <9>
          cpu: <10>
            sockets: <number_of_sockets> <11>
            cores: <number_of_cores> <12>
            threads: <number_of_threads> <13>
          memory_mb: <memory_size> <14>
          guaranteed_memory_mb: <memory_size> <15>
          os_disk: <16>
            size_gb: <disk_size> <17>
            storage_domain_id: <storage_domain_UUID> <18>
          network_interfaces: <19>
            vnic_profile_id: <vnic_profile_id> <20>
          credentialsSecret:
            name: ovirt-credentials <21>
          kind: OvirtMachineProviderSpec
          type: <workload_type> <22>
          auto_pinning_policy: <auto_pinning_policy> <23>
          hugepages: <hugepages> <24>
          affinityGroupsNames:
          - compute <25>
          userDataSecret:
            name: worker-user-data
----
<1> Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (`oc`) installed, you can obtain the infrastructure ID by running the following command:
+
[source,terminal]
----
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
----
<2> Specify the node label to add.
<3> Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters.
<4> Specify the number of machines to create.
<5> Selector for the machines.
<6> Specify the UUID for the {rh-virtualization} cluster to which this VM instance belongs.
<7> Specify the {rh-virtualization} VM template to use to create the machine.
<8> Setting this option to `false` enables preallocation of disks. The default is `true`. Setting `sparse` to `true` with `format` set to `raw` is not available for block storage domains. The `raw` format writes the entire virtual disk to the underlying physical disk.
<9> Can be set to `cow` or `raw`. The default is `cow`. The `cow` format is optimized for virtual machines.
+
[NOTE]
====
Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage.
====
<10> Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads.
<11> Optional: Specify the number of sockets for a VM.
<12> Optional: Specify the number of cores per socket.
<13> Optional: Specify the number of threads per core.
<14> Optional: Specify the size of a VM's memory in MiB.
<15> Optional: Specify the size of a virtual machine's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/administration_guide#memory_ballooning[Memory Ballooning] and link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/administration_guide#Cluster_Optimization_Settings_Explained[Optimization Settings Explained].
+
[NOTE]
====
If you are using a version earlier than {rh-virtualization} 4.4.8, see link:https://access.redhat.com/articles/6454811[Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters].
====
<16> Optional: Root disk of the node.
<17> Optional: Specify the size of the bootable disk in GiB.
<18> Optional: Specify the UUID of the storage domain for the compute node's disks. If none is provided, the compute node is created on the same storage domain as the control plane nodes (default).
<19> Optional: List of the network interfaces of the VM. If you include this parameter, {product-title} discards all network interfaces from the template and creates new ones.
<20> Optional: Specify the vNIC profile ID.
<21> Specify the name of the secret object that holds the {rh-virtualization} credentials.
<22> Optional: Specify the workload type for which the instance is optimized. This value affects the `{rh-virtualization} VM` parameter. Supported values: `desktop`, `server` (default), `high_performance`. `high_performance` improves performance on the VM. Limitations exist; for example, you cannot access the VM with a graphical console. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools[Configuring High Performance Virtual Machines, Templates, and Pools] in the _Virtual Machine Management Guide_.
<23> Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: `none`, `resize_and_pin`. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Setting_NUMA_Nodes[Setting NUMA Nodes] in the _Virtual Machine Management Guide_.
<24> Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: `2048` or `1048576`. For more information, see link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index#Configuring_Huge_Pages[Configuring Huge Pages] in the _Virtual Machine Management Guide_.
<25> Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt.

[NOTE]
====
Because {rh-virtualization} uses a template when creating a VM, if you do not specify a value for an optional parameter, {rh-virtualization} uses the value for that parameter that is specified in the template.
====
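After you populate the sample with your values, creating and checking the machine set follows the usual pattern. This is an optional sketch with a hypothetical file name:

[source,terminal]
----
$ oc create -f machineset-rhv.yaml
$ oc get machineset -n openshift-machine-api
----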
@@ -38,7 +38,7 @@ ifndef::passthrough[]
endif::passthrough[]

ifndef::mint[]
** For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, {rh-virtualization-first}, and VMware vSphere are supported.
** For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, and VMware vSphere are supported.
endif::mint[]

* You have changed the credentials that are used to interface with your cloud provider.

@@ -71,9 +71,6 @@ ifndef::mint[]
|{rh-openstack}
|`openstack-credentials`

|{rh-virtualization}
|`ovirt-credentials`

|VMware vSphere
|`vsphere-creds`
endif::mint[]

@@ -104,10 +104,6 @@ $ openstack port set --allowed-address \
ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>
----

{rh-virtualization-first}::

If you are using link:https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/administration_guide/chap-logical_networks#Explanation_of_Settings_in_the_VM_Interface_Profile_Window[{rh-virtualization}], you must select *No Network Filter* for the Virtual network interface controller (vNIC).

VMware vSphere::

If you are using VMware vSphere, see the link:https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-3507432E-AFEA-4B6B-B404-17A020575358.html[VMware documentation for securing vSphere standard switches]. View and change VMware vSphere default settings by selecting the host virtual switch from the vSphere Web Client.

@@ -15,7 +15,6 @@ A migration to the OVN-Kubernetes network plugin is supported on the following p
* IBM Cloud
* Microsoft Azure
* {rh-openstack-first}
* {rh-virtualization-first}
* VMware vSphere

[IMPORTANT]

@@ -11,7 +11,8 @@
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
// * installing/installing_openstack/installing-openstack-installer-restricted.adoc
// * installing/installing_platform_agnostic/installing-platform-agnostic.adoc
// * installing/installing_rhv/installing-rhv-restricted-network.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc-user-infra.adoc
// * installing/installing_vmc/installing-restricted-networks-vmc.adoc
// * installing/installing_vsphere/installing-restricted-networks-installer-provisioned-vsphere.adoc
// * installing/installing_vsphere/installing-restricted-networks-vsphere.adoc
// * operators/admin/olm-restricted-networks.adoc

@@ -1,44 +0,0 @@
// Module included in the following assemblies:
//
// * storage/container_storage_interface/persistent-storage-csi-ovirt.adoc
// * post_installation_configuration/storage-configuration.adoc

[id="ovirt-csi-driver-storage-class_{context}"]
ifeval::["{context}" == "post-install-storage-configuration"]
= {rh-virtualization-first} object definition
endif::[]
ifeval::["{context}" == "persistent-storage-csi-ovirt"]
= {rh-virtualization-first} CSI driver storage class
endif::[]

{product-title} creates a default object of type `StorageClass` named `ovirt-csi-sc`, which is used for creating dynamically provisioned persistent volumes.

To create additional storage classes for different configurations, create and save a file with the `StorageClass` object described by the following sample YAML:

.ovirt-storageclass.yaml
[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <storage_class_name> <1>
  annotations:
    storageclass.kubernetes.io/is-default-class: "<boolean>" <2>
provisioner: csi.ovirt.org
allowVolumeExpansion: <boolean> <3>
reclaimPolicy: Delete <4>
volumeBindingMode: Immediate <5>
parameters:
  storageDomainName: <rhv-storage-domain-name> <6>
  thinProvisioning: "<boolean>" <7>
  csi.storage.k8s.io/fstype: <file_system_type> <8>
----
<1> Name of the storage class.
<2> Set to `true` to make this storage class the default storage class in the cluster. If set to `true`, the existing default storage class must be edited and set to `false`.
<3> `true` enables dynamic volume expansion; `false` prevents it. `true` is recommended.
<4> Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. The default policy is `Delete`.
<5> Indicates how to provision and bind `PersistentVolumeClaims`. When not set, `VolumeBindingImmediate` is used. This field is only applied by servers that enable the `VolumeScheduling` feature.
<6> The {rh-virtualization} storage domain name to use.
<7> If `true`, the disk is thin provisioned. If `false`, the disk is preallocated. Thin provisioning is recommended.
<8> Optional: File system type to be created. Possible values: `ext4` (default) or `xfs`.
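To apply the class and confirm that it registered, you can run the following. This is an optional sketch using the file name from the example above:

[source,terminal]
----
$ oc create -f ovirt-storageclass.yaml
$ oc get storageclass
----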
@@ -2,7 +2,6 @@
//
// * storage/container_storage_interface/persistent-storage-csi-ebs.adoc
// * storage/container_storage_interface/persistent-storage-csi-manila.adoc
// * storage/container_storage_interface/persistent-storage-csi-ovirt.adoc
// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc
// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc

||||
@@ -33,7 +33,6 @@ ifndef::openshift-dedicated,openshift-rosa[]
|
||||
|OpenStack Cinder | ✅ | ✅ | ✅| -
|
||||
|OpenShift Data Foundation | ✅ | ✅ | ✅| -
|
||||
|OpenStack Manila | ✅ | - | -| -
|
||||
|Red Hat Virtualization (oVirt) | - | - | ✅| -
|
||||
|Shared Resource | - | - | - | ✅
|
||||
|VMware vSphere | ✅^[1]^ | - | ✅^[2]^| -
|
||||
endif::openshift-dedicated,openshift-rosa[]
|
||||
|
||||
@@ -1,75 +0,0 @@
|
||||
:_content-type: PROCEDURE
|
||||
[id="persistent-storage-rhv_{context}"]
|
||||
= Creating a persistent volume on RHV
|
||||
|
||||
When you create a `PersistentVolumeClaim` (PVC) object, {product-title} provisions a new persistent volume (PV) and creates a `PersistentVolume` object.
|
||||
|
||||
.Prerequisites
|
||||
* You are logged in to a running {product-title} cluster.
|
||||
* You provided the correct {rh-virtualization} credentials in `ovirt-credentials` secret.
|
||||
* You have installed the oVirt CSI driver.
|
||||
* You have defined at least one storage class.
|
||||
|
||||
.Procedure
|
||||
|
||||
* If you are using the web console to dynamically create a persistent volume on {rh-virtualization}:
|
||||
+
|
||||
. In the {product-title} console, click *Storage* -> *Persistent Volume Claims*.
|
||||
. In the persistent volume claims overview, click *Create Persistent Volume Claim*.
|
||||
. Define the required options on the resulting page.
|
||||
. Select the appropriate `StorageClass` object, which is `ovirt-csi-sc` by default.
|
||||
. Enter a unique name for the storage claim.
|
||||
. Select the access mode. Currently, RWO (ReadWriteOnce) is the only supported access mode.
|
||||
. Define the size of the storage claim.
|
||||
. Select the Volume Mode:
|
||||
+
|
||||
`Filesystem`: Mounted into pods as a directory. This mode is the default.
|
||||
+
|
||||
`Block`: Block device, without any file system on it
|
||||
+
|
||||
. Click *Create* to create the `PersistentVolumeClaim` object and generate a `PersistentVolume` object.
|
||||
|
||||
* If you are using the command-line interface (CLI) to dynamically create a {rh-virtualization} CSI volume:
+
. Create and save a file with the `PersistentVolumeClaim` object described by the following sample YAML:
+
.pvc-ovirt.yaml
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ovirt
spec:
  storageClassName: ovirt-csi-sc <1>
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: <volume size> <2>
  volumeMode: <volume mode> <3>
----
<1> Name of the required storage class.
<2> Volume size in GiB.
<3> Supported options:
** `Filesystem`: Mounted into pods as a directory. This mode is the default.
** `Block`: Block device, without any file system on it.
+
. Create the object you saved in the previous step by running the following command:
+
[source,terminal]
----
$ oc create -f pvc-ovirt.yaml
----
+
. To verify that the volume was created and is ready, run the following command:
+
[source,terminal]
----
$ oc get pvc pvc-ovirt
----
+
The output shows that the `pvc-ovirt` claim has reached the `Bound` status, similar to the following illustrative output:
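+
.Example output (illustrative; the volume name, capacity, and age depend on your claim)
[source,terminal]
----
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ovirt   Bound    pvc-3d8a5c21-0f6c-4d1e-9f6a-1c2b3d4e5f6a   1Gi        RWO            ovirt-csi-sc   42s
----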

[NOTE]
====
If you need to update the Operator credentials, see the instructions in link:https://access.redhat.com/solutions/6115581[How to modify the RHV credentials in OCP 4].
====
@@ -53,7 +53,6 @@
// * installing/installing_ibm_z/installing-ibm-z-kvm.adoc
// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-kvm.adoc
// * installing/installing_ibm_z/installing-ibm-power.adoc
// * installing/installing-rhv-restricted-network.adoc
// * installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc
// * installing/installing-restricted-networks-nutanix-installer-provisioned.adoc

@@ -127,12 +126,6 @@ endif::[]
ifeval::["{context}" == "installing-restricted-networks-ibm-z-kvm"]
:ibm-z:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:rhv:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:rhv:
endif::[]
ifeval::["{context}" == "installing-platform-agnostic"]
:user-infra:
endif::[]
@@ -152,7 +145,7 @@ If you want to SSH in to your cluster nodes to perform installation debugging or
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
====

ifndef::osp,ibm-z,rhv[]
ifndef::osp,ibm-z[]
[NOTE]
====
You must use a local key, not one that you configured with platform-specific
@@ -311,18 +304,12 @@ endif::[]
ifeval::["{context}" == "installing-ibm-z-kvm"]
:!ibm-z:
endif::[]
ifeval::["{context}" == "installing-rhv-default"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-ibm-z"]
:!ibm-z:
endif::[]
ifeval::["{context}" == "installing-restricted-networks-ibm-z-kvm"]
:!ibm-z:
endif::[]
ifeval::["{context}" == "installing-rhv-customizations"]
:!rhv:
endif::[]
ifeval::["{context}" == "installing-platform-agnostic"]
:!user-infra:
endif::[]

@@ -17,7 +17,6 @@ In {product-title} {product-version}, you can install a cluster that uses instal
** The latest {product-title} release supports both the latest {rh-openstack} long-life release and intermediate release. For complete {rh-openstack} release compatibility, see the link:https://access.redhat.com/articles/4679401[{product-title} on {rh-openstack} support matrix].
* IBM Cloud VPC
* Nutanix
* {rh-virtualization-first}
* VMware vSphere
* Alibaba Cloud
* Bare metal
@@ -39,7 +38,6 @@ In {product-title} {product-version}, you can install a cluster that uses user-p
* Azure Stack Hub
* GCP
* {rh-openstack} versions 16.1 and 16.2
* {rh-virtualization}
* VMware vSphere
* VMware Cloud on AWS
* Bare metal

@@ -54,8 +54,6 @@ include::modules/dynamic-provisioning-gce-definition.adoc[leveloffset=+2]

include::modules/dynamic-provisioning-vsphere-definition.adoc[leveloffset=+2]

include::modules/ovirt-csi-driver-storage-class.adoc[leveloffset=+2]

include::modules/dynamic-provisioning-change-default-class.adoc[leveloffset=+1]

[id="post-install-optimizing-storage"]

@@ -1,32 +0,0 @@
:_content-type: ASSEMBLY
[id="persistent-storage-csi-ovirt"]
= Red Hat Virtualization CSI Driver Operator
include::_attributes/common-attributes.adoc[]
:context: persistent-storage-csi-ovirt

toc::[]

== Overview

{product-title} can provision persistent volumes (PVs) by using the Container Storage Interface (CSI) driver for {rh-virtualization-first}.

Familiarity with xref:../../storage/understanding-persistent-storage.adoc#understanding-persistent-storage[persistent storage] and xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[configuring CSI volumes] is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned PVs that mount to {rh-virtualization} storage assets, {product-title} installs the oVirt CSI Driver Operator and the oVirt CSI driver by default in the `openshift-cluster-csi-drivers` namespace.

* The _oVirt CSI Driver Operator_ provides a default `StorageClass` object that you can use to create persistent volume claims (PVCs). You can disable this default storage class if desired (see xref:../../storage/container_storage_interface/persistent-storage-csi-sc-manage.adoc#persistent-storage-csi-sc-manage[Managing the default storage class]); a minimal `oc patch` sketch follows this list.

* The _oVirt CSI driver_ enables you to create and mount oVirt PVs.

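For example, a minimal sketch of disabling the default flag, assuming the generated class keeps its default name `ovirt-csi-sc`; the annotation is the standard Kubernetes default-class marker:

[source,terminal]
----
$ oc patch storageclass ovirt-csi-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
----
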
include::modules/persistent-storage-csi-about.adoc[leveloffset=+1]

[NOTE]
====
The oVirt CSI driver does not support snapshots.
====

include::modules/ovirt-csi-driver-storage-class.adoc[leveloffset=+1]

include::modules/persistent-storage-rhv.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
* xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[Configuring CSI volumes]
* xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#csi-dynamic-provisioning_persistent-storage-csi[Dynamic Provisioning]

@@ -41,8 +41,6 @@ Managing the default storage classes is supported by the following Container Sto
* xref:../../storage/container_storage_interface/persistent-storage-csi-cinder.adoc#persistent-storage-csi-cinder[OpenStack Cinder]

* xref:../../storage/container_storage_interface/persistent-storage-csi-ovirt.adoc#persistent-storage-csi-ovirt[Red Hat Virtualization]

* xref:../../storage/container_storage_interface/persistent-storage-csi-vsphere.adoc#persistent-storage-vsphere[VMware vSphere]
endif::openshift-rosa,openshift-dedicated[]

@@ -8,11 +8,6 @@ toc::[]

You can edit a virtual machine template in the web console.

[NOTE]
====
You cannot edit a template provided by the Red Hat Virtualization Operator. However, you can clone the template and edit the clone.
====

include::modules/virt-editing-vm-web.adoc[leveloffset=+1]

include::modules/virt-add-nic-to-vm.adoc[leveloffset=+2]

@@ -120,10 +120,6 @@ To troubleshoot OpenStack installation issues, you can
xref:../installing/installing_openstack/installing-openstack-troubleshooting.adoc#installing-openstack-troubleshooting[view instance logs and ssh to an instance].
////

- **Install a cluster on {rh-virtualization-first}**: You can deploy clusters on {rh-virtualization-first} with a
xref:../installing/installing_rhv/installing-rhv-default.adoc#installing-rhv-default[quick install] or an
xref:../installing/installing_rhv/installing-rhv-customizations.adoc#installing-rhv-customizations[install with customizations].

ifndef::openshift-origin[]
- **Install a cluster in a restricted network**: If your cluster that uses
user-provisioned infrastructure on