diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index f5997e50d0..3ced2cc28b 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2424,7 +2424,7 @@ Topics: File: hcp-deploy-aws - Name: Deploying hosted control planes on bare metal File: hcp-deploy-bm - - Name: Deploying hosted conrol planes on OpenShift Virtualization + - Name: Deploying hosted control planes on OpenShift Virtualization File: hcp-deploy-virt - Name: Deploying hosted control planes on non-bare metal agent machines File: hcp-deploy-non-bm @@ -2445,10 +2445,17 @@ Topics: File: hcp-manage-non-bm - Name: Managing hosted control planes on IBM Power File: hcp-manage-ibmpower -- Name: Preparing to deploy hosted control planes in a disconnected environment - File: hcp-prepare-disconnected - Name: Deploying hosted control planes in a disconnected environment - File: hcp-deploy-disconnected + Dir: hcp-disconnected + Topics: + - Name: Introduction to hosted control planes in a disconnected environment + File: hcp-deploy-dc + - Name: Deploying hosted control planes on OpenShift Virtualization in a disconnected environment + File: hcp-deploy-dc-virt + - Name: Deploying hosted control planes on bare metal in a disconnected environment + File: hcp-deploy-dc-bm + - Name: Monitoring user workload in a disconnected environment + File: hcp-dc-monitor - Name: Updating hosted control planes File: hcp-updating - Name: High availability for hosted control planes diff --git a/hosted_control_planes/hcp-deploy-disconnected.adoc b/hosted_control_planes/hcp-deploy-disconnected.adoc deleted file mode 100644 index fea2b7ace9..0000000000 --- a/hosted_control_planes/hcp-deploy-disconnected.adoc +++ /dev/null @@ -1,7 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="hcp-deploy-disconnected"] -include::_attributes/common-attributes.adoc[] -= Deploying {hcp} in a disconnected environment -:context: hcp-deploy-disconnected - -toc::[] \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/_attributes b/hosted_control_planes/hcp-disconnected/_attributes new file mode 120000 index 0000000000..20cc1dcb77 --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/_attributes @@ -0,0 +1 @@ +../../_attributes/ \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/hcp-dc-monitor.adoc b/hosted_control_planes/hcp-disconnected/hcp-dc-monitor.adoc new file mode 100644 index 0000000000..ae143cb6bf --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/hcp-dc-monitor.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY +[id="hcp-dc-monitor"] +include::_attributes/common-attributes.adoc[] += Monitoring user workload in a disconnected environment +:context: hcp-dc-monitor + +toc::[] + +The `hypershift-addon` managed cluster add-on enables the `--enable-uwm-telemetry-remote-write` option in the HyperShift Operator. By enabling that option, you ensure that user workload monitoring is enabled and that it can remotely write telemetry metrics from control planes. 
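+
+For example, to confirm that the option is enabled, you can inspect the arguments of the HyperShift Operator deployment. This check is only an illustrative sketch; it assumes the default `hypershift` namespace and the default deployment name, `operator`:
+
+[source,terminal]
+----
+$ oc get deployment operator -n hypershift \
+  -o jsonpath='{.spec.template.spec.containers[0].args}' | grep enable-uwm-telemetry-remote-write
+----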
+ +include::modules/hcp-dc-usr-wkld.adoc[leveloffset=+1] +include::modules/hcp-dc-verify.adoc[leveloffset=+1] +include::modules/hcp-dc-addon.adoc[leveloffset=+1] \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc b/hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc new file mode 100644 index 0000000000..e2884d74dc --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc @@ -0,0 +1,54 @@ +:_mod-docs-content-type: ASSEMBLY +[id="hcp-deploy-dc-bm"] +include::_attributes/common-attributes.adoc[] += Deploying {hcp} on bare metal in a disconnected environment +:context: hcp-deploy-dc-bm + +toc::[] + +When you provision hosted control planes on bare metal, you use the Agent platform. The Agent platform and {mce} work together to enable disconnected deployments. The Agent platform uses the central infrastructure management service to add worker nodes to a hosted cluster. For an introduction to the central infrastructure management service, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#enable-cim[Enabling the central infrastructure management service]. + +include::modules/hcp-dc-bm-arch.adoc[leveloffset=+1] +include::modules/hcp-dc-bm-reqs.adoc[leveloffset=+1] +include::modules/hcp-dc-extract.adoc[leveloffset=+1] +include::modules/hcp-dc-hypervisor.adoc[leveloffset=+1] +include::modules/hcp-bm-dns.adoc[leveloffset=+1] +include::modules/hcp-dc-registry.adoc[leveloffset=+1] +include::modules/hcp-dc-mgmt-cluster.adoc[leveloffset=+1] +include::modules/hcp-dc-web-server.adoc[leveloffset=+1] +include::modules/hcp-dc-image-mirror.adoc[leveloffset=+1] +include::modules/hcp-dc-apply-objects.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]. + +[id="hcp-dc-mce-bm"] +== Deploying {mce-short} for a disconnected installation of {hcp} + +The {mce} plays a crucial role in deploying clusters across providers. If you do not have {mce-short} installed, review the following documentation to understand the prerequisites and steps to install it: + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#mce-intro[About cluster lifecycle with multicluster engine operator] +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#mce-install-intro[Installing and upgrading multicluster engine operator] + +include::modules/hcp-agentserviceconfig.adoc[leveloffset=+2] + +[id="hcp-dc-tls-bm"] +== Configuring TLS certificates for a disconnected installation of {hcp} + +To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster. + +include::modules/hcp-dc-tls-mgmt.adoc[leveloffset=+2] +include::modules/hcp-dc-tls-hosted.adoc[leveloffset=+2] + +[id="hcp-dc-bm-hosted"] +== Creating a hosted cluster on bare metal + +A hosted cluster is an {product-title} cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. 
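+
+The modules that follow create these objects step by step. As a quick way to monitor overall progress while you work through them, you can list the hosted clusters on the management cluster. This command is a sketch that assumes the commonly used `clusters` namespace:
+
+[source,terminal]
+----
+$ oc get hostedcluster -n clusters
+----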
+ +include::modules/hcp-hc-objects.adoc[leveloffset=+2] +include::modules/hcp-nodepool-hc.adoc[leveloffset=+2] +include::modules/hcp-dc-infraenv.adoc[leveloffset=+2] +include::modules/hcp-worker-hc.adoc[leveloffset=+2] +include::modules/hcp-bm-hosts.adoc[leveloffset=+2] +include::modules/hcp-dc-scale-np.adoc[leveloffset=+2] \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc b/hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc new file mode 100644 index 0000000000..21ea540def --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc @@ -0,0 +1,67 @@ +:_mod-docs-content-type: ASSEMBLY +[id="hcp-deploy-dc-virt"] +include::_attributes/common-attributes.adoc[] += Deploying {hcp} on {VirtProductName} in a disconnected environment +:context: hcp-deploy-dc-virt + +toc::[] + +When you deploy {hcp} in a disconnected environment, some of the steps differ depending on the platform you use. The following procedures are specific to deployments on {VirtProductName}. + +:FeatureName: {hcp-capital} on {VirtProductName} in a disconnected environment +include::snippets/technology-preview.adoc[] + +[id="prerequisites_{context}"] +== Prerequisites + +* You have a disconnected {product-title} environment serving as your management cluster. +* You have an internal registry to mirror images on. For more information, see xref:../../disconnected/mirroring/index.adoc#installing-mirroring-disconnected-about[About disconnected installation mirroring]. + +include::modules/hcp-dc-image-mirror.adoc[leveloffset=+1] +include::modules/hcp-dc-apply-objects.adoc[leveloffset=+1] + +[role="_additional-resources"] +.Additional resources +* xref:../../disconnected/mirroring/installing-mirroring-disconnected.adoc#installing-mirroring-disconnected[Mirroring images for a disconnected installation using the oc-mirror plugin]. + +[id="hcp-dc-mce-virt"] +== Deploying {mce-short} for a disconnected installation of {hcp} + +The {mce} plays a crucial role in deploying clusters across providers. If you do not have {mce-short} installed, review the following documentation to understand the prerequisites and steps to install it: + +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#mce-intro[About cluster lifecycle with multicluster engine operator] +* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#mce-install-intro[Installing and upgrading multicluster engine operator] + +[id="hcp-dc-tls-virt"] +== Configuring TLS certificates for a disconnected installation of {hcp} + +To ensure proper function in a disconnected deployment, you need to configure the registry CA certificates in the management cluster and the worker nodes for the hosted cluster. + +include::modules/hcp-dc-tls-mgmt.adoc[leveloffset=+2] +include::modules/hcp-dc-tls-hosted.adoc[leveloffset=+2] + +[id="hcp-dc-virt-hosted"] +== Creating a hosted cluster on {VirtProductName} + +A hosted cluster is an {product-title} cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. 
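+
+The modules that follow describe the requirements and each creation step in detail. For orientation, a creation command for a hosted cluster on {VirtProductName} typically resembles the following sketch, in which the cluster name, replica count, and resource values are illustrative placeholders:
+
+[source,terminal]
+----
+$ hcp create cluster kubevirt \
+  --name my-hosted-cluster \
+  --node-pool-replicas 2 \
+  --memory 8Gi \
+  --cores 2 \
+  --pull-secret /path/to/pull-secret.json
+----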
+ +include::modules/hcp-virt-reqs.adoc[leveloffset=+2] +include::modules/hcp-virt-create-hc-cli.adoc[leveloffset=+2] +include::modules/hcp-virt-ingress-dns.adoc[leveloffset=+2] + +[id="hcp-dc-virt-ingress-dns-custom"] +=== Customizing ingress and DNS behavior + +If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration. + +include::modules/hcp-virt-hc-base-domain.adoc[leveloffset=+3] +include::modules/hcp-virt-load-balancer.adoc[leveloffset=+3] +include::modules/hcp-virt-wildcard-dns.adoc[leveloffset=+3] + +[id="hcp-dc-finish"] +== Finishing the deployment + +You can monitor the deployment of a hosted cluster from two perspectives: the control plane and the data plane. + +include::modules/hcp-monitor-cp.adoc[leveloffset=+2] +include::modules/hcp-monitor-dp.adoc[leveloffset=+2] \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/hcp-deploy-dc.adoc b/hosted_control_planes/hcp-disconnected/hcp-deploy-dc.adoc new file mode 100644 index 0000000000..95b1af3077 --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/hcp-deploy-dc.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: ASSEMBLY +[id="hcp-deploy-dc"] +include::_attributes/common-attributes.adoc[] += Introduction to {hcp} in a disconnected environment +:context: hcp-deploy-dc + +toc::[] + +In the context of {hcp}, a disconnected environment is an {product-title} deployment that is not connected to the internet and that uses {hcp} as a base. You can deploy {hcp} in a disconnected environment on bare metal or {VirtProductName}. + +{hcp-capital} in disconnected environments function differently than in standalone {product-title}: + +* The control plane is in the management cluster. The control plane is where the pods of the hosted control plane are run and managed by the Control Plane Operator. +* The data plane is in the workers of the hosted cluster. The data plane is where the workloads and other pods run, all managed by the HostedClusterConfig Operator. + +Depending on where the pods are running, they are affected by the `ImageDigestMirrorSet` (IDMS) or `ImageContentSourcePolicy` (ICSP) that is created in the management cluster or by the `ImageContentSource` that is set in the `spec` field of the manifest for the hosted cluster. The `spec` field is translated into an IDMS object on the hosted cluster. + +You can deploy {hcp} in a disconnected environment on IPv4, IPv6, and dual-stack networks. IPv4 is one of the simplest network configurations to deploy {hcp} in a disconnected environment. IPv4 ranges require fewer external components than IPv6 or dual-stack setups. For {hcp} on {VirtProductName} in a disconnected environment, use either an IPv4 or a dual-stack network. 
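+
+For illustration, an IDMS object like the one described above, which redirects pulls of release images to a private mirror, might look like the following sketch. The mirror registry host name and port are placeholders for your own registry:
+
+[source,yaml]
+----
+apiVersion: config.openshift.io/v1
+kind: ImageDigestMirrorSet
+metadata:
+  name: idms-sample
+spec:
+  imageDigestMirrors:
+  - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
+    mirrors:
+    - registry.example.com:5000/openshift/release
+----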
+ +:FeatureName: {hcp-capital} in a disconnected environment on a dual-stack network +include::snippets/technology-preview.adoc[] \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/images b/hosted_control_planes/hcp-disconnected/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/modules b/hosted_control_planes/hcp-disconnected/modules new file mode 120000 index 0000000000..8b0e854007 --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/modules @@ -0,0 +1 @@ +../../modules \ No newline at end of file diff --git a/hosted_control_planes/hcp-disconnected/snippets b/hosted_control_planes/hcp-disconnected/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/hosted_control_planes/hcp-disconnected/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/hosted_control_planes/hcp-prepare-disconnected.adoc b/hosted_control_planes/hcp-prepare-disconnected.adoc deleted file mode 100644 index 4ffad73e78..0000000000 --- a/hosted_control_planes/hcp-prepare-disconnected.adoc +++ /dev/null @@ -1,7 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="hcp-prepare-disconnected"] -include::_attributes/common-attributes.adoc[] -= Preparing to deploy {hcp} in a disconnected environment -:context: hcp-prepare-disconnected - -toc::[] \ No newline at end of file diff --git a/images/489_RHACM_HyperShift_on_bare_metal_1223.png b/images/489_RHACM_HyperShift_on_bare_metal_1223.png new file mode 100644 index 0000000000..6aeddb25ec Binary files /dev/null and b/images/489_RHACM_HyperShift_on_bare_metal_1223.png differ diff --git a/modules/hcp-agentserviceconfig.adoc b/modules/hcp-agentserviceconfig.adoc new file mode 100644 index 0000000000..d5539b52f3 --- /dev/null +++ b/modules/hcp-agentserviceconfig.adoc @@ -0,0 +1,124 @@ +// Module included in the following assemblies: +// +// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc + +:_mod-docs-content-type: PROCEDURE +[id="hcp-agentserviceconfig_{context}"] += Deploying AgentServiceConfig resources + +The `AgentServiceConfig` custom resource is an essential component of the Assisted Service add-on that is part of {mce-short}. It is responsible for bare metal cluster deployment. When the add-on is enabled, you deploy the `AgentServiceConfig` resource to configure the add-on. + +In addition to configuring the `AgentServiceConfig` resource, you need to include additional config maps to ensure that {mce-short} functions properly in a disconnected environment. + +.Procedure + +. Configure the custom registries by adding the following config map, which contains the disconnected details to customize the deployment: ++ +[source,yaml] +---- +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-registries + namespace: multicluster-engine + labels: + app: assisted-service +data: + ca-bundle.crt: | + -----BEGIN CERTIFICATE----- + -----END CERTIFICATE----- + registries.conf: | + unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] + + [[registry]] + prefix = "" + location = "registry.redhat.io/openshift4" + mirror-by-digest-only = true + + [[registry.mirror]] + location = "registry.dns.base.domain.name:5000/openshift4" <1> + + [[registry]] + prefix = "" + location = "registry.redhat.io/rhacm2" + mirror-by-digest-only = true + # ... + # ... 
+---- ++ +<1> Replace `dns.base.domain.name` with the DNS base domain name. ++ +The object contains two fields: + +* Custom CAs: This field contains the Certificate Authorities (CAs) that are loaded into the various processes of the deployment. +* Registries: The `Registries.conf` field contains information about images and namespaces that need to be consumed from a mirror registry rather than the original source registry. + +. Configure the Assisted Service by adding the `AssistedServiceConfig` object, as shown in the following example: ++ +[source,yaml] +---- +apiVersion: agent-install.openshift.io/v1beta1 +kind: AgentServiceConfig +metadata: + annotations: + unsupported.agent-install.openshift.io/assisted-service-configmap: assisted-service-config <1> + name: agent + namespace: multicluster-engine +spec: + mirrorRegistryRef: + name: custom-registries <2> + databaseStorage: + storageClassName: lvms-vg1 + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi + filesystemStorage: + storageClassName: lvms-vg1 + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 20Gi + osImages: <3> + - cpuArchitecture: x86_64 <4> + openshiftVersion: "4.14" + rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live-rootfs.x86_64.img <5> + url: http://registry.dns.base.domain.name:8080/images/rhcos-414.92.202308281054-0-live.x86_64.iso + version: 414.92.202308281054-0 + - cpuArchitecture: x86_64 + openshiftVersion: "4.15" + rootFSUrl: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live-rootfs.x86_64.img + url: http://registry.dns.base.domain.name:8080/images/rhcos-415.92.202403270524-0-live.x86_64.iso + version: 415.92.202403270524-0 +---- ++ +<1> The `metadata.annotations["unsupported.agent-install.openshift.io/assisted-service-configmap"]` annotation references the config map name that the Operator consumes to customize behavior. +<2> The `spec.mirrorRegistryRef.name` annotation points to the config map that contains disconnected registry information that the Assisted Service Operator consumes. This config map adds those resources during the deployment process. +<3> The `spec.osImages` field contains different versions available for deployment by this Operator. This field is mandatory. This example assumes that you already downloaded the `RootFS` and `LiveISO` files. +<4> Add a `cpuArchitecture` subsection for every {product-title} release that you want to deploy. In this example, `cpuArchitecture` subsections are included for 4.14 and 4.15. +<5> In the `rootFSUrl` and `url` fields, replace `dns.base.domain.name` with the DNS base domain name. + +. Deploy all of the objects by concatenating them into a single file and applying them to the management cluster. To do so, enter the following command: ++ +[source,terminal] +---- +$ oc apply -f agentServiceConfig.yaml +---- ++ +The command triggers two pods. ++ +.Example output +[source,terminal] +---- +assisted-image-service-0 1/1 Running 2 11d <1> +assisted-service-668b49548-9m7xw 2/2 Running 5 11d <2> +---- ++ +<1> The `assisted-image-service` pod is responsible for creating the Red Hat Enterprise Linux CoreOS (RHCOS) boot image template, which is customized for each cluster that you deploy. +<2> The `assisted-service` refers to the Operator. + +.Next steps + +Configure TLS certificates by completing the steps in _Configuring TLS certificates for a disconnected installation of {hcp}_. 
diff --git a/modules/hcp-bm-hosts.adoc b/modules/hcp-bm-hosts.adoc
new file mode 100644
index 0000000000..8f2ffe2502
--- /dev/null
+++ b/modules/hcp-bm-hosts.adoc
@@ -0,0 +1,121 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-bm-hosts_{context}"]
+= Creating bare metal hosts for the hosted cluster
+
+A _bare metal host_ is an `openshift-machine-api` object that encompasses physical and logical details so that it can be identified by a Metal3 Operator. Those details are associated with other Assisted Service objects, known as _agents_.
+
+.Prerequisites
+
+Before you create the bare metal host and destination nodes, you must have the destination machines ready.
+
+.Procedure
+
+To create a bare metal host, complete the following steps:
+
+. Create a YAML file with the following information:
++
+Because each bare metal host stores its credentials in a secret, you need to create at least two objects for each worker node: the secret and the `BareMetalHost` object.
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <hosted_cluster_name>-worker0-bmc-secret <1>
+  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> <2>
+data:
+  password: YWRtaW4= <3>
+  username: YWRtaW4= <4>
+type: Opaque
+# ...
+apiVersion: metal3.io/v1alpha1
+kind: BareMetalHost
+metadata:
+  name: <hosted_cluster_name>-worker0
+  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> <2>
+  labels:
+    infraenvs.agent-install.openshift.io: <hosted_cluster_name> <5>
+  annotations:
+    inspect.metal3.io: disabled
+    bmac.agent-install.openshift.io/hostname: <hosted_cluster_name>-worker0 <6>
+spec:
+  automatedCleaningMode: disabled <7>
+  bmc:
+    disableCertificateVerification: true <8>
+    address: redfish-virtualmedia://[192.168.126.1]:9000/redfish/v1/Systems/local/<hosted_cluster_name>-worker0 <9>
+    credentialsName: <hosted_cluster_name>-worker0-bmc-secret <10>
+  bootMACAddress: aa:aa:aa:aa:02:11 <11>
+  online: true <12>
+----
++
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+<2> Replace `<hosted_cluster_name>` with the name of your hosted cluster. Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+<3> Specify the password of the baseboard management controller (BMC) in Base64 format.
+<4> Specify the user name of the BMC in Base64 format.
+<5> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `infraenvs.agent-install.openshift.io` field serves as the link between the Assisted Installer and the `BareMetalHost` objects.
+<6> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `bmac.agent-install.openshift.io/hostname` field represents the node name that is adopted during deployment.
+<7> The `automatedCleaningMode` field prevents the node from being erased by the Metal3 Operator.
+<8> The `disableCertificateVerification` field is set to `true` to bypass certificate validation from the client.
+<9> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `address` field denotes the BMC address of the worker node.
+<10> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `credentialsName` field points to the secret where the user and password credentials are stored.
+<11> The `bootMACAddress` field indicates the interface MAC address that the node starts from.
+<12> The `online` field defines the state of the node after the `BareMetalHost` object is created.
+
+.
Deploy the `BareMetalHost` object by entering the following command: ++ +[source,terminal] +---- +$ oc apply -f 04-bmh.yaml +---- ++ +During the process, you can view the following output: ++ +* This output indicates that the process is trying to reach the nodes: ++ +.Example output +[source,terminal] +---- +NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE +clusters-hosted hosted-worker0 registering true 2s +clusters-hosted hosted-worker1 registering true 2s +clusters-hosted hosted-worker2 registering true 2s +---- ++ +* This output indicates that the nodes are starting: ++ +.Example output +[source,terminal] +---- +NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE +clusters-hosted hosted-worker0 provisioning true 16s +clusters-hosted hosted-worker1 provisioning true 16s +clusters-hosted hosted-worker2 provisioning true 16s +---- ++ +* This output indicates that the nodes started successfully: ++ +.Example output +[source,terminal] +---- +NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE +clusters-hosted hosted-worker0 provisioned true 67s +clusters-hosted hosted-worker1 provisioned true 67s +clusters-hosted hosted-worker2 provisioned true 67s +---- + +. After the nodes start, notice the agents in the namespace, as shown in this example: ++ +.Example output +[source,terminal] +---- +NAMESPACE NAME CLUSTER APPROVED ROLE STAGE +clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411 true auto-assign +clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412 true auto-assign +clusters-hosted aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413 true auto-assign +---- ++ +The agents represent nodes that are available for installation. To assign the nodes to a hosted cluster, scale up the node pool. \ No newline at end of file diff --git a/modules/hcp-dc-addon.adoc b/modules/hcp-dc-addon.adoc new file mode 100644 index 0000000000..a3d50b6ba3 --- /dev/null +++ b/modules/hcp-dc-addon.adoc @@ -0,0 +1,41 @@ +// Module included in the following assemblies: +// +// * hosted_control_planes/hcp-disconnected/hcp-dc-monitor.adoc + +:_mod-docs-content-type: PROCEDURE +[id="hcp-dc-addon_{context}"] += Configuring the hypershift-addon managed cluster add-on to run on an infrastructure node + +By default, no node placement preference is specified for the `hypershift-addon` managed cluster add-on. Consider running the add-ons on the infrastructure nodes, because by doing so, you can prevent incurring billing costs against subscription counts and separate maintenance and management tasks. + +.Procedure + +. Log in to the hub cluster. + +. Open the `hypershift-addon-deploy-config` add-on deployment configuration specification for editing by entering the following command: ++ +[source,terminal] +---- +$ oc edit addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine +---- + +. Add the `nodePlacement` field to the specification, as shown in the following example: ++ +[source,yaml] +---- +apiVersion: addon.open-cluster-management.io/v1alpha1 +kind: AddOnDeploymentConfig +metadata: + name: hypershift-addon-deploy-config + namespace: multicluster-engine +spec: + nodePlacement: + nodeSelector: + node-role.kubernetes.io/infra: "" + tolerations: + - effect: NoSchedule + key: node-role.kubernetes.io/infra + operator: Exists +---- + +. Save the changes. The `hypershift-addon` managed cluster add-on is deployed on an infrastructure node for new and existing managed clusters. 
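+
+To verify the placement afterward, you can check which nodes run the add-on agent pods on a managed cluster. The following command is a sketch; it assumes the default add-on agent namespace, `open-cluster-management-agent-addon`:
+
+[source,terminal]
+----
+$ oc get pods -n open-cluster-management-agent-addon -o wide
+----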
\ No newline at end of file diff --git a/modules/hcp-dc-apply-objects.adoc b/modules/hcp-dc-apply-objects.adoc new file mode 100644 index 0000000000..cb5e7be60e --- /dev/null +++ b/modules/hcp-dc-apply-objects.adoc @@ -0,0 +1,63 @@ +// Module included in the following assemblies: +// +// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc +// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc + +:_mod-docs-content-type: PROCEDURE +[id="hcp-dc-apply-objects_{context}"] += Applying objects in the management cluster + +After the mirroring process is complete, you need to apply two objects in the management cluster: + +* `ImageContentSourcePolicy` (ICSP) or `ImageDigestMirrorSet` (IDMS) +* Catalog sources + +When you use the `oc-mirror` tool, the output artifacts are in a folder named `oc-mirror-workspace/results-XXXXXX/`. + +The ICSP or IDMS initiates a `MachineConfig` change that does not restart your nodes but restarts the kubelet on each of them. After the nodes are marked as `READY`, you need to apply the newly generated catalog sources. + +The catalog sources initiate actions in the `openshift-marketplace` Operator, such as downloading the catalog image and processing it to retrieve all the `PackageManifests` that are included in that image. + +.Procedure + +. To check the new sources, run the following command by using the new `CatalogSource` as a source: ++ +[source,terminal] +---- +$ oc get packagemanifest +---- + +. To apply the artifacts, complete the following steps: + +.. Create the ICSP or IDMS artifacts by entering the following command: ++ +[source,terminal] +---- +$ oc apply -f oc-mirror-workspace/results-XXXXXX/imageContentSourcePolicy.yaml +---- + +.. Wait for the nodes to become ready, and then enter the following command: ++ +[source,terminal] +---- +$ oc apply -f catalogSource-XXXXXXXX-index.yaml +---- + +. Mirror the OLM catalogs and configure the hosted cluster to point to the mirror. ++ +When you use the `management` (default) OLMCatalogPlacement mode, the image stream that is used for OLM catalogs is not automatically amended with override information from the ICSP on the management cluster. ++ +.. If the OLM catalogs are properly mirrored to an internal registry by using the original name and tag, add the `hypershift.openshift.io/olm-catalogs-is-registry-overrides` annotation to the `HostedCluster` resource. The format is `"sr1=dr1,sr2=dr2"`, where the source registry string is a key and the destination registry is a value. + +.. To bypass the OLM catalog image stream mechanism, use the following four annotations on the `HostedCluster` resource to directly specify the addresses of the four images to use for OLM Operator catalogs: + +** `hypershift.openshift.io/certified-operators-catalog-image` +** `hypershift.openshift.io/community-operators-catalog-image` +** `hypershift.openshift.io/redhat-marketplace-catalog-image` +** `hypershift.openshift.io/redhat-operators-catalog-image` + +In this case, the image stream is not created, and you must update the value of the annotations when the internal mirror is refreshed to pull in Operator updates. + +.Next steps + +Deploy the {mce-short} by completing the steps in _Deploying {mce-short} for a disconnected installation of {hcp}_. 
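+
+For reference, a `HostedCluster` resource that uses the four annotations to bypass the image stream mechanism might resemble the following sketch. The registry host and catalog tags are placeholders for your own mirrored catalogs:
+
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1beta1
+kind: HostedCluster
+metadata:
+  name: hosted
+  namespace: clusters
+  annotations:
+    hypershift.openshift.io/certified-operators-catalog-image: registry.example.com:5000/redhat/certified-operator-index:v4.17
+    hypershift.openshift.io/community-operators-catalog-image: registry.example.com:5000/redhat/community-operator-index:v4.17
+    hypershift.openshift.io/redhat-marketplace-catalog-image: registry.example.com:5000/redhat/redhat-marketplace-index:v4.17
+    hypershift.openshift.io/redhat-operators-catalog-image: registry.example.com:5000/redhat/redhat-operator-index:v4.17
+# ...
+----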
\ No newline at end of file
diff --git a/modules/hcp-dc-bm-arch.adoc b/modules/hcp-dc-bm-arch.adoc
new file mode 100644
index 0000000000..a39b669d11
--- /dev/null
+++ b/modules/hcp-dc-bm-arch.adoc
@@ -0,0 +1,70 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="hcp-dc-bm-arch_{context}"]
+= Disconnected environment architecture for bare metal
+
+The following diagram illustrates an example architecture of a disconnected environment:
+
+image::489_RHACM_HyperShift_on_bare_metal_1223.png[Disconnected architecture diagram]
+
+. Configure infrastructure services, including the registry certificate deployment with TLS support, web server, and DNS, to ensure that the disconnected deployment works.
+. Create a config map in the `openshift-config` namespace. In this example, the config map is named `registry-config`. The content of the config map is the registry CA certificate. The data field of the config map must contain the following key/value:
+
+* Key: `<registry_dns_domain_name>..<port>`, for example, `registry.hypershiftdomain.lab..5000:`. Ensure that you place `..` after the registry DNS domain name when you specify a port.
+* Value: The certificate content
++
+For more information about creating a config map, see _Configuring TLS certificates for a disconnected installation of {hcp}_.
+. Modify the `images.config.openshift.io` custom resource (CR) specification and add a new field named `additionalTrustedCA` with a value of `name: registry-config`.
+. Create a config map that contains two data fields. One field contains the `registries.conf` file in `RAW` format, and the other field contains the registry CA and is named `ca-bundle.crt`. The config map belongs to the `multicluster-engine` namespace, and the config map name is referenced in other objects. For an example of a config map, see the following sample configuration:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: custom-registries
+  namespace: multicluster-engine
+  labels:
+    app: assisted-service
+data:
+  ca-bundle.crt: |
+    -----BEGIN CERTIFICATE-----
+    # ...
+    -----END CERTIFICATE-----
+  registries.conf: |
+    unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
+
+    [[registry]]
+    prefix = ""
+    location = "registry.redhat.io/openshift4"
+    mirror-by-digest-only = true
+
+    [[registry.mirror]]
+      location = "registry.ocp-edge-cluster-0.qe.lab.redhat.com:5000/openshift4"
+
+    [[registry]]
+    prefix = ""
+    location = "registry.redhat.io/rhacm2"
+    mirror-by-digest-only = true
+# ...
+# ...
+----
+
+. In the {mce-short} namespace, create the `multiclusterengine` CR, which enables both the Agent and `hypershift-addon` add-ons. The {mce-short} namespace must contain the config maps to modify behavior in a disconnected deployment. The namespace also contains the `multicluster-engine`, `assisted-service`, and `hypershift-addon-manager` pods.
+. Create the following objects that are necessary to deploy the hosted cluster:
+
+** Secrets: Secrets contain the pull secret, SSH key, and etcd encryption key.
+** Config map: The config map contains the CA certificate of the private registry.
+** `HostedCluster`: The `HostedCluster` resource defines the configuration of the cluster that the user intends to create.
+** `NodePool`: The `NodePool` resource identifies the node pool that references the machines to use for the data plane.
+
+. After you create the hosted cluster objects, the HyperShift Operator establishes the `HostedControlPlane` namespace to accommodate control plane pods. The namespace also hosts components such as Agents, bare metal hosts (BMHs), and the `InfraEnv` resource. Later, you create the `InfraEnv` resource, and after ISO creation, you create the BMHs and their secrets that contain baseboard management controller (BMC) credentials.
+
+. The Metal3 Operator in the `openshift-machine-api` namespace inspects the new BMHs. Then, the Metal3 Operator tries to connect to the BMCs to start them by using the configured `LiveISO` and `RootFS` values that are specified through the `AgentServiceConfig` CR in the {mce-short} namespace.
+
+. After the worker nodes of the `HostedCluster` resource are started, an Agent container is started. This agent establishes contact with the Assisted Service, which orchestrates the actions to complete the deployment. Initially, you need to scale the `NodePool` resource to the number of worker nodes for the `HostedCluster` resource. The Assisted Service manages the remaining tasks.
+
+. At this point, wait for the deployment process to complete.
\ No newline at end of file
diff --git a/modules/hcp-dc-bm-reqs.adoc b/modules/hcp-dc-bm-reqs.adoc
new file mode 100644
index 0000000000..c175ec3846
--- /dev/null
+++ b/modules/hcp-dc-bm-reqs.adoc
@@ -0,0 +1,20 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="hcp-dc-bm-reqs_{context}"]
+= Requirements to deploy {hcp} on bare metal in a disconnected environment
+
+To configure {hcp} in a disconnected environment, you must meet the following prerequisites:
+
+- CPU: The number of CPUs provided determines how many hosted clusters can run concurrently. In general, use 16 CPUs for each node for 3 nodes. For minimal development, you can use 12 CPUs for each node for 3 nodes.
+- Memory: The amount of RAM affects how many hosted clusters can be hosted. Use 48 GB of RAM for each node. For minimal development, 18 GB of RAM might be sufficient.
+- Storage: Use SSD storage for {mce-short}.
+* Management cluster: 250 GB.
+* Registry: The storage needed depends on the number of releases, operators, and images that are hosted. An acceptable number might be 500 GB, preferably separated from the disk that hosts the hosted cluster.
+* Web server: The storage needed depends on the number of ISOs and images that are hosted. An acceptable number might be 500 GB.
+- Production: For a production environment, separate the management cluster, the registry, and the web server on different disks. This example illustrates a possible configuration for production:
+* Registry: 2 TB
+* Management cluster: 500 GB
+* Web server: 2 TB
\ No newline at end of file
diff --git a/modules/hcp-dc-extract.adoc b/modules/hcp-dc-extract.adoc
new file mode 100644
index 0000000000..33c4e49d0c
--- /dev/null
+++ b/modules/hcp-dc-extract.adoc
@@ -0,0 +1,26 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="hcp-dc-extract_{context}"]
+= Extracting the release image digest
+
+You can extract the {product-title} release image digest by using the tagged image.
+
+.Procedure
+
+* Obtain the image digest by running the following command:
++
+[source,terminal]
+----
+$ oc adm release info <tagged_image> | grep "Pull From"
+----
++
+Replace `<tagged_image>` with the tagged image for the supported {product-title} version, for example, `quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64`.
++
+.Example output
++
+----
+Pull From: quay.io/openshift-release-dev/ocp-release@sha256:69d1292f64a2b67227c5592c1a7d499c7d00376e498634ff8e1946bc9ccdddfe
+----
\ No newline at end of file
diff --git a/modules/hcp-dc-hypervisor.adoc b/modules/hcp-dc-hypervisor.adoc
new file mode 100644
index 0000000000..88ef593a41
--- /dev/null
+++ b/modules/hcp-dc-hypervisor.adoc
@@ -0,0 +1,170 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-hypervisor_{context}"]
+= Configuring the hypervisor for a disconnected installation of {hcp}
+
+The following information applies to virtual machine environments only.
+
+.Procedure
+
+. To deploy a virtual management cluster, access the required packages by entering the following command:
++
+[source,terminal]
+----
+$ sudo dnf install dnsmasq radvd vim golang podman bind-utils net-tools httpd-tools tree htop strace tmux -y
+----
+
+. Enable and start the Podman service by entering the following command:
++
+[source,terminal]
+----
+$ systemctl enable --now podman
+----
+
+. To use `kcli` to deploy the management cluster and other virtual components, install and configure the hypervisor by entering the following commands:
++
+[source,terminal]
+----
+$ sudo yum -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
+----
++
+[source,terminal]
+----
+$ sudo usermod -aG qemu,libvirt $(id -un)
+----
++
+[source,terminal]
+----
+$ sudo newgrp libvirt
+----
++
+[source,terminal]
+----
+$ sudo systemctl enable --now libvirtd
+----
++
+[source,terminal]
+----
+$ sudo dnf -y copr enable karmab/kcli
+----
++
+[source,terminal]
+----
+$ sudo dnf -y install kcli
+----
++
+[source,terminal]
+----
+$ sudo kcli create pool -p /var/lib/libvirt/images default
+----
++
+[source,terminal]
+----
+$ kcli create host kvm -H 127.0.0.1 local
+----
++
+[source,terminal]
+----
+$ sudo setfacl -m u:$(id -un):rwx /var/lib/libvirt/images
+----
++
+[source,terminal]
+----
+$ kcli create network -c 192.168.122.0/24 default
+----
+
+. Enable the network manager dispatcher to ensure that virtual machines can resolve the required domains, routes, and registries. To enable the network manager dispatcher, in the `/etc/NetworkManager/dispatcher.d/` directory, create a script named `forcedns` that contains the following content:
++
+[source,bash]
+----
+#!/bin/bash
+
+export IP="192.168.126.1" <1>
+export BASE_RESOLV_CONF="/run/NetworkManager/resolv.conf"
+
+if ! grep -q "$IP" /etc/resolv.conf; then
+  export TMP_FILE=$(mktemp /etc/forcedns_resolv.conf.XXXXXX)
+  cp $BASE_RESOLV_CONF $TMP_FILE
+  chmod --reference=$BASE_RESOLV_CONF $TMP_FILE
+  sed -i -e "s/dns.base.domain.name//" -e "s/search /& dns.base.domain.name /" -e "0,/nameserver/s/nameserver/& $IP\n&/" $TMP_FILE <2>
+  mv $TMP_FILE /etc/resolv.conf
+fi
+echo "ok"
+----
++
+<1> Modify the `IP` variable to point to the IP address of the hypervisor interface that hosts the {product-title} management cluster.
+<2> Replace `dns.base.domain.name` with the DNS base domain name.
+
+. After you create the file, add permissions by entering the following command:
++
+[source,terminal]
+----
+$ chmod 755 /etc/NetworkManager/dispatcher.d/forcedns
+----
+
+. Run the script and verify that the output returns `ok`.
+
+. Configure `ksushy` to simulate baseboard management controllers (BMCs) for the virtual machines. Enter the following commands:
++
+[source,terminal]
+----
+$ sudo dnf install python3-pyOpenSSL.noarch python3-cherrypy -y
+----
++
+[source,terminal]
+----
+$ kcli create sushy-service --ssl --ipv6 --port 9000
+----
++
+[source,terminal]
+----
+$ sudo systemctl daemon-reload
+----
++
+[source,terminal]
+----
+$ systemctl enable --now ksushy
+----
+
+. Test whether the service is correctly functioning by entering the following command:
++
+[source,terminal]
+----
+$ systemctl status ksushy
+----
+
+. If you are working in a development environment, configure the hypervisor system to allow various types of connections through different virtual networks within the environment.
++
+[NOTE]
+====
+If you are working in a production environment, you must establish proper rules for the `firewalld` service and configure SELinux policies to maintain a secure environment.
+====
+
+* For SELinux, enter the following command:
++
+[source,terminal]
+----
+$ sed -i s/^SELINUX=.*$/SELINUX=permissive/ /etc/selinux/config; setenforce 0
+----
+
+* For `firewalld`, enter the following command:
++
+[source,terminal]
+----
+$ systemctl disable --now firewalld
+----
+
+* For `libvirtd`, enter the following commands:
++
+[source,terminal]
+----
+$ systemctl restart libvirtd
+----
++
+[source,terminal]
+----
+$ systemctl enable --now libvirtd
+----
\ No newline at end of file
diff --git a/modules/hcp-dc-image-mirror.adoc b/modules/hcp-dc-image-mirror.adoc
new file mode 100644
index 0000000000..9a0a9eb127
--- /dev/null
+++ b/modules/hcp-dc-image-mirror.adoc
@@ -0,0 +1,108 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-image-mirror_{context}"]
+= Configuring image mirroring for {hcp} in a disconnected environment
+
+Image mirroring is the process of fetching images from external registries, such as `registry.redhat.io` or `quay.io`, and storing them in your private registry.
+
+The following procedures use the `oc-mirror` tool, a binary that consumes the `ImageSetConfiguration` object. In that file, you can specify the following information:
+
+* The {product-title} versions to mirror. The versions are in `quay.io`.
+* The additional Operators to mirror. Select packages individually.
+* The extra images that you want to add to the repository.
+
+.Prerequisites
+
+Ensure that the registry server is running before you start the mirroring process.
+
+.Procedure
+
+To configure image mirroring, complete the following steps:
+
+. Ensure that your `${HOME}/.docker/config.json` file is updated with the registries that you are going to mirror from and with the private registry that you plan to push the images to.
+
+. By using the following example, create an `ImageSetConfiguration` object to use for mirroring.
+Replace values as needed to match your environment:
++
+[source,yaml]
+----
+apiVersion: mirror.openshift.io/v1alpha2
+kind: ImageSetConfiguration
+storageConfig:
+  registry:
+    imageURL: registry.<dns.base.domain.name>:5000/openshift/release/metadata:latest <1>
+mirror:
+  platform:
+    channels:
+    - name: candidate-4.17
+      minVersion: 4.x.y-build <2>
+      maxVersion: 4.x.y-build <2>
+      type: ocp
+    kubeVirtContainer: true <3>
+    graph: true
+  additionalImages:
+  - name: quay.io/karmab/origin-keepalived-ipfailover:latest
+  - name: quay.io/karmab/kubectl:latest
+  - name: quay.io/karmab/haproxy:latest
+  - name: quay.io/karmab/mdns-publisher:latest
+  - name: quay.io/karmab/origin-coredns:latest
+  - name: quay.io/karmab/curl:latest
+  - name: quay.io/karmab/kcli:latest
+  - name: quay.io/user-name/trbsht:latest
+  - name: quay.io/user-name/hypershift:BMSelfManage-v4.17
+  - name: registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
+  operators:
+  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17
+    packages:
+    - name: lvms-operator
+    - name: local-storage-operator
+    - name: odf-csi-addons-operator
+    - name: odf-operator
+    - name: mcg-operator
+    - name: ocs-operator
+    - name: metallb-operator
+    - name: kubevirt-hyperconverged <4>
+----
++
+<1> Replace `<dns.base.domain.name>` with the DNS base domain name.
+<2> Replace `4.x.y-build` with the supported {product-title} version you want to use.
+<3> Set this optional flag to `true` if you want to also mirror the container disk image for the {op-system-first} boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
+<4> For deployments that use the KubeVirt provider, include this line.
+
+. Start the mirroring process by entering the following command:
++
+[source,terminal]
+----
+$ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
+----
++
+After the mirroring process is finished, you have a new folder named `oc-mirror-workspace/results-XXXXXX/`, which contains the IDMS and the catalog sources to apply on the hosted cluster.
+
+. Mirror the nightly or CI versions of {product-title} by configuring the `imagesetconfig.yaml` file as follows:
++
+[source,yaml]
+----
+apiVersion: mirror.openshift.io/v2alpha1
+kind: ImageSetConfiguration
+mirror:
+  platform:
+    graph: true
+    release: registry.ci.openshift.org/ocp/release:4.x.y-build <1>
+    kubeVirtContainer: true <2>
+# ...
+----
++
+<1> Replace `4.x.y-build` with the supported {product-title} version you want to use.
+<2> Set this optional flag to `true` if you want to also mirror the container disk image for the {op-system-first} boot image for the KubeVirt provider. This flag is available with oc-mirror v2 only.
+
+. Apply the changes to the file by entering the following command:
++
+[source,terminal]
+----
+$ oc-mirror --v2 --config imagesetconfig.yaml docker://${REGISTRY}
+----
+
+. Mirror the latest {mce-short} images by following the steps in link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/clusters/cluster_mce_overview#install-on-disconnected-networks[Install on disconnected networks].
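+
+After the mirroring completes, a quick spot check confirms that the registry serves the mirrored release content. The following command is illustrative only; the repository path reflects the default oc-mirror layout, and the registry host and release tag are placeholders:
+
+[source,terminal]
+----
+$ skopeo inspect --tls-verify=false \
+  docker://registry.example.com:5000/openshift/release-images:4.17.0-x86_64
+----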
\ No newline at end of file
diff --git a/modules/hcp-dc-infraenv.adoc b/modules/hcp-dc-infraenv.adoc
new file mode 100644
index 0000000000..3d34f87716
--- /dev/null
+++ b/modules/hcp-dc-infraenv.adoc
@@ -0,0 +1,46 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-infraenv_{context}"]
+= Creating an InfraEnv resource for the hosted cluster
+
+The `InfraEnv` resource is an Assisted Service object that includes essential details, such as the `pullSecretRef` and the `sshAuthorizedKey`. Those details are used to create the Red Hat Enterprise Linux CoreOS (RHCOS) boot image that is customized for the hosted cluster.
+
+You can host more than one `InfraEnv` resource, and each one can adopt certain types of hosts. For example, you might dedicate one `InfraEnv` resource to the hosts that have greater RAM capacity.
+
+.Procedure
+
+. Create a YAML file with the following information about the `InfraEnv` resource, replacing values as necessary:
++
+[source,yaml]
+----
+apiVersion: agent-install.openshift.io/v1beta1
+kind: InfraEnv
+metadata:
+  name: <hosted_cluster_name>
+  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> <1> <2>
+spec:
+  pullSecretRef: <3>
+    name: pull-secret
+  sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDk7ICaUE+/k4zTpxLk4+xFdHi4ZuDi5qjeF52afsNkw0w/glILHhwpL5gnp5WkRuL8GwJuZ1VqLC9EKrdmegn4MrmUlq7WTsP0VFOZFBfq2XRUxo1wrRdor2z0Bbh93ytR+ZsDbbLlGngXaMa0Vbt+z74FqlcajbHTZ6zBmTpBVq5RHtDPgKITdpE1fongp7+ZXQNBlkaavaqv8bnyrP4BWahLP4iO9/xJF9lQYboYwEEDzmnKLMW1VtCE6nJzEgWCufACTbxpNS7GvKtoHT/OVzw8ArEXhZXQUS1UY8zKsX2iXwmyhw5Sj6YboA8WICs4z+TrFP89LmxXY0j6536TQFyRz1iB4WWvCbH5n6W+ABV2e8ssJB1AmEy8QYNwpJQJNpSxzoKBjI73XxvPYYC/IjPFMySwZqrSZCkJYqQ023ySkaQxWZT7in4KeMu7eS2tC+Kn4deJ7KwwUycx8n6RHMeD8Qg9flTHCv3gmab8JKZJqN3hW1D378JuvmIX4V0= <4>
+----
++
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+<2> Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+<3> The `pullSecretRef` field refers to the secret in the same namespace as the `InfraEnv` resource that contains the pull secret.
+<4> The `sshAuthorizedKey` represents the SSH public key that is placed in the boot image. The SSH key allows access to the worker nodes as the `core` user.
+
+. Create the `InfraEnv` resource by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f 03-infraenv.yaml
+----
++
+.Example output
+----
+NAMESPACE              NAME     ISO CREATED AT
+clusters-hosted-dual   hosted   2023-09-11T15:14:10Z
+----
\ No newline at end of file
diff --git a/modules/hcp-dc-mgmt-cluster.adoc b/modules/hcp-dc-mgmt-cluster.adoc
new file mode 100644
index 0000000000..f53e148643
--- /dev/null
+++ b/modules/hcp-dc-mgmt-cluster.adoc
@@ -0,0 +1,147 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-mgmt-cluster_{context}"]
+= Setting up a management cluster for {hcp} in a disconnected environment
+
+To set up an {product-title} management cluster, you can use dev-scripts or, if your environment is based on virtual machines, the `kcli` tool. The following instructions are specific to the `kcli` tool.
+
+.Procedure
+
+. Ensure that the right networks are prepared for use in the hypervisor. The networks host both the management cluster and the hosted cluster.
Enter the following `kcli` command: ++ +[source,terminal] +---- +$ kcli create network -c 192.168.126.0/24 -P dhcp=false -P dns=false -d 2620:52:0:1306::0/64 --domain dns.base.domain.name --nodhcp dual +---- ++ +where: + +* `-c` specifies the CIDR for the network. +* `-P dhcp=false` configures the network to disable the DHCP, which is handled by the `dnsmasq` that you configured. +* `-P dns=false` configures the network to disable the DNS, which is also handled by the `dnsmasq` that you configured. +* `--domain` sets the domain to search. +* `dns.base.domain.name` is the DNS base domain name. +* `dual` is the name of the network that you are creating. + +. After the network is created, review the following output: ++ +[source,terminal] +---- +[root@hypershiftbm ~]# kcli list network +Listing Networks... ++---------+--------+---------------------+-------+------------------+------+ +| Network | Type | Cidr | Dhcp | Domain | Mode | ++---------+--------+---------------------+-------+------------------+------+ +| default | routed | 192.168.122.0/24 | True | default | nat | +| ipv4 | routed | 2620:52:0:1306::/64 | False | dns.base.domain.name | nat | +| ipv4 | routed | 192.168.125.0/24 | False | dns.base.domain.name | nat | +| ipv6 | routed | 2620:52:0:1305::/64 | False | dns.base.domain.name | nat | ++---------+--------+---------------------+-------+------------------+------+ +---- + ++ +[source,terminal] +---- +[root@hypershiftbm ~]# kcli info network ipv6 +Providing information about network ipv6... +cidr: 2620:52:0:1306::/64 +dhcp: false +domain: dns.base.domain.name +mode: nat +plan: kvirt +type: routed +---- + +. Ensure that the pull secret and `kcli` plan files are in place so that you can deploy the {product-title} management cluster: + +.. Confirm that the pull secret is in the same folder as the `kcli` plan, and that the pull secret file is named `openshift_pull.json`. + +.. Add the `kcli` plan, which contains the {product-title} definition, in the `mgmt-compact-hub-dual.yaml` file. 
+Ensure that you update the file contents to match your environment:
++
+[source,yaml]
+----
+plan: hub-dual
+force: true
+version: stable
+tag: "4.x.y-x86_64" <1>
+cluster: "hub-dual"
+dualstack: true
+domain: dns.base.domain.name
+api_ip: 192.168.126.10
+ingress_ip: 192.168.126.11
+service_networks:
+- 172.30.0.0/16
+- fd02::/112
+cluster_networks:
+- 10.132.0.0/14
+- fd01::/48
+disconnected_url: registry.dns.base.domain.name:5000
+disconnected_update: true
+disconnected_user: dummy
+disconnected_password: dummy
+disconnected_operators_version: v4.14
+disconnected_operators:
+- name: metallb-operator
+- name: lvms-operator
+  channels:
+  - name: stable-4.14
+disconnected_extra_images:
+- quay.io/user-name/trbsht:latest
+- quay.io/user-name/hypershift:BMSelfManage-v4.14-rc-v3
+- registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.10
+disk_size: 200
+extra_disks: [200]
+memory: 48000
+numcpus: 16
+ctlplanes: 3
+workers: 0
+manifests: extra-manifests
+metal3: true
+network: dual
+users_dev: developer
+users_devpassword: developer
+users_admin: admin
+users_adminpassword: admin
+metallb_pool: dual-virtual-network
+metallb_ranges:
+- 192.168.126.150-192.168.126.190
+metallb_autoassign: true
+apps:
+- users
+- lvms-operator
+- metallb-operator
+vmrules:
+- hub-bootstrap:
+    nets:
+    - name: ipv6
+      mac: aa:aa:aa:aa:10:07
+- hub-ctlplane-0:
+    nets:
+    - name: ipv6
+      mac: aa:aa:aa:aa:10:01
+- hub-ctlplane-1:
+    nets:
+    - name: ipv6
+      mac: aa:aa:aa:aa:10:02
+- hub-ctlplane-2:
+    nets:
+    - name: ipv6
+      mac: aa:aa:aa:aa:10:03
+----
++
+<1> Replace `4.x.y` with the supported {product-title} version you want to use.
+
+. To provision the management cluster, enter the following command:
++
+[source,terminal]
+----
+$ kcli create cluster openshift --pf mgmt-compact-hub-dual.yaml
+----
+
+.Next steps
+
+Configure the web server.
\ No newline at end of file
diff --git a/modules/hcp-dc-registry.adoc b/modules/hcp-dc-registry.adoc
new file mode 100644
index 0000000000..e099586207
--- /dev/null
+++ b/modules/hcp-dc-registry.adoc
@@ -0,0 +1,121 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-registry_{context}"]
+= Deploying a registry for {hcp} in a disconnected environment
+
+For development environments, deploy a small, self-hosted registry by using a Podman container. For production environments, deploy an enterprise-hosted registry, such as {quay}, Nexus, or Artifactory.
+
+.Procedure
+
+To deploy a small registry by using Podman, complete the following steps:
+
+. As a privileged user, access the `${HOME}` directory and create the following script:
++
+[source,bash]
+----
+#!/usr/bin/env bash
+
+set -euo pipefail
+
+PRIMARY_NIC=$(ls -1 /sys/class/net | grep -v podman | head -1)
+export PATH=/root/bin:$PATH
+export PULL_SECRET="/root/baremetal/hub/openshift_pull.json" <1>
+
+if [[ ! -f $PULL_SECRET ]];then
+  echo "Pull Secret not found, exiting..."
+  exit 1
+fi
+
+dnf -y install podman httpd httpd-tools jq skopeo libseccomp-devel
+export IP=$(ip -o addr show $PRIMARY_NIC | head -1 | awk '{print $4}' | cut -d'/' -f1)
+REGISTRY_NAME=registry.$(hostname --long)
+REGISTRY_USER=dummy
+REGISTRY_PASSWORD=dummy
+KEY=$(echo -n $REGISTRY_USER:$REGISTRY_PASSWORD | base64)
+echo "{\"auths\": {\"$REGISTRY_NAME:5000\": {\"auth\": \"$KEY\", \"email\": \"sample-email@domain.ltd\"}}}" > /root/disconnected_pull.json
+mv ${PULL_SECRET} /root/openshift_pull.json.old
+jq ".auths += {\"$REGISTRY_NAME:5000\": {\"auth\": \"$KEY\",\"email\": \"sample-email@domain.ltd\"}}" < /root/openshift_pull.json.old > $PULL_SECRET
+mkdir -p /opt/registry/{auth,certs,data,conf}
+cat <<EOF > /opt/registry/conf/config.yml
+version: 0.1
+log:
+  fields:
+    service: registry
+storage:
+  cache:
+    blobdescriptor: inmemory
+  filesystem:
+    rootdirectory: /var/lib/registry
+  delete:
+    enabled: true
+http:
+  addr: :5000
+  headers:
+    X-Content-Type-Options: [nosniff]
+health:
+  storagedriver:
+    enabled: true
+    interval: 10s
+    threshold: 3
+compatibility:
+  schema1:
+    enabled: true
+EOF
+openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 3650 -out /opt/registry/certs/domain.crt -subj "/C=US/ST=Madrid/L=San Bernardo/O=Karmalabs/OU=Guitar/CN=$REGISTRY_NAME" -addext "subjectAltName=DNS:$REGISTRY_NAME"
+cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
+update-ca-trust extract
+htpasswd -bBc /opt/registry/auth/htpasswd $REGISTRY_USER $REGISTRY_PASSWORD
+podman create --name registry --net host --security-opt label=disable --replace -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/conf/config.yml:/etc/docker/registry/config.yml -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" -e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v /opt/registry/certs:/certs:z -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key docker.io/library/registry:latest
+[ "$?" == "0" ] || !!
+systemctl enable --now registry
+----
++
+<1> Replace the location of the `PULL_SECRET` with the appropriate location for your setup.
+
+. Name the script file `registry.sh` and save it. When you run the script, it pulls in the following information:
++
+* The registry name, based on the hypervisor hostname
+* The necessary credentials and user access details
+
+. Adjust permissions by adding the execution flag as follows:
++
+[source,terminal]
+----
+$ chmod u+x ${HOME}/registry.sh
+----
+
+. To run the script without any parameters, enter the following command:
++
+[source,terminal]
+----
+$ ${HOME}/registry.sh
+----
++
+The script starts the server. The script uses a `systemd` service for management purposes.
+
+. If you need to manage the registry service, you can use the following commands:
++
+[source,terminal]
+----
+$ systemctl status registry
+----
++
+[source,terminal]
+----
+$ systemctl start registry
+----
++
+[source,terminal]
+----
+$ systemctl stop registry
+----
+
+The root folder for the registry is in the `/opt/registry` directory and contains the following subdirectories:
+
+* `certs` contains the TLS certificates.
+* `auth` contains the credentials.
+* `data` contains the registry images.
+* `conf` contains the registry configuration.
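+
+To confirm that the registry answers requests, you can query its v2 catalog endpoint with the credentials that the script creates. The user name and password here match the `dummy` values from the script:
+
+[source,terminal]
+----
+$ curl -u dummy:dummy https://registry.$(hostname --long):5000/v2/_catalog
+----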
\ No newline at end of file
diff --git a/modules/hcp-dc-scale-np.adoc b/modules/hcp-dc-scale-np.adoc
new file mode 100644
index 0000000000..c85d81e73b
--- /dev/null
+++ b/modules/hcp-dc-scale-np.adoc
@@ -0,0 +1,47 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-scale-np_{context}"]
+= Scaling up the node pool
+
+After you create the bare metal hosts, their statuses change from `Registering` to `Provisioning` to `Provisioned`. The nodes boot with the agent `LiveISO` image and run a default pod that is named `agent`. That agent is responsible for receiving instructions from the Assisted Service Operator to install the {product-title} payload.
+
+.Procedure
+
+. To scale up the node pool, enter the following command:
++
+[source,terminal]
+----
+$ oc -n <hosted_cluster_namespace> scale nodepool <hosted_cluster_name> --replicas 3
+----
++
+where:
+
+* `<hosted_cluster_namespace>` is the name of the hosted cluster namespace.
+* `<hosted_cluster_name>` is the name of the hosted cluster.
+
+. After the scaling process is complete, notice that the agents are assigned to the hosted cluster, for example, in the output of the `oc get agent -A` command:
++
+.Example output
+[source,terminal]
+----
+NAMESPACE         NAME                                   CLUSTER   APPROVED   ROLE          STAGE
+clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0411   hosted    true       auto-assign
+clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0412   hosted    true       auto-assign
+clusters-hosted   aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa0413   hosted    true       auto-assign
+----
+
+. Also notice that the node pool replicas are set, for example, in the output of the `oc get nodepool -A` command:
++
+.Example output
+[source,terminal]
+----
+NAMESPACE   NAME     CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION        UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
+clusters    hosted   hosted    3                               False         False        4.x.y-x86_64                                      Minimum availability requires 3 replicas, current 0 available
+----
++
+Replace `4.x.y` with the supported {product-title} version that you want to use.
+
+. Wait until the nodes join the cluster. During the process, the agents provide updates on their stage and status.
\ No newline at end of file
diff --git a/modules/hcp-dc-tls-hosted.adoc b/modules/hcp-dc-tls-hosted.adoc
new file mode 100644
index 0000000000..c1158e0201
--- /dev/null
+++ b/modules/hcp-dc-tls-hosted.adoc
@@ -0,0 +1,50 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-tls-hosted_{context}"]
+= Adding the registry CA to the worker nodes for the hosted cluster
+
+So that the data plane workers in the hosted cluster can retrieve images from the private registry, you must add the registry CA to the worker nodes.
+
+.Procedure
+
+. In the `HostedCluster` object, set the `hc.spec.additionalTrustBundle` field by adding the following specification:
++
+[source,yaml]
+----
+spec:
+  additionalTrustBundle:
+    name: user-ca-bundle <1>
+----
++
+<1> The `user-ca-bundle` entry is a config map that you create in the next step.
+
+. In the same namespace where the `HostedCluster` object is created, create the `user-ca-bundle` config map.
+The config map resembles the following example:
++
+[source,yaml]
+----
+apiVersion: v1
+data:
+  ca-bundle.crt: |
+    // Registry1 CA
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+
+    // Registry2 CA
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+
+    // Registry3 CA
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+
+kind: ConfigMap
+metadata:
+  name: user-ca-bundle
+  namespace: <hosted_cluster_namespace> <1>
+----
++
+<1> Specify the namespace where the `HostedCluster` object is created.
\ No newline at end of file
diff --git a/modules/hcp-dc-tls-mgmt.adoc b/modules/hcp-dc-tls-mgmt.adoc
new file mode 100644
index 0000000000..fbd1e83256
--- /dev/null
+++ b/modules/hcp-dc-tls-mgmt.adoc
@@ -0,0 +1,51 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-tls-mgmt_{context}"]
+= Adding the registry CA to the management cluster
+
+To add the registry CA to the management cluster, complete the following steps.
+
+.Procedure
+
+. Create a config map that resembles the following example:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: <config_map_name> <1>
+  namespace: <config_map_namespace> <2>
+data: <3>
+  <registry_name>..<port>: | <4>
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+  <registry_name>..<port>: |
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+  <registry_name>..<port>: |
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+----
++
+<1> Specify the name of the config map.
+<2> Specify the namespace for the config map. The cluster-wide image configuration expects the config map in the `openshift-config` namespace.
+<3> In the `data` field, specify the registry names and the registry certificate content. Replace `<port>` with the port where the registry server is running; for example, `5000`. Because a colon is not a valid character in a config map key, separate the registry name from the port with two dots (`..`).
+<4> Ensure that the data in the config map is defined by using `|` only instead of other methods, such as `| -`. If you use other methods, issues can occur when the pod reads the certificates.
+
+. Patch the cluster-wide object, `image.config.openshift.io`, to include the following specification:
++
+[source,yaml]
+----
+spec:
+  additionalTrustedCA:
+    name: registry-config
+----
++
+As a result of this patch, the control plane nodes can retrieve images from the private registry and the HyperShift Operator can extract the {product-title} payload for hosted cluster deployments.
++
+The process to patch the object might take several minutes to be completed.
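++
+For example, if you named the config map `registry-config`, you can apply the patch by entering the following command:
++
+[source,terminal]
+----
+$ oc patch image.config.openshift.io/cluster --type=merge \
+  -p '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}'
+----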
\ No newline at end of file
diff --git a/modules/hcp-dc-usr-wkld.adoc b/modules/hcp-dc-usr-wkld.adoc
new file mode 100644
index 0000000000..dd1ab8ab6f
--- /dev/null
+++ b/modules/hcp-dc-usr-wkld.adoc
@@ -0,0 +1,46 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-dc-monitor.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-usr-wkld_{context}"]
+= Resolving user workload monitoring issues
+
+If you installed {mce-short} on {product-title} clusters that are not connected to the internet, the user workload monitoring feature of the HyperShift Operator fails with an error. To view the error, enter the following command:
+
+[source,terminal]
+----
+$ oc get events -n hypershift
+----
+
+.Example error
+[source,terminal]
+----
+LAST SEEN   TYPE      REASON           OBJECT                MESSAGE
+4m46s       Warning   ReconcileError   deployment/operator   Failed to ensure UWM telemetry remote write: cannot get telemeter client secret: Secret "telemeter-client" not found
+----
+
+To resolve the error, you must disable the user workload monitoring option by creating a config map in the `local-cluster` namespace. You can create the config map either before or after you enable the add-on. The add-on agent reconfigures the HyperShift Operator.
+
+.Procedure
+
+. Create the following config map:
++
+[source,yaml]
+----
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: hypershift-operator-install-flags
+  namespace: local-cluster
+data:
+  installFlagsToAdd: ""
+  installFlagsToRemove: "--enable-uwm-telemetry-remote-write"
+----
+
+. Apply the config map by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f <filename>.yaml
+----
\ No newline at end of file
diff --git a/modules/hcp-dc-verify.adoc b/modules/hcp-dc-verify.adoc
new file mode 100644
index 0000000000..7924eababe
--- /dev/null
+++ b/modules/hcp-dc-verify.adoc
@@ -0,0 +1,47 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-dc-monitor.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-verify_{context}"]
+= Verifying the status of the hosted control plane feature
+
+The hosted control plane feature is enabled by default.
+
+.Procedure
+
+. If the feature is disabled and you want to enable it, enter the following command. Replace `<multiclusterengine>` with the name of your {mce-short} instance:
++
+[source,terminal]
+----
+$ oc patch mce <multiclusterengine> --type=merge -p '{"spec":{"overrides":{"components":[{"name":"hypershift","enabled": true}]}}}'
+----
++
+When you enable the feature, the `hypershift-addon` managed cluster add-on is installed in the `local-cluster` managed cluster, and the add-on agent installs the HyperShift Operator on the {mce-short} hub cluster.
+
+. Confirm that the `hypershift-addon` managed cluster add-on is installed by entering the following command:
++
+[source,terminal]
+----
+$ oc get managedclusteraddons -n local-cluster hypershift-addon
+----
++
+.Example output
+[source,terminal]
+----
+NAME               AVAILABLE   DEGRADED   PROGRESSING
+hypershift-addon   True        False
+----
+
+. To avoid a timeout during this process, enter the following commands:
++
+[source,terminal]
+----
+$ oc wait --for=condition=Degraded=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m
+----
++
+[source,terminal]
+----
+$ oc wait --for=condition=Available=True managedclusteraddons/hypershift-addon -n local-cluster --timeout=5m
+----
++
+When the process is complete, the `hypershift-addon` managed cluster add-on and the HyperShift Operator are installed, and the `local-cluster` managed cluster is available to host and manage hosted clusters.
\ No newline at end of file
diff --git a/modules/hcp-dc-web-server.adoc b/modules/hcp-dc-web-server.adoc
new file mode 100644
index 0000000000..0098500d2b
--- /dev/null
+++ b/modules/hcp-dc-web-server.adoc
@@ -0,0 +1,48 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-dc-web-server_{context}"]
+= Configuring the web server for {hcp} in a disconnected environment
+
+You need to configure an additional web server to host the {op-system-first} images that are associated with the {product-title} release that you are deploying as a hosted cluster.
+
+.Procedure
+
+To configure the web server, complete the following steps:
+
+. Extract the `openshift-install` binary from the {product-title} release that you want to use by entering the following command:
++
+[source,terminal]
+----
+$ oc adm -a ${LOCAL_SECRET_JSON} release extract --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"
+----
+
+. Run the following script. The script creates a folder in the `/opt/srv` directory. The folder contains the {op-system} images to provision the worker nodes.
++
+[source,bash]
+----
+#!/bin/bash
+
+WEBSRV_FOLDER=/opt/srv
+ROOTFS_IMG_URL="$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.pxe.rootfs.location')" <1>
+LIVE_ISO_URL="$(./openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.metal.formats.iso.disk.location')" <2>
+
+mkdir -p ${WEBSRV_FOLDER}/images
+curl -Lk ${ROOTFS_IMG_URL} -o ${WEBSRV_FOLDER}/images/${ROOTFS_IMG_URL##*/}
+curl -Lk ${LIVE_ISO_URL} -o ${WEBSRV_FOLDER}/images/${LIVE_ISO_URL##*/}
+chmod -R 755 ${WEBSRV_FOLDER}/*
+
+## Run the web server only if it is not already running
+podman ps --noheading | grep -q websrv-ai
+if [[ $? != 0 ]];then
+    echo "Launching web server pod..."
+    /usr/bin/podman run --name websrv-ai --net host -v /opt/srv:/usr/local/apache2/htdocs:z quay.io/alosadag/httpd:p8080
+fi
+----
++
+<1> You can find the `ROOTFS_IMG_URL` value on the OpenShift CI Release page.
+<2> You can find the `LIVE_ISO_URL` value on the OpenShift CI Release page.
+
+After the download is completed, a container runs to host the images on a web server. The container uses a variation of the official HTTPd image, which also enables it to work with IPv6 networks.
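+
+.Verification
+To confirm that the web server is serving the images, you can send a request from the hypervisor. The following check is an example that assumes port `8080`, which the container image exposes:
+
+[source,terminal]
+----
+$ curl -I http://$(hostname --long):8080/images/
+----
+
+If the web server is running, the command returns HTTP response headers.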
\ No newline at end of file
diff --git a/modules/hcp-hc-objects.adoc b/modules/hcp-hc-objects.adoc
new file mode 100644
index 0000000000..92007cc45f
--- /dev/null
+++ b/modules/hcp-hc-objects.adoc
@@ -0,0 +1,290 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-hc-objects_{context}"]
+= Deploying hosted cluster objects
+
+Typically, the HyperShift Operator creates the `HostedControlPlane` namespace. However, in this case, you want to include all the objects before the HyperShift Operator begins to reconcile the `HostedCluster` object. Then, when the Operator starts the reconciliation process, it can find all of the objects in place.
+
+.Procedure
+
+. Create a YAML file with the following information about the namespaces:
++
+[source,yaml]
+----
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  creationTimestamp: null
+  name: <hosted_cluster_namespace>-<hosted_cluster_name> <1>
+spec: {}
+status: {}
+---
+apiVersion: v1
+kind: Namespace
+metadata:
+  creationTimestamp: null
+  name: <hosted_cluster_namespace> <2>
+spec: {}
+status: {}
+----
++
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+<2> Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+
+. Create a YAML file with the following information about the config maps and secrets to include in the `HostedCluster` deployment:
++
+[source,yaml]
+----
+---
+apiVersion: v1
+data:
+  ca-bundle.crt: |
+    -----BEGIN CERTIFICATE-----
+    -----END CERTIFICATE-----
+kind: ConfigMap
+metadata:
+  name: user-ca-bundle
+  namespace: <hosted_cluster_namespace> <1>
+---
+apiVersion: v1
+data:
+  .dockerconfigjson: xxxxxxxxx
+kind: Secret
+metadata:
+  creationTimestamp: null
+  name: <hosted_cluster_name>-pull-secret <2>
+  namespace: <hosted_cluster_namespace> <1>
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: sshkey-cluster-<hosted_cluster_name> <2>
+  namespace: <hosted_cluster_namespace> <1>
+stringData:
+  id_rsa.pub: ssh-rsa xxxxxxxxx
+---
+apiVersion: v1
+data:
+  key: nTPtVBEt03owkrKhIdmSW8jrWRxU57KO/fnZa8oaG0Y=
+kind: Secret
+metadata:
+  creationTimestamp: null
+  name: <hosted_cluster_name>-etcd-encryption-key <2>
+  namespace: <hosted_cluster_namespace> <1>
+type: Opaque
+----
++
+<1> Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+<2> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+
+. Create a YAML file that contains the RBAC roles so that Assisted Service agents can be in the same `HostedControlPlane` namespace as the hosted control plane and still be managed by the cluster API:
++
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  creationTimestamp: null
+  name: capi-provider-role
+  namespace: <hosted_cluster_namespace>-<hosted_cluster_name> <1> <2>
+rules:
+- apiGroups:
+  - agent-install.openshift.io
+  resources:
+  - agents
+  verbs:
+  - '*'
+----
++
+<1> Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+<2> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+
+. Create a YAML file with information about the `HostedCluster` object, replacing values as necessary:
++
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1beta1
+kind: HostedCluster
+metadata:
+  name: <hosted_cluster_name> <1>
+  namespace: <hosted_cluster_namespace> <2>
+spec:
+  additionalTrustBundle:
+    name: "user-ca-bundle"
+  olmCatalogPlacement: guest
+  imageContentSources: <3>
+  - source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
+    mirrors:
+    - registry.<dns.base.domain.name>:5000/openshift/release <4>
+  - source: quay.io/openshift-release-dev/ocp-release
+    mirrors:
+    - registry.<dns.base.domain.name>:5000/openshift/release-images <4>
+  - mirrors:
+  ...
+  ...
+  autoscaling: {}
+  controllerAvailabilityPolicy: SingleReplica
+  dns:
+    baseDomain: <dns.base.domain.name> <4>
+  etcd:
+    managed:
+      storage:
+        persistentVolume:
+          size: 8Gi
+        restoreSnapshotURL: null
+        type: PersistentVolume
+    managementType: Managed
+  fips: false
+  networking:
+    clusterNetwork:
+    - cidr: 10.132.0.0/14
+    - cidr: fd01::/48
+    networkType: OVNKubernetes
+    serviceNetwork:
+    - cidr: 172.31.0.0/16
+    - cidr: fd02::/112
+  platform:
+    agent:
+      agentNamespace: <hosted_cluster_namespace>-<hosted_cluster_name> <1> <2>
+    type: Agent
+  pullSecret:
+    name: <hosted_cluster_name>-pull-secret <1>
+  release:
+    image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 <4> <5>
+  secretEncryption:
+    aescbc:
+      activeKey:
+        name: <hosted_cluster_name>-etcd-encryption-key <1>
+    type: aescbc
+  services:
+  - service: APIServer
+    servicePublishingStrategy:
+      type: LoadBalancer
+  - service: OAuthServer
+    servicePublishingStrategy:
+      type: Route
+  - service: OIDC
+    servicePublishingStrategy:
+      type: Route
+  - service: Konnectivity
+    servicePublishingStrategy:
+      type: Route
+  - service: Ignition
+    servicePublishingStrategy:
+      type: Route
+  sshKey:
+    name: sshkey-cluster-<hosted_cluster_name> <1>
+status:
+  controlPlaneEndpoint:
+    host: ""
+    port: 0
+----
++
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+<2> Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+<3> The `imageContentSources` section contains mirror references for user workloads within the hosted cluster.
+<4> Replace `<dns.base.domain.name>` with the DNS base domain name.
+<5> Replace `4.x.y` with the supported {product-title} version you want to use.
+
+. Add an annotation in the `HostedCluster` object that points to the HyperShift Operator release in the {product-title} release:
+
+.. Obtain the image payload by entering the following command:
++
+[source,terminal]
+----
+$ oc adm release info registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-release:4.x.y-x86_64 | grep hypershift
+----
++
+where `<dns.base.domain.name>` is the DNS base domain name and `4.x.y` is the supported {product-title} version that you want to use.
++
+.Example output
+[source,terminal]
+----
+hypershift        sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
+----
+
+.. By using the {product-title} Images namespace, check the digest by entering the following command:
++
+[source,terminal]
+----
+$ podman pull registry.<dns.base.domain.name>:5000/openshift-release-dev/ocp-v4.0-art-dev@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
+----
++
+where `<dns.base.domain.name>` is the DNS base domain name.
++
+.Example output
+[source,terminal]
+----
+podman pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8
+Trying to pull registry.dns.base.domain.name:5000/openshift/release@sha256:31149e3e5f8c5e5b5b100ff2d89975cf5f7a73801b2c06c639bf6648766117f8...
+Getting image source signatures
+Copying blob d8190195889e skipped: already exists
+Copying blob c71d2589fba7 skipped: already exists
+Copying blob d4dc6e74b6ce skipped: already exists
+Copying blob 97da74cc6d8f skipped: already exists
+Copying blob b70007a560c9 done
+Copying config 3a62961e6e done
+Writing manifest to image destination
+Storing signatures
+3a62961e6ed6edab46d5ec8429ff1f41d6bb68de51271f037c6cb8941a007fde
+----
++
+The release image that is set in the `HostedCluster` object must use the digest rather than the tag; for example, `quay.io/openshift-release-dev/ocp-release@sha256:e3ba11bd1e5e8ea5a0b36a75791c90f29afb0fdbe4125be4e48f69c76a5c47a0`.
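++
+If you need to resolve a tag to its digest, you can inspect the image in your mirror registry. The following command is one way to do so; it assumes that `skopeo` is installed on the machine where you run it:
++
+[source,terminal]
+----
+$ skopeo inspect docker://registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 | jq -r .Digest
+----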
+
+. Create all of the objects that you defined in the YAML files by concatenating them into a file and applying them against the management cluster. To do so, enter the following command:
++
+[source,terminal]
+----
+$ oc apply -f 01-4.14-hosted_cluster-nodeport.yaml
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                                  READY   STATUS    RESTARTS   AGE
+capi-provider-5b57dbd6d5-pxlqc                        1/1     Running   0          3m57s
+catalog-operator-9694884dd-m7zzv                      2/2     Running   0          93s
+cluster-api-f98b9467c-9hfrq                           1/1     Running   0          3m57s
+cluster-autoscaler-d7f95dd5-d8m5d                     1/1     Running   0          93s
+cluster-image-registry-operator-5ff5944b4b-648ht      1/2     Running   0          93s
+cluster-network-operator-77b896ddc-wpkq8              1/1     Running   0          94s
+cluster-node-tuning-operator-84956cd484-4hfgf         1/1     Running   0          94s
+cluster-policy-controller-5fd8595d97-rhbwf            1/1     Running   0          95s
+cluster-storage-operator-54dcf584b5-xrnts             1/1     Running   0          93s
+cluster-version-operator-9c554b999-l22s7              1/1     Running   0          95s
+control-plane-operator-6fdc9c569-t7hr4                1/1     Running   0          3m57s
+csi-snapshot-controller-785c6dc77c-8ljmr              1/1     Running   0          77s
+csi-snapshot-controller-operator-7c6674bc5b-d9dtp     1/1     Running   0          93s
+csi-snapshot-webhook-5b8584875f-2492j                 1/1     Running   0          77s
+dns-operator-6874b577f-9tc6b                          1/1     Running   0          94s
+etcd-0                                                3/3     Running   0          3m39s
+hosted-cluster-config-operator-f5cf5c464-4nmbh        1/1     Running   0          93s
+ignition-server-6b689748fc-zdqzk                      1/1     Running   0          95s
+ignition-server-proxy-54d4bb9b9b-6zkg7                1/1     Running   0          95s
+ingress-operator-6548dc758b-f9gtg                     1/2     Running   0          94s
+konnectivity-agent-7767cdc6f5-tw782                   1/1     Running   0          95s
+kube-apiserver-7b5799b6c8-9f5bp                       4/4     Running   0          3m7s
+kube-controller-manager-5465bc4dd6-zpdlk              1/1     Running   0          44s
+kube-scheduler-5dd5f78b94-bbbck                       1/1     Running   0          2m36s
+machine-approver-846c69f56-jxvfr                      1/1     Running   0          92s
+oauth-openshift-79c7bf44bf-j975g                      2/2     Running   0          62s
+olm-operator-767f9584c-4lcl2                          2/2     Running   0          93s
+openshift-apiserver-5d469778c6-pl8tj                  3/3     Running   0          2m36s
+openshift-controller-manager-6475fdff58-hl4f7         1/1     Running   0          95s
+openshift-oauth-apiserver-dbbc5cc5f-98574             2/2     Running   0          95s
+openshift-route-controller-manager-5f6997b48f-s9vdc   1/1     Running   0          95s
+packageserver-67c87d4d4f-kl7qh                        2/2     Running   0          93s
+----
++
+When the hosted cluster is available, the output looks like the following example.
++
+.Example output
+[source,terminal]
+----
+NAMESPACE   NAME          VERSION   KUBECONFIG                PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
+clusters    hosted-dual             hosted-admin-kubeconfig   Partial    True        False         The hosted control plane is available
+----
\ No newline at end of file
diff --git a/modules/hcp-metallb.adoc b/modules/hcp-metallb.adoc
index 7fcab92815..efd613a4e4 100644
--- a/modules/hcp-metallb.adoc
+++ b/modules/hcp-metallb.adoc
@@ -31,6 +31,7 @@ $ oc apply -f configure-metallb.yaml
 ----
 +
 .Example output
+[source,terminal]
 ----
 metallb.metallb.io/metallb created
 ----
@@ -55,10 +56,11 @@ spec:
 +
 [source,terminal]
 ----
-oc apply -f create-ip-address-pool.yaml
+$ oc apply -f create-ip-address-pool.yaml
 ----
 +
 .Example output
+[source,terminal]
 ----
 ipaddresspool.metallb.io/metallb created
 ----
@@ -85,6 +87,7 @@ $ oc apply -f l2advertisement.yaml
 ----
 +
 .Example output
+[source,terminal]
 ----
 l2advertisement.metallb.io/metallb created
 ----
\ No newline at end of file
diff --git a/modules/hcp-monitor-cp.adoc b/modules/hcp-monitor-cp.adoc
new file mode 100644
index 0000000000..93c49b3877
--- /dev/null
+++ b/modules/hcp-monitor-cp.adoc
@@ -0,0 +1,30 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-monitor-cp_{context}"]
+= Monitoring the control plane
+
+While the deployment proceeds, you can monitor the control plane by gathering information about the following artifacts:
+
+* The HyperShift Operator
+* The `HostedControlPlane` pod
+* The bare metal hosts
+* The agents
+* The `InfraEnv` resource
+* The `HostedCluster` and `NodePool` resources
+
+.Procedure
+
+* Enter the following commands to monitor the control plane:
++
+[source,terminal]
+----
+$ export KUBECONFIG=/root/.kcli/clusters/hub-ipv4/auth/kubeconfig
+----
++
+[source,terminal]
+----
+$ watch "oc get pod -n hypershift;echo;echo;oc get pod -n clusters-hosted-ipv4;echo;echo;oc get bmh -A;echo;echo;oc get agent -A;echo;echo;oc get infraenv -A;echo;echo;oc get hostedcluster -A;echo;echo;oc get nodepool -A;echo;echo;"
+----
\ No newline at end of file
diff --git a/modules/hcp-monitor-dp.adoc b/modules/hcp-monitor-dp.adoc
new file mode 100644
index 0000000000..1308863bb5
--- /dev/null
+++ b/modules/hcp-monitor-dp.adoc
@@ -0,0 +1,29 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-monitor-dp_{context}"]
+= Monitoring the data plane
+
+While the deployment proceeds, you can monitor the data plane by gathering information about the following artifacts:
+
+* The cluster version
+* The nodes, specifically, whether the nodes joined the cluster
+* The cluster Operators
+
+.Procedure
+
+* Enter the following commands:
++
+[source,terminal]
+----
+$ oc get secret -n clusters-hosted-ipv4 admin-kubeconfig -o jsonpath='{.data.kubeconfig}' |base64 -d > /root/hc_admin_kubeconfig.yaml
+----
++
+[source,terminal]
+----
+$ export KUBECONFIG=/root/hc_admin_kubeconfig.yaml
+----
++
+[source,terminal]
+----
+$ watch "oc get clusterversion,nodes,co"
+----
\ No newline at end of file
diff --git a/modules/hcp-nodepool-hc.adoc b/modules/hcp-nodepool-hc.adoc
new file mode 100644
index 0000000000..456a740d7c
--- /dev/null
+++ b/modules/hcp-nodepool-hc.adoc
@@ -0,0 +1,58 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="hcp-nodepool-hc_{context}"]
+= Creating a NodePool object for the hosted cluster
+
+A `NodePool` is a scalable set of worker nodes that is associated with a hosted cluster. `NodePool` machine architectures remain consistent within a specific pool and are independent of the machine architecture of the control plane.
+
+.Procedure
+
+. Create a YAML file with the following information about the `NodePool` object, replacing values as necessary:
++
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1beta1
+kind: NodePool
+metadata:
+  creationTimestamp: null
+  name: <hosted_cluster_name> <1>
+  namespace: <hosted_cluster_namespace> <2>
+spec:
+  arch: amd64
+  clusterName: <hosted_cluster_name>
+  management:
+    autoRepair: false <3>
+    upgradeType: InPlace <4>
+  nodeDrainTimeout: 0s
+  platform:
+    type: Agent
+  release:
+    image: registry.<dns.base.domain.name>:5000/openshift/release-images:4.x.y-x86_64 <5>
+  replicas: 0
+status:
+  replicas: 0 <6>
+----
++
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+<2> Replace `<hosted_cluster_namespace>` with the name of your hosted cluster namespace.
+<3> The `autoRepair` field is set to `false` because the node is not re-created if it is removed.
+<4> The `upgradeType` is set to `InPlace`, which indicates that the same bare metal node is reused during an upgrade.
+<5> All of the nodes included in this `NodePool` are based on the following {product-title} version: `4.x.y-x86_64`. Replace `<dns.base.domain.name>` with the DNS base domain name and `4.x.y` with the supported {product-title} version that you want to use.
+<6> The `replicas` value is set to `0` so that you can scale the replicas when needed. It is important to keep the `NodePool` replicas at `0` until all steps are completed.
+
+. Create the `NodePool` object by entering the following command:
++
+[source,terminal]
+----
+$ oc apply -f 02-nodepool.yaml
+----
++
+.Example output
+[source,terminal]
+----
+NAMESPACE   NAME          CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION        UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
+clusters    hosted-dual   hosted    0                               False         False        4.x.y-x86_64
+----
\ No newline at end of file
diff --git a/modules/hcp-virt-add-networks.adoc b/modules/hcp-virt-add-networks.adoc
index 3261bee305..bdac1203ef 100644
--- a/modules/hcp-virt-add-networks.adoc
+++ b/modules/hcp-virt-add-networks.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * hosted_control_planes/hcp-deploy-disconnected.adoc
+// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="hcp-virt-add-networks_{context}"]
diff --git a/modules/hcp-virt-addl-network.adoc b/modules/hcp-virt-addl-network.adoc
index 3fdccec7fd..14351b2638 100644
--- a/modules/hcp-virt-addl-network.adoc
+++ b/modules/hcp-virt-addl-network.adoc
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * hosted_control_planes/hcp-deploy-disconnected.adoc
+// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="hcp-virt-addl-network_{context}"]
diff --git a/modules/hcp-virt-create-hc-cli.adoc b/modules/hcp-virt-create-hc-cli.adoc
index b0cef0f5b6..30952679e8 100644
--- a/modules/hcp-virt-create-hc-cli.adoc
+++ b/modules/hcp-virt-create-hc-cli.adoc
@@ -1,6 +1,7 @@
 // Module included in the following assemblies:
 //
-// * hosted_control_planes/hcp-deploy-disconnected.adoc
+// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-virt.adoc
+// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="hcp-virt-create-hc-cli_{context}"]
diff --git a/modules/hcp-virt-create-hc-console.adoc b/modules/hcp-virt-create-hc-console.adoc
index
7aeca9ff4d..aa1082bc7a 100644 --- a/modules/hcp-virt-create-hc-console.adoc +++ b/modules/hcp-virt-create-hc-console.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * hosted_control_planes/hcp-deploy-disconnected.adoc +// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc :_mod-docs-content-type: PROCEDURE [id="hcp-virt-create-hc-console_{context}"] diff --git a/modules/hcp-virt-create-hc-ext-infra.adoc b/modules/hcp-virt-create-hc-ext-infra.adoc index 8e35c36ad6..16f97ea215 100644 --- a/modules/hcp-virt-create-hc-ext-infra.adoc +++ b/modules/hcp-virt-create-hc-ext-infra.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * hosted_control_planes/hcp-deploy-disconnected.adoc +// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc :_mod-docs-content-type: PROCEDURE [id="hcp-virt-create-hc-ext-infra_{context}"] diff --git a/modules/hcp-virt-guaranteed-cpus.adoc b/modules/hcp-virt-guaranteed-cpus.adoc index a61c6b47cc..0c095e7123 100644 --- a/modules/hcp-virt-guaranteed-cpus.adoc +++ b/modules/hcp-virt-guaranteed-cpus.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * hosted_control_planes/hcp-deploy-disconnected.adoc +// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc :_mod-docs-content-type: PROCEDURE [id="hcp-virt-guaranteed-cpus_{context}"] diff --git a/modules/hcp-virt-ingress-dns.adoc b/modules/hcp-virt-ingress-dns.adoc index a35471a833..9691d11387 100644 --- a/modules/hcp-virt-ingress-dns.adoc +++ b/modules/hcp-virt-ingress-dns.adoc @@ -32,7 +32,7 @@ For the default ingress DNS to work properly, the cluster that hosts the KubeVir ---- $ oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]' ---- -+ + [NOTE] ==== When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior. diff --git a/modules/hcp-virt-sched-vms.adoc b/modules/hcp-virt-sched-vms.adoc index 11ae6a408a..9853520ce7 100644 --- a/modules/hcp-virt-sched-vms.adoc +++ b/modules/hcp-virt-sched-vms.adoc @@ -1,6 +1,6 @@ // Module included in the following assemblies: // -// * hosted_control_planes/hcp-deploy-disconnected.adoc +// * hosted_control_planes/hcp-deploy/hcp-deploy-virt.adoc :_mod-docs-content-type: PROCEDURE [id="hcp-virt-sched-vms_{context}"] diff --git a/modules/hcp-worker-hc.adoc b/modules/hcp-worker-hc.adoc new file mode 100644 index 0000000000..d8e7028c14 --- /dev/null +++ b/modules/hcp-worker-hc.adoc @@ -0,0 +1,116 @@ +// Module included in the following assemblies: +// +// * hosted_control_planes/hcp-disconnected/hcp-deploy-dc-bm.adoc + +:_mod-docs-content-type: PROCEDURE +[id="hcp-worker-hc_{context}"] += Creating worker nodes for the hosted cluster + +If you are working on a bare metal platform, creating worker nodes is crucial to ensure that the details in the `BareMetalHost` are correctly configured. + +If you are working with virtual machines, you can complete the following steps to create empty worker nodes for the Metal3 Operator to consume. To do so, you use the `kcli` tool. + +.Procedure + +. If this is not your first attempt to create worker nodes, you must first delete your previous setup. 
To do so, delete the plan by entering the following command:
++
+[source,terminal]
+----
+$ kcli delete plan <hosted_cluster_name> <1>
+----
++
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+
+.. When you are prompted to confirm whether you want to delete the plan, type `y`.
+
+.. Confirm that you see a message stating that the plan was deleted.
+
+. Create the virtual machines by entering the following commands:
+
+.. Enter the following command to create the first virtual machine:
++
+[source,terminal]
+----
+$ kcli create vm \
+  -P start=False \// <1>
+  -P uefi_legacy=true \// <2>
+  -P plan=<hosted_cluster_name> \// <3>
+  -P memory=8192 -P numcpus=16 \// <4>
+  -P disks=[200,200] \// <5>
+  -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:01\"}"] \// <6>
+  -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1101 \
+  -P name=<hosted_cluster_name>-worker0 // <7>
+----
++
+<1> Include `start=False` if you do not want the virtual machine (VM) to automatically start upon creation.
+<2> Include `uefi_legacy=true` to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
+<3> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `plan=<hosted_cluster_name>` statement indicates the plan name, which identifies a group of machines as a cluster.
+<4> Include the `memory=8192` and `numcpus=16` parameters to specify the resources for the VM, including the RAM and CPU.
+<5> Include `disks=[200,200]` to indicate that you are creating two thin-provisioned disks in the VM.
+<6> Include `nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:01"}]` to provide network details, including the network name to connect to, the type of network (`ipv4`, `ipv6`, or `dual`), and the MAC address of the primary interface.
+<7> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+
+.. Enter the following command to create the second virtual machine:
++
+[source,terminal]
+----
+$ kcli create vm \
+  -P start=False \// <1>
+  -P uefi_legacy=true \// <2>
+  -P plan=<hosted_cluster_name> \// <3>
+  -P memory=8192 -P numcpus=16 \// <4>
+  -P disks=[200,200] \// <5>
+  -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:02\"}"] \// <6>
+  -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1102 \
+  -P name=<hosted_cluster_name>-worker1 // <7>
+----
++
+<1> Include `start=False` if you do not want the virtual machine (VM) to automatically start upon creation.
+<2> Include `uefi_legacy=true` to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
+<3> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `plan=<hosted_cluster_name>` statement indicates the plan name, which identifies a group of machines as a cluster.
+<4> Include the `memory=8192` and `numcpus=16` parameters to specify the resources for the VM, including the RAM and CPU.
+<5> Include `disks=[200,200]` to indicate that you are creating two thin-provisioned disks in the VM.
+<6> Include `nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:02"}]` to provide network details, including the network name to connect to, the type of network (`ipv4`, `ipv6`, or `dual`), and the MAC address of the primary interface.
+<7> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+
+..
Enter the following command to create the third virtual machine:
++
+[source,terminal]
+----
+$ kcli create vm \
+  -P start=False \// <1>
+  -P uefi_legacy=true \// <2>
+  -P plan=<hosted_cluster_name> \// <3>
+  -P memory=8192 -P numcpus=16 \// <4>
+  -P disks=[200,200] \// <5>
+  -P nets=["{\"name\": \"<network>\", \"mac\": \"aa:aa:aa:aa:11:03\"}"] \// <6>
+  -P uuid=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaa1103 \
+  -P name=<hosted_cluster_name>-worker2 // <7>
+----
++
+<1> Include `start=False` if you do not want the virtual machine (VM) to automatically start upon creation.
+<2> Include `uefi_legacy=true` to indicate that you will use UEFI legacy boot to ensure compatibility with previous UEFI implementations.
+<3> Replace `<hosted_cluster_name>` with the name of your hosted cluster. The `plan=<hosted_cluster_name>` statement indicates the plan name, which identifies a group of machines as a cluster.
+<4> Include the `memory=8192` and `numcpus=16` parameters to specify the resources for the VM, including the RAM and CPU.
+<5> Include `disks=[200,200]` to indicate that you are creating two thin-provisioned disks in the VM.
+<6> Include `nets=[{"name": "<network>", "mac": "aa:aa:aa:aa:11:03"}]` to provide network details, including the network name to connect to, the type of network (`ipv4`, `ipv6`, or `dual`), and the MAC address of the primary interface.
+<7> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+
+. Restart the `ksushy` tool to ensure that it detects the VMs that you added. To do so, enter the following command:
++
+[source,terminal]
+----
+$ systemctl restart ksushy
+----
++
+The new VMs appear in the `kcli list vm` output in the `down` state, as shown in the following example:
++
+.Example output
+[source,terminal]
+----
++---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
+|        Name         | Status |        Ip         |                       Source                       |     Plan    | Profile |
++---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
+|    hosted-worker0   |  down  |                   |                                                    | hosted-dual |  kvirt  |
+|    hosted-worker1   |  down  |                   |                                                    | hosted-dual |  kvirt  |
+|    hosted-worker2   |  down  |                   |                                                    | hosted-dual |  kvirt  |
++---------------------+--------+-------------------+----------------------------------------------------+-------------+---------+
+----
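++
+Optionally, you can confirm that `ksushy` exposes the new VMs through its Redfish endpoint, which the Metal3 Operator later consumes. The following check is a sketch that assumes the default `ksushy` port of `9000`; adjust the port and scheme if your deployment differs:
++
+[source,terminal]
+----
+$ curl -Lk http://$(hostname --long):9000/redfish/v1/Systems
+----
\ No newline at end of file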