OSDOCS#10760: Add machine config for hcp
Commit 8ca54ee871 by openshift-cherrypick-robot (parent d2686cf89a)
@@ -2354,6 +2354,8 @@ Topics:
  File: hcp-getting-started
- Name: Authentication and authorization for hosted control planes
  File: hcp-authentication-authorization
- Name: Handling a machine configuration for hosted control planes
  File: hcp-machine-config
- Name: Using feature gates in a hosted cluster
  File: hcp-using-feature-gates
- Name: Updating hosted control planes
hosted_control_planes/hcp-machine-config.adoc (new file, 17 lines)
@@ -0,0 +1,17 @@
:_mod-docs-content-type: ASSEMBLY
[id="hcp-machine-config"]
include::_attributes/common-attributes.adoc[]
= Handling a machine configuration for {hcp}
:context: hcp-machine-config

toc::[]

In a standalone {product-title} cluster, a machine config pool manages a set of nodes. You can handle a machine configuration by using the `MachineConfigPool` custom resource (CR).

In {hcp}, the `MachineConfigPool` CR does not exist. A node pool contains a set of compute nodes. You can handle a machine configuration by using node pools.

include::modules/configuring-node-pools-for-hcp.adoc[leveloffset=+1]

include::modules/node-tuning-hosted-cluster.adoc[leveloffset=+1]

include::modules/sriov-operator-hosted-control-planes.adoc[leveloffset=+1]
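As a minimal sketch of how a machine configuration can be handled through a node pool, the following commands wrap a `MachineConfig` manifest in a config map in the management cluster and reference that config map from the node pool. The `clusters` namespace, the `worker-machineconfig` name, the `99-worker-example.yaml` file, and the config map key are placeholders and assumptions for illustration; follow the included module for the supported procedure.

[source,terminal]
----
# Wrap a MachineConfig manifest in a config map in the management cluster
# (file name and config map key are placeholders).
$ oc create configmap worker-machineconfig -n clusters \
    --from-file=config=99-worker-example.yaml

# Reference the config map from the node pool so that the configuration is
# rolled out to the compute nodes in that pool.
$ oc patch nodepool <node_pool_name> -n clusters --type merge \
    -p '{"spec":{"config":[{"name":"worker-machineconfig"}]}}'
----

Changing the node pool configuration in this way typically triggers a rolling update of the compute nodes in that pool, depending on the upgrade type that is set on the node pool.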
@@ -8,6 +8,5 @@ toc::[]

You can gather metrics for {hcp} by configuring metrics sets. The HyperShift Operator can create or delete monitoring dashboards in the management cluster for each hosted cluster that it manages.

//using service-level DNS for control plane services
include::modules/hosted-control-planes-metrics-sets.adoc[leveloffset=+1]
include::modules/hosted-control-planes-monitoring-dashboard.adoc[leveloffset=+1]

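As a rough sketch of the kind of configuration involved, the following commands inspect and override the metrics set on the HyperShift Operator deployment. The `METRICS_SET` variable name, the `hypershift` namespace, and the `SRE` value are assumptions, not defined by this commit; the metrics sets module describes the supported procedure and values.

[source,terminal]
----
# List the current environment of the HyperShift Operator deployment to see
# which metrics set, if any, is configured (variable name assumed).
$ oc set env deployment/operator -n hypershift --list

# Example of switching the metrics set by overriding the assumed variable.
$ oc set env deployment/operator -n hypershift METRICS_SET=SRE
----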
@@ -1,6 +1,7 @@
// Module included in the following assemblies:
//
// * updates/updating_a_cluster/updating-hosted-control-planes.adoc
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="configuring-node-pools-for-hcp_{context}"]
@@ -1,109 +0,0 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-troubleshooting.adoc

:_mod-docs-content-type: PROCEDURE
[id="debug-nodes-hcp_{context}"]
= Checking why worker nodes did not join the hosted cluster
If your control plane API endpoint is available but worker nodes did not join the hosted cluster on AWS, check the following information to troubleshoot the worker node issues.

:FeatureName: {hcp-capital} on AWS
include::snippets/technology-preview.adoc[]

.Prerequisites

* You have link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#hosting-service-cluster-configure-aws[configured the hosting cluster on AWS].
* Your control plane API endpoint is available.

.Procedure

. Address any error messages in the status of the `HostedCluster` and `NodePool` resources:

.. Check the status of the `HostedCluster` resource by running the following command:
+
[source,terminal]
----
$ oc get hc -n <hosted_cluster_namespace> <hosted_cluster_name> -o jsonpath='{.status}'
----

.. Check the status of the `NodePool` resource by running the following command:
+
[source,terminal]
----
$ oc get nodepool -n <hosted_cluster_namespace> <node_pool_name> -o jsonpath='{.status}'
----
+
If you did not find any error messages in the status of the `HostedCluster` and `NodePool` resources, proceed to the next step.

. Check if your worker machines are created by running the following commands, replacing values as necessary:
+
[source,terminal]
----
$ HC_NAMESPACE="clusters"
$ HC_NAME="cluster_name"
$ CONTROL_PLANE_NAMESPACE="${HC_NAMESPACE}-${HC_NAME}"
$ oc get machines.cluster.x-k8s.io -n $CONTROL_PLANE_NAMESPACE
$ oc get awsmachines -n $CONTROL_PLANE_NAMESPACE
----

. If worker machines do not exist, check if the `machinedeployment` and `machineset` resources are created by running the following commands:
+
[source,terminal]
----
$ oc get machinedeployment -n $CONTROL_PLANE_NAMESPACE
$ oc get machineset -n $CONTROL_PLANE_NAMESPACE
----

. If the `machinedeployment` and `machineset` resources do not exist, check the logs of the HyperShift Operator by running the following command:
+
[source,terminal]
----
$ oc logs deployment/operator -n hypershift
----

. If worker machines exist but are not provisioned in the hosted cluster, check the log of the cluster API provider by running the following command:
+
[source,terminal]
----
$ oc logs deployment/capi-provider -c manager -n $CONTROL_PLANE_NAMESPACE
----

. If worker machines exist and are provisioned in the cluster, check the system console logs to ensure that the machines initialized successfully through Ignition. Check the system console logs of every machine by using the `console-logs` utility:
+
[source,terminal]
----
$ ./bin/hypershift console-logs aws --name $HC_NAME --aws-creds ~/.aws/credentials --output-dir /tmp/console-logs
----
+
You can access the system console logs in the `/tmp/console-logs` directory. The control plane exposes the Ignition endpoint. If you see an error related to the Ignition endpoint, then the Ignition endpoint is not accessible from the worker nodes through `https`. For one way to look up the Ignition endpoint, see the sketch after this procedure.

. If worker machines are provisioned and initialized through Ignition successfully, you can extract and access the journal logs of every worker machine by creating a bastion machine. A bastion machine allows you to access worker machines by using SSH.

.. Create a bastion machine by running the following command:
+
[source,terminal]
----
$ ./bin/hypershift create bastion aws --aws-creds ~/.aws/credentials --name $CLUSTER_NAME --ssh-key-file /tmp/ssh/id_rsa.pub
----

.. Optional: If you used the `--generate-ssh` flag when creating the cluster, you can extract the public and private key for the cluster by running the following commands:
+
[source,terminal]
----
$ mkdir /tmp/ssh
$ oc get secret -n clusters ${HC_NAME}-ssh-key -o jsonpath='{ .data.id_rsa }' | base64 -d > /tmp/ssh/id_rsa
$ oc get secret -n clusters ${HC_NAME}-ssh-key -o jsonpath='{ .data.id_rsa\.pub }' | base64 -d > /tmp/ssh/id_rsa.pub
----

.. Extract the journal logs from every worker machine by running the following commands:
+
[source,terminal]
----
$ mkdir /tmp/journals
$ INFRAID="$(oc get hc -n clusters $CLUSTER_NAME -o jsonpath='{ .spec.infraID }')"
$ SSH_PRIVATE_KEY=/tmp/ssh/id_rsa
$ ./test/e2e/util/dump/copy-machine-journals.sh /tmp/journals
----
+
The journal logs are placed in the `/tmp/journals` directory in a compressed format. Check for the error that indicates why the kubelet did not join the cluster.

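The following minimal sketch shows one way to look up the Ignition endpoint that the control plane exposes and to probe basic HTTPS reachability from a network location that the worker nodes use. It assumes that the `HostedCluster` status publishes the endpoint in an `ignitionEndpoint` field; that field name, the `clusters` namespace, and the use of `curl -kI` as a reachability probe are assumptions for illustration, not part of this commit.

[source,terminal]
----
# Read the Ignition endpoint that the hosted control plane publishes in the
# HostedCluster status (field name assumed).
$ IGNITION_ENDPOINT="$(oc get hc -n clusters $HC_NAME -o jsonpath='{.status.ignitionEndpoint}')"

# Probe HTTPS reachability of the endpoint; run this from a host in the same
# network path as the worker nodes. -k skips certificate validation, -I
# requests headers only.
$ curl -kI "https://${IGNITION_ENDPOINT}"
----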
@@ -1,13 +1,12 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/using-node-tuning-operator.adoc
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="node-tuning-hosted-cluster_{context}"]
= Configuring node tuning in a hosted cluster

//# Manage node-level tuning with the Node Tuning Operator

To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In {hcp}, you can configure node tuning by creating config maps that contain `Tuned` objects and referencing those config maps in your node pools.

.Procedure
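As a rough sketch of the approach that this module describes, the following commands wrap a `Tuned` manifest in a config map in the management cluster and reference it from a node pool. The file name, config map name, config map key, and the `spec.tuningConfig` field are assumptions for illustration; follow the module's procedure for the supported steps.

[source,terminal]
----
# Wrap a Tuned manifest in a config map in the management cluster
# (tuned-hugepages.yaml and the key name are placeholders).
$ oc create configmap tuned-hugepages -n clusters \
    --from-file=tuning=tuned-hugepages.yaml

# Reference the config map from the node pool (field name assumed).
$ oc patch nodepool <node_pool_name> -n clusters --type merge \
    -p '{"spec":{"tuningConfig":[{"name":"tuned-hugepages"}]}}'
----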
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-managing.adoc
// * hosted_control_planes/hcp-troubleshooting.adoc

:_mod-docs-content-type: PROCEDURE
[id="scale-down-data-plane_{context}"]
@@ -1,7 +1,7 @@
// Module included in the following assemblies:
//
// * networking/hardware_networks/configuring-sriov-operator.adoc
// * hosted-control-planes/hcp-managing.adoc
// * hosted-control-planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="sriov-operator-hosted-control-planes_{context}"]