diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 45f7e51298..5861200166 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -2411,7 +2411,7 @@ Topics:
   File: hcp-sizing-guidance
 - Name: Overriding resouce utilization measurements
   File: hcp-override-resource-util
-- Name: Installing the hosted control plane command line interface
+- Name: Installing the hosted control plane command-line interface
   File: hcp-cli
 - Name: Distributing hosted cluster workloads
   File: hcp-distribute-workloads
diff --git a/modules/advanced-node-tuning-hosted-cluster.adoc b/modules/advanced-node-tuning-hosted-cluster.adoc
index b437f82818..62f92ed3be 100644
--- a/modules/advanced-node-tuning-hosted-cluster.adoc
+++ b/modules/advanced-node-tuning-hosted-cluster.adoc
@@ -50,24 +50,30 @@ The `.spec.recommend.match` field is intentionally left blank. In this case, thi
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-hugepages.yaml
+$ oc --kubeconfig="<management_cluster_kubeconfig>" create -f tuned-hugepages.yaml <1>
 ----
+<1> Replace `<management_cluster_kubeconfig>` with the name of your management cluster `kubeconfig` file.
 
 . Create a `NodePool` manifest YAML file, customize the upgrade type of the `NodePool`, and reference the `ConfigMap` object that you created in the `spec.tuningConfig` section. Create the `NodePool` manifest and save it in a file named `hugepages-nodepool.yaml` by using the `hcp` CLI:
 +
-[source,yaml]
+[source,terminal]
 ----
-NODEPOOL_NAME=hugepages-example
-INSTANCE_TYPE=m5.2xlarge
-NODEPOOL_REPLICAS=2
-
-hcp create nodepool aws \
-  --cluster-name $CLUSTER_NAME \
-  --name $NODEPOOL_NAME \
-  --node-count $NODEPOOL_REPLICAS \
-  --instance-type $INSTANCE_TYPE \
-  --render > hugepages-nodepool.yaml
+$ hcp create nodepool aws \
+  --cluster-name <hosted_cluster_name> \// <1>
+  --name <nodepool_name> \// <2>
+  --node-count <nodepool_replicas> \// <3>
+  --instance-type <instance_type> \// <4>
+  --render > hugepages-nodepool.yaml
 ----
+<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
+<2> Replace `<nodepool_name>` with the name of your node pool.
+<3> Replace `<nodepool_replicas>` with the number of your node pool replicas, for example, `2`.
+<4> Replace `<instance_type>` with the instance type, for example, `m5.2xlarge`.
++
+[NOTE]
+====
+The `--render` flag in the `hcp create` command does not render the secrets. To render the secrets, you must use both the `--render` and the `--render-sensitive` flags in the `hcp create` command.
+====
 
 . In the `hugepages-nodepool.yaml` file, set `.spec.management.upgradeType` to `InPlace`, and set `.spec.tuningConfig` to reference the `tuned-hugepages` `ConfigMap` object that you created.
 +
@@ -97,7 +103,7 @@ To avoid the unnecessary re-creation of nodes when you apply the new `MachineCon
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f hugepages-nodepool.yaml
+$ oc --kubeconfig="<management_cluster_kubeconfig>" create -f hugepages-nodepool.yaml
 ----
 
 .Verification
@@ -108,7 +114,7 @@ After the nodes are available, the containerized TuneD daemon calculates the req
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
+$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
 ----
 +
 .Example output
@@ -124,7 +130,7 @@ rendered 123m
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
+$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
 ----
 +
 .Example output
@@ -143,7 +149,7 @@ Both of the worker nodes in the new `NodePool` have the `openshift-node-hugepage
 +
 [source,terminal]
 ----
-$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline
+$ oc --kubeconfig="<hosted_cluster_kubeconfig>" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline
 ----
 +
 .Example output
diff --git a/modules/restoring-etcd-snapshot-hosted-cluster.adoc b/modules/restoring-etcd-snapshot-hosted-cluster.adoc
index 2aee3d0264..cf52e83694 100644
--- a/modules/restoring-etcd-snapshot-hosted-cluster.adoc
+++ b/modules/restoring-etcd-snapshot-hosted-cluster.adoc
@@ -10,6 +10,11 @@ If you have a snapshot of etcd from your hosted cluster, you can restore it. Cur
 
 To restore an etcd snapshot, you modify the output from the `create cluster --render` command and define a `restoreSnapshotURL` value in the etcd section of the `HostedCluster` specification.
 
+[NOTE]
+====
+The `--render` flag in the `hcp create` command does not render the secrets. To render the secrets, you must use both the `--render` and the `--render-sensitive` flags in the `hcp create` command.
+====
+
 .Prerequisites
 
 You took an etcd snapshot on a hosted cluster.
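The docs above switch the `hcp create nodepool` example from shell variables to angle-bracket placeholders and add a note that `--render` alone skips secrets. As a minimal sketch of what a reader would actually run, the snippet below substitutes hypothetical values (`example-cluster`, `hugepages-example`, `2`, `m5.2xlarge` are illustrative, not mandated by the docs) and adds `--render-sensitive` alongside `--render` per the note. It only assembles and prints the command, since the `hcp` CLI may not be installed locally:

```shell
# Hypothetical placeholder values; replace with your own.
CLUSTER_NAME="example-cluster"
NODEPOOL_NAME="hugepages-example"
NODEPOOL_REPLICAS="2"
INSTANCE_TYPE="m5.2xlarge"

# Assemble the command string. Per the NOTE in the diff, --render-sensitive
# accompanies --render so that secrets are rendered into the manifest too.
CMD="hcp create nodepool aws \
  --cluster-name ${CLUSTER_NAME} \
  --name ${NODEPOOL_NAME} \
  --node-count ${NODEPOOL_REPLICAS} \
  --instance-type ${INSTANCE_TYPE} \
  --render --render-sensitive"

# Print the command for review; pipe to sh (or run directly) when ready.
printf '%s\n' "$CMD"
```

Redirect the output of the real invocation to `hugepages-nodepool.yaml` as the procedure describes.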