
Merge pull request #82518 from openshift-cherrypick-robot/cherry-pick-82438-to-enterprise-4.17

[enterprise-4.17] OSDOCS#12123: Describe the --render Usage
Authored by Servesha Dudhgaonkar on 2024-09-26 11:04:09 +05:30, committed by GitHub
3 changed files with 28 additions and 17 deletions


@@ -2411,7 +2411,7 @@ Topics:
File: hcp-sizing-guidance
- Name: Overriding resource utilization measurements
File: hcp-override-resource-util
- Name: Installing the hosted control plane command line interface
- Name: Installing the hosted control plane command-line interface
File: hcp-cli
- Name: Distributing hosted cluster workloads
File: hcp-distribute-workloads


@@ -50,24 +50,30 @@ The `.spec.recommend.match` field is intentionally left blank. In this case, thi
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f tuned-hugepages.yaml
$ oc --kubeconfig="<management_cluster_kubeconfig>" create -f tuned-hugepages.yaml <1>
----
<1> Replace `<management_cluster_kubeconfig>` with the name of your management cluster `kubeconfig` file.
. Create a `NodePool` manifest YAML file by using the `hcp` CLI, customize the upgrade type of the `NodePool`, and reference the `ConfigMap` object that you created in the `spec.tuningConfig` section. Save the manifest in a file named `hugepages-nodepool.yaml`:
+
[source,yaml]
[source,terminal]
----
NODEPOOL_NAME=hugepages-example
INSTANCE_TYPE=m5.2xlarge
NODEPOOL_REPLICAS=2
hcp create nodepool aws \
--cluster-name $CLUSTER_NAME \
--name $NODEPOOL_NAME \
--node-count $NODEPOOL_REPLICAS \
--instance-type $INSTANCE_TYPE \
--render > hugepages-nodepool.yaml
$ hcp create nodepool aws \
--cluster-name <hosted_cluster_name> \// <1>
--name <nodepool_name> \// <2>
--node-count <nodepool_replicas> \// <3>
--instance-type <instance_type> \// <4>
--render > hugepages-nodepool.yaml
----
<1> Replace `<hosted_cluster_name>` with the name of your hosted cluster.
<2> Replace `<nodepool_name>` with the name of your node pool.
<3> Replace `<nodepool_replicas>` with the number of node pool replicas, for example, `2`.
<4> Replace `<instance_type>` with the instance type, for example, `m5.2xlarge`.
+
[NOTE]
====
The `--render` flag in the `hcp create` command does not render the secrets. To render the secrets, you must use both the `--render` and the `--render-sensitive` flags in the `hcp create` command.
====
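+
For example, a sketch of the same `hcp create nodepool aws` command with both flags added, assuming that you also want the node pool secrets rendered into the same output file:
+
[source,terminal]
----
$ hcp create nodepool aws \
  --cluster-name <hosted_cluster_name> \
  --name <nodepool_name> \
  --node-count <nodepool_replicas> \
  --instance-type <instance_type> \
  --render \
  --render-sensitive > hugepages-nodepool.yaml
----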
. In the `hugepages-nodepool.yaml` file, set `.spec.management.upgradeType` to `InPlace`, and set `.spec.tuningConfig` to reference the `tuned-hugepages` `ConfigMap` object that you created.
+
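A minimal sketch of how those two fields might appear in the rendered `hugepages-nodepool.yaml` file, with all other rendered fields omitted; the list form of `spec.tuningConfig` is an assumption based on the `NodePool` API:
+
[source,yaml]
----
spec:
  management:
    upgradeType: InPlace
  tuningConfig:
  - name: tuned-hugepages
----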
@@ -97,7 +103,7 @@ To avoid the unnecessary re-creation of nodes when you apply the new `MachineCon
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" create -f hugepages-nodepool.yaml
$ oc --kubeconfig="<management_cluster_kubeconfig>" create -f hugepages-nodepool.yaml
----
.Verification
@@ -108,7 +114,7 @@ After the nodes are available, the containerized TuneD daemon calculates the req
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
----
+
.Example output
@@ -124,7 +130,7 @@ rendered 123m
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
$ oc --kubeconfig="<hosted_cluster_kubeconfig>" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
----
+
.Example output
@@ -143,7 +149,7 @@ Both of the worker nodes in the new `NodePool` have the `openshift-node-hugepage
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline
$ oc --kubeconfig="<hosted_cluster_kubeconfig>" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline
----
+
.Example output


@@ -10,6 +10,11 @@ If you have a snapshot of etcd from your hosted cluster, you can restore it. Cur
To restore an etcd snapshot, you modify the output from the `create cluster --render` command and define a `restoreSnapshotURL` value in the etcd section of the `HostedCluster` specification.
[NOTE]
====
The `--render` flag in the `hcp create` command does not render the secrets. To render the secrets, you must use both the `--render` and the `--render-sensitive` flags in the `hcp create` command.
====
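For illustration, a minimal sketch of how the etcd section of the rendered `HostedCluster` specification might look after you define the snapshot URL; the managed storage layout is an assumption based on the `HostedCluster` API, and `<etcd_snapshot_url>` is a placeholder for the location of your snapshot:

[source,yaml]
----
spec:
  etcd:
    managementType: Managed
    managed:
      storage:
        type: PersistentVolume
        restoreSnapshotURL:
        - "<etcd_snapshot_url>"
----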
.Prerequisites
You took an etcd snapshot on a hosted cluster.