// Module included in the following assemblies:
// * hosted_control_planes/hcp-machine-config.adoc
:_mod-docs-content-type: PROCEDURE
[id="hcp-configure-ntp_{context}"]
= Configuring the NTP server for hosted clusters

You can configure the Network Time Protocol (NTP) server for your hosted clusters by using Butane.

.Procedure
. Create a Butane config file, `99-worker-chrony.bu`, that includes the contents of the `chrony.conf` file. For more information about Butane, see "Creating machine configs with Butane".
+
.Example `99-worker-chrony.bu` configuration
[source,yaml,subs="attributes+"]
----
# ...
variant: openshift
version: {product-version}.0
metadata:
  name: 99-worker-chrony
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644 #<1>
    overwrite: true
    contents:
      inline: |
        pool 0.rhel.pool.ntp.org iburst #<2>
        driftfile /var/lib/chrony/drift
        makestep 1.0 3
        rtcsync
        logdir /var/log/chrony
# ...
----
<1> Specify an octal value mode for the `mode` field in the machine config file. After creating the file and applying the changes, the `mode` field is converted to a decimal value.
<2> Specify any valid, reachable time source, such as the one provided by your Dynamic Host Configuration Protocol (DHCP) server.
+
[NOTE]
====
NTP uses User Datagram Protocol (UDP) port `123` for machine-to-machine communication. If you configured an external NTP time server, you must open UDP port `123`.
====
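+
If an external firewall sits between your nodes and the external NTP time server, the following commands show one way to open the port. This is a minimal sketch that assumes a RHEL-based host running `firewalld`; adapt it to your own firewall solution.
+
[source,terminal]
----
$ sudo firewall-cmd --permanent --add-port=123/udp
$ sudo firewall-cmd --reload
----
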
. Use Butane to generate a `MachineConfig` object file, `99-worker-chrony.yaml`, that contains the configuration to deliver to the nodes. Run the following command:
+
[source,terminal]
----
$ butane 99-worker-chrony.bu -o 99-worker-chrony.yaml
----
+
.Example `99-worker-chrony.yaml` configuration
[source,yaml]
----
# Generated by Butane; do not edit
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: <machineconfig_name>
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:...
        mode: 420
        overwrite: true
        path: /example/path
----

. Add the contents of the `99-worker-chrony.yaml` file to a config map in the management cluster:
+
.Example config map
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: <configmap_name>
  namespace: <namespace> #<1>
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: <machineconfig_name>
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - contents:
              source: data:...
            mode: 420
            overwrite: true
            path: /example/path
# ...
----
<1> Replace `<namespace>` with the namespace where you created the node pool, such as `clusters`.
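+
As an alternative to writing the config map manually, you can generate it from the file that Butane produced. The following command is a sketch that assumes the `99-worker-chrony.yaml` file is in your current directory and uses the `config` data key shown in the preceding example:
+
[source,terminal]
----
$ oc create configmap <configmap_name> --from-file=config=99-worker-chrony.yaml -n <namespace>
----
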
. Apply the config map to your node pool by editing the `NodePool` resource. Run the following command:
+
[source,terminal]
----
$ oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>
----
+
.Example `NodePool` configuration
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1alpha1
kind: NodePool
metadata:
  # ...
  name: nodepool-1
  namespace: clusters
  # ...
spec:
  config:
  - name: <configmap_name> #<1>
# ...
----
<1> Replace `<configmap_name>` with the name of your config map.
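+
After the node pool rolls out the updated configuration, you can optionally confirm that `chronyd` on a compute node uses your time sources. The following check is a sketch that assumes you are logged in to the hosted cluster; replace `<node_name>` with the name of one of your compute nodes:
+
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host chronyc sources
----
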
. Add the list of your NTP servers to the `infra-env.yaml` file, which defines the `InfraEnv` custom resource (CR):
+
.Example `infra-env.yaml` file
[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
# ...
spec:
  additionalNTPSources:
  - <ntp_server> #<1>
  - <ntp_server1>
  - <ntp_server2>
# ...
----
<1> Replace `<ntp_server>` with the name of your NTP server. For more details about creating a host inventory and the `InfraEnv` CR, see "Creating a host inventory".

. Apply the `InfraEnv` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f infra-env.yaml
----

.Verification
* Check the following fields to determine the status of your host inventory:
+
** `conditions`: The standard Kubernetes conditions that indicate whether the image was created successfully.
** `isoDownloadURL`: The URL to download the Discovery Image.
** `createdTime`: The time at which the image was last created. If you modify the `InfraEnv` CR, ensure that the timestamp is updated before you download a new image.
+
Verify that your host inventory is created by running the following command:
+
[source,terminal]
----
$ oc describe infraenv <infraenv_resource_name> -n <infraenv_namespace>
----
+
[NOTE]
====
If you modify the `InfraEnv` CR, confirm that the `InfraEnv` CR has created a new Discovery Image by looking at the `createdTime` field. If you already booted hosts, boot them again with the latest Discovery Image.
====
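+
For example, the following commands show one way to read those fields directly. They are a sketch that assumes the `isoDownloadURL` and `createdTime` fields are populated in the `InfraEnv` status:
+
[source,terminal]
----
$ oc get infraenv <infraenv_resource_name> -n <infraenv_namespace> -o jsonpath='{.status.isoDownloadURL}'
$ oc get infraenv <infraenv_resource_name> -n <infraenv_namespace> -o jsonpath='{.status.createdTime}'
----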