mirror of https://github.com/openshift/openshift-docs.git synced 2026-02-05 21:46:22 +01:00

Added importing of image stream tags

This commit is contained in:
Shubha Narayanan
2023-01-23 12:41:50 +05:30
committed by openshift-cherrypick-robot
parent db2d321e63
commit e7d4e4c91d
2 changed files with 85 additions and 7 deletions


@@ -0,0 +1,76 @@
// Module included in the following assemblies:
// * openshift_images/cluster-tasks.adoc
:_content-type: PROCEDURE
[id="images-cluster-sample-imagestream-import_{context}"]
= Configuring periodic importing of Cluster Samples Operator image stream tags

You can ensure that you always have access to the latest versions of the Cluster Samples Operator images by periodically importing the image stream tags when new versions become available.

.Procedure
. Fetch all the image streams in the `openshift` namespace by running the following command:
+
[source,terminal]
----
$ oc get imagestreams -n openshift
----
. Fetch the tags for every image stream in the `openshift` namespace by running the following command:
+
[source,terminal]
----
$ oc get is <image-stream-name> -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -n openshift
----
+
For example:
+
[source,terminal]
----
$ oc get is ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" -n openshift
----
+
.Example output
[source,terminal]
----
1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11
1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12
----
. Schedule periodic importing of images for each tag present in the image stream by running the following command:
+
[source,terminal]
----
$ oc tag <repository/image> <image-stream-name:tag> --scheduled -n openshift
----
+
For example:
+
[source,terminal]
----
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -n openshift
$ oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -n openshift
----
+
This command causes {product-title} to periodically update this particular image stream tag. The update period is a cluster-wide setting and defaults to 15 minutes.
. Verify the scheduling status of the periodic import by running the following command:
+
[source,terminal]
----
$ oc get imagestream <image-stream-name> -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -n openshift
----
+
For example:
+
[source,terminal]
----
$ oc get imagestream ubi8-openjdk-17 -o jsonpath="{range .spec.tags[*]}Tag: {.name}{'\t'}Scheduled: {.importPolicy.scheduled}{'\n'}{end}" -n openshift
----
+
.Example output
[source,terminal]
----
Tag: 1.11 Scheduled: true
Tag: 1.12 Scheduled: true
----
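The listing, scheduling, and verification steps above can be combined into a small helper script. This is a minimal sketch, not part of the documented procedure: it assumes a logged-in `oc` CLI and image streams in the `openshift` namespace, and the function names are illustrative.

```shell
# Sketch: schedule periodic imports for every tag of one image stream,
# then verify that each tag has scheduled imports enabled.
# Assumes a logged-in `oc` CLI; function names are hypothetical helpers.

schedule_imports() {
    is_name=$1
    # Emit "<tag>\t<source image>" pairs, then schedule an import for each.
    oc get is "$is_name" -n openshift \
        -o jsonpath="{range .spec.tags[*]}{.name}{'\t'}{.from.name}{'\n'}{end}" |
    while IFS=$'\t' read -r tag source; do
        [ -n "$source" ] || continue    # skip tags without a source reference
        oc tag "$source" "$is_name:$tag" --scheduled -n openshift
    done
}

all_tags_scheduled() {
    # Succeed only if every tag of the image stream reports scheduled=true.
    if oc get imagestream "$1" -n openshift \
        -o jsonpath="{range .spec.tags[*]}{.importPolicy.scheduled}{'\n'}{end}" |
        grep -qv '^true$'; then
        return 1
    fi
    return 0
}
```

With cluster access, `schedule_imports ubi8-openjdk-17 && all_tags_scheduled ubi8-openjdk-17` runs the same `oc tag --scheduled` commands shown in the steps above and then checks the result.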


@@ -171,9 +171,9 @@ After you deploy your {product-title} cluster, you can add worker nodes to scale
For installer-provisioned infrastructure clusters, you can manually or automatically scale the `MachineSet` object to match the number of available bare-metal hosts.
To add a bare-metal host, you must configure all network prerequisites, configure an associated `baremetalhost` object, then provision the worker node to the cluster. You can add a bare-metal host manually or by using the web console.
* xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#adding-bare-metal-host-to-cluster-using-web-console_managing-bare-metal-hosts[Adding worker nodes using the web console]
* xref:../scalability_and_performance/managing-bare-metal-hosts.adoc#adding-bare-metal-host-to-cluster-using-yaml_managing-bare-metal-hosts[Adding worker nodes using YAML in the web console]
@@ -191,13 +191,13 @@ For user-provisioned infrastructure clusters, you can add worker nodes by using
For clusters managed by the Assisted Installer, you can add worker nodes by using the {cluster-manager-first} console or the Assisted Installer REST API, or you can manually add worker nodes by using an ISO image and cluster Ignition config files.
* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#sno-adding-worker-nodes-to-sno-clusters_add-workers[Adding worker nodes using the OpenShift Cluster Manager]
* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#adding-worker-nodes-using-the-assisted-installer-api[Adding worker nodes using the Assisted Installer REST API]
* xref:../nodes/nodes/nodes-sno-worker-nodes.adoc#sno-adding-worker-nodes-to-single-node-clusters-manually_add-workers[Manually adding worker nodes to an SNO cluster]
=== Adding worker nodes to clusters managed by the multicluster engine for Kubernetes
For clusters managed by the multicluster engine for Kubernetes, you can add worker nodes by using the dedicated multicluster engine console.
@@ -545,7 +545,7 @@ include::modules/machineset-delete-policy.adoc[leveloffset=+2]
include::modules/nodes-scheduler-node-selectors-cluster.adoc[leveloffset=+2]
[id="post-worker-latency-profiles"]
== Improving cluster stability in high latency environments using worker latency profiles
include::snippets/worker-latency-profile-intro.adoc[]
@@ -567,7 +567,7 @@ In a production deployment, it is recommended that you deploy at least three com
For information on infrastructure nodes and which components can run on infrastructure nodes, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets].
To create an infrastructure node, you can xref:../post_installation_configuration/cluster-tasks.adoc#machineset-creating_post-install-cluster-tasks[use a machine set], xref:../post_installation_configuration/cluster-tasks.adoc#creating-an-infra-node_post-install-cluster-tasks[assign a label to the nodes], or xref:../post_installation_configuration/cluster-tasks.adoc#creating-infra-machines_post-install-cluster-tasks[use a machine config pool].
For sample machine sets that you can use with these procedures, see xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets-clouds[Creating machine sets for different clouds].
@@ -707,3 +707,5 @@ include::modules/installation-images-samples-disconnected-mirroring-assist.adoc[
include::modules/installation-restricted-network-samples.adoc[leveloffset=+2]
include::modules/installation-preparing-restricted-cluster-to-gather-support-data.adoc[leveloffset=+2]
include::modules/images-cluster-sample-imagestream-import.adoc[leveloffset=+1]