mirror of https://github.com/openshift/openshift-docs.git

CP-OSDOCS-7557-fix: manual CP
@@ -129,8 +129,10 @@ Name: Networking
Dir: microshift_networking
Distros: microshift
Topics:
- Name: Networking settings
  File: microshift-networking
- Name: About the networking plugin
  File: microshift-cni
- Name: Using networking settings
  File: microshift-networking-settings
- Name: Firewall configuration
  File: microshift-firewall
---
@@ -138,7 +140,7 @@ Name: Storage
Dir: microshift_storage
Distros: microshift
Topics:
- Name: MicroShift storage overview
- Name: About MicroShift storage
  File: index
- Name: Understanding ephemeral storage for MicroShift
  File: understanding-ephemeral-storage-microshift
@@ -157,15 +159,15 @@ Name: Running applications
Dir: microshift_running_apps
Distros: microshift
Topics:
- Name: Embedded applications on RHEL for Edge
- Name: Embedding applications on RHEL for Edge
  File: microshift-embedded-apps-on-rhel-edge
- Name: Embedding applications for offline use
  File: microshift-embed-apps-offline-use
- Name: Embedding applications tutorial
  File: microshift-embedding-apps-tutorial
- Name: Application deployment
- Name: Deploying applications
  File: microshift-applications
- Name: Operators
- Name: Using operators
  File: microshift-operators
- Name: Greenboot workload health check scripts
  File: microshift-greenboot-workload-scripts
@@ -189,7 +191,7 @@ Topics:
  File: microshift-troubleshoot-cluster
- Name: Troubleshoot updates
  File: microshift-troubleshoot-updates
- Name: Checking audit logs
- Name: Checking audit logs
  File: microshift-audit-logs
- Name: Additional information
  File: microshift-things-to-know

@@ -8,6 +8,11 @@ toc::[]

You can manually back up and restore the {product-title} database on all supported systems. The {product-title} service must be stopped and Greenboot health checks must be completed prior to any backups.

[NOTE]
====
Only {product-title} data is backed up with the following procedures. Application data is not included.
====

* On `rpm-ostree` systems, {product-title} automatically creates a backup on every start. These automatic backups are deleted and replaced with the latest backup each time the system restarts.
* If you are using an `rpm-ostree` system, restoring data is also automated. Otherwise, you must back up and restore data manually.

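A hedged sketch of the manual flow follows; it assumes the `microshift backup` subcommand and the default `/var/lib/microshift-backups` directory, so confirm both against the included modules.

[source,terminal]
----
# Stop the service before backing up; backups require MicroShift to be stopped.
$ sudo systemctl stop microshift

# Write a named backup (the target path here is an assumption).
$ sudo microshift backup /var/lib/microshift-backups/my-manual-backup

# Restart the service after the backup completes.
$ sudo systemctl start microshift
----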
@@ -23,6 +28,8 @@ include::modules/microshift-backing-up-manually.adoc[leveloffset=+1]

include::modules/microshift-restoring-data-backups.adoc[leveloffset=+1]

include::modules/microshift-service-starting.adoc[leveloffset=+1]

//additional resources for restoring-data module
[role="_additional-resources"]
.Additional resources

@@ -10,10 +10,9 @@ You can use different command-line interface (CLI) tools to build, deploy, and m

CLI tools available for use with {product-title} are the following:

* Built-in `microshift` command types
* Linux CLI tools
* Kubernetes CLI (`kubectl`)
* The {oc-first} tool with an enabled subset of commands
* Built-in `microshift` command types

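As a quick orientation, each tool family can be exercised with a harmless read-only command; these are standard invocations, shown here as a sketch.

[source,terminal]
----
# Built-in microshift command
$ microshift version

# Kubernetes CLI
$ kubectl get nodes

# OpenShift CLI (enabled subset)
$ oc get pods -A
----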
[NOTE]
====

@@ -57,3 +57,7 @@ include::modules/microshift-accessing-cluster-locally.adoc[leveloffset=+2]
include::modules/microshift-accessing-cluster-open-firewall.adoc[leveloffset=+2]

include::modules/microshift-accessing-cluster-remotely.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
* xref:../microshift_configuring/microshift-cluster-access-kubeconfig.adoc#microshift-kubeconfig-generating-remote-kcfiles_microshift-cluster-access-kubeconfig[Generating additional kubeconfig files for remote access]
@@ -1,16 +1,17 @@
:_content-type: ASSEMBLY
[id="microshift-greenboot"]
= The greenboot health check
= The Greenboot health check
include::_attributes/attributes-microshift.adoc[]
:context: microshift-greenboot

toc::[]

Greenboot is the generic health check framework for the `systemd` service on RPM-OSTree-based systems. The `microshift-greenboot` RPM and `greenboot-default-health-checks` are RPM packages installed with {product-title}. Greenboot is used to assess system health and automate a rollback to the last healthy state in the event of software trouble.
Greenboot is the generic health check framework for the `systemd` service on RPM-OSTree-based systems. The `microshift-greenboot` RPM and `greenboot-default-health-checks` are RPM packages installed with {product-title}. Greenboot is used to assess system health and automate a rollback to the last healthy state in the event of software trouble, for example:

This health check framework is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is either limited or non-existent. When health check scripts are installed and configured, health checks run every time the system starts.

Using greenboot can reduce your risk of being locked out of edge devices during updates and prevent a significant interruption of service if an update fails. When a failure is detected, the system boots into the last known working configuration using the `rpm-ostree` rollback capability.
* This health check framework is especially useful when you need to check for software problems and perform system rollbacks on edge devices where direct serviceability is either limited or non-existent.
* When health check scripts are installed and configured, health checks run every time the system starts.
* Using Greenboot can reduce your risk of being locked out of edge devices during updates and prevent a significant interruption of service if an update fails.
* When a failure is detected, the system boots into the last known working configuration using the `rpm-ostree` rollback capability.

A {product-title} application health check script is included in the `microshift-greenboot` RPM. The `greenboot-default-health-checks` RPM also includes health check scripts verifying that DNS and `ostree` services are accessible. In addition, you can create your own health check scripts for the workloads you are running. You can write one that verifies that an application has started, for example.

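To make the "verify an application has started" idea concrete, a minimal hypothetical script for `/etc/greenboot/check/required.d/` might look like the following; the file name, namespace, and deployment are placeholders, and a nonzero exit code is what signals failure to Greenboot.

[source,bash]
----
#!/bin/bash
# 50_example_app_check.sh (hypothetical name): fail the boot if the app never becomes ready.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

# Wait up to 60 seconds for the example deployment to report ready replicas.
oc rollout status deployment/example-app -n example-namespace --timeout=60s || exit 1
----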
@@ -39,6 +40,7 @@ include::modules/microshift-greenboot-prerollback-log.adoc[leveloffset=+1]

include::modules/microshift-greenboot-check-update.adoc[leveloffset=+1]

[id="additional-resources_microshift-greenboot_{context}"]
[role="_additional-resources_microshift-greenboot"]
.Additional resources
== Additional resources
* xref:../microshift_running_apps/microshift-greenboot-workload-scripts.adoc#microshift-greenboot-workload-scripts[Greenboot workload health check scripts]
@@ -10,8 +10,9 @@ To troubleshoot a failed {product-title} installation, you can run an sos report

include::modules/microshift-gathering-sos-report.adoc[leveloffset=+1]

[id="additional-resources_microshift-installing-troubleshooting_{context}"]
[role="_additional-resources"]
.Additional resources
== Additional resources
* xref:../microshift_support/microshift-sos-report.adoc#microshift-sos-report[About MicroShift sos reports]

* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/getting_the_most_from_your_support_experience/generating-an-sos-report-for-technical-support_getting-the-most-from-your-support-experience[Generating an sos report for technical support]
@@ -1,10 +1,10 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-understanding-networking.adoc

:_content-type: CONCEPT
[id="microshift-cni_{context}"]
:_content-type: ASSEMBLY
[id="microshift-cni"]
= About the OVN-Kubernetes network plugin
include::_attributes/attributes-microshift.adoc[]
:context: microshift-about-ovn-k-plugin

toc::[]

OVN-Kubernetes is the default networking solution for {product-title} deployments. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN). The OVN-Kubernetes Container Network Interface (CNI) plugin is the network plugin for the cluster. A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node. OVN configures OVS on the node to implement the declared network configuration.

@@ -104,7 +104,7 @@ OVN-Kubernetes manifests and startup logic are built into {product-title}. The s

* `/etc/NetworkManager/conf.d/microshift-nm.conf` for NetworkManager.service
* `/etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf` for ovs-vswitchd.service
* `/etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf`
* `/etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf` for ovsdb-server.service
* `/usr/bin/configure-ovs-microshift.sh` for microshift-ovs-init.service
* `/usr/bin/configure-ovs.sh` for microshift-ovs-init.service
* `/etc/crio/crio.conf.d/microshift-ovn.conf` for CRI-O service
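One hedged way to confirm that these drop-in files are in effect is to ask `systemd` to print each unit together with its drop-ins:

[source,terminal]
----
# Show the unit file plus drop-in fragments, such as the CPU-affinity override.
$ sudo systemctl cat ovs-vswitchd.service

# Verify that the OVS daemons are running.
$ sudo systemctl status ovs-vswitchd.service ovsdb-server.service
----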
@@ -29,9 +29,9 @@ include::modules/microshift-firewall-apply-settings.adoc[leveloffset=+1]

include::modules/microshift-firewall-verify-settings.adoc[leveloffset=+1]

[id="additional-resources_microshift-using-a-firewall_{context}"]
[role="_additional-resources"]
[id="additional-resources_microshift-using-a-firewall"]
.Additional resources
== Additional resources

* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_firewalls_and_packet_filters/using-and-configuring-firewalld_firewall-packet-filters[RHEL: Using and configuring firewalld]


@@ -21,8 +21,6 @@ By default, Kubernetes allocates each pod an internal IP address for application
To troubleshoot connection problems with the NodePort service, read about the known issue in the Release Notes.
====

include::modules/microshift-cni.adoc[leveloffset=+1]

include::modules/microshift-configuring-ovn.adoc[leveloffset=+1]

include::modules/microshift-restart-ovnkube-master.adoc[leveloffset=+1]
@@ -41,8 +39,7 @@ include::modules/microshift-blocking-nodeport-access.adoc[leveloffset=+1]

include::modules/microshift-mDNS.adoc[leveloffset=+1]

[id="additional-resources_microshift-understanding-networking-settings_{context}"]
[role="_additional-resources"]
[id="additional-resources_microshift-understanding-networking-settings"]
.Additional resources

== Additional resources
* xref:../microshift_release_notes/microshift-4-14-release-notes.adoc#microshift-4-14-known-issues[{product-title} {product-version} release notes --> Known issues]
@@ -147,13 +147,13 @@ Networking updates to {product-title} {product-version} include traffic flow pat
[id="microshift-4-14-traffic-flow-change"]
==== North-south traffic flow changed

The external gateway bridge and the physical device on the host are no longer connected. The north-south traffic between the network service and the OVN external switch flows from the host kernel to {product-title} through the external gateway bridge. See the xref:../microshift_networking/microshift-networking.adoc#microshift-description-connections-network-topology_microshift-networking[MicroShift networking] documentation for more information.
The external gateway bridge and the physical device on the host are no longer connected. The north-south traffic between the network service and the OVN external switch flows from the host kernel to {product-title} through the external gateway bridge. See the xref:../microshift_networking/microshift-cni.adoc#microshift-description-connections-network-topology_microshift-cni[MicroShift networking] documentation for more information.

[discrete]
[id="microshift-4-14-network-config-flags-deprecated"]
==== Network configuration flags are deprecated

The gateway bridge flag, `gatewayInterface`, and the OVS flag, `disableOVSInit`, in the networking configuration file, `/etc/microshift/ovn.yaml`, are deprecated with this release. See the xref:../microshift_networking/microshift-networking.adoc#microshift-config-OVN-K_microshift-networking[MicroShift OVN-K configuration] documentation for more information.
The gateway bridge flag, `gatewayInterface`, and the OVS flag, `disableOVSInit`, in the networking configuration file, `/etc/microshift/ovn.yaml`, are deprecated with this release. See the xref:../microshift_networking/microshift-networking-settings.adoc#microshift-configuring-ovn_microshift-networking-settings[MicroShift OVN-K configuration] documentation for more information.

[discrete]
[id="microshift-4-14-cidr-removal"]

@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="microshift-embedded-apps-on-rhel-edge"]
= Options for embedding {product-title} applications in a {op-system-ostree} image
= Options for embedding {product-title} applications in a RHEL for Edge image
include::_attributes/attributes-microshift.adoc[]
:context: microshift-embedded-apps-on-rhel-edge


@@ -18,8 +18,8 @@ include::modules/microshift-greenboot-create-health-check-script.adoc[leveloffse

include::modules/microshift-greenboot-testing-workload-script.adoc[leveloffset=+1]

[id="additional-resources_microshift-greenboot-workload-scripts"]
[id="additional-resources_microshift-greenboot-workload-scripts_{context}"]
[role="_additional-resources"]
.Additional resources
== Additional resources
* xref:../microshift_install/microshift-greenboot.adoc#microshift-greenboot[The greenboot health check]
* xref:../microshift_running_apps/microshift-applications.adoc#microshift-applying-manifests-example_applications-microshift[Auto applying manifests]

@@ -10,10 +10,10 @@ Managing storage is a distinct problem from managing compute resources. {product

include::modules/storage-persistent-storage-overview.adoc[leveloffset=+1]

[id="additional-resources_understanding-persistent-storage-microshift"]
[id="additional-resources_understanding-persistent-storage-microshift_{context}"]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.14/html/storage/understanding-persistent-storage#pv-access-modes_understanding-persistent-storage[Access modes for persistent storage]
== Additional resources
* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/{ocp-version}/html/storage/understanding-persistent-storage#pv-access-modes_understanding-persistent-storage[Access modes for persistent storage]

include::modules/storage-persistent-storage-lifecycle.adoc[leveloffset=+1]

@@ -23,11 +23,9 @@ include::modules/storage-persistent-storage-reclaim.adoc[leveloffset=+2]

include::modules/storage-persistent-storage-pv.adoc[leveloffset=+1]

ifdef::microshift[]
[role="_additional-resources"]
.Additional resources
* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_file_systems/mounting-file-systems_managing-file-systems#common-mount-options_mounting-file-systems[Common mount options]
endif::microshift[]

include::modules/storage-persistent-storage-pvc.adoc[leveloffset=+1]


@@ -9,9 +9,6 @@ toc::[]
[role="_abstract"]
{product-title} etcd is delivered as part of the {product-title} RPM. The etcd service runs as a separate process, and its lifecycle is managed automatically by {product-title}.

//:FeatureName: MicroShift
//include::snippets/microshift-tech-preview-snip.adoc[leveloffset=+1]

include::modules/microshift-observe-debug-etcd-server.adoc[leveloffset=+1]

include::modules/microshift-config-etcd.adoc[leveloffset=+1]
@@ -9,15 +9,12 @@ toc::[]
[role="_abstract"]
You can use the `sos` tool to collect troubleshooting information about a host. The `sos report` command generates a detailed report that shows all of the enabled plugins and data from the different components and applications in a system.

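A hedged example of a basic collection run follows; available flags vary by `sos` version, so check `sos report --help` on your system.

[source,terminal]
----
# Run a non-interactive collection; the archive path prints at the end of the run.
$ sudo sos report --batch
----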
//:FeatureName: MicroShift
//include::snippets/microshift-tech-preview-snip.adoc[leveloffset=+1]

include::modules/microshift-about-sos-reports.adoc[leveloffset=+1]

include::modules/microshift-gathering-sos-report.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_microshift-sos-report"]
[id="additional-resources_microshift-sos-report_{context}"]
== Additional resources
* link:https://access.redhat.com/solutions/2112[How to provide files to Red Hat Support (vmcore, rhev logcollector, sosreports, heap dumps, log files, etc.)]
* link:https://access.redhat.com/solutions/3592[What is an sos report and how to create one in {op-system-base-full}?]
@@ -6,4 +6,6 @@ include::_attributes/attributes-microshift.adoc[]

toc::[]

You can use audit logs to identify pod security violations.

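For a quick look before diving into the module below, the audit log can usually be read directly on the node; the path shown is the typical {product-title} location and should be treated as an assumption.

[source,terminal]
----
# Page through the kube-apiserver audit log on the node.
$ sudo less /var/log/kube-apiserver/audit.log
----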
include::modules/microshift-viewing-audit-logs.adoc[leveloffset=+1]
@@ -8,7 +8,7 @@ toc::[]

To troubleshoot failed data backups and restorations, check the basics first, such as data paths, storage configuration, and storage capacity.

[id="troubleshoot-backup-restore-microshift-backup-data-failed"]
[id="troubleshoot-backup-restore-microshift-backup-data-failed_{context}"]
== Backing up data failed
Data backups are automatic on `rpm-ostree` systems. If you are not using an `rpm-ostree` system and attempted to create a manual backup, the following reasons can cause the backup to fail:

@@ -19,7 +19,7 @@ Data backups are automatic on `rpm-ostree` systems. If you are not using an `rpm
* If you do not have sufficient storage for the data, the backup fails. Ensure that you have enough storage for the {product-title} data.
* If you do not have sufficient permissions, a backup can fail. Ensure you have the correct user permissions to create a backup and perform the required configurations.

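A hedged first check for the storage cause is to confirm free capacity at the backup target, assuming the default backup directory:

[source,terminal]
----
# Confirm available space on the filesystem that holds the backups.
$ df -h /var/lib/microshift-backups
----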
[id="troubleshoot-backup-restore-microshift-backup-logs"]
|
||||
[id="troubleshoot-backup-restore-microshift-backup-logs_{context}"]
|
||||
== Backup logs
|
||||
* Logs print to the console during manual backups.
|
||||
* Logs are automatically generated for `rpm-ostree` system automated backups as part of the {product-title} journal logs. You can check the logs by running the following command:
|
||||
@@ -29,11 +29,11 @@ Data backups are automatic on `rpm-ostree` systems. If you are not using an `rpm
|
||||
$ sudo journalctl -u microshift
|
||||
----
|
||||
|
||||
[id="troubleshoot-backup-restore-microshift-restore-data-failed"]
|
||||
[id="troubleshoot-backup-restore-microshift-restore-data-failed_{context}"]
|
||||
== Restoring data failed
|
||||
The restoration of data can fail for many reasons, including storage and permission issues. Mismatched data versions can cause failures when {product-title} restarts.
|
||||
|
||||
[id="troubleshoot-backup-restore-microshift-RPM-OSTree-data-restore-failed"]
|
||||
[id="troubleshoot-backup-restore-microshift-RPM-OSTree-data-restore-failed_{context}"]
|
||||
=== RPM-OSTree-based systems data restore failed
|
||||
Data restorations are automatic on `rpm-ostree` systems, but can fail, for example:
|
||||
|
||||
@@ -45,7 +45,7 @@ Data restorations are automatic on `rpm-ostree` systems, but can fail, for examp
|
||||
|
||||
** Ensure that the data you are restoring follows same versioning pattern as the update path. For example, if the destination version of {product-title} is an older version than the version of the {product-title} data you are currently using, the restoration can fail.
|
||||
|
||||
[id="troubleshoot-backup-restore-microshift-rpm-manual-restore-data-failed"]
|
||||
[id="troubleshoot-backup-restore-microshift-rpm-manual-restore-data-failed_{context}"]
|
||||
=== RPM-based manual data restore failed
|
||||
If you are using an RPM system that is not `rpm-ostree` and tried to restore a manual backup, the following reasons can cause the restoration to fail:
|
||||
|
||||
@@ -59,7 +59,7 @@ If you are using an RPM system that is not `rpm-ostree` and tried to restore a m
|
||||
* You are attempting to restore data from a newer version of {product-title}.
|
||||
** Ensure that the data you are restoring follows same versioning pattern as the update path. For example, if the destination version of {product-title} is an older version than the version of the {product-title} data you are attempting to use, the restoration can fail.
|
||||
|
||||
[id="troubleshoot-backup-restore-microshift-storage-migration-failed"]
|
||||
[id="troubleshoot-backup-restore-microshift-storage-migration-failed_{context}"]
|
||||
== Storage migration failed
|
||||
Storage migration failures are typically caused by substantial changes in custom resources (CRs) from one {product-title} to the next. If a storage migration fails, there is usually an unresolvable discrepancy between versions that requires manual review.
|
||||
|
||||
|
||||
@@ -8,7 +8,7 @@ toc::[]

Upgrades are supported on {product-title-first} beginning with the General Availability version 4.14. Supported upgrades include those from one minor version to the next in sequence, for example, from 4.14 to 4.15. Patch updates are also supported from z-stream to z-stream, for example, 4.14.1 to 4.14.2.

[id="microshift-about-updates-understanding-microshift-updates"]
[id="microshift-about-updates-understanding-microshift-updates_{context}"]
== Understanding {product-title} updates
{product-title} updates are supported on both `rpm-ostree` edge-deployed hosts and non-OSTree hosts. You can complete updates using the following methods:

@@ -20,7 +20,7 @@ Upgrades are supported on {product-title-first} beginning with the General Avail
Only `rpm-ostree` updates include automatic rollbacks.
====

[id="microshift-about-updates-rpm-ostree-updates"]
[id="microshift-about-updates-rpm-ostree-updates_{context}"]
=== RPM OSTree updates
Using the {op-system-ostree} `rpm-ostree` update path allows for automated backup and system rollback in case any part of the update fails. You must build a new `rpm-ostree` image and embed the new {product-title} version in that image. The `rpm-ostree` image can be the same version or an updated version, but the versions of {op-system-ostree} and {product-title} must be compatible.

@@ -28,11 +28,11 @@ Check following compatibility table for details:

include::snippets/microshift-rhde-compatibility-table-snip.adoc[leveloffset=+1]

[id="microshift-about-updates-rpm-updates"]
[id="microshift-about-updates-rpm-updates_{context}"]
=== Manual RPM updates
You can use the manual RPM update path to replace your existing version of {product-title}. The versions of {op-system} and {product-title} must be compatible. Ensuring system health and completing additional system backups are manual processes.

[id="microshift-about-updates-checking-version-update-path"]
[id="microshift-about-updates-checking-version-update-path_{context}"]
== Checking version update path
Before attempting an update of either {op-system-bundle} component, determine which versions of {product-title} and {op-system-ostree} or {op-system} you have installed. Plan for the versions of each that you intend to use.


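Before planning an update path, the installed versions can be captured with standard commands, for example:

[source,terminal]
----
# Report the running MicroShift version.
$ microshift version

# Report the installed RPM and the operating system release.
$ rpm -q microshift
$ cat /etc/redhat-release
----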
@@ -27,7 +27,7 @@ You can update {product-title} without reinstalling the applications you created

{product-title} updates are performed in place and do not require removal of the previous version. Data backups beyond those required for the usual functioning of your applications are also not required.

[id="microshift-update-options-rpm-ostree-updates"]
[id="microshift-update-options-rpm-ostree-updates_{context}"]
=== RPM-OSTree updates
You can update {product-title} on an `rpm-ostree` system such as {op-system-ostree} by building a new image containing the new version of {product-title}. Ensure that the version of the operating system you want to use is compatible with the new version of {product-title} that you update to.

@@ -47,13 +47,13 @@ To understand more about Greenboot, see the following documentation:
* xref:../microshift_install/microshift-greenboot.adoc#microshift-greenboot[The Greenboot health check]
* xref:../microshift_running_apps/microshift-greenboot-workload-scripts.adoc#microshift-greenboot-workload-scripts[Greenboot workload health check scripts]

[id="microshift-update-options-manual-rpm-updates"]
[id="microshift-update-options-manual-rpm-updates_{context}"]
=== Manual RPM updates
You can update {product-title} manually on a non-OSTree system such as {op-system-base-full} by downloading and updating the RPMs. To complete this update type, use the subscription manager to access the repository containing the new RPMs. To begin a manual RPM update, use the procedures in the following documentation:

* xref:../microshift_updating/microshift-update-rpms-manually.adoc#microshift-update-rpms-manually[About updating MicroShift RPMs manually]

[id="microshift-update-options-standalone-rhel-updates"]
[id="microshift-update-options-standalone-rhel-updates_{context}"]
== Standalone {op-system-ostree} updates
You can update {op-system-ostree} or {op-system} without updating {product-title}, on the condition that the two versions are compatible. Check compatibility before beginning an update. Use the {op-system-ostree} documentation specific to your update path.

@@ -62,7 +62,7 @@ You can update {op-system-ostree} or {op-system} without updating {product-title

.Additional resources
* link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/index[Managing RHEL for Edge images]

[id="microshift-update-options-simultaneous-microshift-rhel-updates"]
[id="microshift-update-options-simultaneous-microshift-rhel-updates_{context}"]
== Simultaneous {product-title} and operating system updates
You can update {op-system-ostree} or {op-system} and update {product-title} at the same time, on the condition that the versions are compatible. Check for compatibility before beginning an update. First use the {op-system-ostree} documentation specific to your update path to plan and update the operating system. Then use the {product-title} update type specific to your update path.


@@ -41,6 +41,11 @@ The `user@workstation` login is used to access the host machine remotely. The `<
[user@workstation]$ ssh <user>@$MICROSHIFT_MACHINE "sudo cat /var/lib/microshift/resources/kubeadmin/$MICROSHIFT_MACHINE/kubeconfig" > ~/.kube/config
----

[NOTE]
====
To generate `kubeconfig` files for this step, see the "Generating additional kubeconfig files for remote access" link in the additional resources section.
====

. As `user@workstation`, update the permissions on your `~/.kube/config` file by running the following command:
+
[source,terminal]

@@ -6,7 +6,7 @@
[id="microshift-building-apps-rpms_{context}"]
= Building the RPM package for the application manifests

To building your own RPMs, you must create a spec file that adds the application manifests to the RPM package. The following is an example procedure. As long as the application RPMs and other elements needed for image building are accessible to Image Builder, you can use the method that you prefer.
To build your own RPMs, you must create a spec file that adds the application manifests to the RPM package. The following is an example procedure. As long as the application RPMs and other elements needed for image building are accessible to Image Builder, you can use the method that you prefer.

.Prerequisites
* You have set up a {op-system-ostree-first} {op-system-version} build host that meets the Image Builder system requirements.
@@ -17,7 +17,7 @@ To building your own RPMs, you must create a spec file that adds the application

. In the `~/rpmbuild/SPECS` directory, create a file such as `<application_workload_manifests.spec>` using the following template:
+
.Example `.spec` file
.Example spec file
[source,terminal]
----
Name: <application_workload_manifests>

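After the spec file is filled in, the package is typically built and inspected along these lines; the paths assume the default `rpmbuild` tree and the placeholder names above.

[source,terminal]
----
# Build the binary RPM from the spec file; output lands under ~/rpmbuild/RPMS.
$ rpmbuild -bb ~/rpmbuild/SPECS/<application_workload_manifests.spec>

# List the packaged files to confirm the manifests land at the expected paths.
$ rpm -qlp ~/rpmbuild/RPMS/noarch/<application_workload_manifests>-*.rpm
----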
@@ -8,7 +8,7 @@

The following tutorial reviews the {product-title} installation steps and adds a description of the workflow for embedding applications. If you are already familiar with `rpm-ostree` systems such as {op-system-ostree-first} and {product-title}, you can go straight to the procedures.

[id="microshift-installation-workflow-review"]
[id="microshift-installation-workflow-review_{context}"]
== Installation workflow review
Embedding applications requires a similar workflow to embedding {product-title} into a {op-system-ostree} image. Reviewing those steps can help you understand the steps needed to embed an application:
//larger concept image here
@@ -23,11 +23,11 @@ Embedding applications requires a similar workflow to embedding {product-title}

. You downloaded the ISO with {product-title} embedded, prepared it for use, provisioned it, then installed it onto your edge devices.

[id="microshift-embed-app-rpms-workflow"]
[id="microshift-embed-app-rpms-workflow_{context}"]
== Embed application RPMs workflow

After you have set up a build host that meets the Image Builder requirements, you can add your application in the form of a directory of manifests to the image. After those steps, the simplest way to embed your application or workload into a new ISO is to create your own RPMs that include the manifests. Your application RPMs contain all of the configuration files describing your deployment.

The following procedures use the `rpmbuild` tool to create a `.spec` file and local repository. The `.spec` file defines how the package is built, moving your application manifests to the correct location inside the RPM package for {product-title} to pick them up. That RPM package is then embedded in the ISO.
The following procedures use the `rpmbuild` tool to create a specification file and local repository. The specification file defines how the package is built, moving your application manifests to the correct location inside the RPM package for {product-title} to pick them up. That RPM package is then embedded in the ISO.

//rpm workflow image here
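To make the local-repository half of that workflow concrete, a hedged sketch follows; the repository path and the `repo.toml` source definition are placeholders, and on newer {op-system} hosts the command may be `createrepo_c`.

[source,terminal]
----
# Turn the directory of built RPMs into a local yum repository.
$ createrepo ~/rpmbuild/RPMS

# Register the local repository as an Image Builder source described by repo.toml.
$ sudo composer-cli sources add repo.toml
----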
@@ -7,5 +7,3 @@
= Known firewall issue

* To avoid breaking traffic flows with a firewall reload or restart, execute firewall commands before starting {product-title}. The CNI driver in {product-title} makes use of iptables rules for some traffic flows, such as those using the NodePort service. The iptables rules are generated and inserted by the CNI driver, but are deleted when the firewall reloads or restarts. The absence of the iptables rules breaks traffic flows. If firewall commands have to be executed after {product-title} is running, manually restart the `ovnkube-master` pod in the `openshift-ovn-kubernetes` namespace to reset the rules controlled by the CNI driver.

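A hedged sketch of that restart; the pod name varies per cluster, so list the pods first:

[source,terminal]
----
# Find the ovnkube-master pod, then delete it so it is recreated with fresh rules.
$ oc get pods -n openshift-ovn-kubernetes
$ oc delete pod <ovnkube_master_pod_name> -n openshift-ovn-kubernetes
----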
//Revise and use the unused ki-cni-iptables-deleted procedure in release notes? Need to verify status for 4.14
@@ -18,7 +18,6 @@ $ sudo grub2-editenv - list | grep ^boot_success
----

.Example output for a successful update

[source,terminal]
----
boot_success=1

@@ -7,7 +7,7 @@
[id="microshift-greenboot-app-health-check-script_{context}"]
= How to create a health check script for your application

You can create workload or application health check scripts in the text editor of your choice using the example in this documentation. Save the scripts in the `/etc/greenboot/check/required.d` directory. When a script in the `/etc/greenboot/check/required.d` directory exits with an error, greenboot triggers a reboot in an attempt to heal the system.
You can create workload or application health check scripts in the text editor of your choice using the example in this documentation. Save the scripts in the `/etc/greenboot/check/required.d` directory. When a script in the `/etc/greenboot/check/required.d` directory exits with an error, Greenboot triggers a reboot in an attempt to heal the system.

[NOTE]
====
@@ -45,7 +45,6 @@ Choose a name prefix for your application that ensures it runs after the `40_mic
====

.Example workload health check script

[source, bash]
----
#!/bin/bash

@@ -56,7 +55,7 @@ PODS_NS_LIST=(<user_workload_namespace1> <user_workload_namespace2>)
PODS_CT_LIST=(<user_workload_namespace1_pod_count> <user_workload_namespace2_pod_count>)
# Update these two lines with at least one namespace and the pod counts that are specific to your workloads. Use the Kubernetes <namespace> where your workload is deployed.

# Set greenboot to read and execute the workload health check functions library.
# Set Greenboot to read and execute the workload health check functions library.
source /usr/share/microshift/functions/greenboot.sh

# Set the exit handler to log the exit status.

@@ -4,15 +4,15 @@

:_content-type: CONCEPT
[id="microshift-greenboot-dir-structure_{context}"]
= How greenboot uses directories to run scripts
= How Greenboot uses directories to run scripts

Health check scripts run from four `/etc/greenboot` directories. These scripts run in alphabetical order. Keep this in mind when you configure the scripts for your workloads.

When the system starts, greenboot runs the scripts in the `required.d` and `wanted.d` directories. Depending on the outcome of those scripts, greenboot continues the startup or attempts a rollback as follows:
When the system starts, Greenboot runs the scripts in the `required.d` and `wanted.d` directories. Depending on the outcome of those scripts, Greenboot continues the startup or attempts a rollback as follows:

. System as expected: When all of the scripts in the `required.d` directory are successful, greenboot runs any scripts present in the `/etc/greenboot/green.d` directory.
. System as expected: When all of the scripts in the `required.d` directory are successfully run, Greenboot runs any scripts present in the `/etc/greenboot/green.d` directory.

. System trouble: If any of the scripts in the `required.d` directory fail, greenboot runs any prerollback scripts present in the `red.d` directory, then restarts the system.
. System trouble: If any of the scripts in the `required.d` directory fail, Greenboot runs any prerollback scripts present in the `red.d` directory, then restarts the system.

[NOTE]
====
@@ -26,16 +26,16 @@ Returning a nonzero exit code from any script means that script has failed. Gree

* `/etc/greenboot/check/required.d` contains the health checks that must not fail.

** If the scripts fail, greenboot retries them three times by default. You can configure the number of retries in the `/etc/greenboot/greenboot.conf` file by setting the `GREENBOOT_MAX_BOOTS` parameter to the desired number of retries.
** If the scripts fail, Greenboot retries them three times by default. You can configure the number of retries in the `/etc/greenboot/greenboot.conf` file by setting the `GREENBOOT_MAX_BOOTS` parameter to the desired number of retries.

** After all retries fail, greenboot automatically initiates a rollback if one is available. If a rollback is not available, the system log output shows that manual intervention is required.
** After all retries fail, Greenboot automatically initiates a rollback if one is available. If a rollback is not available, the system log output shows that manual intervention is required.

** The `40_microshift_running_check.sh` health check script for {product-title} is installed into this directory.

* `/etc/greenboot/check/wanted.d` contains health scripts that are allowed to fail without causing the system to be rolled back.

** If any of these scripts fail, greenboot logs the failure but does not initiate a rollback.
** If any of these scripts fail, Greenboot logs the failure but does not initiate a rollback.

* `/etc/greenboot/green.d` contains scripts that run after greenboot has declared the start successful.
* `/etc/greenboot/green.d` contains scripts that run after Greenboot has declared the start successful.

* `/etc/greenboot/red.d` contains scripts that run after greenboot has declared the startup as failed, including the `40_microshift_pre_rollback.sh` prerollback script. This script is executed right before a system rollback. The script performs {product-title} pod and OVN-Kubernetes cleanup to avoid potential conflicts after the system is rolled back to a previous version.
* `/etc/greenboot/red.d` contains scripts that run after Greenboot has declared the startup as failed, including the `40_microshift_pre_rollback.sh` prerollback script. This script is executed right before a system rollback. The script performs {product-title} pod and OVN-Kubernetes cleanup to avoid potential conflicts after the system is rolled back to a previous version.

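The retry limit mentioned in the `required.d` item above lives in a small key=value file; a hedged example of raising it:

[source,terminal]
----
# Allow up to five retries before Greenboot initiates a rollback.
$ echo "GREENBOOT_MAX_BOOTS=5" | sudo tee -a /etc/greenboot/greenboot.conf
----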
@@ -4,9 +4,9 @@

:_content-type: CONCEPT
[id="microshift-health-script_{context}"]
= The {product-title} health script
= The {product-title} health check script

The `40_microshift_running_check.sh` health check script only performs validation of core {product-title} services. Install your customized workload health check scripts in the greenboot directories to ensure successful application operations after system updates. Scripts run in alphabetical order.
The `40_microshift_running_check.sh` health check script only performs validation of core {product-title} services. Install your customized workload health check scripts in the Greenboot directories to ensure successful application operations after system updates. Scripts run in alphabetical order.

{product-title} health checks are listed in the following table:

@@ -51,7 +51,7 @@ The `40_microshift_running_check.sh` health check script only performs validatio
|`exit 1`
|===

[id="validation-wait-period"]
[id="validation-wait-period_{context}"]
== Validation wait period
The wait period in each validation is five minutes by default. After the wait period, if the validation has not succeeded, it is declared a failure. This wait period is incrementally increased by the base wait period after each boot in the verification loop.


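For example, with the default five-minute base, the first boot in the verification loop waits five minutes, the second waits ten minutes, and the third waits fifteen minutes before the validation is declared a failure.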
@@ -19,7 +19,6 @@ $ sudo journalctl -o cat -u redboot-task-runner.service
----

.Example output of a prerollback script

[source,terminal]
----
...

@@ -35,7 +35,6 @@ $ sudo journalctl -o cat -u greenboot-healthcheck.service
====
+
.Example output

[source,terminal]
----
GRUB boot variables:

@@ -13,5 +13,4 @@ The following conditions must be met prior to installing {product-title}:

* 2 GB RAM for {product-title} or 3 GB RAM, required by {op-system} for network-based HTTPS or FTP installations
* 10 GB of storage
* You have an active {product-title} subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.
* You have a subscription that includes {product-title} RPMs.
* You have a Logical Volume Manager (LVM) Volume Group (VG) with sufficient capacity for the Persistent Volumes (PVs) of your workload.

@@ -22,7 +22,6 @@ For LVMS to manage thin logical volumes (LVs), a thin-pool `device-class` array

If additional storage pools are configured with device classes, then additional storage classes must also exist to expose the storage pools to users and workloads. To enable dynamic provisioning on a thin-pool, a `StorageClass` resource must be present on the cluster. The `StorageClass` resource specifies the source `device-class` array in the `topolvm.io/device-class` parameter.

.Example `lvmd.yaml` file that specifies a single device class for a thin-pool

[source, yaml]
----
socket-name: <1>

@@ -36,7 +35,6 @@ device-classes: <2>
type: thin <6>
volume-group: ssd <7>
----
[.small]
<1> String. The UNIX domain socket endpoint of gRPC. Defaults to `/run/lvmd/lvmd.socket`.
<2> A list of maps for the settings for each `device-class`.
<3> String. The unique name of the `device-class`.
@@ -16,7 +16,6 @@ If you need to take volume snapshots, you must use thin provisioning in your `lv

The following `lvmd.yaml` example file shows a basic LVMS configuration:

.LVMS configuration example

[source,yaml]
----
socket-name: <1>

@@ -5,6 +5,7 @@

:_content-type: PROCEDURE
[id="microshift-manifests-override-paths_{context}"]
= Override the list of manifest paths

You can override the list of default manifest paths by using a new single path, or by using a new glob pattern for multiple files. Use the following procedure to customize your manifest paths.

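As a sketch of what an override can look like, the fragment below assumes the `manifests.kustomizePaths` option in `/etc/microshift/config.yaml`; verify the exact key names in the procedure that follows.

[source,terminal]
----
# Append a manifests section that uses a glob pattern (hypothetical paths).
$ sudo tee -a /etc/microshift/config.yaml <<'EOF'
manifests:
  kustomizePaths:
    - /opt/app/manifests/*
EOF
----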
.Procedure

@@ -18,7 +18,6 @@ $ sudo ovs-vsctl show
----
+
.Example OVS interfaces in a running cluster

[source,terminal]
----
9d9f5ea2-9d9d-4e34-bbd2-dbac154fdc93

@@ -4,9 +4,9 @@

:_content-type: PROCEDURE
[id="microshift-restoring-data-backups-manually_{context}"]
= Restoring application data backups manually
= Restoring {product-title} data backups manually

You can restore application data from a backup manually. Backups can be restored after updates, or after other system events that remove or damage required data. Backups are in the `/var/lib/microshift-backups` directory by default. When you restore a backup, you must use the entire file path.
You can restore {product-title} data from a backup manually. Backups can be restored after updates, or after other system events that remove or damage required data. Backups are in the `/var/lib/microshift-backups` directory by default. When you restore a backup, you must use the entire file path.

[NOTE]
====

@@ -14,7 +14,6 @@ LVMS only supports the `volumeBindingMode` of the storage class being set to `Wa
====

.Example workload that deploys a single pod and PVC

[source,terminal]
----
$ oc apply -f - <<EOF

@@ -39,7 +39,6 @@ $ sudo lvdisplay <retrieved_snapshot_handle>
----
+
.Example output

[source,terminal]
----
--- Logical volume ---

@@ -14,7 +14,6 @@ Storage classes provide the workload layer interface for selecting a device clas
Multiple storage classes can refer to the same device class. You can provide varying sets of parameters for the same backing device class, such as `xfs` and `ext4` variants.

.Example {product-title} default storage class resource

[source,yaml]
----
apiVersion: storage.k8s.io/v1

@@ -30,7 +29,6 @@ reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer <4>
allowVolumeExpansion: <5>
----
[.small]
<1> An example of the default storage class. If a PVC does not specify a storage class, this class is assumed. There can only be one default storage class in a cluster. Having no value assigned to this annotation is also supported.
<2> Specifies which file system to provision on the volume. Options are "xfs" and "ext4".
<3> Identifies which provisioner should manage this class.

@@ -9,7 +9,6 @@
To create a snapshot of a {product-title} storage volume, you must first configure {op-system-ostree} and the cluster. In the following example procedure, the pod that the source volume is mounted to is deleted. Deleting the pod prevents data from being written to it during snapshot creation. Ensuring that no data is being written during a snapshot is crucial to creating a viable snapshot.

.Prerequisites

* You have root access to a {product-title} cluster.
* A {product-title} cluster is running.
* A device class defines an LVM thin-pool.
@@ -51,7 +50,6 @@ spec:
persistentVolumeClaimName: test-claim-thin # <4>
EOF
----
[.small]
<1> Create a `VolumeSnapshot` object.
<2> The name that you specify for the snapshot.
<3> Specify the desired name of the `VolumeSnapshotClass` object.

@@ -41,6 +41,5 @@ device-classes:
stripe-size: "64"
lvcreate-options: <2>
----
[.small]
<1> When you set the spare capacity to anything other than `0`, more space can be allocated than expected.
<2> Extra arguments to pass to the `lvcreate` command, such as `--type=<type>`. Neither {product-title} nor the LVMS verifies `lvcreate-options` values. These optional values are passed as is to the `lvcreate` command. Ensure that the options specified here are correct.

@@ -14,7 +14,6 @@ You must enable thin logical volumes to take logical volume snapshots.
====

.Example `VolumeSnapshotClass` configuration file

[source,yaml]
----
apiVersion: snapshot.storage.k8s.io/v1

@@ -26,7 +25,6 @@ metadata:
driver: topolvm.io <2>
deletionPolicy: Delete <3>
----
[.small]
<1> Determines which `VolumeSnapshotClass` configuration file to use when none is specified by `VolumeSnapshot`, which is a request for a snapshot of a volume by a user.
<2> Identifies which snapshot provisioner should manage the requests for snapshots of a volume by a user for this class.
<3> Determines whether `VolumeSnapshotContent` objects and the backing snapshots are kept or deleted when a bound `VolumeSnapshot` is deleted. Valid values are `Retain` or `Delete`.

@@ -31,7 +31,6 @@ Check the following update paths:

* Generally Available Version 4.14.0 to 4.14.z on {op-system-ostree} 9.2
* Generally Available Version 4.14.0 to 4.14.z on {op-system} 9.2


[id="microshift-ostree-update-failed_{context}"]
== OSTree update failed
If you updated on an OSTree system, the Greenboot health check automatically logs and acts on system health. A system rollback by Greenboot can indicate that an update failed. In cases where the update failed, but Greenboot did not complete a system rollback, you can troubleshoot using the {op-system-ostree} documentation linked in the "Additional resources" section that follows this content.

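A hedged way to see whether Greenboot rolled the system back is to inspect the deployment history:

[source,terminal]
----
# The deployment list shows which image is booted and whether a rollback occurred.
$ sudo rpm-ostree status
----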