
[enterprise-4.13] CNV-17386: Consolidate support files

Signed-off-by: Avital Pinnick <apinnick@redhat.com>
This commit is contained in:
Avital Pinnick
2023-03-01 14:38:56 +02:00
parent 2dad9dcc15
commit e8946ab978
47 changed files with 388 additions and 546 deletions


@@ -3569,8 +3569,6 @@ Topics:
File: virt-configuring-vgpu-passthrough
- Name: Configuring mediated devices
File: virt-configuring-mediated-devices
- Name: Configuring a watchdog device
File: virt-configuring-a-watchdog
- Name: Automatic importing and updating of pre-defined boot sources
File: virt-automatic-bootsource-updates
- Name: Enabling descheduler evictions on virtual machines
@@ -3710,35 +3708,32 @@ Topics:
- Name: Preventing node reconciliation
File: virt-preventing-node-reconciliation
# Removed the Node Networking content from the topic map because this section is now part of the OCP docs
# Logging, events, and monitoring
# Support, was Logging, events, and monitoring
- Name: Support
Dir: support
Topics:
- Name: Viewing OpenShift Virtualization logs
File: virt-logs
- Name: Viewing events
File: virt-events
- Name: Monitoring live migration
File: virt-monitor-vmi-migration
- Name: Diagnosing data volumes using events and conditions
File: virt-diagnosing-datavolumes-using-events-and-conditions
- Name: Monitoring virtual machine health with health probes
File: virt-monitoring-vm-health
- Name: Viewing cluster information
File: virt-using-dashboard-to-get-cluster-info
- Name: OpenShift cluster monitoring, logging, and Telemetry
File: virt-openshift-cluster-monitoring
- Name: Running OpenShift cluster checkups
File: virt-running-cluster-checkups
- Name: Prometheus queries for virtual resources
File: virt-prometheus-queries
- Name: Exposing custom metrics for virtual machines
File: virt-exposing-custom-metrics-for-vms
- Name: OpenShift Virtualization runbooks
File: virt-runbooks
- Name: Support overview
File: virt-support-overview
- Name: Collecting data for Red Hat Support
File: virt-collecting-virt-data
Distros: openshift-enterprise
- Name: Monitoring
Dir: monitoring
Topics:
- Name: Monitoring overview
File: virt-monitoring-overview
- Name: OpenShift cluster checkup framework
File: virt-running-cluster-checkups
- Name: Prometheus queries for virtual resources
File: virt-prometheus-queries
- Name: Virtual machine custom metrics
File: virt-exposing-custom-metrics-for-vms
- Name: Virtual machine health checks
File: virt-monitoring-vm-health
- Name: Troubleshooting
File: virt-troubleshooting
- Name: Runbooks
File: virt-runbooks
- Name: Backup and restore
Dir: backup_restore
Topics:


@@ -1,12 +1,6 @@
// Module included in the following assemblies:
//
// * logging/cluster-logging.adoc
// * virt/support/virt-openshift-cluster-monitoring.adoc
ifeval::["{context}" == "virt-openshift-cluster-monitoring"]
:virt-logging:
endif::[]
:_content-type: CONCEPT
[id="cluster-logging-about-components_{context}"]
@@ -21,10 +15,3 @@ The major components of the {logging} are:
* log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.
* visualization - This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana.
ifndef::virt-logging[]
This document might refer to log store or Elasticsearch, visualization or Kibana, collection or Fluentd, interchangeably, except where noted.
endif::virt-logging[]
ifeval::["{context}" == "virt-openshift-cluster-monitoring"]
:!virt-logging:
endif::[]


@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * virt/support/virt-openshift-cluster-monitoring.adoc
// * monitoring/monitoring-overview.adoc
// This module uses a conditionalized title so that the module


@@ -1,13 +1,12 @@
// Module included in the following assemblies:
//
// * virt/support/virt-openshift-cluster-monitoring.adoc
// * support/remote_health_monitoring/about-remote-health-monitoring.adoc
:_content-type: CONCEPT
[id="telemetry-about-telemetry_{context}"]
= About Telemetry
Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document.
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out {product-title} upgrades to customers to minimize service impact and continuously improve the upgrade experience.


@@ -1,12 +1,8 @@
// Module included in the following assemblies:
//
// * virt/support/virt-openshift-cluster-monitoring.adoc
// * support/remote_health_monitoring/about-remote-health-monitoring.adoc
ifeval::["{context}" == "virt-openshift-cluster-monitoring"]
:virt-cluster:
endif::[]
:_content-type: REFERENCE
[id="what-information-is-collected_{context}"]
= Information collected by Telemetry
@@ -35,6 +31,3 @@ endif::openshift-dedicated[]
Telemetry does not collect identifying information such as user names or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the link:https://www.redhat.com/en/about/privacy-policy[Red Hat Privacy Statement] for more information about Red Hat's privacy practices.
ifeval::["{context}" == "virt-openshift-cluster-monitoring"]
:!virt-cluster:
endif::[]


@@ -1,12 +1,12 @@
// Module included in the following assemblies:
//
// * virt/support/virt-diagnosing-datavolumes-using-events-and-conditions.adoc
// * virt/support/virt-troubleshooting.adoc
:_content-type: CONCEPT
[id="virt-about-conditions-and-events.adoc_{context}"]
= About conditions and events
[id="virt-about-dv-conditions-and-events.adoc_{context}"]
= About data volume conditions and events
Diagnose data volume issues by examining the output of the `Conditions` and `Events` sections
You can diagnose data volume issues by examining the output of the `Conditions` and `Events` sections
generated by the command:
[source,terminal]
@@ -14,7 +14,7 @@ generated by the command:
$ oc describe dv <DataVolume>
----
There are three `Types` in the `Conditions` section that display:
The `Conditions` section displays the following `Types`:
* `Bound`
* `Running`
@@ -29,7 +29,7 @@ The `Events` section provides the following additional information:
The output from `oc describe` does not always contain `Events`.
An event is generated when either `Status`, `Reason`, or `Message` changes.
An event is generated when the `Status`, `Reason`, or `Message` changes.
Both conditions and events react to changes in the state of the data volume.
For example, if you misspell the URL during an import operation, the import


@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * virt/support/virt-using-dashboard-to-get-cluster-info.adoc
// * web_console/using-dashboard-to-get-cluster-information.adoc
ifeval::["{context}" == "virt-using-dashboard-to-get-cluster-info"]


@@ -1,14 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-events.adoc
:_content-type: CONCEPT
[id="virt-about-vm-events_{context}"]
= About virtual machine events
{product-title} events are records of important life-cycle information in a
namespace and are useful for monitoring and troubleshooting resource
scheduling, creation, and deletion issues.
{VirtProductName} adds events for virtual machines and virtual machine instances. These
can be viewed from either the web console or the CLI.


@@ -1,9 +1,9 @@
// Module included in the following assemblies:
//
// * virt/support/virt-analyzing-datavolumes-using-events-and-conditions.adoc
// * virt/support/virt-troubleshooting.adoc
[id="virt-analyzing-datavolume-conditions-and-events_{context}"]
= Analyzing data volumes using conditions and events
= Analyzing data volume conditions and events
By inspecting the `Conditions` and `Events` sections generated by the `describe`
command, you can determine the state of the data volume
@@ -16,7 +16,7 @@ There are many different combinations of conditions. Each must be evaluated in i
Examples of various combinations follow.
* `Bound` A successfully bound PVC displays in this example.
* `Bound` - A successfully bound PVC displays in this example.
+
Note that the `Type` is `Bound`, so the `Status` is `True`.
If the PVC is not bound, the `Status` is `False`.
@@ -33,21 +33,21 @@ in this case `datavolume-controller`:
[source,terminal]
----
Status:
Conditions:
Last Heart Beat Time: 2020-07-15T03:58:24Z
Last Transition Time: 2020-07-15T03:58:24Z
Message: PVC win10-rootdisk Bound
Reason: Bound
Status: True
Type: Bound
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Bound 24s datavolume-controller PVC example-dv Bound
Conditions:
Last Heart Beat Time: 2020-07-15T03:58:24Z
Last Transition Time: 2020-07-15T03:58:24Z
Message: PVC win10-rootdisk Bound
Reason: Bound
Status: True
Type: Bound
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Bound 24s datavolume-controller PVC example-dv Bound
----
* `Running` In this case, note that `Type` is `Running` and `Status` is `False`,
* `Running` - In this case, note that `Type` is `Running` and `Status` is `False`,
indicating that an event has occurred that caused an attempted
operation to fail, changing the `Status` from `True` to `False`.
+
@@ -67,19 +67,19 @@ attempting to access the data volume:
[source,terminal]
----
Status:
Conditions:
Last Heart Beat Time: 2020-07-15T04:31:39Z
Last Transition Time: 2020-07-15T04:31:39Z
Message: Import Complete
Reason: Completed
Status: False
Type: Running
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect
to http data source: expected status code 200, got 404. Status: 404 Not Found
Conditions:
Last Heart Beat Time: 2020-07-15T04:31:39Z
Last Transition Time: 2020-07-15T04:31:39Z
Message: Import Complete
Reason: Completed
Status: False
Type: Running
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect
to http data source: expected status code 200, got 404. Status: 404 Not Found
----
* `Ready` - If `Type` is `Ready` and `Status` is `True`, then the data volume is ready
@@ -90,9 +90,9 @@ used, the `Status` is `False`:
[source,terminal]
----
Status:
Conditions:
Last Heart Beat Time: 2020-07-15T04:31:39Z
Last Transition Time: 2020-07-15T04:31:39Z
Status: True
Type: Ready
Conditions:
Last Heart Beat Time: 2020-07-15T04:31:39Z
Last Transition Time: 2020-07-15T04:31:39Z
Status: True
Type: Ready
----


@@ -1,24 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-collecting-virt-data.adoc
:_content-type: PROCEDURE
[id="virt-collecting-data-about-vms_{context}"]
= Collecting data about virtual machines
Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.
.Prerequisites
* Windows VMs:
** Record the Windows patch update details for Red Hat Support.
** Install the latest version of the VirtIO drivers. The VirtIO drivers include the QEMU guest agent.
** If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP to determine whether there is a problem with the connection software.
.Procedure
. Collect detailed `must-gather` data about the malfunctioning VMs.
. Collect screenshots of VMs that have crashed before you restart them.
. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.
// . Collect memory dumps from VMs _before_ remediation attempts.
// Uncomment this line for CNV-20256.


@@ -1,22 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-collecting-virt-data.adoc
:_content-type: PROCEDURE
[id="virt-collecting-data-about-your-environment_{context}"]
= Collecting data about your environment
Collecting data about your environment minimizes the time required to analyze and determine the root cause.
.Prerequisites
* Set the retention time for Prometheus metrics data to a minimum of seven days.
* Configure the Alertmanager to capture relevant alerts and to send them to a dedicated mailbox so that they can be viewed and persisted outside the cluster.
* Record the exact number of affected nodes and virtual machines.
.Procedure
. Collect `must-gather` data for the cluster by using the default `must-gather` image.
. Collect `must-gather` data for {rh-storage-first}, if necessary.
. Collect `must-gather` data for {VirtProductName} by using the {VirtProductName} `must-gather` image.
. Collect Prometheus metrics for the cluster.


@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * virt/support/virt-logs.adoc
// * virt/support/virt-troubleshooting.adoc
:_content-type: REFERENCE
[id="virt-common-error-messages_{context}"]


@@ -1,24 +1,20 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-configuring-a-watchdog.adoc
// * virt/support/monitoring/virt-monitoring-vm-health.adoc
:_content-type: PROCEDURE
[id="virt-defining-a-watchdog"]
= Defining a watchdog device
[id="virt-defining-watchdog-device-vm"]
= Configuring a watchdog device for the virtual machine
Define how the watchdog proceeds when the operating system (OS) no longer responds.
You configure a watchdog device for the virtual machine (VM).
.Available actions
[horizontal]
`poweroff`:: The virtual machine (VM) powers down immediately. If `spec.running` is set to `true`, or `spec.runStrategy` is not set to `manual`, then the VM reboots.
`reset`:: The VM reboots in place and the guest OS cannot react. Because the length of time required for the guest OS to reboot can cause liveness probes to time out, use of this option is discouraged. This timeout can extend the time it takes the VM to reboot if cluster-level protections notice that the liveness probe failed and forcibly reschedule it.
`shutdown`:: The VM gracefully powers down by stopping all services.
.Prerequisites
* The VM must have kernel support for an `i6300esb` watchdog device. {op-system-base-full} images support `i6300esb`.
.Procedure
. Create a YAML file with the following contents:
. Create a `YAML` file with the following contents:
+
[source,yaml]
----
@@ -32,7 +28,7 @@ spec:
running: false
template:
metadata:
labels:
labels:
kubevirt.io/vm: vm2-rhel84-watchdog
spec:
domain:
@@ -43,15 +39,13 @@ spec:
action: "poweroff" <1>
...
----
<1> Specify the `watchdog` action (`poweroff`, `reset`, or `shutdown`).
<1> Specify `poweroff`, `reset`, or `shutdown`.
+
The example above configures the `i6300esb` watchdog device on a RHEL8 VM with the `poweroff` action and exposes the device as `/dev/watchdog`.
+
This device can now be used by the watchdog binary.
. Apply the YAML file to your cluster by running the following command:
+
[source,terminal]
----
@@ -68,7 +62,6 @@ This procedure is provided for testing watchdog functionality only and must not
--
. Run the following command to verify that the VM is connected to the watchdog device:
+
[source,terminal]
----
@@ -77,7 +70,6 @@ $ lspci | grep watchdog -i
. Run one of the following commands to confirm the watchdog is active:
* Trigger a kernel panic:
+
[source,terminal]
@@ -85,7 +77,7 @@ $ lspci | grep watchdog -i
# echo c > /proc/sysrq-trigger
----
* Terminate the watchdog service:
* Stop the watchdog service:
+
[source,terminal]
----


@@ -1,33 +0,0 @@
// Module included in the following assemblies:
//
// * virt/virtual_machines/advanced_vm_management/virt-configuring-a-watchdog.adoc
:_content-type: PROCEDURE
[id="virt-installing-a-watchdog_{context}"]
= Installing a watchdog device
Install the `watchdog` package on your virtual machine and start the watchdog service.
.Procedure
. As a root user, install the `watchdog` package and dependencies:
+
[source,terminal]
----
# yum install watchdog
----
. Uncomment the following line in the `/etc/watchdog.conf` file, and save the changes:
+
[source,terminal]
----
#watchdog-device = /dev/watchdog
----
. Enable the watchdog service to start on boot:
+
[source,terminal]
----
# systemctl enable --now watchdog.service
----


@@ -0,0 +1,35 @@
// Module included in the following assemblies:
//
// * virt/support/monitoring/virt-monitoring-vm-health.adoc
:_content-type: PROCEDURE
[id="virt-installing-watchdog-agent_{context}"]
= Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the `watchdog` service.
.Procedure
. Log in to the virtual machine as root user.
. Install the `watchdog` package and its dependencies:
+
[source,terminal]
----
# yum install watchdog
----
. Uncomment the following line in the `/etc/watchdog.conf` file and save the changes:
+
[source,terminal]
----
#watchdog-device = /dev/watchdog
----
. Enable the `watchdog` service to start on boot:
+
[source,terminal]
----
# systemctl enable --now watchdog.service
----


@@ -1,6 +1,5 @@
// Module included in the following assemblies:
//
// * virt/support/virt-monitor-vmi-migration.adoc
// * virt/support/virt-prometheus-queries.adoc
:_content-type: REFERENCE
@@ -9,11 +8,11 @@
The following metrics can be queried to show live migration status:
`kubevirt_migrate_vmi_data_processed_bytes`:: The amount of guest operating system (OS) data that has migrated to the new virtual machine (VM). Type: Gauge.
`kubevirt_migrate_vmi_data_processed_bytes`:: The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
`kubevirt_migrate_vmi_data_remaining_bytes`:: The amount of guest OS data that remains to be migrated. Type: Gauge.
`kubevirt_migrate_vmi_data_remaining_bytes`:: The amount of guest operating system data that remains to be migrated. Type: Gauge.
`kubevirt_migrate_vmi_dirty_memory_rate_bytes`:: The rate at which memory is becoming dirty in the guest OS. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
`kubevirt_migrate_vmi_dirty_memory_rate_bytes`:: The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
`kubevirt_migrate_vmi_pending_count`:: The number of pending migrations. Type: Gauge.


@@ -1,57 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-monitoring-vm-health.adoc
[id="virt-template-vm-probe-config_{context}"]
= Template: Virtual machine configuration file for defining health checks
[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
special: vm-fedora
name: vm-fedora
spec:
template:
metadata:
labels:
special: vm-fedora
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: containerdisk
- disk:
bus: virtio
name: cloudinitdisk
resources:
requests:
memory: 1024M
readinessProbe:
httpGet:
port: 1500
initialDelaySeconds: 120
periodSeconds: 20
timeoutSeconds: 10
failureThreshold: 3
successThreshold: 3
terminationGracePeriodSeconds: 180
volumes:
- name: containerdisk
containerDisk:
image: kubevirt/fedora-cloud-registry-disk-demo
- cloudInitNoCloud:
userData: |-
#cloud-config
password: fedora
chpasswd: { expire: False }
bootcmd:
- setenforce 0
- dnf install -y nmap-ncat
- systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
name: cloudinitdisk
----


@@ -1,12 +1,12 @@
// Module included in the following assemblies:
//
// * virt/support/virt-logs.adoc
// * virt/support/virt-troubleshooting.adoc
:_content-type: PROCEDURE
[id="virt-viewing-logs-cli_{context}"]
= Viewing {VirtProductName} logs with the CLI
Configure log verbosity for {VirtProductName} components by editing the `HyperConverged` custom resource (CR). Then, view logs for the component pods by using the `oc` CLI tool.
You can configure the verbosity level of {VirtProductName} component logs by editing the `HyperConverged` custom resource (CR). Then, you can view logs for the component pods by using the `oc` CLI tool.
.Procedure


@@ -1,19 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-events.adoc
:_content-type: PROCEDURE
[id="virt-viewing-namespace-events-cli_{context}"]
= Viewing namespace events in the CLI
Use the {product-title} client to get the events for a namespace.
.Procedure
* In the namespace, use the `oc get` command:
+
[source,terminal]
----
$ oc get events
----


@@ -1,32 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-events.adoc
:_content-type: PROCEDURE
[id="virt-viewing-resource-events-cli_{context}"]
= Viewing resource events in the CLI
Events are included in the resource description, which you can get using the
{product-title} client.
.Procedure
* In the namespace, use the `oc describe` command. The following example shows
how to get the events for a virtual machine, a virtual machine instance, and the
virt-launcher pod for a virtual machine:
+
[source,terminal]
----
$ oc describe vm <vm>
----
+
[source,terminal]
----
$ oc describe vmi <vmi>
----
+
[source,terminal]
----
$ oc describe pod virt-launcher-<name>
----


@@ -1,21 +1,19 @@
// Module included in the following assemblies:
//
// * virt/support/virt-logs.adoc
// * virt/support/virt-troubleshooting.adoc
:_content-type: PROCEDURE
[id="virt-viewing-virtual-machine-logs-web_{context}"]
= Viewing virtual machine logs in the web console
= Viewing virtual machine logs with the web console
Get virtual machine logs from the associated virtual machine launcher pod.
You can view virtual machine logs with the {product-title} web console.
.Procedure
. In the {product-title} console, click *Virtualization* -> *VirtualMachines* from the side menu.
. Navigate to *Virtualization* -> *VirtualMachines*.
. Select a virtual machine to open the *VirtualMachine details* page.
. Click the *Details* tab.
. On the *Details* tab, click the pod name to open the *Pod details* page.
. Click the `virt-launcher-<name>` pod in the *Pod* section to open the *Pod details* page.
. Click the *Logs* tab to view the pod logs.
. Click the *Logs* tab to view the logs.


@@ -1,18 +0,0 @@
// Module included in the following assemblies:
//
// * virt/support/virt-events.adoc
:_content-type: PROCEDURE
[id="virt-viewing-vm-events-web_{context}"]
= Viewing the events for a virtual machine in the web console
You can view streaming events for a running virtual machine on the *VirtualMachine details* page of the web console.
.Procedure
. Click *Virtualization* -> *VirtualMachines* from the side menu.
. Select a virtual machine to open the *VirtualMachine details* page.
. Click the *Events* tab to view streaming events for the virtual machine.
* The &#9646;&#9646; button pauses the events stream.
* The &#9654; button resumes a paused events stream.


@@ -15,7 +15,7 @@ include::modules/eco-self-node-remediation-about-watchdog.adoc[leveloffset=+2]
[role="_additional-resources"]
.Additional resources
xref:../../../virt/virtual_machines/advanced_vm_management/virt-configuring-a-watchdog.adoc#virt-configuring-a-watchdog[Configuring a watchdog]
xref:../../../virt/support/monitoring/virt-monitoring-vm-health.adoc#watchdog_virt-monitoring-vm-health[Configuring a watchdog]
include::modules/eco-self-node-remediation-operator-control-plane-fencing.adoc[leveloffset=+1]


@@ -13,6 +13,6 @@ include::modules/virt-about-live-migration.adoc[leveloffset=+1]
== Additional resources
* xref:../../virt/live_migration/virt-migrate-vmi.adoc#virt-migrate-vmi[Migrating a virtual machine instance to another node]
* xref:../../virt/support/virt-monitor-vmi-migration.adoc#virt-monitor-vmi-migration[Monitoring live migration]
* xref:../../virt/support/monitoring/virt-prometheus-queries.adoc#virt-live-migration-metrics_virt-prometheus-queries[Live migration metrics]
* xref:../../virt/live_migration/virt-live-migration-limits.adoc#virt-live-migration-limits[Live migration limiting]
* xref:../../virt/virtual_machines/virtual_disks/virt-creating-data-volumes.adoc#virt-customizing-storage-profile_virt-creating-data-volumes[Customizing the storage profile]


@@ -15,11 +15,19 @@ If a virtual machine uses a host model CPU, you can perform live migration of th
include::modules/virt-initiating-vm-migration-web.adoc[leveloffset=+1]
[id="monitoring-live-migration-by-using-the-web-console_{context}"]
=== Monitoring live migration by using the web console
You can monitor the progress of all live migrations on the xref:../../virt/virt-web-console-overview.adoc#virtualization-overview-migrations_virt-web-console-overview[*Overview* -> *Migrations* tab] in the web console.
You can view the migration metrics of a virtual machine on the xref:../../virt/virt-web-console-overview.adoc#ui-virtualmachine-details-metrics_virt-web-console-overview[*VirtualMachine details* -> *Metrics* tab] in the web console.
include::modules/virt-initiating-vm-migration-cli.adoc[leveloffset=+1]
include::modules/virt-monitoring-vm-migration-cli.adoc[leveloffset=+2]
[role="_additional-resources"]
[id="virt-migrate-vmi_additional-resources"]
== Additional resources
* xref:../../virt/support/virt-monitor-vmi-migration.adoc#virt-monitor-vmi-migration[Monitoring live migration]
* xref:../../virt/live_migration/virt-cancel-vmi-migration.adoc#virt-cancel-vmi-migration[Cancelling the live migration of a virtual machine instance]


@@ -0,0 +1 @@
../../_attributes


@@ -0,0 +1 @@
../../images


@@ -0,0 +1 @@
../../modules


@@ -0,0 +1 @@
../../snippets


@@ -20,16 +20,16 @@ include::modules/virt-accessing-node-exporter-outside-cluster.adoc[leveloffset=+
[role="_additional-resources"]
[id="additional-resources_virt-exposing-custom-metrics-for-vms"]
== Additional resources
* xref:../../monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[Configuring the monitoring stack]
* xref:../../../monitoring/configuring-the-monitoring-stack.adoc#configuring-the-monitoring-stack[Configuring the monitoring stack]
* xref:../../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
* xref:../../../monitoring/enabling-monitoring-for-user-defined-projects.adoc#enabling-monitoring-for-user-defined-projects[Enabling monitoring for user-defined projects]
* xref:../../monitoring/managing-metrics.adoc#managing-metrics[Managing metrics]
* xref:../../../monitoring/managing-metrics.adoc#managing-metrics[Managing metrics]
* xref:../../monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[Reviewing monitoring dashboards]
* xref:../../../monitoring/reviewing-monitoring-dashboards.adoc#reviewing-monitoring-dashboards[Reviewing monitoring dashboards]
* xref:../../applications/application-health.adoc#application-health[Monitoring application health by using health checks]
* xref:../../../applications/application-health.adoc#application-health[Monitoring application health by using health checks]
* xref:../../nodes/pods/nodes-pods-configmaps.adoc#nodes-pods-configmaps[Creating and using config maps]
* xref:../../../nodes/pods/nodes-pods-configmaps.adoc#nodes-pods-configmaps[Creating and using config maps]
* xref:../../virt/virtual_machines/virt-controlling-vm-states.adoc#virt-controlling-vm-states[Controlling virtual machine states]
* xref:../../../virt/virtual_machines/virt-controlling-vm-states.adoc#virt-controlling-vm-states[Controlling virtual machine states]


@@ -0,0 +1,24 @@
:_content-type: ASSEMBLY
[id="virt-monitoring-overview"]
= Monitoring overview
include::_attributes/common-attributes.adoc[]
:context: virt-monitoring-overview
toc::[]
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
xref:../../../virt/support/monitoring/virt-running-cluster-checkups.adoc#virt-running-cluster-checkups[{product-title} cluster checkup framework]::
Check network connectivity and latency by using predefined, automated tests.
:FeatureName: The {product-title} cluster checkup framework
include::snippets/technology-preview.adoc[]
xref:../../../virt/support/monitoring/virt-prometheus-queries.adoc#virt-prometheus-queries[Prometheus queries for virtual resources]::
Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
xref:../../../virt/support/monitoring/virt-exposing-custom-metrics-for-vms.adoc#virt-exposing-custom-metrics-for-vms[VM custom metrics]::
Configure the `node-exporter` service to expose internal VM metrics and processes.
xref:../../../virt/support/monitoring/virt-monitoring-vm-health.adoc#virt-monitoring-vm-health[VM health checks]::
Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.


@@ -0,0 +1,54 @@
:_content-type: ASSEMBLY
[id="virt-monitoring-vm-health"]
= Virtual machine health checks
include::_attributes/common-attributes.adoc[]
:context: virt-monitoring-vm-health
toc::[]
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the `VirtualMachine` resource.
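For orientation, a minimal readiness probe fragment, modeled on the health-check template that this commit removes (the port and timing values are illustrative, not required defaults):
[source,yaml]
----
spec:
  template:
    spec:
      readinessProbe:
        httpGet:
          port: 1500
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
----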
include::modules/virt-about-readiness-liveness-probes.adoc[leveloffset=+1]
include::modules/virt-define-http-readiness-probe.adoc[leveloffset=+2]
include::modules/virt-define-tcp-readiness-probe.adoc[leveloffset=+2]
include::modules/virt-define-http-liveness-probe.adoc[leveloffset=+2]
[id="watchdog_{context}"]
== Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
. Configure a watchdog device for the virtual machine (VM).
. Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
* `poweroff`: The VM powers down immediately. If `spec.running` is set to `true` or `spec.runStrategy` is not set to `manual`, then the VM reboots.
* `reset`: The VM reboots in place and the guest operating system cannot react.
+
[NOTE]
====
The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
====
* `shutdown`: The VM gracefully powers down by stopping all services.
[NOTE]
====
Watchdog is not available for Windows VMs.
====
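The two steps above correspond to a watchdog device entry and an action in the VM spec. A minimal sketch, assuming the `i6300esb` model used in the configuration module (the device name is illustrative):
[source,yaml]
----
spec:
  template:
    spec:
      domain:
        devices:
          watchdog:
            name: mywatchdog
            i6300esb:
              action: "poweroff"
----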
include::modules/virt-defining-watchdog-device-vm.adoc[leveloffset=+2]
include::modules/virt-installing-watchdog-agent.adoc[leveloffset=+2]
include::modules/virt-define-guest-agent-ping-probe.adoc[leveloffset=+1]
[id="additional-resources_monitoring-vm-health"]
[role="_additional-resources"]
== Additional resources
* xref:../../../applications/application-health.adoc#application-health[Monitoring application health by using health checks]


@@ -14,7 +14,7 @@ Use the {product-title} monitoring dashboard to query virtualization metrics.
[id="prerequisites_{context}"]
== Prerequisites
* To use the vCPU metric, the `schedstats=enable` kernel argument must be applied to the `MachineConfig` object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the xref:../../post_installation_configuration/machine-configuration-tasks.adoc#nodes-nodes-kernel-arguments_post-install-machine-configuration-tasks[{product-title} machine configuration tasks] documentation for more information on applying a kernel argument.
* To use the vCPU metric, the `schedstats=enable` kernel argument must be applied to the `MachineConfig` object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see xref:../../../post_installation_configuration/machine-configuration-tasks.adoc#nodes-nodes-kernel-arguments_post-install-machine-configuration-tasks[Adding kernel arguments to nodes].
* For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
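For example, after the prerequisites are met, a query like the following returns the three virtual machines waiting the longest for vCPU time (a sketch, assuming the standard KubeVirt vCPU wait metric `kubevirt_vmi_vcpu_wait_seconds`):

[source,promql]
----
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0
----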
@@ -26,13 +26,13 @@ include::modules/monitoring-querying-metrics-for-user-defined-projects-as-a-deve
include::modules/virt-querying-metrics.adoc[leveloffset=+1]
include::modules/virt-live-migration-metrics.adoc[leveloffset=+1]
include::modules/virt-live-migration-metrics.adoc[leveloffset=+2]
[id="additional-resources_virt-prometheus-queries"]
[role="_additional-resources"]
== Additional resources
* xref:../../monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview]
* xref:../../../monitoring/monitoring-overview.adoc#monitoring-overview[Monitoring overview]
* link:https://prometheus.io/docs/prometheus/latest/querying/basics/[Querying Prometheus]


@@ -1,6 +1,6 @@
:_content-type: ASSEMBLY
[id="virt-running-cluster-checkups"]
= Running cluster checkups
= {product-title} cluster checkup framework
include::_attributes/common-attributes.adoc[]
:context: virt-running-cluster-checkups
@@ -11,12 +11,7 @@ toc::[]
:FeatureName: The {product-title} cluster checkup framework
include::snippets/technology-preview.adoc[]
include::modules/virt-about-cluster-checkup-framework.adoc[leveloffset=+1]
include::modules/virt-measuring-latency-vm-secondary-network.adoc[leveloffset=+1]
[role="_additional-resources"]
[id="additional-resources_running-cluster-checkups"]
== Additional resources
* xref:../../virt/virtual_machines/vm_networking/virt-attaching-vm-multiple-networks.adoc#virt-attaching-vm-multiple-networks[Attaching a virtual machine to multiple networks]


@@ -17,34 +17,48 @@ Prometheus is a time-series database and a rule evaluation engine for metrics. P
Alertmanager::
The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems.
include::modules/virt-collecting-data-about-your-environment.adoc[leveloffset=+1]
For information about the {product-title} monitoring stack, see xref:../../monitoring/monitoring-overview.adoc#about-openshift-monitoring[About {product-title} monitoring].
[id="additional-resources_collecting-data-about-your-environment"]
[role="_additional-resources"]
=== Additional resources
* Configuring the xref:../../monitoring/configuring-the-monitoring-stack.adoc#modifying-retention-time-for-prometheus-metrics-data_configuring-the-monitoring-stack[retention time] for Prometheus metrics data
* Configuring the Alertmanager to send xref:../../monitoring/managing-alerts.adoc#sending-notifications-to-external-systems_managing-alerts[alert notifications] to external systems
* Collecting `must-gather` data for xref:../../support/gathering-cluster-data.adoc#support_gathering_data_gathering-cluster-data[{product-title}]
* Collecting `must-gather` data for link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.13/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[{rh-storage-first}]
* Collecting `must-gather` data for xref:../../virt/support/virt-collecting-virt-data.adoc#virt-using-virt-must-gather_virt-collecting-virt-data[{VirtProductName}]
* Collecting Prometheus metrics for xref:../../monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[all projects] as a cluster administrator
// This procedure is in the assembly so that we can add xrefs instead of a long list of additional resources.
[id="virt-collecting-data-about-your-environment_{context}"]
== Collecting data about your environment
include::modules/virt-collecting-data-about-vms.adoc[leveloffset=+1]
Collecting data about your environment minimizes the time required to analyze and determine the root cause.
[id="additional-resources_collecting-data-about-vms"]
[role="_additional-resources"]
=== Additional resources
* Installing xref:../../virt/virtual_machines/virt-installing-qemu-guest-agent.adoc#virt-installing-virtio-drivers-existing-windows_virt-installing-qemu-guest-agent[VirtIO drivers] on Windows VMs
* Downloading and installing link:https://access.redhat.com/solutions/6957701[VirtIO drivers] on Windows VMs without host access
* Connecting to Windows VMs with RDP using the xref:../../virt/virtual_machines/virt-accessing-vm-consoles.adoc#virt-vm-rdp-console-web_virt-accessing-vm-consoles[web console] or the xref:../../virt/virtual_machines/virt-accessing-vm-consoles.adoc#virt-accessing-rdp-console_virt-accessing-vm-consoles[command line]
* Collecting `must-gather` data about xref:../../virt/support/virt-collecting-virt-data.adoc#virt-must-gather-options_virt-collecting-virt-data[virtual machines]
// * Collecting virtual machine memory dumps. [link TBD. CNV-20256]
.Prerequisites
* xref:../../monitoring/configuring-the-monitoring-stack.adoc#modifying-retention-time-for-prometheus-metrics-data_configuring-the-monitoring-stack[Set the retention time for Prometheus metrics data] to a minimum of seven days.
* xref:../../monitoring/managing-alerts.adoc#sending-notifications-to-external-systems_managing-alerts[Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster.
* Record the exact number of affected nodes and virtual machines.
.Procedure
. xref:../../support/gathering-cluster-data.adoc#support_gathering_data_gathering-cluster-data[Collect must-gather data for the cluster].
. link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.12/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary.
. xref:../../virt/support/virt-collecting-virt-data.adoc#virt-using-virt-must-gather_virt-collecting-virt-data[Collect must-gather data for {VirtProductName}].
. xref:../../monitoring/managing-metrics.adoc#querying-metrics-for-all-projects-as-an-administrator_managing-metrics[Collect Prometheus metrics for the cluster].
[id="virt-collecting-data-about-vms_{context}"]
== Collecting data about virtual machines
Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.
.Prerequisites
* Linux VMs: xref:../../virt/virtual_machines/virt-installing-qemu-guest-agent.adoc#virt-installing-qemu-guest-agent-on-linux-vm_virt-installing-qemu-guest-agent[Install the latest QEMU guest agent].
* Windows VMs:
** Record the Windows patch update details.
** link:https://access.redhat.com/solutions/6957701[Install the latest VirtIO drivers].
** xref:../../virt/virtual_machines/virt-installing-qemu-guest-agent.adoc#virt-installing-virtio-drivers-existing-windows_virt-installing-qemu-guest-agent[Install the latest QEMU guest agent].
** If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP by using the xref:../../virt/virtual_machines/virt-accessing-vm-consoles.adoc#virt-vm-rdp-console-web_virt-accessing-vm-consoles[web console] or the xref:../../virt/virtual_machines/virt-accessing-vm-consoles.adoc#virt-accessing-rdp-console_virt-accessing-vm-consoles[command line] to determine whether there is a problem with the connection software.
.Procedure
. xref:../../virt/support/virt-collecting-virt-data.adoc#virt-must-gather-options_virt-collecting-virt-data[Collect must-gather data for the VMs] using the `gather_vms_details` script.
. Collect screenshots of VMs that have crashed _before_ you restart them.
. xref:../../virt/virt-using-the-cli-tools.adoc#vm-memory-dump-commands_virt-using-the-cli-tools[Collect memory dumps from VMs] _before_ remediation attempts.
. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.
include::modules/virt-using-virt-must-gather.adoc[leveloffset=+1]
include::modules/virt-must-gather-options.adoc[leveloffset=+2]
[id="additional-resources_must-gather-virt"]
[role="_additional-resources"]
=== Additional resources
* xref:../../support/gathering-cluster-data.adoc#about-must-gather_gathering-cluster-data[About the `must-gather` tool]


@@ -1,12 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-diagnosing-datavolumes-using-events-and-conditions"]
= Diagnosing data volumes using events and conditions
include::_attributes/common-attributes.adoc[]
:context: virt-diagnosing-datavolumes-using-events-and-conditions
toc::[]
Use the `oc describe` command to analyze and help resolve issues with data volumes.
include::modules/virt-about-conditions-and-events.adoc[leveloffset=+1]
include::modules/virt-analyzing-datavolume-conditions-and-events.adoc[leveloffset=+1]


@@ -1,18 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-events"]
= Viewing events
include::_attributes/common-attributes.adoc[]
:context: virt-events
toc::[]
include::modules/virt-about-vm-events.adoc[leveloffset=+1]
See also:
xref:../../nodes/clusters/nodes-containers-events.adoc#nodes-containers-events[Viewing system event information in an {product-title} cluster].
include::modules/virt-viewing-vm-events-web.adoc[leveloffset=+1]
include::modules/virt-viewing-namespace-events-cli.adoc[leveloffset=+1]
include::modules/virt-viewing-resource-events-cli.adoc[leveloffset=+1]


@@ -1,15 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-logs"]
= Viewing {VirtProductName} logs
include::_attributes/common-attributes.adoc[]
:context: virt-logs
toc::[]
You can view logs for {VirtProductName} components and virtual machines by using the web console or the `oc` CLI. You can retrieve virtual machine logs from the `virt-launcher` pod. To control log verbosity, edit the `HyperConverged` custom resource.
include::modules/virt-viewing-logs-cli.adoc[leveloffset=+1]
include::modules/virt-viewing-virtual-machine-logs-web.adoc[leveloffset=+1]
include::modules/virt-common-error-messages.adoc[leveloffset=+1]


@@ -1,25 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-monitor-vmi-migration"]
= Monitoring live migration
include::_attributes/common-attributes.adoc[]
:context: virt-monitor-vmi-migration
toc::[]
You can monitor the progress of live migration from either the web console or the CLI.
[id="monitoring-live-migration-by-using-the-web-console_{context}"]
== Monitoring live migration by using the web console
You can monitor the progress of all live migrations on the xref:../../virt/virt-web-console-overview.adoc#virtualization-overview-migrations_virt-web-console-overview[*Overview -> Migrations* tab] in the web console.
You can view the migration metrics of a virtual machine on the xref:../../virt/virt-web-console-overview.adoc#ui-virtualmachine-details-metrics_virt-web-console-overview[*VirtualMachine details -> Metrics* tab] in the web console.
include::modules/virt-monitoring-vm-migration-cli.adoc[leveloffset=+1]
[id="metrics_virt-monitor-vmi-migration"]
== Metrics
You can use xref:../../virt/support/virt-prometheus-queries.adoc#virt-prometheus-queries[Prometheus queries] to monitor live migration.
include::modules/virt-live-migration-metrics.adoc[leveloffset=+2]


@@ -1,26 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-monitoring-vm-health"]
= Monitoring virtual machine health with health probes
include::_attributes/common-attributes.adoc[]
:context: virt-monitoring-vm-health
toc::[]
A virtual machine (VM) can become unhealthy due to transient issues such as connectivity loss, deadlocks, or problems with external dependencies. A health check periodically performs diagnostics on a VM by using any combination of the readiness and liveness probes.
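For example, a readiness probe is defined directly in the `VirtualMachineInstance` spec, in the same format as a pod probe. The following fragment is a minimal sketch; the port and timing values are illustrative:

[source,yaml]
----
# Fragment of a VirtualMachineInstance manifest
spec:
  readinessProbe:
    httpGet:
      port: 1500 <1>
    initialDelaySeconds: 120
    periodSeconds: 20
    timeoutSeconds: 10
    failureThreshold: 3
----
<1> Illustrative port served by an application inside the guest.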
include::modules/virt-about-readiness-liveness-probes.adoc[leveloffset=+1]
include::modules/virt-define-http-readiness-probe.adoc[leveloffset=+1]
include::modules/virt-define-tcp-readiness-probe.adoc[leveloffset=+1]
include::modules/virt-define-http-liveness-probe.adoc[leveloffset=+1]
include::modules/virt-define-guest-agent-ping-probe.adoc[leveloffset=+1]
[id="additional-resources_monitoring-vm-health"]
[role="_additional-resources"]
== Additional resources
* xref:../../applications/application-health.adoc#application-health[Monitoring application health by using health checks]


@@ -1,25 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-openshift-cluster-monitoring"]
= {product-title} cluster monitoring, logging, and Telemetry
include::_attributes/common-attributes.adoc[]
:context: virt-openshift-cluster-monitoring
toc::[]
{product-title} provides various resources for monitoring at the cluster level.
// Cluster monitoring
include::modules/monitoring-about-cluster-monitoring.adoc[leveloffset=+1]
// OpenShift Logging
include::modules/cluster-logging-about-components.adoc[leveloffset=+1]
For more information on OpenShift Logging, see the xref:../../logging/cluster-logging.adoc#cluster-logging[OpenShift Logging] documentation.
// Telemetry
include::modules/telemetry-about-telemetry.adoc[leveloffset=+1]
include::modules/telemetry-what-information-is-collected.adoc[leveloffset=+2]
== CLI troubleshooting and debugging commands
For a list of the `oc` client troubleshooting and debugging commands, see the xref:../../cli_reference/openshift_cli/developer-cli-commands.adoc#cli-developer-commands[{product-title} CLI tools] documentation.


@@ -0,0 +1,74 @@
:_content-type: ASSEMBLY
[id="virt-support-overview"]
= Support overview
include::_attributes/common-attributes.adoc[]
:context: virt-support-overview
toc::[]
You can collect data about your environment, monitor the health of your cluster and virtual machines (VMs), and troubleshoot {VirtProductName} resources with the following tools.
[id="virt-web-console_{context}"]
== Web console
The {product-title} web console displays resource usage, alerts, events, and trends for your cluster and for {VirtProductName} components and resources.
.Web console pages for monitoring and troubleshooting
[options="header"]
|====
|Page |Description
|*Overview* page
|Cluster details, status, alerts, inventory, and resource usage
|*Virtualization* -> xref:../../virt/virt-web-console-overview.adoc#virtualization-overview-overview_virt-web-console-overview[*Overview* tab]
|{VirtProductName} resources, usage, alerts, and status
|*Virtualization* -> xref:../../virt/virt-web-console-overview.adoc#virtualization-overview-top-consumers_virt-web-console-overview[*Top consumers* tab]
|Top consumers of CPU, memory, and storage
|*Virtualization* -> xref:../../virt/virt-web-console-overview.adoc#virtualization-overview-migrations_virt-web-console-overview[*Migrations* tab]
|Progress of live migrations
|*VirtualMachines* -> *VirtualMachine* -> *VirtualMachine details* -> xref:../../virt/virt-web-console-overview.adoc#ui-virtualmachine-details-metrics_virt-web-console-overview[*Metrics* tab]
|VM resource usage, storage, network, and migration
|*VirtualMachines* -> *VirtualMachine* -> *VirtualMachine details* -> xref:../../virt/virt-web-console-overview.adoc#ui-virtualmachine-details-events_virt-web-console-overview[*Events* tab]
|List of VM events
|====
[id="collecting-data-for-red-hat-support_{context}"]
== Collecting data for Red Hat Support
When you submit a xref:../../support/getting-support.adoc#support-submitting-a-case_getting-support[support case] to Red Hat Support, it is helpful to provide debugging information, which you can gather by performing the following steps:
xref:../../virt/support/virt-collecting-virt-data.adoc#virt-collecting-data-about-your-environment_virt-collecting-virt-data[Collecting data about your environment]::
Configure Prometheus and Alertmanager and collect `must-gather` data for {product-title} and {VirtProductName}.
xref:../../virt/support/virt-collecting-virt-data.adoc#virt-collecting-data-about-vms_virt-collecting-virt-data[Collecting data about VMs]::
Collect `must-gather` data and memory dumps from VMs.
xref:../../virt/support/virt-collecting-virt-data.adoc#virt-using-virt-must-gather_virt-collecting-virt-data[`must-gather` tool for {VirtProductName}]::
Configure and use the `must-gather` tool.
[id="monitoring_{context}"]
== Monitoring
You can monitor the health of your cluster and VMs. For details about monitoring tools, see the xref:../../virt/support/monitoring/virt-monitoring-overview.adoc#virt-monitoring-overview[Monitoring overview].
[id="troubleshooting_{context}"]
== Troubleshooting
Troubleshoot {VirtProductName} components and VMs, and resolve issues that trigger alerts in the web console.
xref:../../virt/support/virt-troubleshooting.adoc#events_virt-troubleshooting[Events]::
View important life-cycle information for VMs, namespaces, and resources.
xref:../../virt/support/virt-troubleshooting.adoc#virt-logs_virt-troubleshooting[Logs]::
View and configure logs for {VirtProductName} components and VMs.
xref:../../virt/support/virt-runbooks.adoc#virt-runbooks[Runbooks]::
Diagnose and resolve issues that trigger {VirtProductName} alerts in the web console.
xref:../../virt/support/virt-troubleshooting.adoc#troubleshooting-data-volumes_virt-troubleshooting[Troubleshooting data volumes]::
Troubleshoot data volumes by analyzing conditions and events.


@@ -0,0 +1,50 @@
:_content-type: ASSEMBLY
[id="virt-troubleshooting"]
= Troubleshooting
include::_attributes/common-attributes.adoc[]
:context: virt-troubleshooting
toc::[]
[id="events_{context}"]
== Events
xref:../../nodes/clusters/nodes-containers-events.adoc#nodes-containers-events[{product-title} events] are records of important life-cycle information and are useful for monitoring and troubleshooting resource issues. You can gather information about the following events:
* VM events: Navigate to the xref:../../virt/virt-web-console-overview.adoc#ui-virtualmachine-details-events_virt-web-console-overview[*Events* tab] of the *VirtualMachine details* page in the web console.
* Namespace events: Use the `oc get` command with the namespace:
+
[source,terminal]
----
$ oc get events -n <namespace>
----
+
See the xref:../../nodes/clusters/nodes-containers-events.adoc#nodes-containers-events-list_nodes-containers-events[list of events] for details about specific events.
* Resource events: Use the `oc describe` command with the resource:
+
[source,terminal]
----
$ oc describe <resource> <resource_name>
----
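For example, to view events recorded for a specific virtual machine (`vm` is the short name for the `VirtualMachine` resource):

[source,terminal]
----
$ oc describe vm <vm_name>
----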
[id="virt-logs_{context}"]
== Logs
You can view logs for {VirtProductName} components and VMs by using the web console or the `oc` CLI tool. You can retrieve virtual machine logs from the `virt-launcher` pod. To control log verbosity, edit the `HyperConverged` custom resource.
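For example, log verbosity for individual {VirtProductName} components can be raised by editing the `HyperConverged` custom resource. The following is a sketch; the component keys shown and the verbosity values are illustrative:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  logVerbosityConfig:
    kubevirt:
      virtLauncher: 8 <1>
      virtHandler: 6
----
<1> Illustrative verbosity level; higher values are more verbose.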
include::modules/virt-viewing-virtual-machine-logs-web.adoc[leveloffset=+2]
include::modules/virt-viewing-logs-cli.adoc[leveloffset=+2]
include::modules/virt-common-error-messages.adoc[leveloffset=+2]
[id="troubleshooting-data-volumes_{context}"]
== Troubleshooting data volumes
You can check the `Conditions` and `Events` sections of the `DataVolume` object to analyze and resolve issues.
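For example, to inspect the conditions and events of a specific data volume:

[source,terminal]
----
$ oc describe dv <datavolume_name>
----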
include::modules/virt-about-dv-conditions-and-events.adoc[leveloffset=+2]
include::modules/virt-analyzing-datavolume-conditions-and-events.adoc[leveloffset=+2]


@@ -1,13 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-using-dashboard-to-get-cluster-info"]
= Using the {product-title} dashboard to get cluster information
include::_attributes/common-attributes.adoc[]
:context: virt-using-dashboard-to-get-cluster-info
toc::[]
Access the {product-title} dashboard, which captures high-level information about the cluster, by clicking *Home > Dashboards > Overview* from the {product-title} web console.
The {product-title} dashboard provides various cluster information, captured in individual dashboard _cards_.
include::modules/virt-about-the-overview-dashboard.adoc[leveloffset=+1]


@@ -104,7 +104,7 @@ Deprecated features are included in the current release and supported. However,
* In a future release, support for the legacy HPP custom resource, and the associated storage class, will be deprecated. Beginning in {VirtProductName} {VirtVersion}, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. The Operator continues to support the existing (legacy) format of the HPP custom resource and the associated storage class. If you use the HPP Operator, plan to xref:../virt/virtual_machines/virtual_disks/virt-configuring-local-storage-for-vms.adoc#virt-configuring-local-storage-for-vms[create a storage class for the CSI driver] as part of your migration strategy.
[id="virt-4-13-removed"]
@@ -180,7 +180,7 @@ link:https://access.redhat.com/support/offerings/techpreview[Technology Preview
//CNV-21824
* You can now run xref:../virt/support/virt-running-cluster-checkups.adoc#virt-measuring-latency-vm-secondary-network_virt-running-cluster-checkups[{product-title} cluster checkups] to measure network latency between VMs.
* You can now run xref:../virt/support/monitoring/virt-running-cluster-checkups.adoc#virt-running-cluster-checkups[{product-title} cluster checkups] to measure network latency between VMs.
//CNV-20526
* The Tekton Tasks Operator (TTO) now xref:../virt/virtual_machines/virt-managing-vms-openshift-pipelines.adoc#virt-managing-vms-openshift-pipelines[integrates {VirtProductName} with {pipelines-title}]. TTO includes cluster tasks and example pipelines that allow you to:
@@ -192,7 +192,7 @@ link:https://access.redhat.com/support/offerings/techpreview[Technology Preview
** Customize a basic Windows 10 installation and then create a new image and template.
//CNV-20149
* You can now use the xref:../virt/support/virt-monitoring-vm-health.adoc#virt-define-guest-agent-ping-probe_virt-monitoring-vm-health[guest agent ping probe] to determine if the QEMU guest agent is running on a virtual machine.
* You can now use the xref:../virt/support/monitoring/virt-monitoring-vm-health.adoc#virt-define-guest-agent-ping-probe_virt-monitoring-vm-health[guest agent ping probe] to determine if the QEMU guest agent is running on a virtual machine.
//CNV-20963
* You can now use Microsoft Windows 11 as a guest operating system. However, {VirtProductName} {VirtVersion} does not support USB disks, which are required for a critical function of BitLocker recovery. To protect recovery keys, use other methods described in the link:https://learn.microsoft.com/en-us/windows/security/information-protection/bitlocker/bitlocker-recovery-guide-plan[BitLocker recovery guide].


@@ -65,7 +65,6 @@ Manage the VMs:
VMs are connected to the pod network by default. You must configure a secondary network, such as Linux bridge or SR-IOV, and then add the network to the VM configuration.
====
* xref:../virt/support/virt-logs.adoc#virt-logs[View {VirtProductName} logs by using the CLI].
* xref:../virt/virtual_machines/virt-automating-windows-sysprep.adoc#virt-automating-windows-sysprep[Automate Windows VM deployments with `sysprep`].
* xref:../virt/live_migration/virt-live-migration.adoc#virt-live-migration[Live migrate VMs].
* xref:../virt/backup_restore/virt-backup-restore-overview.adoc#virt-backup-restore-overview[Back up and restore VMs].


@@ -1,23 +0,0 @@
:_content-type: ASSEMBLY
[id="virt-configuring-a-watchdog"]
= Configuring a watchdog
include::_attributes/common-attributes.adoc[]
:context: virt-configuring-a-watchdog
toc::[]
Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service.
[id="{context}_prerequisites"]
== Prerequisites
* The virtual machine must have kernel support for an `i6300esb` watchdog device. {op-system-base-full} images support `i6300esb`.
include::modules/virt-defining-a-watchdog.adoc[leveloffset=+1]
include::modules/virt-installing-a-watchdog.adoc[leveloffset=+1]
[id="{context}_additional-resources"]
[role="_additional-resources"]
== Additional resources
* xref:../../../applications/application-health.html[Monitoring application health by using health checks]