
Added some steps to help troubleshoot OSP

Added a link in the general OCP on OSP doc
Fixed a wrong md link
Eduardo Minguez Perez
2019-01-15 09:24:09 +01:00
parent c18c6d6477
commit d769ee74bb
2 changed files with 83 additions and 1 deletion


@@ -48,7 +48,7 @@ enough to store the ignition config files, so they are served by swift instead.
`openstack image create --container-format=bare --disk-format=qcow2 --file redhat-coreos-${RHCOSVERSION}-openstack.qcow2 redhat-coreos-${RHCOSVERSION}`
**NOTE:** Depending on your OpenStack environment you can upload the RHCOS image
-as `raw` or `qcow2`. See [https://docs.openstack.org/image-guide/image-formats.html](Disk and container formats for images) for more information.
+as `raw` or `qcow2`. See [Disk and container formats for images](https://docs.openstack.org/image-guide/image-formats.html) for more information.
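Some Glance back ends (Ceph RBD, for example) perform best with `raw` images. A minimal sketch of the conversion and upload, assuming `qemu-img` is available locally:
```
qemu-img convert -f qcow2 -O raw \
  redhat-coreos-${RHCOSVERSION}-openstack.qcow2 redhat-coreos-${RHCOSVERSION}-openstack.raw
openstack image create --container-format=bare --disk-format=raw \
  --file redhat-coreos-${RHCOSVERSION}-openstack.raw redhat-coreos-${RHCOSVERSION}
```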
* The public network should be created by the OSP admin. Verify the name/ID of the 'External' network:
```
openstack network list --external
```
@@ -246,6 +246,9 @@ api VM:
* `openstack server delete <cluster name>-api`
+## Troubleshooting
+See the [troubleshooting installer issues in OpenStack](./troubleshooting.md) guide.
## Reporting Issues


@@ -0,0 +1,79 @@
# OpenShift 4 installer on OpenStack troubleshooting
Support for launching clusters on OpenStack is **experimental**.
Unfortunately, there will always be some cases where OpenShift fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure.
This document discusses some troubleshooting options for OpenStack-based
deployments. For general tips on troubleshooting the installer, see the [Installer Troubleshooting](../troubleshooting.md) guide.
## View instance logs
With the OpenStack CLI tools installed, run:
`openstack console log show <instance>`
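For example, to capture the console log of every instance into a local file (a sketch; it assumes the `openstack` client is scoped to the cluster's project):
```
for instance in $(openstack server list -f value -c Name); do
  # One log file per instance, e.g. master-0-console.log
  openstack console log show ${instance} > ${instance}-console.log
done
```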
## ssh access to the instances
By default, the only instance exposed externally is the service VM, and it does
not allow ssh access (no ssh key is injected). If you need shell access to the
hosts, create a floating IP and attach it to the target instance (master-0 in
this example).
### Create a security group to allow ssh access
```
INSTANCE=$(openstack server list -f value -c Name | grep master-0)
openstack security group create ssh
# Note this opens port 22/tcp to 0.0.0.0/0
openstack security group rule create \
--ingress \
--protocol tcp \
--dst-port 22 \
ssh
openstack server add security group ${INSTANCE} ssh
```
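To confirm the group is attached, check the `security_groups` field of the instance:
```
openstack server show ${INSTANCE} -f value -c security_groups
```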
Optionally, allow ICMP traffic (to ping the instance):
```
openstack security group rule create \
--ingress \
--protocol icmp \
ssh
```
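You can list the rules of the `ssh` group created above to verify them:
```
openstack security group rule list ssh
```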
### Create and attach the floating IP
```
# This must be set to the external network configured in the OpenShift install
PUBLIC_NETWORK="external_network"
INSTANCE=$(openstack server list -f value -c Name | grep master-0)
FIP=$(openstack floating ip create ${PUBLIC_NETWORK} --description ${INSTANCE} -f value -c floating_ip_address)
openstack server add floating ip ${INSTANCE} ${FIP}
```
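Once attached, the floating IP shows up in the instance's address list, and (if you allowed ICMP above) the instance should answer pings:
```
openstack server show ${INSTANCE} -f value -c addresses
ping -c 3 ${FIP}
```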
### Access the host
```
ssh core@${FIP}
```
You can use it as a jump host as well:
```
ssh -J core@${FIP} core@<host>
```
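This is handy for collecting logs from hosts that have no floating IP of their own. A sketch, assuming the kubelet runs as a systemd unit on the RHCOS nodes:
```
ssh -J core@${FIP} core@<host> 'journalctl -b -u kubelet --no-pager' > kubelet.log
```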
NOTE: If you are running the `openshift-installer` from an all-in-one OpenStack
deployment (compute + controller on a single host), you can ssh into the
instance directly through its DHCP network namespace:
```
# First address pair of the instance, e.g. "openshift=<IP>"
NODE_ADDRESSES=$(openstack server show ${INSTANCE} -f value -c addresses | cut -d',' -f1)
# Strip the "openshift=" network-name prefix, keeping only the IP
NODE_IP=${NODE_ADDRESSES#"openshift="}
# Run ssh inside the DHCP namespace of the "openshift" network
sudo ip netns exec "qdhcp-$(openstack network show openshift -f value -c id)" ssh core@$NODE_IP
```