mirror of https://github.com/openshift/installer.git synced 2026-02-05 15:47:14 +01:00

docs/user/aws/install_upi: Document bring-your-own-DNS

Some users want to provide their own *.apps DNS records instead of
delegating that to the ingress operator [1].  With this commit, we
tell the ingress operator not to worry about managing any hosted
zones, and walk users through how they can create the expected records
[2] themselves.

Removing the zones from the YAML manifest via sed or other POSIX
tools was too complicated, so I've given up on that and moved to
Python and PyYAML [3].  There are many possible alternatives, but
PyYAML seemed the most likely to already be installed; it's packaged
for many systems if users want to install it, and the syntax is
fairly readable if users want to accomplish the same task with a
different tool of their choice.  The Python examples are more
readable as multi-line strings than as one-liners, and they can still
be copy-pasted into a shell.  Once faq [4] or similar becomes more
common on user systems, we can replace this with:

  $ DATA="$(faq '.compute[0].replicas=0' install-config.yaml)"
  $ echo "${DATA}" >install-config.yaml

and similar.

For now, I'm not suggesting admins monitor for other DNSRecord objects
[5] and fulfill them as they show up.  In case we do decide to have
folks monitor them later, here's a sample:

  $ oc -n openshift-ingress-operator get -o yaml dnsrecord default-wildcard
  apiVersion: ingress.operator.openshift.io/v1
  kind: DNSRecord
  metadata:
    creationTimestamp: "2019-08-22T20:45:00Z"
    finalizers:
    - operator.openshift.io/ingress-dns
    generation: 1
    labels:
      ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
    name: default-wildcard
    namespace: openshift-ingress-operator
    ownerReferences:
    - apiVersion: operator.openshift.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: IngressController
      name: default
      uid: b31db6db-c51d-11e9-8a7a-02ae97362ddc
    resourceVersion: "8847"
    selfLink: /apis/ingress.operator.openshift.io/v1/namespaces/openshift-ingress-operator/dnsrecords/default-wildcard
    uid: b59fbbfa-c51d-11e9-8a7a-02ae97362ddc
  spec:
    dnsName: '*.apps.wking.devcluster.openshift.com.'
    recordType: CNAME
    targets:
    - ab37f072ec51d11e98a7a02ae97362dd-240922428.us-west-2.elb.amazonaws.com
  status:
    zones:
    - dnsZone:
        tags:
          Name: wking-nfnsr-int
          kubernetes.io/cluster/wking-nfnsr: owned
    - dnsZone:
        id: Z3URY6TWQ91KVV

The route listing is from a cluster running [6].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1715635
[2]: 9ce86811e6/pkg/operator/controller/ingress/dns.go (L75-L115)
[3]: https://pyyaml.org/
[4]: https://github.com/jzelinskie/faq
[5]: d115a14661/pkg/api/v1/types.go (L18-L25)
[6]: https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/4.2.0-0.nightly-2019-08-25-233755/
W. Trevor King
2019-08-14 15:19:01 -07:00
parent 8f972b4598
commit 14e06912a3

@@ -4,11 +4,9 @@ The steps for performing a UPI-based install are outlined here. Several [CloudFo
provided to assist in completing these steps or to help model your own. You are also free to create the required
resources through other methods; the CloudFormation templates are just an example.
## Create Ignition Configs
## Create Configuration
The machines will be started manually.
Therefore, it is required to generate the bootstrap and machine Ignition configs and store them for later steps.
Use [a staged install](../overview.md#multiple-invocations) to remove the control-plane Machines and compute MachineSets, because we'll be providing those ourselves and don't want to involve [the machine-API operator][machine-api-operator].
Create an install configuration as for [the usual approach](install.md#create-configuration):
```console
$ openshift-install create install-config
@@ -20,26 +18,56 @@ $ openshift-install create install-config
? Pull Secret [? for help]
```
Edit the resulting `openshift-install.yaml` to set `replicas` to 0 for the `compute` pool:
### Empty Compute Pools
```console
$ sed -i '1,/replicas: / s/replicas: .*/replicas: 0/' install-config.yaml
We'll be providing the control-plane and compute machines ourselves, so edit the resulting `install-config.yaml` to set `replicas` to 0 for the `compute` pool:
```sh
python -c '
import yaml;
path = "install-config.yaml";
data = yaml.load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
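If you want to sanity-check the PyYAML round-trip before touching your real file, you can exercise the same edit against a stub (the fragment below is a minimal, hypothetical slice of an install config, not a complete one):

```python
import yaml  # PyYAML, as in the snippet above

# Minimal, hypothetical install-config fragment for illustration only.
stub = """\
apiVersion: v1
compute:
- name: worker
  replicas: 3
"""

# safe_load behaves like the plain yaml.load above but avoids the
# loader warning that newer PyYAML releases emit.
data = yaml.safe_load(stub)
data["compute"][0]["replicas"] = 0
out = yaml.dump(data, default_flow_style=False)
print(out)
```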
Create manifests to get access to the control-plane Machines and compute MachineSets:
## Edit Manifests
Use [a staged install](../overview.md#multiple-invocations) to make some adjustments which are not exposed via the install configuration.
```console
$ openshift-install create manifests
INFO Consuming "Install Config" from target directory
```
From the manifest assets, remove the control-plane Machines and the compute MachineSets:
### Remove Machines and MachineSets
Remove the control-plane Machines and compute MachineSets, because we'll be providing those ourselves and don't want to involve [the machine-API operator][machine-api-operator]:
```console
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machinesets-*.yaml
```
You are free to leave the compute MachineSets in if you want to create compute machines via the machine API, but if you do you may need to update the various references (`subnet`, etc.) to match your environment.
### Remove DNS Zones
If you don't want [the ingress operator][ingress-operator] to create DNS records on your behalf, remove the `privateZone` and `publicZone` sections from the DNS configuration:
```sh
python -c '
import yaml;
path = "manifests/cluster-dns-02-config.yml";
data = yaml.load(open(path));
del data["spec"]["publicZone"];
del data["spec"]["privateZone"];
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
```
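As a sketch of what this does to the manifest, here is the same deletion run against a hypothetical, trimmed-down `cluster-dns-02-config.yml` (your generated manifest will carry more fields, and the zone values below are made up):

```python
import yaml  # PyYAML, as in the snippet above

# Hypothetical, trimmed-down DNS config manifest for illustration only.
stub = """\
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  name: cluster
spec:
  baseDomain: your.cluster.domain.example.com
  privateZone:
    tags:
      kubernetes.io/cluster/your-cluster: owned
  publicZone:
    id: Z21IZ5YJJMZ2A4
"""

data = yaml.safe_load(stub)
del data["spec"]["publicZone"]
del data["spec"]["privateZone"]
out = yaml.dump(data, default_flow_style=False)
# spec should now carry only baseDomain, leaving the ingress operator
# with no zones to manage.
print(out)
```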
If you do so, you'll need to [add ingress DNS records manually](#add-the-ingress-dns-records) later on.
## Create Ignition Configs
Now we can create the bootstrap Ignition configs:
```console
@@ -241,6 +269,78 @@ openshift-service-catalog-apiserver-operator openshift-service-catalo
openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m
```
## Add the Ingress DNS Records
If you removed the DNS Zone configuration [earlier](#remove-dns-zones), you'll need to manually create some DNS records pointing at the ingress load balancer.
You can create either a wildcard `*.apps.{baseDomain}.` or specific records (more on the specific records below).
You can use A, CNAME, [alias][route53-alias], etc. records, as you see fit.
For example, you can create wildcard alias records by retrieving the ingress load balancer status:
```console
$ oc -n openshift-ingress get service router-default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
router-default LoadBalancer 172.30.62.215 ab37f072ec51d11e98a7a02ae97362dd-240922428.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m
```
Then find the hosted zone ID for the load balancer (or use [this table][route53-zones-for-load-balancers]):
```console
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "ab37f072ec51d11e98a7a02ae97362dd-240922428.us-east-2.elb.amazonaws.com").CanonicalHostedZoneNameID'
Z3AADJGX6KTTL2
```
And finally, add the alias records to your private and public zones:
```console
$ aws route53 change-resource-record-sets --hosted-zone-id "${YOUR_PRIVATE_ZONE}" --change-batch '{
> "Changes": [
> {
> "Action": "CREATE",
> "ResourceRecordSet": {
> "Name": "\\052.apps.your.cluster.domain.example.com",
> "Type": "A",
> "AliasTarget":{
> "HostedZoneId": "Z3AADJGX6KTTL2",
> "DNSName": "ab37f072ec51d11e98a7a02ae97362dd-240922428.us-east-2.elb.amazonaws.com.",
> "EvaluateTargetHealth": false
> }
> }
> }
> ]
> }'
$ aws route53 change-resource-record-sets --hosted-zone-id "${YOUR_PUBLIC_ZONE}" --change-batch '{
> "Changes": [
> {
> "Action": "CREATE",
> "ResourceRecordSet": {
> "Name": "\\052.apps.your.cluster.domain.example.com",
> "Type": "A",
> "AliasTarget":{
> "HostedZoneId": "Z3AADJGX6KTTL2",
> "DNSName": "ab37f072ec51d11e98a7a02ae97362dd-240922428.us-east-2.elb.amazonaws.com.",
> "EvaluateTargetHealth": false
> }
> }
> }
> ]
> }'
```
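The `\052` in the record names above is the octal escape Route 53 uses for a literal `*` label, so either spelling names the same wildcard record. If you'd rather build the change batch programmatically than hand-edit JSON, a sketch with only the standard library (the zone ID and load-balancer name below are placeholders; substitute your own values):

```python
import json

# Hypothetical values for illustration; substitute your own.
lb_zone = "Z3AADJGX6KTTL2"  # the ELB's canonical hosted zone ID
lb_dns = "ab37f072ec51d11e98a7a02ae97362dd-240922428.us-east-2.elb.amazonaws.com."

batch = {
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                # "\\052" is a literal backslash + 052: Route 53's
                # octal escape for the "*" wildcard label.
                "Name": "\\052.apps.your.cluster.domain.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": lb_zone,
                    "DNSName": lb_dns,
                    "EvaluateTargetHealth": False,
                },
            },
        }
    ]
}
batch_json = json.dumps(batch, indent=2)
print(batch_json)
```

The printed JSON can then be passed to `aws route53 change-resource-record-sets --change-batch ...` as in the commands above.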
If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes:
```console
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
oauth-openshift.apps.your.cluster.domain.example.com
console-openshift-console.apps.your.cluster.domain.example.com
downloads-openshift-console.apps.your.cluster.domain.example.com
alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
grafana-openshift-monitoring.apps.your.cluster.domain.example.com
prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
```
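For the explicit-record route, one sketch is to turn that host listing into a change batch of plain CNAMEs (with a TTL, since non-alias records require one). The hosts and load-balancer name below are placeholders; feed in the output of the `oc get routes` command above:

```python
import json

# Hypothetical values; take hosts from the route listing above and the
# load balancer name from the router-default service.
lb_dns = "ab37f072ec51d11e98a7a02ae97362dd-240922428.us-east-2.elb.amazonaws.com"
hosts = [
    "oauth-openshift.apps.your.cluster.domain.example.com",
    "console-openshift-console.apps.your.cluster.domain.example.com",
]

# One CREATE per route host; plain CNAMEs with a TTL instead of aliases.
batch = {
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": host + ".",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": lb_dns}],
            },
        }
        for host in hosts
    ]
}
print(json.dumps(batch, indent=2))
```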
[cloudformation]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
[delete-stack]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html
[ingress-operator]: https://github.com/openshift/cluster-ingress-operator
[machine-api-operator]: https://github.com/openshift/machine-api-operator
[route53-alias]: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html
[route53-zones-for-load-balancers]: https://docs.aws.amazon.com/general/latest/gr/rande.html#elb_region