This is what I do. `dnf` no longer complains if invoked as `yum`;
there's no point in having two separate sets of instructions.
Also use `systemctl enable --now` for further brevity.
Catching up with c9b0e2ff (manifests: stop using kube core operator,
2018-10-08, #420).
Generated with:
$ dep ensure
using:
$ dep version
dep:
version : v0.5.0
build date :
git hash : 22125cf
go version : go1.10.3
go compiler : gc
platform : linux/amd64
features : ImportDuringSolve=false
These escaped the great purge of 0c6d53b7 (*: remove bazel,
2018-09-24, #342). kubernetes/BUILD.bazel snuck in with 70ea0e81
(tests/smoke/vendor: switch from glide to dep, 2018-09-28, #380), and
tectonic/BUILD.bazel snuck in with e2d9fd30 (manifests: make tectonic/
flat dir, 2018-09-25, #330). I'd guess both were due to rebases from
commits originally made before #342 landed.
Using Terraform to remove all resources created by the bootstrap
modules. For this to work, all platforms must define a bootstrap
module (and they all currently do).
This commit moves the previous destroy-cluster into a new 'destroy
cluster' subcommand, because grouping different destroy flavors into
sub-commands makes the base command easier to understand. We expect
both destroy flavors to be long-running, because it's hard to write
generic logic for "is the cluster sufficiently live for us to remove
the bootstrap". We don't want to hang forever if the cluster dies
before coming up, but there are no solid rules for how long to wait
before deciding that it's never going to come up. When we start
destroying the bootstrap resources automatically in the future, we will
pick reasonable timeouts, but we will still want to provide callers with
the ability to manually remove the bootstrap resources if we happen to
fall out of that timeout on a cluster that does eventually come up.
I've also created a LoadMetadata helper to share the "retrieve the
metadata from the asset directory" logic between the destroy-cluster
and destroy-bootstrap logic. The new helper lives in the cluster
asset package close to the code that determines that file's location.
I've pushed the Terraform module unpacking and 'terraform init' call
down into a helper used by the Apply and Destroy functions to make
life easier on the callers.
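The shared-setup pattern looks roughly like this; the helper name and
signatures are assumptions, and a stub stands in for shelling out to the
real terraform binary so the sketch is self-contained:

```go
package main

import "fmt"

// runTerraform stands in for exec'ing the real binary; the installer
// would use os/exec here.
var runTerraform = func(dir string, args ...string) error {
	fmt.Printf("terraform %v (in %s)\n", args, dir)
	return nil
}

// setup unpacks the embedded Terraform modules into dir and runs
// 'terraform init'. Name and signature are assumptions for this sketch.
func setup(dir string) error {
	// (module unpacking elided)
	return runTerraform(dir, "init")
}

// Apply and Destroy both lean on setup, so callers no longer have to
// unpack modules or run 'terraform init' themselves.
func Apply(dir string) error {
	if err := setup(dir); err != nil {
		return err
	}
	return runTerraform(dir, "apply", "-auto-approve")
}

func Destroy(dir string) error {
	if err := setup(dir); err != nil {
		return err
	}
	return runTerraform(dir, "destroy", "-auto-approve")
}

func main() {
	if err := Apply("/tmp/work"); err != nil {
		panic(err)
	}
}
```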
I've also fixed a path.Join -> filepath.Join typo in Apply, which
dates back to ff5a57b0 (pkg/terraform: Modify some helper functions
for the new binary layout, 2018-09-19, #289). These aren't network
paths ;).
Avoid:
$ bin/openshift-install cluster
FATAL Error executing openshift-install: open tests/smoke/vendor/github.com/prometheus/procfs/fixtures/26231/fd/0: no such file or directory
as the old implementation attempts to walk the whole directory and
hits:
$ ls -l tests/smoke/vendor/github.com/prometheus/procfs/fixtures/26231/fd/
total 0
lrwxrwxrwx. 1 trking trking 24 Oct 5 01:26 0 -> ../../symlinktargets/abc
lrwxrwxrwx. 1 trking trking 24 Oct 5 01:26 1 -> ../../symlinktargets/def
lrwxrwxrwx. 1 trking trking 24 Oct 5 01:26 10 -> ../../symlinktargets/xyz
lrwxrwxrwx. 1 trking trking 24 Oct 5 01:26 2 -> ../../symlinktargets/ghi
lrwxrwxrwx. 1 trking trking 24 Oct 5 01:26 3 -> ../../symlinktargets/uvw
With this commit, we only load files from the disk when someone asks
for them.
I've adjusted the unit tests a bit because:
* ioutil.ReadFile returns errors like:
read /: is a directory
for directories. There does not appear to be an analog to
os.IsNotExist() for this condition, so instead of checking for it in
the tests, I've just dropped the empty-string input cases. If we
break something and call FetchByName on an empty string, we want to
error out, and that error message is appropriately descriptive
already.
* Globs are not as precise as regular expressions, so our glob would
match master-1x.ign and similar which the previous regexp excluded.
But loading a few extra files doesn't seem like that big a deal, and
folks adding files with names like that seems unlikely.
This seems to be a very common mistake when people try to build the
installer. Add a sanity check to catch it and make the error message
clearer.
Co-authored-by: W. Trevor King <wking@tremily.us>
From [1]:
If dir is the empty string, TempDir uses the default directory for
temporary files (see os.TempDir).
so there's no point in us calling TempDir() directly.
The explicit call is from 408c0663 (asset/cluster: Invoke terraform in
a temp dir, 2018-09-24, #319).
[1]: https://golang.org/pkg/io/ioutil/#TempDir
- Add Loadable interface to load assets from disk.
- Load on-disk assets in Fetch() and use them to overwrite the state
file.
- Add FileFetcher interface to help reading the files from disk.
Alex gave the history behind our previous bucket name [1]:
We should probably just fix the creation of the S3 bucket since we
no longer rely on CNAMEs (which required the S3 bucket to match the
domain name).
But now we can just let AWS pick a random bucket name for us.
I've also dropped the no-longer-used S3Bucket validator.
[1]: https://github.com/openshift/installer/pull/359#issuecomment-426051251
It looks like this was (accidentally?) removed in f8286662
(modules/vpc: support re-apply of terraform when AZ number changes,
2018-03-12, coreos/tectonic-installer#3092). We need to set it to
spread worker subnets over the available zones.
While analyzing the generated cluster-config.yaml for an OpenStack
deployment, I noticed vpcID under the OpenStack platform config. This
is not used anywhere and should just be removed.