GP3 volumes support configurable throughput from 125 MiB/s to
2000 MiB/s. Allow setting this at install time in the
install-config.
https://issues.redhat.com/browse/CORS-4212
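A hedged sketch of how this could look in install-config.yaml (the `throughput` field name is an assumption based on this change and should be verified against the release in use):

```yaml
# Hypothetical install-config.yaml fragment; the throughput field
# name under rootVolume is an assumption.
controlPlane:
  platform:
    aws:
      rootVolume:
        type: gp3
        size: 120
        throughput: 500   # MiB/s, within the gp3 range of 125-2000
```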
This field was introduced [1] before the Installer had support for
custom AMIs in machine pools [2]. Now that it does, the same
functionality is achieved via the defaultMachinePlatform field
`platform.aws.defaultMachinePlatform.amiID`
[1] fdf94e39ee
[2] bc47222576
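For illustration, the defaultMachinePlatform form mentioned above might look like this (the AMI ID is a placeholder):

```yaml
# Hypothetical fragment: a custom AMI applied to all machine pools.
platform:
  aws:
    region: us-east-1
    defaultMachinePlatform:
      amiID: ami-0123456789abcdef0   # placeholder custom AMI
```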
User documentation for the Public IPv4 Pool feature (BYOIPv4)
in the install-config, where the customer can specify the ID of a
Public IPv4 Pool (a public IPv4 CIDR block brought into the AWS account).
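A minimal sketch of the install-config usage (the exact field name `publicIpv4Pool` is an assumption; the pool ID is a placeholder):

```yaml
# Hypothetical fragment for the BYOIPv4 feature described above.
platform:
  aws:
    region: us-east-1
    publicIpv4Pool: ipv4pool-ec2-0123456789abcdef0   # placeholder pool ID
```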
Introduce installer docs (a reference for the product docs) to provision
the VPC and its dependencies (Carrier Gateway and subnets) using
CloudFormation templates when installing the BYO-VPC variant.
Insert instructions for installing a cluster that extends
nodes into Local Zones (new VPC) into the existing documentation
for installing into an existing VPC.
A Day-2 section is also added as a reference for existing Local
Zone automation. Day-2 operations are not part of the official documentation
delivered in 4.14, but they are mapped as an open question in the
enhancement proposal [1232](https://github.com/openshift/enhancements/pull/1232).
The steps described in the KCS were validated with the QE and SDN teams.
Add Documentation for Phase-1[1] of installing OCP cluster in existing VPC
with Local Zone Subnets. The documentation includes CloudFormation Templates
to create Local Zone public subnet and route table association.
[1] Enhancement Proposal: https://github.com/openshift/enhancements/pull/1232
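A minimal sketch of what such a template could contain (resource names, parameters, CIDR, and the Local Zone name are illustrative; the real templates ship with the docs):

```yaml
# Hypothetical CloudFormation sketch: Local Zone public subnet plus
# route table association, per the steps described above.
Parameters:
  VpcId:
    Type: String
  PublicRouteTableId:
    Type: String
Resources:
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VpcId
      CidrBlock: 10.0.128.0/20              # illustrative CIDR
      AvailabilityZone: us-east-1-nyc-1a    # example Local Zone
  PublicSubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTableId
```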
64665ebccf added a control plane
machineset manifest. This updates corresponding UPI docs to remove
this manifest when the cluster is running without the MAO.
For disconnected clusters, OpenShift can be configured not to manage
DNS, and the cluster administrator can configure DNS manually.
Otherwise, the Ingress operator will try to contact STS directly at
"sts.amazonaws.com" instead of the configured VPC endpoint for the
cluster.
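One way the VPC endpoint is typically expressed in install-config.yaml is via custom service endpoints (the endpoint URL below is a placeholder):

```yaml
# Hypothetical fragment: route STS calls to a VPC endpoint rather
# than the public sts.amazonaws.com endpoint.
platform:
  aws:
    serviceEndpoints:
    - name: sts
      url: https://vpce-0123456789abcdef0.sts.us-east-1.vpce.amazonaws.com  # placeholder
```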
This patch updates the "cloud-install" links in the documentation to
point to the current location.
Signed-off-by: Juan Hernandez <juan.hernandez@redhat.com>
This implements part of the plan from:
https://github.com/openshift/os/issues/477
When we originally added the pinned RHCOS metadata `rhcos.json`
to the installer, we also changed the coreos-assembler `meta.json`
format into an arbitrary new format in the name of some cleanups.
In retrospect, this was a big mistake because we now have two
formats.
Then Fedora CoreOS appeared and added streams JSON as a public API.
We decided to unify on streams metadata; there's now a published
Go library for it: https://github.com/coreos/stream-metadata-go
Among other benefits, it is a single file that supports multiple
architectures.
UPI installs should now use stream metadata, particularly
to find public cloud images. This is exposed via a new
`openshift-install coreos print-stream-json` command.
This is an important preparatory step for exposing this via
`oc` as well as having something in the cluster update to
it.
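As a sketch, a UPI user could locate a cloud image from the printed stream metadata like this. The JSON excerpt is a minimal, hypothetical reduction of the stream-metadata-go schema; in practice the input would come from `openshift-install coreos print-stream-json`:

```python
import json

# Minimal, hypothetical excerpt of CoreOS stream metadata; real input
# comes from `openshift-install coreos print-stream-json`.
stream = json.loads("""
{
  "architectures": {
    "x86_64": {
      "images": {
        "aws": {
          "regions": {
            "us-east-1": {"image": "ami-0123456789abcdef0"}
          }
        }
      }
    }
  }
}
""")

# Walk the per-architecture image tree to find the AWS AMI for a region.
ami = stream["architectures"]["x86_64"]["images"]["aws"]["regions"]["us-east-1"]["image"]
print(ami)
```

The same lookup is easy to express with `jq` against the command's output; the point is that one file covers all architectures and clouds.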
HOWEVER as a (really hopefully temporary) hack, we *duplicate*
the metadata so that IPI installs use the new stream format,
and UPI CI jobs can still use the old format (with different RHCOS versions).
We will port the UPI docs and CI jobs after this merges.
Co-authored-by: Matthew Staebler <staebler@redhat.com>
Fix a disconnect about the slug to use for the reference link from
17030b3bdb (aws: allow users to set the KMS key id for encrypting EBS
volumes, 2020-03-13, #3293).
Users can set the AMI on the platform, in defaultMachinePlatform, or on an
individual machine pool; the AMI used follows that list in increasing order
of priority, so a machine-pool setting wins over the others.
We shouldn't assume folks will have a private zone they can dedicate
to the sole use of the new cluster. This commit talks users through
adjusting their DNS configuration to consume an existing zone with
arbitrary identification.
I'd like to drop the owned tag from 01_vpc.yaml, but that's been
contentious [1]. I'm punting in this commit so we can get the
consensus doc change landed.
[1]: https://github.com/openshift/installer/pull/2420#issuecomment-541236368
I just dropped this in at the end in 14e06912a3
(docs/user/aws/install_upi: Document bring-your-own-DNS, 2019-08-14, #2221),
but you need it for a functioning cluster (something about console and
OAuth and mumble mumble). Jeremiah got the placement right for GCP in
16d4d388ac (upi/gcp: document manual creation of apps DNS records,
2019-08-29, #2289). This commit updates AWS to match.
We grew replicas-zeroing in c22d042 (docs/user/aws/install_upi: Add
'sed' call to zero compute replicas, 2019-05-02, #1649) to set the
stage for changing the 'replicas: 0' semantics from "we'll make you
some dummy MachineSets" to "we won't make you MachineSets". But that
hasn't happened yet, and since 64f96df (scheduler: Use schedulable
masters if no compute hosts defined, 2019-07-16, #2004) 'replicas: 0'
for compute has also meant "add the 'worker' role to control-plane
nodes". That leads to racy problems when ingress comes through a load
balancer, because Kubernetes load balancers exclude control-plane
nodes from their target set [1,2] (although this may get relaxed
soonish [3]). If the router pods get scheduled on the control plane
machines due to the 'worker' role, they are not reachable from the
load balancer and ingress routing breaks [4]. Seth says:
> pod nodeSelectors are not like taints/tolerations. They only have
> effect at scheduling time. They are not continually enforced.
which means that attempting to address this issue as a day-2 operation
would mean removing the 'worker' role from the control-plane nodes and
then manually evicting the router pods to force rescheduling. So
until we get the changes from [3], we can either drop the zeroing [5]
or adjust the scheduler configuration to remove the effect of the
zeroing. In both cases, this is a change we'll want to revert later
once we bump Kubernetes to pick up a fix for the service load-balancer
targets.
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1671136#c1
[2]: https://github.com/kubernetes/kubernetes/issues/65618
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1744370#c6
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=1755073
[5]: https://github.com/openshift/installer/pull/2402/
This is a bit more accessible than pointing folks at Godocs, since it
allows us to focus on the YAML property names (while Godocs
understandably focus on Go property names) and YAML renderings. Also
break up our old "one big example" install-config.yaml into a minimal
per-platform example and a series of small extensions exercising
groups of properties.
The vSphere docs are based heavily on [1].
Also drop proxy.md. It was added in e7edbf71fd (Add proxy
configuration to bootstrap node, 2019-06-24, #1832), but:
* Proxy testing and Squid configuration information belongs in
openshift/release, not in the installer repository.
* docs/user/customization.md now contains a more complete proxy-config
fragment.
OpenStack computeFlavor precedence is based on [2].
[1]: https://github.com/openshift/openshift-docs/blob/enterprise-4.2/modules/installation-vsphere-config-yaml.adoc
Last touched by commit openshift/openshift-docs@25afc7626d , 2019-08-19
[2]: https://github.com/openshift/installer/pull/2162#discussion_r322410878
Some users want to provide their own *.apps DNS records instead of
delegating that to the ingress operator [1]. With this commit, we
tell the ingress operator not to worry about managing any hosted
zones, and walk users through how they can create the expected records
[2] themselves.
Removing the zones from the YAML manifest via sed or other POSIX
command was too complicated, so I've given up on that and moved to
Python and PyYAML [3]. There are many possible alternatives, but
PyYAML seemed the most likely to be already installed, it's packaged
for many systems if users want to install it, and the syntax is fairly
readable if users want to accomplish the same task with a different
tool of their choice. The Python examples are more readable as
multi-line strings than if they were one-liners, and they can still be
copy-pasted into a shell. Once faq [4] or similar becomes more common
on user systems, we can replace this with:
$ DATA="$(faq '.compute[0].replicas=0' install-config.yaml)"
$ echo "${DATA}" >install-config.yaml
and similar.
For now, I'm not suggesting admins monitor for other DNSRecord objects
[5] and fulfill them as they show up. In case we do decide to have
folks monitor them later, here's a sample:
$ oc -n openshift-ingress-operator get -o yaml dnsrecord default-wildcard
apiVersion: ingress.operator.openshift.io/v1
kind: DNSRecord
metadata:
  creationTimestamp: "2019-08-22T20:45:00Z"
  finalizers:
  - operator.openshift.io/ingress-dns
  generation: 1
  labels:
    ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
  name: default-wildcard
  namespace: openshift-ingress-operator
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: IngressController
    name: default
    uid: b31db6db-c51d-11e9-8a7a-02ae97362ddc
  resourceVersion: "8847"
  selfLink: /apis/ingress.operator.openshift.io/v1/namespaces/openshift-ingress-operator/dnsrecords/default-wildcard
  uid: b59fbbfa-c51d-11e9-8a7a-02ae97362ddc
spec:
  dnsName: '*.apps.wking.devcluster.openshift.com.'
  recordType: CNAME
  targets:
  - ab37f072ec51d11e98a7a02ae97362dd-240922428.us-west-2.elb.amazonaws.com
status:
  zones:
  - dnsZone:
      tags:
        Name: wking-nfnsr-int
        kubernetes.io/cluster/wking-nfnsr: owned
  - dnsZone:
      id: Z3URY6TWQ91KVV
The record listing is from a cluster running [6].
[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1715635
[2]: 9ce86811e6/pkg/operator/controller/ingress/dns.go (L75-L115)
[3]: https://pyyaml.org/
[4]: https://github.com/jzelinskie/faq
[5]: d115a14661/pkg/api/v1/types.go (L18-L25)
[6]: https://mirror.openshift.com/pub/openshift-v4/clients/ocp-dev-preview/4.2.0-0.nightly-2019-08-25-233755/
API version v1beta4 of the install-config deprecated a few option names.
While not dramatic -- the installer knows how to update to the new format --
we should stop using deprecated options in the configuration examples
we provide.
This isn't strictly required, because we're removing the resulting
MachineSets right afterwards. It's setting the stage for a future
where 'replicas: 0' means "no MachineSets" instead of "we'll make you
some dummy MachineSets". And we can always remove the sed later if
that future ends up not happening.
The sed is based on [1], to replace 'replicas' only for the compute
pool (and not the control-plane pool). While it should be
POSIX-compliant (and not specific to GNU sed or other
implementations), it is a bit finicky for a few reasons:
* The range matching will not detect matches in the first line, but
'replicas' will always follow its parent 'compute', so we don't have
to worry about first-line matches.
* 'compute' sorts before 'controlPlane', so we don't have to worry
about their 'replicas: ' coming first.
* 'baseDomain' is the only other property that sorts before 'compute',
but 'replicas: ' is not a legal substring for its domain-name value,
so we don't have to worry about accidentally matching that.
* While all of the above mean we're safe for now, this approach could
break down if we add additional properties in the future that sort
before 'compute' but do allow 'replicas: ' as a valid substring.
[1]: https://stackoverflow.com/a/33416489
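The ordering argument above can be sketched with a first-match replacement over a toy install-config (plain Python standing in for the sed call; the key order mirrors the sorted YAML the installer emits):

```python
import re

# Toy install-config with keys in sorted order, as the installer emits them.
config = """\
baseDomain: example.com
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
"""

# Replace only the first 'replicas:' occurrence; per the ordering
# argument, that is always the compute pool's value.
edited = re.sub(r"replicas: \d+", "replicas: 0", config, count=1)
print(edited)
```

After the edit, the compute pool reads `replicas: 0` while the control-plane pool keeps `replicas: 3`, which is exactly the guarantee the sed range match relies on.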
Catching up with 13e4b702f7 (data/aws: create an api-int dns name,
2019-04-11, #1601), now that 052fceeeaf (asset/manifests: use internal
apiserver name, 2019-04-17, #1633) has moved some internal assets over
to that name.
Folks are free to opt-in to the machine API during a UPI flow, but
creating Machine(Set)s that match their host environment requires
matching a few properties (subnet, securityGroups, ...). Our default
templates are unlikely to do that out of the box, so just remove them
with the standard flow. Users who want to wade in can do so, and I've
adjusted our CloudFormation templates to set the same tags as our IPI
assets to make this easier. But with the rm call, other folks don't
have to worry about broken Machine(Set)s in their cluster confusing
the machine API or other admins.
The awkward join syntax for subnet names is because YAML doesn't
support nesting !s [1]:
You can't nest short form functions consecutively, so a pattern like
!GetAZs !Ref is invalid.
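A hedged illustration of the resulting long-form workaround (the property context is illustrative):

```yaml
# Hypothetical fragment: the inner function uses the long form because
# consecutive short-form functions (e.g. !GetAZs !Ref) are invalid.
AvailabilityZone:
  Fn::Select:
  - 0
  - Fn::GetAZs: !Ref "AWS::Region"
```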
Also fix a few unrelated nits, e.g. the unused VpcId property in
06_cluster_worker_node.yaml.
[1]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html#w2ab1c21c24c36c17b8