* changes for pulling binary from releases.hashicorp.com
* changes for getting release from the official release site
* unit test cases
* unit test coverage
* changes for the releases.hashicorp.com getter
* lint fix
* manifest.json related changes
* github getter test cases
* added test cases for getters
* added test cases for plugins getter
* unit test cases for getting release from official site
* added descriptions to the methods
* formatting multiple hcl files
* recursive formatting
* test case fixing
* test case fix
* removed unnecessary code
* existing test code fix
* go formatting
* more test cases for new cases
* doc changes
* added additional check on error message in negative test case
* modifying the for loop to preserve the user's input variable file preference
* adding test cases
* pr comments + refactoring the for loop for better readability.
* pr comments | handling default case
* adding additional test case
* fixing test cases
* fixing test cases. creating a subdirectory to add a test case for both hcl and json auto var files
* adding test case for json auto var file
The GetBuilds function, available on both HCL2 and legacy JSON
configuration objects, used to return the Build interface.
This typing by interface is not useful here, since all the uses of
`GetBuilds' are self-contained within Packer, and we never use any
implementation other than `*CoreBuild`.
We've been relying on the dynamic type for all the builds being
*CoreBuild in some places of the code, so to avoid potential surprises
in the future, we'll change the signature now so it returns only
concrete types.
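A minimal sketch of the idea, with simplified stand-ins for Packer's own types (the real signatures live on the HCL2 and JSON configuration objects): returning the concrete type removes the need for type assertions at call sites.

```go
// Illustrative sketch only; CoreBuild here is a simplified stand-in.
package main

import "fmt"

// CoreBuild is the only implementation ever returned in practice.
type CoreBuild struct {
	Type string
}

// Before: GetBuilds() ([]Build, error) forced callers to assert the
// dynamic type:
//   cb, ok := build.(*CoreBuild)
// After: the concrete type is available directly.
func GetBuilds() ([]*CoreBuild, error) {
	return []*CoreBuild{{Type: "amazon-ebs.example"}}, nil
}

func main() {
	builds, _ := GetBuilds()
	for _, cb := range builds {
		fmt.Println(cb.Type) // no type assertion needed
	}
}
```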
In the current state, a Packer build that succeeds but fails to push its
metadata to HCP for reasons other than a lack of artifact will always
appear to succeed from the perspective of a user invoking `packer build`.
This can be misleading, as users may expect their artifacts to appear on
HCP Packer if their build succeeded on Packer Core. This commit changes
that behaviour: HCP push failures are now reported as real errors, and
Packer returns a non-zero exit code when they occur.
The hcp-sbom provisioner acts essentially like a download-only file
provisioner: it verifies that the downloaded file is an SPDX/CycloneDX
JSON-encoded SBOM file, and sets up its upload to HCP Packer later on.
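A rough sketch of what such a verification can look like, not the provisioner's actual code: SPDX JSON documents carry a top-level spdxVersion field, while CycloneDX JSON documents carry bomFormat.

```go
// Illustrative format probe, not the provisioner's implementation.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// sbomFormat inspects the distinguishing top-level keys of the two
// supported SBOM formats.
func sbomFormat(data []byte) (string, error) {
	var probe struct {
		SpdxVersion string `json:"spdxVersion"`
		BomFormat   string `json:"bomFormat"`
	}
	if err := json.Unmarshal(data, &probe); err != nil {
		return "", err
	}
	switch {
	case probe.SpdxVersion != "":
		return "SPDX", nil
	case probe.BomFormat == "CycloneDX":
		return "CycloneDX", nil
	default:
		return "", errors.New("not a recognised SPDX/CycloneDX JSON SBOM")
	}
}

func main() {
	out, err := sbomFormat([]byte(`{"bomFormat":"CycloneDX","specVersion":"1.5"}`))
	fmt.Println(out, err) // CycloneDX <nil>
}
```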
As we're trying to move away from gob for serialising data over the
wire, this commit adds the capability for Packer to pick dynamically
between gob and protobuf as the serialisation format used to communicate
with plugins.
As it stands, if all the plugins discovered are compatible with
protobuf, and we have not forced gob usage, protobuf will be the
serialisation format picked.
If any plugin is not compatible with protobuf, gob will be used for
communicating with all the plugins that will be used over the course of
a command.
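A minimal sketch of the selection logic under those rules; the plugin type and function names are illustrative, not Packer's actual API.

```go
package main

import "fmt"

type plugin struct {
	name             string
	supportsProtobuf bool
}

// pickFormat returns "protobuf" only when every discovered plugin
// supports it and gob usage has not been forced; otherwise gob is
// used with all plugins for the duration of the command.
func pickFormat(plugins []plugin, forceGob bool) string {
	if forceGob {
		return "gob"
	}
	for _, p := range plugins {
		if !p.supportsProtobuf {
			return "gob"
		}
	}
	return "protobuf"
}

func main() {
	plugins := []plugin{
		{name: "amazon", supportsProtobuf: true},
		{name: "legacy", supportsProtobuf: false},
	}
	fmt.Println(pickFormat(plugins, false)) // gob: one plugin lacks protobuf support
}
```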
`packer validate` would output the same error message four times per
unsupported root block type found in a template (e.g., 'src' instead of
'source'). This behavior was due to a function being called four times
for each file, at each stage of the parsing.
The hcl2_upgrade command transforms a JSON template into an HCL2
template for use with Packer.
The command is quite fragile already, but given that this is the last
remaining fragment that causes Packer to depend on the AWS SDK directly,
we can do away with it.
This commit therefore imports the definitions for AWS access config, so
we can extract this information from the JSON template, and include it
in the definition of the output source for AWS, since we manage this one
differently from other sources.
This allows us to not depend on the AWS plugin directly, which in turn
makes Packer not need to link with the AWS plugin when compiling the
executable.
We still depend on the AWS SDK for now, since the SDK exposes an
aws_secretsmanager function that can be used for interpolation (legacy
JSON interpolation, to be clear), so this cannot be removed yet, but
we should consider some form of remediation in the future.
For all the commands that call Initialise, we introduce a new flag:
UseSequential.
This disables DAG scheduling for evaluating datasources and locals,
acting as a fallback to the newly introduced DAG scheduling approach.
`hcl2_upgrade` is a special case here: since the template is always
JSON, there cannot be any datasources, so the DAG becomes meaningless
and is not integrated into this code path.
Since the enumer implementation we used hadn't been updated in 5+
years, it didn't work with recent linux/go versions, and enumer
crashed while attempting to parse/analyse the source files.
There's an alternative on Github, forked from the one we used, which
seems better maintained, and does produce the expected files in
Packer.
When remotely installing a plugin, constraints are used by Packer to
determine which version of a plugin to install.
These constraints can be arbitrarily complex, including operators and
ranges in which to look for valid versions.
However, the versions specified in those constraints should always be
final releases, and not a pre-release since we don't explicitly support
remotely installing pre-releases.
This commit therefore adds checks to make sure these are reported ASAP,
even before the source is contacted to list releases and pick one to
install.
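A hypothetical sketch of such a check using hashicorp/go-version (which Packer depends on for version handling); the operator stripping here is simplified for illustration and is not Packer's actual parsing.

```go
package main

import (
	"fmt"
	"strings"

	version "github.com/hashicorp/go-version"
)

// rejectPrerelease errors out when a version used in a constraint
// carries a pre-release part (e.g. ">= 1.2.3-dev"), since remote
// installation only supports final releases.
func rejectPrerelease(constraint string) error {
	for _, part := range strings.Split(constraint, ",") {
		// Strip the comparison operator to get the bare version string.
		raw := strings.TrimLeft(strings.TrimSpace(part), "><=!~ ")
		v, err := version.NewVersion(raw)
		if err != nil {
			return fmt.Errorf("invalid version in constraint %q: %s", part, err)
		}
		if v.Prerelease() != "" {
			return fmt.Errorf("pre-release version (%s) cannot be used for remote installation", v)
		}
	}
	return nil
}

func main() {
	fmt.Println(rejectPrerelease(">= 1.2.3-dev")) // error: pre-release
	fmt.Println(rejectPrerelease(">= 1.2.3"))     // <nil>
}
```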
Listing installed plugins on Windows requires the extension to be set in
the ListOptions, otherwise they are not discovered.
While working on the discovery code and consolidating it in a single
location, we forgot to pass the argument to ListInstallations, making
it impossible to automatically discover installed components on
Windows.
This commit fixes this issue for the plugins required, and the general
discovery process during build/validate.
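An illustrative sketch of why the extension matters on Windows; listOptions here is a stand-in for Packer's actual options type, not its real definition.

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// listOptions mirrors the idea: without the right extension on
// Windows, the glob never matches any installed binary.
type listOptions struct {
	Extension string
}

func defaultListOptions() listOptions {
	opts := listOptions{}
	if runtime.GOOS == "windows" {
		opts.Extension = ".exe"
	}
	return opts
}

func main() {
	opts := defaultListOptions()
	pattern := "packer-plugin-*" + opts.Extension
	matches, _ := filepath.Glob(pattern)
	fmt.Println(pattern, matches)
}
```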
The ParsePluginSource function can be invoked either from an HCL2
context (when parsing a required_plugins block), or from the
command-line itself.
While an hcl.Diagnostics is coherent in the first context, when the
source to parse is a command-line argument, for example when installing
or removing a plugin, the error message cannot have an HCL context,
leading to errors incorrectly prefixed by a <nil> string due to the
lack of a reference to attach the diagnostic to.
Therefore, to fix this behaviour, the logic that parses plugin sources
now returns an error, and attaching the error to an HCL subject is done
independently, if needed.
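A hypothetical sketch of the shape of the refactor: the parser returns a plain error, and only the HCL caller wraps it in a diagnostic with a subject range. Function names and the validation rule are illustrative.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/hashicorp/hcl/v2"
)

// parsePluginSource returns a plain error, so command-line callers
// never see a spurious "<nil>" HCL prefix.
func parsePluginSource(src string) error {
	if strings.Count(src, "/") < 2 {
		return fmt.Errorf("invalid plugin source %q: expected host/org/name", src)
	}
	return nil
}

// fromHCL attaches the error to the block's range, where an
// hcl.Diagnostics is coherent.
func fromHCL(src string, rng hcl.Range) hcl.Diagnostics {
	if err := parsePluginSource(src); err != nil {
		return hcl.Diagnostics{{
			Severity: hcl.DiagError,
			Summary:  "Invalid plugin source",
			Detail:   err.Error(),
			Subject:  &rng,
		}}
	}
	return nil
}

func main() {
	fmt.Println(parsePluginSource("github.com/hashicorp/amazon")) // <nil>
	fmt.Println(fromHCL("bad", hcl.Range{Filename: "build.pkr.hcl"}))
}
```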
When installing a plugin from a remote source, we list the installed
plugins that match the constraints specified, and if the constraint is
already satisfied, we don't do anything.
However, since remote installation is only relevant for releases of a
plugin, we should only look at the installed releases of a plugin, and
not consider pre-releases for that step.
This wasn't the case before this commit: if a pre-release version of a
plugin (e.g. 10.8.1-dev) was installed, and we invoked `packer init`
with a constraint on this version specifically, Packer would locate
that pre-release and assume it was already installed, so the command
would silently succeed and do nothing.
This isn't the expected behaviour as we should install the final release
of that plugin, regardless of any prerelease installation of the plugin.
So this commit fixes that by only listing releases, so we don't report
the plugin being already installed if a prerelease is what's installed.
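A minimal sketch of the fix using hashicorp/go-version; alreadySatisfied and the string-based installed list are illustrative stand-ins for Packer's installation records.

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// alreadySatisfied reports whether a final (non pre-release) installed
// version satisfies the constraint. Installed pre-releases such as
// 10.8.1-dev are ignored, so `packer init` still installs the final
// release.
func alreadySatisfied(installed []string, constraint version.Constraints) bool {
	for _, raw := range installed {
		v, err := version.NewVersion(raw)
		if err != nil || v.Prerelease() != "" {
			continue // skip pre-releases and unparsable versions
		}
		if constraint.Check(v) {
			return true
		}
	}
	return false
}

func main() {
	c, _ := version.NewConstraint("~> 10.8")
	fmt.Println(alreadySatisfied([]string{"10.8.1-dev"}, c)) // false
	fmt.Println(alreadySatisfied([]string{"10.8.1"}, c))     // true
}
```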
When packer init is invoked with a --force argument, but no --update, we
clamp the version to install based on the last one locally installed.
Doing this may however cause the constraint to always be false if the
latest available version of a plugin is a pre-release, as none of the
upstream constraints will match that.
Therefore, this commit changes how the constraint is derived from the
local list of installations, so that only the last installation that
matches the original constraint is used, and never a pre-release.
The source parsing logic was heavily directed towards Github compatible
source URIs, however if we want to support more cases, we need to make
sure we are able to specify those URIs, and to load plugins installed
from those sources.
Right now, since github.com is the only getter available, we will not
support remotely installing plugins from sources other than github.com,
with the same set of constraints as before. However, we now support
installing a local plugin binary under any kind of source, and we
support loading them, including when a template wants this plugin
installed locally with version constraints.
When installing plugins with the `packer plugins install --path'
command, the metadata is now scrubbed from the file installed locally.
This protects against collisions between versions, as metadata is
meaningless for version comparison, so if two versions of the same
plugin are installed, the precedence order between them is undefined.
Therefore, to avoid such collisions, we remove the metadata from the
file name; that way, if two successive versions of a plugin include
metadata in the version, they won't coexist, and the last one installed
will be the only version installed locally.
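A hypothetical sketch of scrubbing metadata from a version before building the file name, using hashicorp/go-version; scrubMetadata is an illustrative helper, not Packer's actual function.

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// scrubMetadata drops the build-metadata part of a semantic version
// (everything after '+'), keeping the core version and any
// pre-release part intact.
func scrubMetadata(raw string) (string, error) {
	v, err := version.NewVersion(raw)
	if err != nil {
		return "", err
	}
	seg := v.Segments() // padded to at least major.minor.patch
	out := fmt.Sprintf("%d.%d.%d", seg[0], seg[1], seg[2])
	if pre := v.Prerelease(); pre != "" {
		out += "-" + pre
	}
	return out, nil
}

func main() {
	s, _ := scrubMetadata("1.2.3+abcdef")
	fmt.Println(s) // 1.2.3
	s, _ = scrubMetadata("1.2.3-dev+abcdef")
	fmt.Println(s) // 1.2.3-dev
}
```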
When installing a plugin from a local binary, Packer builds the name of
the plugin from the results of the `describe' command.
Depending on how the plugin is built, the version reported may or may
not contain a leading `v', which was not taken into account beforehand
and the leading `v' was always injected in the path.
This caused plugins that report a leading `v' in their version to
be installed with two v's in their path, making them impossible to load.
Therefore, to fix this issue, we rely on the version library to print
the version without the leading v, and we inject that into the
resulting path.
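A minimal sketch of the normalisation at play: hashicorp/go-version accepts an optional leading `v' and its String() always prints without it, so the installer can add exactly one prefix. The plugin name in the output is illustrative.

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

func main() {
	for _, raw := range []string{"v1.2.3", "1.2.3"} {
		v, err := version.NewVersion(raw)
		if err != nil {
			panic(err)
		}
		// Both inputs normalise to "1.2.3"; building the file name
		// from v.String() guarantees a single leading 'v'.
		fmt.Println("packer-plugin-foo_v" + v.String())
	}
}
```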
The packer plugins remove command allows users to delete plugins
installed locally.
Previous versions of the command only allowed for the plugins to be
removed using the source for a plugin, and the versions to remove,
optionally.
This commit adds the capability for the plugins to be removed using
their local path, in addition to the regular source+version method, that
way we are able to pipe the results of `packer plugins installed' into
the plugins remove command for quick plugin removal.
The plugin and plugins commands had confusingly similar names, and
while plugin is not supposed to be called directly by Packer users,
this could happen by accident while trying to execute packer plugins
subcommands. When it does, the error messages are far from explicit,
so unless users understand what Packer is doing here, they'll likely
be lost.
To reduce the risk of confusion, we rename the command that runs
Packer's embedded components to execute.
The fmt command reformats HCL2 templates, provided it can parse the file
and reformat its contents according to the standards set by the HCL
library's formatters.
However, if the file is malformed for some reason, the command fails
with a parse error; while the parse error message is shown, the actual
errors in the template(s) are not forwarded, making it hard for users
to understand what went wrong with the contents of the file they're
trying to format.
In order to be more helpful with those errors, we now forward those
parsing errors to the UI.
In order to test the Packer subcommands, some tests rely on a plugin:
github.com/sylviamoss/comment.
This plugin is outdated compared to our SDK, and some of the binary
releases don't match what Packer expects, i.e. v1.0.0 vs. v0.2.8.
To fix that, we migrate to use the hashicups plugin, which is our demo
plugin for Packer, and is actually maintained, and up-to-date, which
will make those tests more reliable moving forward.
Since we now support loading pre-releases with Packer subcommands, we
relax the constraints we had placed on the `--path' option, so it will
accept any `-dev' binary, in addition to final releases.
Since we added the capability for the plugin discovery to ignore
installed pre-releases of plugins, we extend that capability to the
validate and build commands through a new command-line flag:
release-only.
Since the plugins remove subcommand now only loads valid plugins, it
cannot run on mock data anymore, and the logic for creating a valid
plugin hierarchy is not present in this repository but in another, so
we move those tests there in order to continue testing them.
Since we now only look in the plugin directory, and not in multiple
sources, for installing/listing plugins, we can simplify the
structures that used to accept multiple directories so they only
accept one.
Since we're removing the alternative plugin installation directories in
favour of only supporting installing them in the PACKER_PLUGIN_PATH
directory, we only return one directory when getting the known plugin
directories.
When running a build with HCP Packer enabled, Packer attempts to push
the build status to HCP.
If the build fails, we update the status to BUILD_FAILED, and that's the
end of it.
If however the build succeeds, Packer attempts to get the HCP artifact
from the builder, which will only succeed if the builder supports it.
Otherwise, we'll get either nil, or an artifact type that is not
compatible with what is expected for HCP support.
When either of those happens, we warn that the builder may not support
HCP Packer at all, so users are aware of the problem.
However, when the error was introduced, it only looked at the fact that
an error was produced, independently of the type of error. This caused
legitimate errors while building to be reported as potential
incompatibility between the builder and HCP, which was confusing to
users.
This commit changes this by introducing a new error type, only produced
when the artifact is either nil, or failed to be deserialised into an
HCP artifact, which lets us produce the incompatibility warning with
more accuracy.
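A minimal sketch of the pattern; the error type name is illustrative, not the one introduced in Packer.

```go
package main

import (
	"errors"
	"fmt"
)

// NotAHCPArtifactError is produced only when the builder returned
// nil, or an artifact that could not be deserialised into an HCP
// artifact, so the incompatibility warning fires only for it.
type NotAHCPArtifactError struct {
	Reason string
}

func (e *NotAHCPArtifactError) Error() string {
	return fmt.Sprintf("not an HCP-compatible artifact: %s", e.Reason)
}

func report(err error) {
	var hcpErr *NotAHCPArtifactError
	if errors.As(err, &hcpErr) {
		fmt.Println("Warning: the builder may not support HCP Packer:", hcpErr.Reason)
		return
	}
	// Legitimate build errors are reported as-is, without the
	// misleading incompatibility warning.
	fmt.Println("Error:", err)
}

func main() {
	report(&NotAHCPArtifactError{Reason: "artifact is nil"})
	report(errors.New("disk full"))
}
```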
Single-component plugins are a relic from the past, deprecated from
version 1.7.0 onwards.
Since we're revisiting how plugins are installed/loaded, and the
changes will be incompatible with them, we remove them in preparation
for this work.
Before Change
```
Options:
- path <path>: install the plugin from a locally-sourced plugin binary. This
installs the plugin where a normal invocation would, but will
not try to download it from a remote location, and instead
install the binary in the Packer plugins path.
This option cannot be specified with a version constraint.
- force: forces reinstallation of plugins, even if already installed.
```
After Change
```
Options:
-path <path> Install the plugin from a locally-sourced plugin binary.
This installs the plugin where a normal invocation would, but will
not try to download it from a remote location, and instead
install the binary in the Packer plugins path. This option cannot
be specified with a version constraint.
-force Forces reinstallation of plugins, even if already installed.
```
When installing a plugin with packer plugins install --path, we only
accept release versions of a plugin, as otherwise the loading can be
inconsistent if for example a user specifies a required_plugins block in
their template, in which case the plugins will be ignored.
Until we have a simpler loading scheme, we will reject non-release
versions of plugins to avoid confusion.
To avoid plugins being installed with a specific version when a path is
used for installing a plugin from a locally sourced plugin binary, we
explicitly reject the combination of both a path and a version for
plugins install.
The --force option for packer init and packer plugins install enforces
installation of a plugin, even if it is already locally installed.
This will become useful if for some reason a pre-existing plugin
binary/version is already installed, and we want to overwrite it.
This new flag allows the `packer plugins install' command to install a
plugin from a local binary rather than from Github.
This command will only call `describe' on the plugin, and won't do any
further checks for functionality. The SHA256SUM will be computed
directly from the binary, so as with anything manual and potentially
sourced by the community, extra care should be taken when invoking
this.
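A minimal sketch of computing such a checksum from the binary; this mirrors the idea, not Packer's exact implementation.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// fileSHA256 streams the binary through a SHA-256 hash so the
// checksum can be stored alongside the installed plugin.
func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: sha256sum <path>")
		os.Exit(1)
	}
	sum, err := fileSHA256(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(sum)
}
```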