As more scanners besides openscap become available, atomic
can now begin to leverage them. The new scan function has
been broken out into its own file (scan.py).
The scan command itself now defaults to openscap but can
also be switched to blackduck with --scanner.
Atomic now can use a configuration file which is stored
in /etc/atomic.conf. The location of the atomic conf
file can be overridden with the environment variable
'ATOMIC_CONF'. In the case of the scan function,
we need the scanner defined in the configuration file
as well as the fully qualified image name and the
scan arguments. Optionally, you can provide additional
custom docker arguments for the scanner as well.
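As a rough illustration of the lookup order, the conf-file resolution might look like this (a minimal sketch; `conf_path` is a hypothetical helper name, not atomic's actual code):

```python
import os

DEFAULT_CONF = "/etc/atomic.conf"

def conf_path():
    # The ATOMIC_CONF environment variable, when set, overrides the
    # default configuration file location.
    return os.environ.get("ATOMIC_CONF", DEFAULT_CONF)
```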
It's nicer for branding the command. The more correct thing would be
to add it to the rpm-ostree daemon and pass through there, but we have
more important problems to fix for the production code path. This is
just for local development, so the slightly dirty way is just fine.
Lay users who are told to run an image may not understand
the docker run switches that have security implications. We
now look for the following switches:
* --privileged
* --cap-add
* --security-opt label:disable
* --net=host
* --pid=host
* --ipc=host
and output an appropriate security message.
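A minimal sketch of the check, using the switch list above (the function name and warning texts are illustrative, not atomic's actual code):

```python
RISKY = {
    "--privileged": "gives the container nearly full access to the host",
    "--cap-add": "grants additional Linux capabilities",
    "--security-opt label:disable": "turns off SELinux separation",
    "--net=host": "shares the host network namespace",
    "--pid=host": "shares the host PID namespace",
    "--ipc=host": "shares the host IPC namespace",
}

def security_warnings(run_command):
    # Return one warning per risky switch found in the docker run command.
    joined = " ".join(run_command)
    return ["%s %s" % (sw, why) for sw, why in RISKY.items() if sw in joined]
```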
Also, moved def run() from Atomic/atomic.py to Atomic/run.py
to reduce the size and the number of definitions in
Atomic/atomic.py.
Images or containers can now have an associated
man-like help page to help users understand more
about the image. Typical information included
are things like a longer description, if the image
needs to be installed, security implications, steps
to upgrade, etc.
The default behavior is for atomic to display
a file called help.1 (in man format) located in
the / of the docker object. This default
can be overridden with the HELP LABEL. The
HELP LABEL needs to be a fully qualified
command to work correctly.
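The fallback logic can be sketched as follows (a hypothetical helper, assuming the image labels are available as a dict):

```python
def help_source(labels):
    # Prefer the fully qualified command from the HELP label;
    # otherwise fall back to the man-formatted help.1 file in
    # the / of the docker object.
    help_label = labels.get("HELP")
    if help_label:
        return help_label
    return "/help.1"
```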
Basic tests for atomic diff and top which should catch
basic code regressions.
In top.py, added -n for the number of iterations, and added
tty detection so that tests can pass in a Jenkins environment
where there is no tty.
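The tty check is essentially this (sketch):

```python
import sys

def has_tty():
    # Jenkins and similar CI runners attach no controlling tty, so an
    # interactive display cannot be used there.
    return sys.stdout.isatty()
```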
With the remote_inspect function in RH docker, we can inspect
an image in a remote repository for various information like
versions (when present). This allows us to expand atomic
verify to check a local version against a remote version.
atomic verify now can take an image as input (as before) and
provide a greater level of detail when checking each base
image (defined as a non-intermediate image). It now iterates
on those base images looking for update status.
The output of atomic verify has also been changed slightly to
include a verbose option. When invoked, the verify output
will list each base image with its versioning information.
The non-verbose output remains largely the same, where only
base images that have identified updates are printed to stdout.
If verbose is not called for by the user but we find base
images with no version information, we output a warning
message and print verbosely anyway.
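A sketch of that decision (the image records and field names here are assumptions, not atomic's actual data model):

```python
def pick_output_mode(verbose, base_images):
    # Force verbose output when any base image has no version
    # information, so the user sees what could not be checked.
    unversioned = [i["name"] for i in base_images if not i.get("version")]
    return verbose or bool(unversioned), unversioned
```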
Updated man page
Adding a new atomic sub-command that behaves like GNU top
but for processes run inside containers. It currently
displays the container id, container name, pid, cpu%
(as reported by docker top), mem% (as reported by docker top),
and the command.
You can optionally pass in -o ppid, stime, time to collect
more data on the processes themselves.
While in the interactive display, you can also sort on
the columns to re-organize the data as needed.
You can define an interval for refreshing the process
information.
atomic top can be run without any additional
parameters. If that is the case, it will by default
show processes for all active containers. You can also
add one or more container_ids for exclusive process
monitoring by container.
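Column sorting in the interactive display amounts to re-sorting the collected rows before each refresh (sketch; the row layout is an assumption):

```python
def sort_rows(rows, column, descending=True):
    # Re-order the process table on the chosen column, e.g. "cpu" or "mem".
    return sorted(rows, key=lambda r: r.get(column, 0), reverse=descending)
```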
Also added an AtomicDocker class to atomic.py which
allows for custom docker, python-api calls without
having to re-invent the wheel.
Allow users to diff between two docker images|containers. There
are two types of diffs that can be run -- a file diff or an
RPM diff. The file diff is always the default. The RPM diff
can be added with -r. The file diff can be excluded with -n.
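Conceptually, the file diff reduces to a set comparison of the two filesystems (sketch only; the real code has to walk the exported docker objects):

```python
def file_diff(files_a, files_b):
    # Files unique to each docker object; common files are not
    # interesting for the default diff output.
    a, b = set(files_a), set(files_b)
    return sorted(a - b), sorted(b - a)
```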
Previously the old "--no-cache" was mentioned and "fetch-cves" was used
instead of "fetch_cves".
Using --fetch-cves resulted in:
atomic: unrecognized arguments: --fetch-cves=False
* The openscap-daemon can now optionally take a switch
to override the behavior defined in its configuration
file as to whether to fetch new CVE input data or
not.
--fetch_cves is now a bool (True|False), where True
means pull new input data and False means do not attempt
to pull, aka "phone home".
* Added timeout value of 99999 to override the default dbus
connection timeout of 25 seconds. This allows longer
scans to complete and not throw an Exception.
* Added a small def to detect a bool value based on True,
true, yes, y, 1, and so on.
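The bool-detection helper could look like this (a sketch based on the accepted spellings listed above):

```python
def str_to_bool(value):
    # Treat common truthy spellings (True, true, yes, y, 1, ...) as True;
    # everything else is False.
    return str(value).strip().lower() in ("true", "yes", "y", "1")
```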
Users can do this themselves by using the ${NAME} field.
Hard coding behaviour of these three variables was a mistake, and this
is a lot more flexible. I could add the variables back undocumented
if people think someone has used them.
Add the ability to scan a container or image leveraging
a containerized version of the openscap-daemon and atomic.
e.g. atomic scan image_id
man page added
python3 fixes for this content
Added a push-to-satellite function, might have gotten the config stuff to work. Jenny Ramseyer
refactoring and adding documentation
rechecked API calls. More documentation
it builds. Still not sure the config file parser is going to work--need to check section headings, but it builds. Jenny Ramseyer
switched false to true in config.py
debugging the REST API calls
more bug fixing. May have fixed payload, need to check. Have to decide what to do about publish
Fixed. It should work now. Need to fiddle around with create_repo more, but the rest should be good. Thanks to David Davis for help with uploading an image.
debugging create_repo
fixing build error.
fixing some build errors. Still can't push to Satellite or Pulp. Not sure why, working on that. Jenny Ramseyer
more bug fixing, cleaning up CLI
minor tweaks
fixed the push to pulp/satellite problem! Thanks to Alec and Will for the help! Now new error
Seems to be working now
Hooray it works. Now we're getting connection errors, which I will try to debug. But I think it works. Might have messed up an API call, which is why it won't connect? Or maybe it is a server problem? I will investigate. --Jenny Ramseyer
committing before merge
Sending in this as a pull request.
satellite and pulp are now mutually exclusive arguments
Fixed the authentication problems in the code.
Debugging upload calls
Added debug mode
Figured out how to get content view id, org id, repo id, etc. This relies on the user knowing their activation key, which I am not sure we can count on, but I think we can. Currently it has the problem that we automatically take the first product id, which is wrong in many cases. But, we can't let the user input the product id (they don't have it--the satellite web UI gives them a product id, but it is different from the internal product ids). The other option would be to have the user input the name of the desired product, but that relies on them inputting it exactly as in the json output, which seems unlikely. Also not very elegant.
Fixed the post request such that now, when we send a no-data post request, the _no data_ is seen as a json file, so the request goes through (this was to fix the problem where the server was rejecting our requests to get an upload id)
Note: you need to run this with sudo, which means that you need the admin file on your root directory (at least, in my case I needed to run it that way)
Turns out that, while the user doesn't see the activation key number on the website, it is in the URL of the activation key page
updated manpage
fixed some bugs, upload still doesn't work
Switched to multipart request
Fixed the uploading bug. Now I have a different uploading bug, but it seems much more tractable. Will update as progress continues
Now I get a 405 method not allowed error. Not sure what's happening with that, but I will keep investigating
Now we require the user to pass in both the repo and the activation key. This is standard for Satellite, and saves us from the horrible hacks I did before. More robust code. Problem is that both of these numbers are only available in the URL to their respective pages in the Satellite UI
Added to man page. Also added better error handling for uploading
It works now. Hooray
fixed it so we don't actually need to take a tar file.
Better error handling
fixed upgroup, haven't fixed bash yet
got bash completion working.
Moving push image to satellite/pulp to their respective files, instead of util
adding helpful configuration comment
making everything pep8 compatible
reverting back to old system
modified to make pep8 compatible
Signed-off-by: jramseye <jramseye@redhat.com>
because it is error-prone to use it this way. Using the same NAME in the
environment variable -e NAME=\${NAME} and in container name (--name) for
install and uninstall seems incorrect because both of them serve
different purposes. Also if NAME is being used to create a container in
the install script, which happens most of the time, it leads to failure
because the same NAME is used for the ephemeral container to process
LABEL INSTALL and due to that, the actual container can not be created
and it gives the following error:
"Error response from daemon: Conflict. The name "etcd1" is
already in use by container b50ea8bf1d40."
Anyway, assigning names (using --name NAME) to ephemeral containers during
install and uninstall does not make much sense from my point of view.
Use --display to view run or install commands without
executing the commands. This is useful when working with custom images
with LABEL methods defined.
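The --display behavior can be pictured as (a sketch, not atomic's actual implementation):

```python
import subprocess

def run_or_display(cmd, display=False):
    # With --display, print the fully expanded command instead of running it.
    if display:
        print(cmd)
        return 0
    return subprocess.call(cmd, shell=True)
```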
Signed-off-by: Sally O'Malley <somalley@redhat.com>