Print out docker image hash code on jib build - jib

I'm building a docker image using jib and the jib-maven-plugin.
But something is rotten:
The Maven build says that the image was pushed:
Built and pushed image as ...
But there's no new image in the container registry.
So I would like to print out the image's SHA-256 hash after the build, so that I can check whether this hash can be found in the image registry.
My solution
I was able to see it by activating the Maven debug option -X.
But this produces too much noise for me to enable it by default.
Question
Is there a better option in jib-maven-plugin to print out the hash of the built Docker image?
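If I remember correctly, recent versions of jib-maven-plugin write the digest and ID of the built image to files under target/ by default, so no -X is needed; worth verifying against your jib version:
mvn compile jib:build
# Recent jib versions write these files by default:
cat target/jib-image.digest   # the pushed manifest digest, e.g. sha256:...
cat target/jib-image.id       # the image ID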

Related

Yocto extra image feature seems not to be in the image

I've started to work with the Yocto project on a Raspberry Pi 3 and therefore followed the instructions in this guide.
It adds the dropbear SSH server for remote access:
EXTRA_IMAGE_FEATURES_append = " ssh-server-dropbear"
After the creation of the image I checked the image manifest and it seems like dropbear has been added.
dropbear aarch64 2019.78
But when I run the image, the application doesn't really seem to be there. I'd expect something inside /etc/init.d/, but there are no dropbear artifacts.
Also, although the Python meta-layer should be added, the py/python command is unknown on the target.
Can someone tell me what exactly I'm missing?
You haven't specified why you believe dropbear is not part of the image. If it is present in EXTRA_IMAGE_FEATURES (and you even see it in the manifest), I don't see any reason why it wouldn't be there; perhaps you are looking for it in the wrong location?
If you can post an update with the actual error you are getting, that would help.
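One hedged way to check, both on the target and on the build host (the manifest path assumes a raspberrypi3 MACHINE; adjust for yours):
# On the target: dropbear usually ships a binary and an init script
which dropbear                      # typically /usr/sbin/dropbear
ls /etc/init.d/ | grep -i dropbear
# On the build host: confirm it is listed in the rootfs manifest
grep dropbear tmp/deploy/images/raspberrypi3/*.manifest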
python3 does not come from the meta-python layer; it is part of the core layer (meta). meta-python contains other Python-related recipes which extend Python's functionality.
To install python3 in your image, do:
IMAGE_INSTALL += "python3"
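Note that if you add this in conf/local.conf rather than in an image recipe, the append form is usually safer (the leading space matters); this uses the same pre-kirkstone override syntax as the EXTRA_IMAGE_FEATURES_append line above:
IMAGE_INSTALL_append = " python3"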

How to see (not download) docker image from ghcr.io

I am publishing docker images to GitHub container registry (ghcr.io).
The process of doing that:
1. Build the component.
2. Build the Docker image, which includes the component.
3. Upload the Docker image to ghcr.io.
4. Deploy the Docker image.
5. Run integration tests against the Docker image.
Sometimes step 4 or 5 returns an error that cannot be resolved from within the Docker image, and after the issue is fixed I need to redeploy and retest the artifact.
When this happens, rebuilding the whole component, including its JUnit tests, is a pain, because the Docker image has already been built and is present on ghcr.io.
Is there a way I can see if a tagged Docker image is present in ghcr.io?
The GHCR API is still in early development. Until it is fleshed out a bit more, you may have to resort to using skopeo to get what you need.
docker run --rm quay.io/skopeo/stable list-tags docker://ghcr.io/$GITHUB_REPOSITORY/$IMAGE_NAME
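If you need a yes/no check for one specific tag rather than the full tag list, skopeo inspect works the same way; the credential variables here are assumptions ($CR_PAT being a token with read:packages scope):
# The exit status tells you whether the tag exists
docker run --rm quay.io/skopeo/stable inspect \
    --creds "$GITHUB_ACTOR:$CR_PAT" \
    docker://ghcr.io/$GITHUB_REPOSITORY/$IMAGE_NAME:$TAG \
    > /dev/null && echo "tag exists" || echo "tag missing"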

GCR Cloud Run says "Image [name] not found"

I'm trying to take my first baby steps with podman (instead of Docker) and Google Cloud Run. I've managed to build an image with a gcr.io tag and push it to Google. I then create a new service, and I can select the image in the "Select Image URL" pop-up dialog. But then the service fails to start, saying "Image [full name] not found".
I can't find anything on Google's support pages, or anywhere else. I can pull the image, I can push new versions, and they appear on the pop-up dialog. But the service still reports that they can't be found.
What am I doing wrong?
Edit in answer to DazWilkin's questions below:
Can you run the podman-created container locally using Docker?
I can't run Docker locally because it is not compatible with Fedora 31 (hence podman). But I can run it locally using podman run.
Can you deploy a Docker-created container in Cloud Run?
As above: Fedora 31. However, podman is supposed to be a drop-in replacement.
Is the container registry in the same project as Cloud Run?
Yes. I did have a problem with that, but I got a permissions message rather than "not found".
Have you tried deploying via gcloud rather than the console?
Yes.
$ podman push eu.gcr.io/my-project/hs-hello-world
Getting image source signatures
Copying blob c7f3d2e0289b done
Copying blob def7032cea8e done
Copying config f1c2e2615f done
Writing manifest to image destination
Storing signatures
$ gcloud run deploy --image eu.gcr.io/my-project/hs-hello-world --platform managed
Service name (hs-hello-world):
Deploying container to Cloud Run service [hs-hello-world] in project [my-project] region [europe-west1]
X Deploying... Image 'eu.gcr.io/my-project/hs-hello-world' not found.
X Creating Revision... Image 'eu.gcr.io/my-project/hs-hello-world' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'eu.gcr.io/my-project/hs-hello-world' not found.
When I used a Google-built container it worked fine.
Update: 5 March 2020
In the end I just carried on with the Google build service, and it works fine. My initial wish for local builds was in large part because a build on Google was taking over half an hour (lots of Haskell libraries to import), but now I've figured out how to use staged builds and multi-processor VMs to avoid this. I appreciate the efforts of those who have tried to help, but right now it's not broke so I'm not going to try to fix it.
I had the same issue: it seems Cloud Run is picky about the kind of manifest it can pull.
By building my images with --format docker and pushing them with --remove-signatures (inspired by this issue), podman created and pushed Docker-style manifests to the Container Registry, and everything ran smoothly!
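A sketch of the resulting build-and-push flow, reusing the image name from the question:
podman build --format docker -t eu.gcr.io/my-project/hs-hello-world .
podman push --remove-signatures eu.gcr.io/my-project/hs-hello-world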
Too bad I spent a lot of time thinking it was a lack-of-permissions problem.
I had the same error. My issue was that I was using the docker/setup-buildx-action in a GitHub action. When this was removed, Cloud Run was happy with the resulting manifest / container image.
Thanks to @André-Breda for providing the direction.
I've been having the same issue today. I'm using buildah to create the new image. I realized that the image I used successfully yesterday was built as root. So I built the new one as root and pushed it successfully.
Wish I knew why. The images built as my username ran fine locally with rootless podman.

How to avoid redundancy and time loss when re-building images during development?

As a Vagrant user, when trying Docker I noticed one significant difference between the development workflow with Vagrant and with Docker: with Docker I need to rebuild my image from scratch every time, even if I only made minor changes to the code.
This is a major problem for me, because the image rebuild process is often very redundant and time-consuming.
Perhaps some smart Docker workflows have already been invented for this; if so, what are they?
I filed a feature request for the vagrant-cachier plugin for saving docker build data, and attached a bash workaround for that process. If you're okay with hacking around it yourself, you can implement the scripts in Vagrant.
caching docker build data with vagrant
Note that this procedure requires the vagrant-cachier plugin to be installed and has to save and load 300+ MB files from disk if they are new to the machine. It is therefore really slow if you have Dockerfiles with just 1-5 lines of code, but fast if you have Dockerfiles with a lot of LOC, or images that have to be downloaded from the net.
Also note that this approach saves every intermediate build step. So if you build an image, change a line in the middle of the Dockerfile, and build again, the docker build process will reuse all cached intermediate containers up to the changed line.
Using base images is still the preferred way, but you can combine both procedures.
Feel free to post improvements and subscribe, so that fgrehm will maybe implement this natively in his plugin.
As Mark O'Connor suggested, one tip is to build a base image for your container(s). This image should contain the dependencies, package installations, downloads, and any other time-consuming activity, and it should need rebuilding much less frequently than the final image(s). Similarly, if the final state after a step of your Dockerfile doesn't change, Docker doesn't rebuild that layer. So place the commands whose result changes on almost every run (e.g. apt-get update) as late in the Dockerfile as you can, so Docker doesn't have to rebuild the preceding steps. For the same reason, prefer editing the later steps of your Dockerfiles over the earlier ones.
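A minimal sketch of that layering idea (image names, directory layout, and packages are hypothetical):
# base/Dockerfile -- rebuilt rarely, holds the heavy dependencies
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y build-essential nodejs npm
# built with: docker build -t myorg/app-base base/

# app/Dockerfile -- rebuilt often, but cheap because it starts from the base
FROM myorg/app-base
COPY . /data
RUN cd /data && npm install
# built with: docker build -t myorg/app app/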
Another option, if you compile/download something inside the container, is to download or compile it in a host folder instead and attach it to the container using the -v/--volume option of docker run.
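For example, something along these lines (the paths and image name are assumptions):
docker run -v "$PWD/build:/app/build:ro" myorg/app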
Finally, there are other approaches to this issue, such as the one used by Chef with knife container. In that approach you build the container using Chef cookbooks, and each time you rebuild it (because you have edited your cookbooks...), the changes are applied as a new Docker (AUFS) layer, so you don't have to repeat the whole process. I wouldn't recommend this solution unless you have experience with Chef and already have cookbooks to manage your software: you'd have to work harder to get it running, and if you want Chef only to manage Docker containers, I don't think it's worth it (although Chef is a great option for managing infrastructure).
To automate the build process when you have several images that depend on each other, you can use a bash script that helps you with that task (credits to smola@github):
#!/bin/bash
# Images are built in the listed order; each is expected to live in NAME/TAG/Dockerfile
IMAGES="${IMAGES:-stratio/base:test stratio/mesos:test stratio/spark-mesos:test stratio/ingestion:test}"
LATEST_TAG="${LATEST_TAG:-test}"

for image in $IMAGES ; do
    USER=${image/\/*/}    # part before the slash, e.g. "stratio"
    aux=${image/*\//}     # part after the slash, e.g. "base:test"
    NAME=${aux/:*/}       # part before the colon, e.g. "base"
    TAG=${aux/*:/}        # part after the colon, e.g. "test"
    DIR=${NAME}/${TAG}
    pushd "$DIR"
    docker build --tag="${USER}/${NAME}:${TAG}" .
    # Additionally tag the image as :latest when it carries the latest tag
    if [[ $TAG = $LATEST_TAG ]] ; then
        docker tag "${USER}/${NAME}:${TAG}" "${USER}/${NAME}:latest"
    fi
    popd
done
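For example, to rebuild just one image with this script (assuming it is saved as build-all.sh, a hypothetical name):
IMAGES="myuser/app:test" ./build-all.sh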
There are a couple of tricks that might improve your workflow (very web-focused).
Docker caching
Always make sure you add your source to the Docker image at the very end of the Dockerfile.
Example:
COPY data/package.json /data/
RUN cd /data && npm install
COPY data/ /data
This makes sure you get optimal caching when building the image, so that Docker doesn't have to rebuild the npm packages when you change your source.
Also, make sure you don't have a base image that adds folders/files that change often (like base images doing COPY . /data/).
fig mount
Use fig (or another tool), and mount your source directory when developing. This way, you can develop with instant changes and still use the current version of your code when building the image.
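A minimal fig.yml sketch of that setup (service name, paths, and port are assumptions):
web:
  build: .
  volumes:
    - ./data:/data
  ports:
    - "8080:8080"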
development server
You can start your development web server while developing, and nginx when not (if you are developing a www app, but the same idea applies to other apps).
For example, in your startup script, do something like:
if [[ $DEBUG ]]; then
    /usr/bin/supervisorctl start gulp
else
    /usr/bin/supervisorctl start nginx
fi
And have autostart=false in your supervisord.conf files.
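For reference, the matching supervisord.conf entries could look like this sketch (program names and command paths are assumptions):
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=false

[program:gulp]
command=/usr/bin/gulp watch
directory=/data
autostart=false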
auto-refresh app
If you are developing a web app, use tools like gulp and e.g. gulp-connect; if you are developing a Python/Django app, use the runserver utility. Both reload the server when they detect changes in the files.
If you are using the if [[ $DEBUG ]] ... trick, make them listen on the same port as your normal instance (nginx). That way, you can have one configuration for your reverse proxy, i.e. just send the traffic to, for example, www:8080, and it will hit your web page both in production and during development.
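For the Django case, that could be as simple as this line in the $DEBUG branch (the 8080 port is an assumption matching the example above):
python manage.py runserver 0.0.0.0:8080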
Create a base image that holds the bulk of your application's dependencies. This will significantly reduce your docker build times.

Can you generate and apply patches to a docker container offline?

We're looking into replacing our update system with docker, but we have a unique constraint where all upgrades need to happen offline. The use case is very similar to how you would update router firmware or something from a LAN not connected to the internet.
Currently our users download a patch file, which they then upload to the web interface of our system over a private LAN. Our system applies the patch. It is all implemented with the diff and patch commands. We do the diffing because our codebase is pretty giant but relatively few files change from version to version.
We think switching to docker can help us tremendously for our development, but for production and our update system we need to make sure we can do offline, diff-based updates.
My question boils down to this: are there docker analogs to the diff and patch commands that can be used to update a container offline?
I know Docker has commands like docker diff, but per the documentation it just shows a list of files that have been added, removed, or changed in a container. docker save and docker export look like they come close, but they produce full images, whereas I'm after a diff. Similarly, as far as I can tell, there is no way to use docker load to load a diff.
Thanks!
IMHO you should use a private registry for this. A Docker image consists of many layers, so your update will be just the small layers users need to download. Also, updating an image on the local system doesn't affect running containers, so your users will just download the new layers on update and then restart the running containers from the new image. This will be almost the same as a patch system. In short, from the user's side it will look like this:
docker pull the new layers from your private registry instance (needs to be online)
docker stop the old container (offline)
docker run a new container from the updated image (offline)
All config files should be stored outside the image, for example in a Docker volume.
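The whole user-side update could then look like this sketch (registry host and names are hypothetical):
# Online step: fetches only the layers that changed
docker pull registry.lan.example:5000/myapp:latest
# Offline steps: replace the running container; config lives in a volume
docker stop myapp && docker rm myapp
docker run -d --name myapp -v myapp-config:/etc/myapp registry.lan.example:5000/myapp:latest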