Can Docker plugins inspect label metadata from a starting image?

Is it possible for a Docker volume plugin to inspect a starting image's label metadata and take some additional action based on it? Ideally that action would include mounting a device, but I'm not sure whether that is possible yet with a driver plugin, and I haven't seen any recent mention of a device plugin.
I'm trying to see whether there is a way to enable the nvidia-docker-plugin from docker-compose files, like so:
test:
  image: ubuntu
  volume_driver: nvidia-docker-driver
  labels:
    nvidia.gpu: "0,1"
  command: nvidia-smi
Relevant issue with context: https://github.com/NVIDIA/nvidia-docker/issues/39

Just resurrecting this old issue to say that this is now solved by nvidia-docker-compose.
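For anyone finding this later: newer Compose releases can also request GPUs directly through device reservations, provided the NVIDIA Container Toolkit is installed. A rough sketch equivalent to the compose file above (the device_ids values are just the two GPUs from the question) might be:
services:
  test:
    image: ubuntu
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0", "1"]
              capabilities: [gpu]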


Unable to load FormsFlow Web

I have configured all the required modules, but when loading FormsFlow Web the page does not load at all. I've attached an image of the browser inspection window. Can you please suggest what might be going wrong here?
It looks like your KEYCLOAK_URL environment variable is missing. According to the README, update your environment variables by referring to sample.env.
After updating your environment variables, make sure to rebuild the containers with:
docker-compose -f docker-compose-windows.yml up --build -d
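For reference, the missing entry in your .env would look something like this (the host and port are only placeholders; use the value that matches your own Keycloak deployment, per sample.env):
# Keycloak base URL used by the web container (placeholder value)
KEYCLOAK_URL=http://<your-host>:8080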
This is likely due to the missing Keycloak environment variable. Remember to redeploy your app after any environment change.
Check that the Keycloak environment variable on the line below is added:
https://github.com/AOT-Technologies/forms-flow-ai/blob/5555c1b2a9a5496f1ab98e5339d66537e25974c2/deployment/docker/sample.env#L49
Refer to this similar issue:
https://github.com/AOT-Technologies/forms-flow-ai/issues/182

Docker-compose Nuxtjs

I am creating my first Nuxt.js app and I want to use Docker Compose. I was able to Dockerize my application by following this tutorial: https://dockerize.io/guides/docker-nuxtjs-guide
Now I want to take it to the next level using Compose, but I'm not too familiar with server-side rendering and how it could affect my docker-compose file. Unfortunately I cannot find any guide on how to use docker-compose with Nuxt.js apps. Do you know where I can find a good one? Thanks.
UPDATE:
I created a docker-compose.yml file and it is working, but I still can't find any guide to check whether it is a good YAML file (best practices, etc.):
version: '3'
services:
  web:
    build: .
    command: npm run dev
    ports:
      - '3000:3000'
If you can already run your app in Docker, you won't gain much from Docker Compose, unless you need to run multiple containers. As stated in Overview of Docker Compose, Compose is a tool for defining and running multi-container Docker applications.
Based on the linked tutorial, docker-compose.yaml could look something like this:
version: "3"
services:
nuxt:
image: nuxtjs-tutorial:latest
ports:
- "3000:3000"
environment:
- NUXT_HOST="0.0.0.0"
- NUXT_PORT="3000"
The environment variables do not need to be set from the Compose file; this is just an example. Compose lets you set many options, as described in the Compose file reference. For example, you could run the app in Compose using entrypoint instead of CMD in the Dockerfile. Or you could copy only package.json in the Dockerfile, install the dependencies during the image build, and mount your code using volumes; a sketch of that approach is below.
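A development-oriented Compose file along those lines might look roughly like this. It assumes the Dockerfile sets WORKDIR to /app and copies only package.json before running npm install; both of those are assumptions, so adjust the paths to match your actual Dockerfile:
version: "3"
services:
  nuxt:
    build: .
    command: npm run dev
    ports:
      - "3000:3000"
    environment:
      - NUXT_HOST=0.0.0.0
      - NUXT_PORT=3000
    volumes:
      # mount the project source for live reload during development
      - .:/app
      # keep the image's installed node_modules instead of the host's
      - /app/node_modules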
I found multiple example references online, but I wouldn't consider any of them best practice. It is best to read the official documentation.
Regarding your update: based on the Dockerfile in the tutorial, you do not even need the build and command entries, only image and ports. But as I said above, you can set many options from Compose, and they are best described in the official documentation.

GCR Cloud Run says "Image [name] not found"

I'm trying to take my first baby steps with podman (instead of Docker) and Google Cloud Run. I've managed to build an image with a gcr.io tag and push it to Google. I then create a new service, and I can select the image in the "Select Image URL" pop-up dialog. But then the service fails to start, saying "Image [full name] not found".
I can't find anything on Google's support pages, or anywhere else. I can pull the image, I can push new versions, and they appear on the pop-up dialog. But the service still reports that they can't be found.
What am I doing wrong?
Edit in answer to DazWilkin's questions below:
Can you run the podman-created container locally using Docker?
I can't run Docker locally because it is not compatible with Fedora 31 (hence podman). But I can run it locally using podman run.
Can you deploy a Docker-created container in Cloud Run?
As above: F31. However podman is supposed to be a drop-in replacement.
Is the container registry in the same project as Cloud Run?
Yes. I did have a problem with that, but I got a permissions message rather than "not found".
Have you tried deploying via gcloud rather than the console?
Yes.
$ podman push eu.gcr.io/my-project/hs-hello-world
Getting image source signatures
Copying blob c7f3d2e0289b done
Copying blob def7032cea8e done
Copying config f1c2e2615f done
Writing manifest to image destination
Storing signatures
$ gcloud run deploy --image eu.gcr.io/my-project/hs-hello-world --platform managed
Service name (hs-hello-world):
Deploying container to Cloud Run service [hs-hello-world] in project [my-project] region [europe-west1]
X Deploying... Image 'eu.gcr.io/my-project/hs-hello-world' not found.
X Creating Revision... Image 'eu.gcr.io/my-project/hs-hello-world' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'eu.gcr.io/my-project/hs-hello-world' not found.
When I used a Google-built container it worked fine.
Update: 5 March 2020
In the end I just carried on with the Google build service, and it works fine. My initial wish for local builds was in large part because a build on Google was taking over half an hour (lots of Haskell libraries to import), but now I've figured out how to use staged builds and multi-processor VMs to avoid this. I appreciate the efforts of those who have tried to help, but right now it's not broke so I'm not going to try to fix it.
I had the same issue: it seems Cloud Run is picky about the kind of manifest it can pull.
By building my images with --format docker and pushing them with --remove-signatures (inspired by this issue), podman creates and pushes Docker-style manifests to the Container Registry, and everything ran smoothly!
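For reference, with the image name from the question those commands look roughly like this (a sketch; adjust the tag and build context to your setup):
podman build --format docker -t eu.gcr.io/my-project/hs-hello-world .
podman push --remove-signatures eu.gcr.io/my-project/hs-hello-world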
Too bad I spent a lot of time thinking it was a lack of permissions problem
I had the same error. My issue was that I was using the docker/setup-buildx-action in a GitHub action. When this was removed, Cloud Run was happy with the resulting manifest / container image.
Thanks to @André-Breda for providing the direction.
I've been having the same issue today. I'm using buildah to create the new image. I realized that the image I used successfully yesterday was built as root. So I built the new one as root and pushed it successfully.
Wish I knew why. The images built as my username ran fine locally with rootless podman.

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the ExternalBuilder. I set up core.yaml and rebuilt the containers to use it, but on "peer lifecycle chaincode install .tgz..." I get an error that the path to the scripts configured in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and I'm using the first-network setup. I dropped the part of byfn.sh that would connect to the cli container so that I can do that part manually; the create, join, and update-anchors steps succeed, but the install fails. On the install I'm failing on the /bin/detect step, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means inside the cli container. I've also tried to walk the code to see how the Docker containers are created dynamically, and from what image, but I haven't been able to nail that down yet.
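For context, here is a minimal sketch of the two pieces that have to line up; the builder name, the host directory, and the container path below are hypothetical and must match whatever you actually configured:
# core.yaml (excerpt): register the external builder with the peer
chaincode:
  externalBuilders:
    - name: my-external-builder
      path: /opt/hyperledger/ccbuilder   # must contain bin/detect and bin/build (plus bin/release / bin/run if used)
---
# peer service in peer-base.yaml (excerpt): bind-mount the scripts so that path
# exists inside the peer container, not just on the host
services:
  peer0.org1.example.com:
    volumes:
      - ./external-builder:/opt/hyperledger/ccbuilder
If bin/detect cannot be found, the likely mismatch is that the path exists on the host but not inside the container that is actually running the peer.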

Can you generate and apply patches to a docker container offline?

We're looking into replacing our update system with docker, but we have a unique constraint where all upgrades need to happen offline. The use case is very similar to how you would update router firmware or something from a LAN not connected to the internet.
Currently our users download a patch file, which they then upload to the web interface of our system over a private LAN. Our system applies the patch. It is all implemented with the diff and patch commands. We do the diffing because our codebase is pretty giant but relatively few files change from version to version.
We think switching to docker can help us tremendously for our development, but for production and our update system we need to make sure we can do offline, diff-based updates.
My question boils down to this: are there docker analogs to the diff and patch commands that can be used to update a container offline?
I know Docker has commands like docker diff, but per the documentation it just shows a list of files that have been added, removed, or changed in a container. docker save and docker export look like they come close, but they produce full images, whereas I'm after a diff. Similarly, there seems to be no way, as far as I can tell, to use docker load to load a diff.
Thanks!
IMHO you should use a private registry for this. A Docker image consists of many layers, so your update will just be a small layer that users need to download. Also, updating an image on the local system doesn't affect running containers, so your users will simply download the new layers on update and then restart their running containers from the new image. This is almost the same as a patch system. In short, from the user's side it will look like this (a command sketch follows below the list):
docker pull the new layers from your private registry instance (needs to be able to reach the registry)
docker stop the old container (offline)
docker run a new container from the updated image
All config files should be stored outside the image, for example in a Docker volume.
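A minimal sketch of those steps, assuming a hypothetical LAN-only registry at registry.local:5000 and an image called myapp (all names here are illustrative):
# pull only the changed layers from the LAN registry (registry must be reachable)
docker pull registry.local:5000/myapp:2.0
# stop and remove the old container (works offline)
docker stop myapp && docker rm myapp
# start a new container from the updated image, keeping config in a named volume
docker run -d --name myapp -v myapp-config:/etc/myapp registry.local:5000/myapp:2.0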