How can I specify the container runtime to use in docker-compose version 3?

I'm working on a container that requires the nvidia runtime. I can specify this runtime in a v2.3 docker-compose file like so:
version: "2.3"
services:
my-service:
image: "my-image"
runtime: "nvidia"
...
Running docker-compose up my-service works just fine: I get the nvidia runtime and everything works as expected.
I've tried changing the "2.3" to "3", and I get the following error when I run docker-compose up my-service:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.my-service: 'runtime'
If I take out the runtime: "nvidia" line, the service comes up without problems, except of course it isn't using the nvidia runtime, and I need access to the GPU on the host to get the performance I want.
Is there an equivalent for runtime in docker-compose v3? If not, why was this option dropped? Thanks in advance. :)

I realize this question is rather old but I ran into it yesterday.
TL;DR:
Upgrade your docker-compose to 1.27.0+
Details
There has been quite a discussion about the removal of the runtime keyword in the dedicated Docker issue thread: https://github.com/docker/compose/issues/6691
Finally, in 1.27.0, Docker decided to allow it back, so you just need a recent enough version of docker-compose.
I would recommend the pip install route, as those versions are more up to date (the docker-compose version currently packaged in Debian buster is 1.21).
And it seems there are other good reasons to do so, see here.
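With docker-compose 1.27.0 or newer installed (for example via pip), the runtime key from the question should validate again in a version "3" file. A minimal sketch, reusing the placeholder image name from the question:
pip install --upgrade docker-compose

# docker-compose.yml, with docker-compose >= 1.27.0
version: "3"
services:
  my-service:
    image: "my-image"
    runtime: "nvidia"
Then docker-compose up my-service should start the service with the nvidia runtime, just as the v2.3 file did.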

Related

dj-stripe referencing a non-existent migration

I have a django model where I have a OneToOneField to djstripe's Customer. When I run makemigrations a migration is created with the following dependency:
dependencies = [
    ('djstripe', '0011_alter_invoice_charge_alter_invoice_customer_and_more'),
    ('users', '0007_rename_username'),
]
Everything seems to be okay locally, but then when I deploy my code it fails with the following error:
django.db.migrations.exceptions.NodeNotFoundError: Migration users.0008_stripecustomer dependencies reference nonexistent parent node ('djstripe', '0011_alter_invoice_charge_alter_invoice_customer_and_more')
Our pipeline does not run makemigrations, only migrate, so it seems a little weird that a djstripe migration is referenced when I run makemigrations locally but then cannot be used in deployment. Plus, such a migration does not exist in the djstripe GitHub repository.
dj-stripe version: 2.6.1
Python version: 3.9
Django version: 4.0.1
Stripe API version: 2.68.0
Database type and version: postgres 12.9
It's a known bug with Django 4 - https://github.com/dj-stripe/dj-stripe/issues/1649#issuecomment-1117774629
To solve it you'd have to downgrade Django to 3.x. It should be fixed in the next update as well.
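If you go the downgrade route, one way to pin it (the exact 3.x range here is an assumption; pick whichever release your project supports):
pip install "Django>=3.2,<4.0"
or, equivalently, put Django>=3.2,<4.0 in your requirements file and reinstall.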

Haskell and postgresql - build error "The program pg_config is required but it could not be found."

I am currently learning Haskell and just tried using PostgreSQL as a database.
I generated my project with stack (stack new <name> -> stack setup -> stack build)
and then all I changed was adding the dependencies needed for persistent and postgresql to the
package.yaml file (under "dependencies:").
These are:
persistent
persistent-postgresql
persistent-template
This however results in a failing build with the following message:
postgresql-libpq > setup.exe: The program 'pg_config' is required but it could not be found.
postgresql-libpq >
-- While building package postgresql-libpq-0.9.4.2 using:
C:\Users\\AppData\Local\Temp\stack14388\postgresql-libpq-0.9.4.2.stack-work\dist\e626a42b\setup\setup --builddir=.stack-work\dist\e626a42b configure --user --package-db=clear --package-db=global --package-db=C:\sr\snapshots\365a3dde\pkgdb --libdir=C:\sr\snapshots\365a3dde\lib --bindir=C:\sr\snapshots\365a3dde\bin --datadir=C:\sr\snapshots\365a3dde\share --libexecdir=C:\sr\snapshots\365a3dde\libexec --sysconfdir=C:\sr\snapshots\365a3dde\etc --docdir=C:\sr\snapshots\365a3dde\doc\postgresql-libpq-0.9.4.2 --htmldir=C:\sr\snapshots\365a3dde\doc\postgresql-libpq-0.9.4.2 --haddockdir=C:\sr\snapshots\365a3dde\doc\postgresql-libpq-0.9.4.2 --dependency=Cabal=Cabal-2.4.1.0-5rQrtDcYhR2LOcYye7obEr --dependency=Win32=Win32-2.6.1.0 --dependency=base=base-4.12.0.0 --dependency=bytestring=bytestring-0.10.8.2 -f-use-pkg-config --extra-include-dirs=C:\Users\\AppData\Local\Programs\stack\x86_64-windows\msys2-20180531\mingw64\include --extra-lib-dirs=C:\Users\\AppData\Local\Programs\stack\x86_64-windows\msys2-20180531\mingw64\lib --extra-lib-dirs=C:\Users\\AppData\Local\Programs\stack\x86_64-windows\msys2-20180531\mingw64\bin --exact-configuration --ghc-option=-fhide-source-paths
Process exited with code: ExitFailure 1
Does anyone know how to resolve this issue and why it even occurs?
Do I have to install PostgreSQL just to be able to build the project? If so, how would you
do this in production, when the database could basically be anywhere?
It looks like the Haskell package is trying to build against the PostgreSQL client shared library libpq.dll and uses pg_config at build time to determine where PostgreSQL is installed and how it was built.
That would mean that you have to install PostgreSQL on the machine where you build the Haskell project, including the header files and build tooling (or whatever the installer calls them).
For running the built program you would only need libpq.dll and its dependent shared libraries.
I solved the issue on Ubuntu with the following command:
apt install libpq-dev
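On Windows (which the paths in the build log above suggest), the equivalent is to install PostgreSQL with its development headers and make sure the directory containing pg_config.exe is on the PATH that stack uses. If the headers or libpq still are not found, you can also point stack at the installation explicitly; a minimal sketch for stack.yaml, where the install path is an assumption you would adjust to your machine:
# stack.yaml (the PostgreSQL path below is an assumption)
extra-include-dirs:
- "C:\\Program Files\\PostgreSQL\\12\\include"
extra-lib-dirs:
- "C:\\Program Files\\PostgreSQL\\12\\lib"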

App Engine python flexible environment 3.6 or 3.7?

On https://cloud.google.com/appengine/docs/python/ the runtime for the flexible python environment is said to be 3.6. However on https://cloud.google.com/appengine/docs/flexible/python/runtime it is said to be 3.7.
My build (using gcloud app deploy) is reported to fail because it depends on a package that requires 3.7. So at least my build is using 3.6.
Is this an error in the docs, or is 3.7 also available in the flex env?
Update 1
Although the feedback says the default python3 interpreter in the flex env is 3.7, I did get the following error when trying to deploy my app with a dependency on a module that requires 3.7:
Step #1: <my-dep-module> requires Python '>=3.7' but the running Python is 3.6.8
When I remove that dependency and build, I also see 3.6 mentioned in the build output:
Step #1: ---> f186f86e42ea
Step #1: Step 2/9 : LABEL python_version=python3.6
Step #1: ---> Running in 7b76fdee165b
Step #1: Removing intermediate container 7b76fdee165b
Step #1: ---> 482717f31b28
Step #1: Step 3/9 : RUN virtualenv --no-download /env -p python3.6
Step #1: ---> Running in b1d15ba3568d
Step #1: Running virtualenv with interpreter /opt/python3.6/bin/python3.6
So somehow gcloud app deploy is building with 3.6 nevertheless?
You can set the Python interpreter's version to the latest supported Python 3.x release, which is currently 3.7.2, in the app.yaml file by specifying the runtime_config element like so:
runtime: python
env: flex
runtime_config:
  python_version: 3
You could set it to other versions by specifying 3.6 or 3.5 as documented here, but 3 at this time refers to 3.7.2.
For App Engine flex you can set any supported version you want in the app.yaml, as you can see here. If you just specify 3 for:
runtime_config:
  python_version: <version number>
it will default to the latest version available (currently 3.7.2).
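Putting both answers together, a minimal app.yaml sketch that should select the newest supported 3.x interpreter (3.7.2 at the time of writing):
runtime: python
env: flex
runtime_config:
  python_version: 3
After redeploying with gcloud app deploy, the LABEL python_version line in the build output (as shown in the question above) tells you which interpreter was actually used.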

How to avoid Edeliver deployment error: "vm.args: No such file or directory"?

Context
We are trying to use edeliver to deploy a "Hot Upgrade" of a Phoenix Web Application to a remote Virtual Machine instance.
Our aim is to build an "upgrade" version of the app each time so that the app can be "hot" upgraded in production without any down-time.
We have succeeded in doing this "hot upgrade" on a "Hello World" phoenix app:
https://github.com/nelsonic/hello_world_edeliver which is automatically deployed from Travis-CI when the build passes. See: https://travis-ci.org/nelsonic/hello_world_edeliver/builds/259965752#L1752
So, in theory this technique should work for our "real" app.
Attempting to Deploy a "Real" Phoenix App using Edeliver
Ran the following command (to build the upgrade):
mix edeliver build upgrade --auto-version=git-revision --from=$(git rev-parse HEAD~) --to=$(git rev-parse HEAD) --verbose
i.e. "build the upgrade from the previous git revision to the current one"
So far, so good. "Release successfully built!"
Error: vm.args: No such file or directory
When we attempt to deploy the upgrade:
mix edeliver deploy upgrade to production --version=1.0.3+86d55eb --verbose
cat: /home/hladmin/healthlocker/releases/1.0.3+86d55eb/vm.args: No such file or directory
Note: we have a little bash script that reads the latest upgrade version available in .deliver/releases and deploys that; see: version.sh
Question:
Is there a way to ignore the absence of the vm.args file and continue the deployment?
Or if the file is required to complete the deployment, is there some documentation on how to create the file?
Note: we have read the distillery "Runtime Configuration" docs: https://github.com/bitwalker/distillery/blob/master/docs/Runtime%20Configuration.md and are sadly none the wiser ...
Additional Info
Environment
Localhost: Mac running Elixir 1.4.2
Build Host: Ubuntu 16.04.2 LTS running Elixir 1.4.5
mix.exs file: https://github.com/healthlocker/healthlocker/blob/continuous-delivery/mix.exs
edeliver version: 1.4.4
Build tool: distillery version: 1.4.0
Umbrella project: yes.
This question was also asked on: https://github.com/edeliver/edeliver/issues/234
As mentioned by others, the vm.args file is necessary for the BEAM to run the release. A default file is created by distillery during the release build process and should be located in releases/<version>/vm.args. From your log output it looks like the expected directory is being checked.
Can you show us the contents of /home/hladmin/healthlocker/releases/?
Can you confirm that the default vm.args file is being created when building the release and extracting it (outside of the upgrade process)?
You also asked:
Or if the file is required to complete the deployment, is there some documentation on how to create the file?
If diagnosing the problem with the default vm.args file doesn't get you anywhere, you can also write your own file and configure distillery to use it instead of the default. The details are in the distillery configuration docs. In short,
add the vm_args setting to your distillery config, which should be at rel/config.exs (relative to your project root), for example:
environment :prod do
  set vm_args: "<path>/vm.args"
  [...]
end
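For reference, the vm.args file distillery generates by default is quite small; a hand-written one along the same lines might look like this (the node name and cookie below are placeholders, not values from your project):
## Name of the node
-name my_app@127.0.0.1

## Cookie for distributed Erlang
-setcookie some_secret_cookie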

Edeliver failing to start release

When running mix edeliver version production locally, it fails with the following output:
EDELIVER MYAPP WITH VERSION COMMAND
-----> getting release versions from production servers
production node:
user : app_user
host : my_app
path : /home/app_user/my_app.io
response: bash: line 4: bin/my_app: No such file or directory
bash: line 47: bin/my_app: No such file or directory
VERSION DONE!
The error is obvious, as the executable lives in: ~/my_app.io/my_app/_build/prod/rel/my_app/bin
I'm also unable to run any of the start/stop/restart etc. commands.
The deployment was successful, because when I ssh in and run the start command, it works.
I would like to know if anyone can point me in the direction of some config parameter that I'm missing, as the local commands are a lot more efficient.
Figured out the problem
I only built my app by running the following: env MIX_ENV=prod mix edeliver build release
I was probably too excited and forgot to actually deploy the release using something like the following: mix edeliver deploy release to production --version=0.0.1
Hope someone else might benefit from this also.
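For anyone hitting the same thing, the full sequence looks roughly like this (the version number is a placeholder taken from the example above):
env MIX_ENV=prod mix edeliver build release
mix edeliver deploy release to production --version=0.0.1
mix edeliver start production
mix edeliver version production
Once the release has actually been deployed and started, the version/start/stop/restart commands run from the local machine should work as expected.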