docker-compose selectively thinks config.yml is a folder - github

I am running docker-compose 1.25.5 on an Ubuntu 20 box, and I have a GitHub repo working "fine" in its home folder... I can docker-compose build and docker-compose up with no problem, and the container does what is expected. The GitHub repo is current with the on-disk files.
As a test, however, I created a new folder, pulled the repo, and ran docker-compose build with no problem, but when I try to run docker-compose up, I get the following error:
Starting live_evidently_1 ... done
Attaching to live_evidently_1
evidently_1 | Traceback (most recent call last):
evidently_1 | File "app.py", line 14, in <module>
evidently_1 | with open('config.yml') as f:
evidently_1 | IsADirectoryError: [Errno 21] Is a directory: 'config.yml'
live_evidently_1 exited with code 1
config.yml on my host is a file (of course) and the docker-compose.yml file is unremarkable:
version: "3"
services:
  evidently:
    build: ../
    volumes:
      - ./data:/data
      - ./config.yml:/app/config.yml
    etc...
...
So, I am left with two inter-related questions: 1) why does the test clone of the repo fail while the original is fine (git status is unremarkable, and all the files I want on GitHub are up to date), and 2) why does docker-compose think that config.yml is a folder when it is clearly a file? I would welcome suggestions.

You need to use the bind mount type explicitly. To do this, you have to use the long volume syntax, like this:
volumes:
  - type: bind
    source: ./config.yml
    target: /app/config.yml
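If it still fails after that change, a quick sanity check from the fresh clone's compose directory can help (a hedged sketch; it assumes you run it next to docker-compose.yml):
ls -ld config.yml        # should report a regular file on the host, not a directory
docker-compose config    # prints the fully resolved volume definitions compose will use
# Note: if config.yml was missing on the host when a container first started, Docker may
# have created an empty directory at that path; it would need to be deleted and the real
# file restored before the next docker-compose up.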

Python 3.8.14: ModuleNotFoundError: No module named 'commerce'

I'm building a Django project where project 1 is the core, with Django project 2 inside it as a feature. Project 2 is added as an app called mycommerce.
The objective is to have a common settings.py, urls.py, wsgi.py and manage.py for ease of use, just like in a typical Django project. The necessary code from those 4 .py files in project 2 has been added to project 1, keeping other aspects as they are.
However, I'm getting an error when building my Docker container, which during the build executes a script called setup.py, on Ubuntu 22.04. This is where the error occurs:
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/app/setup.py", line 18, in <module>
from commerce import get_version # noqa isort:skip
ModuleNotFoundError: No module named 'commerce'
[end of output]
The lines of setup.py that throw the error:
#!/usr/bin/env python
"""
Installation script:
To release a new version to PyPi:
- Ensure the version is correctly set in oscar.__init__.py
- Run: make release
"""
import os
import re
import sys
from setuptools import find_packages, setup
PROJECT_DIR = os.path.dirname(__file__)
sys.path.append(os.path.join(PROJECT_DIR, 'src'))
from commerce import get_version # noqa isort:skip -----> Line 18 in the error trace
My project structure:
myapp
|- __init__.py
|- manage.py
|- .docker
| |-commerce
| |-docker
| |-setup.py
|- docker-compose.yml
|- docker-compose.env
|- auth
|- posts
|- mycommerce
| |-src
| |-commerce
| |- __init__.py
| |- config.py
| |- defaults.py
| |-sandbox
| |- __init__.py
| |-manage.py
|-__init__.py
|- static
|- templates
|- .env
The __init__.py inside the commerce folder in the project structure is what setup.py is trying to import while building the Docker container. I understand this has to do with appending the right path for setup.py to execute successfully, but it's not working.
The __init__.py file in which get_version() is defined:
# Use 'alpha', 'beta', 'rc' or 'final' as the 4th element to indicate release type.
VERSION = (3, 2, 0, 'alpha', 2)

def get_short_version():
    return '%s.%s' % (VERSION[0], VERSION[1])

def get_version():
    version = '%s.%s' % (VERSION[0], VERSION[1])
    # Append 3rd digit if > 0
    if VERSION[2]:
        version = '%s.%s' % (version, VERSION[2])
    elif VERSION[3] != 'final':
        mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'c'}
        version = '%s%s' % (version, mapping[VERSION[3]])
        if len(VERSION) == 5:
            version = '%s%s' % (version, VERSION[4])
    return version
The Dockerfile:
FROM python:3.8.14
ENV PYTHONUNBUFFERED 1
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
COPY requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt
RUN groupadd -r django && useradd -r -g django django
COPY . /opt/myapp/mycommerce
RUN chown -R django /opt/myapp/mycommerce
WORKDIR /opt/myapp/mycommerce
RUN make install
USER django
RUN make build_sandbox
RUN cp --remove-destination ./mycommerce/src/commerce/static/commerce/img/image_not_found.jpg ./mycommerce/sandbox/public/media/
VOLUME ["/opt/myapp/mycommerce"]
WORKDIR /opt/myapp/mycommerce/sandbox
CMD ["python", "manage.py", "runserver", "0.0.0.0:85","uwsgi --ini uwsgi.ini"]
EXPOSE 85
I'm using the following repo for the commerce aspect of my Django project:
https://github.com/django-oscar/django-oscar
However, I have moved the Dockerfile and other files like setup.py, the Makefile and the manifest file into my .docker folder (see project structure), which also holds my other containers, so that the Docker-related files are all in one place.
The core issue is that I have 2 manage.py files: one in the root folder (myapp) and one inside the commerce folder, which is a django-oscar project in itself. I have copied the contents of django-oscar's settings.py into the core socialapp settings.py, and done the same for the urls. However, other files are interlinked, and I don't wish to move them right away. I just need Docker to find the manage.py command to execute the script. I tried pointing to the root manage.py, but it still doesn't work. I'm missing something I can't figure out.
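A hedged way to narrow this down: setup.py appends os.path.join(PROJECT_DIR, 'src') to sys.path and then imports commerce, so a src/commerce package has to sit next to whichever setup.py actually runs. Since setup.py was moved into .docker while src/ lives under mycommerce/, that relative path probably no longer resolves. A quick check like the sketch below could confirm it (all paths are assumptions based on the tree and Dockerfile above):
# Run from the directory that contains the relocated setup.py (on the host or inside the image):
ls setup.py                   # confirm you are next to the setup.py that the build executes
ls src/commerce/__init__.py   # this is what sys.path.append(os.path.join(PROJECT_DIR, 'src')) expects to find
# If the second ls fails, either place src/ (or a symlink to it) next to setup.py,
# or point the sys.path.append() call at the real location of mycommerce/src.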

Docker-compose failing on startup

I am running
docker-compose version 1.25.4, build 8d51620a
on
OS X Catalina, v10.15.4 (19E266)
I am using the system python.
When I run docker-compose, it crashes with the following error:
Traceback (most recent call last):
File "docker-compose", line 6, in <module>
File "compose/cli/main.py", line 72, in main
File "compose/cli/main.py", line 128, in perform_command
File "compose/cli/main.py", line 1077, in up
File "compose/cli/main.py", line 1073, in up
File "compose/project.py", line 548, in up
File "compose/service.py", line 355, in ensure_image_exists
File "compose/service.py", line 381, in image
File "site-packages/docker/utils/decorators.py", line 17, in wrapped
docker.errors.NullResource: Resource ID was not provided
[9018] Failed to execute script docker-compose
I have tried a fresh repo clone and a fresh install of Docker; neither works. What could be causing this?
It turned out that I had uninitialized environment variables that were causing the crash.
The particular cause was env vars that set image names in the docker-compose file; with them unset, compose tried to pull a blank image.
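A hedged way to confirm this (IMAGE_TAG is just a hypothetical variable name; substitute whatever your compose file uses):
docker-compose config | grep image    # a blank image: value points at an unset variable
echo "${IMAGE_TAG:-<unset>}"          # check the suspect variable in the shell you run compose from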
It can be uninitialized environment variables, but in my case it was some other command before docker-compose build that was failing:
I was pulling images from the registry, but it could not find them.
I've seen this error when passing docker-compose files explicitly and omitting one. e.g.
docker-compose -f docker-compose.yml up # fails
docker-compose -f docker-compose.yml -f docker-compose.override.yml up # works
I faced the same issue.
In my case, the cause was different.
I had 2 docker-compose files:
docker-compose.yml
version: "3"
networks:
  web: {}
docker-compose.development.yml
version: "3"
services:
  web:
    image: ""
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      API_URL: http://example.com/api
    ports:
      - "11000:22000"
    networks:
      - web
    restart: on-failure
The problem came from the image property in the docker-compose.development.yml file.
When I removed it and ran the command below:
docker-compose --project-name my-web -f docker-compose.yml -f docker-compose.development.yml up --detach
It was successful.
This is the new docker-compose.development.yml file:
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      API_URL: http://example.com/api
    ports:
      - "11000:22000"
    networks:
      - web
    restart: on-failure

Creating virtualenv inside veracrypt error

I'm setting up a project inside veracrypt, and it's throwing this error when I try to set up the environment.
admin@kali:/media/veracrypt1$ virtualenv --python=python3 venv
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /media/veracrypt1/venv/bin/python3
Also creating executable in /media/veracrypt1/venv/bin/python
Traceback (most recent call last):
File "/usr/local/bin/virtualenv", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/virtualenv.py", line 870, in main
symlink=options.symlink,
File "/usr/local/lib/python3.7/dist-packages/virtualenv.py", line 1162, in create_environment
install_python(home_dir, lib_dir, inc_dir, bin_dir, site_packages=site_packages, clear=clear, symlink=symlink)
File "/usr/local/lib/python3.7/dist-packages/virtualenv.py", line 1672, in install_python
os.symlink(py_executable_base, full_pth)
PermissionError: [Errno 1] Operation not permitted: 'python3' -> '/media/veracrypt1/venv/bin/python'
I've tried to look for the source of the issue, and it seems to be related to the fact that this is a virtual drive with limited rights:
admin@kali:/media/veracrypt1$ ln -s testfile
ln: failed to create symbolic link './testfile': Operation not permitted
It looks like you are running this in an environment with limited permissions.
Some people report this behavior when running on Linux, but in a folder that is mounted on a "FAT32" partition - see Chris Lope's blog post: permissionerror: [errno 1] operation not permitted.
I have experienced this behavior while running in an Ubuntu VM, in a folder that was mounted from the host OS (Windows, NTFS) as type 'vboxsf'.
I solved it by moving my work to a partition with a native Unix filesystem.
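If you want to confirm the filesystem is the culprit before moving anything, a small check like this should do it (the mount point is the one from the question):
df -T /media/veracrypt1                # shows the filesystem type backing the VeraCrypt volume
touch probe && ln -s probe probe-link  # symlink test: fails with 'Operation not permitted' on FAT/exFAT-style mounts
rm -f probe probe-link                 # clean up the test files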

Issues with Catkin Build

This has never happened before, but if I create a directory with mkdir -p catkin_ws/src and then run catkin build, I get the following error:
emeric@emeric-desktop:~/catkin_plan_ws$ catkin build
------------------------------------------------------
Profile: default
Extending: [env] /opt/ros/kinetic
Workspace: /home/emeric
------------------------------------------------------
Source Space: [exists] /home/emeric/src
Log Space: [missing] /home/emeric/logs
Build Space: [exists] /home/emeric/build
Devel Space: [exists] /home/emeric/devel
Install Space: [unused] /home/emeric/install
DESTDIR: [unused] None
------------------------------------------------------
Devel Space Layout: linked
Install Space Layout: None
------------------------------------------------------
Additional CMake Args: DCMAKE_BUILT_TYPE=Release
Additional Make Args: None
Additional catkin Make Args: None
Internal Make Job Server: True
Cache Job Environments: False
------------------------------------------------------
Whitelisted Packages: None
Blacklisted Packages: None
------------------------------------------------------
Workspace configuration appears valid.
NOTE: Forcing CMake to run for each package.
------------------------------------------------------
Traceback (most recent call last):
File "/usr/bin/catkin", line 9, in <module>
load_entry_point('catkin-tools==0.4.4', 'console_scripts', 'catkin')()
File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 267, in main
catkin_main(sysargs)
File "/usr/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 262, in catkin_main
sys.exit(args.main(args) or 0)
File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/cli.py", line 420, in main
summarize_build=opts.summarize # Can be True, False, or None
File "/usr/lib/python2.7/dist-packages/catkin_tools/verbs/catkin_build/build.py", line 283, in build_isolated_workspace
workspace_packages = find_packages(context.source_space_abs, exclude_subspaces=True, warnings=[])
File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 86, in find_packages
packages = find_packages_allowing_duplicates(basepath, exclude_paths=exclude_paths, exclude_subspaces=exclude_subspaces, warnings=warnings)
File "/usr/lib/python2.7/dist-packages/catkin_pkg/packages.py", line 146, in find_packages_allowing_duplicates
xml, filename=filename, warnings=warnings)
File "/usr/lib/python2.7/dist-packages/catkin_pkg/package.py", line 509, in parse_package_string
raise InvalidPackage('The manifest must contain a single "package" root tag')
catkin_pkg.package.InvalidPackage: The manifest must contain a single "package" root tag
Besides, the build and devel folders are created in my home directory, not in the catkin one.
I guess I messed something up, but I do not know what, and thus how to fix it.
Thank you for your help.
The root folder of the build, install, log, devel and src spaces should be your catkin root, which is where you call catkin build (in your case it's ~/catkin_ws).
In a nutshell, you can't run a catkin task outside of an initialized catkin workspace folder.
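A minimal sketch of the intended workflow (assuming the workspace is ~/catkin_plan_ws and that nothing else in your home directory uses folders named build, devel, or logs):
cd ~/catkin_plan_ws
catkin init                               # mark this directory as the workspace root
catkin config --extend /opt/ros/kinetic   # optional: extend the ROS install space explicitly
catkin build                              # run the build from inside the initialized workspace
# The stray ~/build, ~/devel and ~/logs folders from the earlier run can be removed once
# you have confirmed nothing else uses them, e.g. rm -rf ~/build ~/devel ~/logs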

Pytest "Error: could not load path/to/conftest.py"

I get the following error when I try to run pytest repo/tests/test_file.py:
$ pytest repo/tests/test_file.py
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 329, in _getconftestmodules
return self._path2confmods[path]
KeyError: local('/Users/marlo/repo/tests/test_file.py')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 329, in _getconftestmodules
return self._path2confmods[path]
KeyError: local('/Users/marlo/repo/tests')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 362, in _importconftest
return self._conftestpath2mod[conftestpath]
KeyError: local('/Users/marlo/repo/conftest.py')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 368, in _importconftest
mod = conftestpath.pyimport()
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/py/_path/local.py", line 686, in pyimport
raise self.ImportMismatchError(modname, modfile, self)
py._path.local.LocalPath.ImportMismatchError: ('conftest', '/home/venvuser/venv/conftest.py', local('/Users/marlo/repo/conftest.py'))
ERROR: could not load /Users/marlo/repo/conftest.py
My repo structure is
lib/
-tests/
-test_file.py
app/
-test_settings.py
pytest.ini
conftest.py
...
Other people have run this code fine, and according to this question (and this one), my structure is good and I am not missing any files. I can only conclude that something about my computer or project set-up is not right. If you have any suggestions or insights that I may be missing, please send them my way!
-------------------------------MORE DETAILS------------------------------
test_file.py:
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = app.test_settings
python_files = tests.py test_* *_tests.py *test.py
I have Docker as well, and I run pytest outside of Docker too; for me, a much lower-impact fix whenever this crops up is to delete all the compiled Python files:
find . -name \*.pyc -delete
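Stale __pycache__ directories can hold the same kind of mismatched bytecode, so (as a companion to the command above, not part of the original answer) clearing those may help too:
find . -type d -name __pycache__ -prune -exec rm -rf {} +   # remove cached bytecode directories as well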
I figured it out and I'll answer in case others have the same issue:
I didn't even take into consideration that I had a docker container (of the same app) in the repo directory and, although I was not running the docker container, it was influencing the filepaths somehow.
To fix this:
I re-cloned the repo from the remote source into a new folder so that nothing from the old repo could "contaminate" it.
Updated my virtual environment with the .yml specifications of the clean repo
$ conda env update --name project --file project.yml
My project uses a postgres database, so I dropped it and created a new one
$ dropdb projectdb
$ createdb projectdb
Since my project uses mongo, I also dropped that database
$ mongo projectdb --eval "db.dropDatabase()"
Installed a clean pytest
$ pip uninstall pytest
$ pip install pytest
...and voilà! I could run pytest.
Many thanks to @hoefling and others who helped me debug.
I was running docker as well, but it seems my problem was different.
I was using an old version of pytest:
platform linux -- Python 3.9.7, pytest-3.7.2, py-1.10.0, pluggy-1.0.0
which stopped working after my Ubuntu image started pulling Python 3.10 by default.
My solution was to update (and pin) the base image in the Dockerfile to use:
FROM python:3.10
instead of python:latest, and update the pytest version as well.
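A hedged illustration of the idea (the image name is just an example): pin the base image so that a new python:latest cannot silently outpace an old pytest, then rebuild and check that the versions line up.
docker build --pull -t myapp-tests .            # --pull refreshes the pinned python:3.10 base layer
docker run --rm myapp-tests python --version    # should report Python 3.10.x
docker run --rm myapp-tests pytest --version    # should report the updated pytest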