Python 3.8.14: ModuleNotFoundError: No module named 'commerce' - docker-compose

I'm building a Django project where project 1 is the core, with Django project 2 inside it as a feature. Project 2 is added as an app called mycommerce.
The objective is to have a common settings.py, urls.py, wsgi.py and manage.py for ease of use, just like in a typical Django project. The necessary code from those four .py files in project 2 has been added to project 1, keeping other aspects as is.
However, I'm getting an error when building my Docker container, which during the build executes a script called setup.py on Ubuntu 22.04. This is where the error occurs:
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/app/setup.py", line 18, in <module>
from commerce import get_version # noqa isort:skip
ModuleNotFoundError: No module named 'commerce'
[end of output]
The setup.py lines of code which throw the error:
#!/usr/bin/env python
"""
Installation script:
To release a new version to PyPi:
- Ensure the version is correctly set in oscar.__init__.py
- Run: make release
"""
import os
import re
import sys
from setuptools import find_packages, setup
PROJECT_DIR = os.path.dirname(__file__)
sys.path.append(os.path.join(PROJECT_DIR, 'src'))
from commerce import get_version  # noqa isort:skip   <-- line 18 in the error trace
My project structure :
myapp
|- __init__.py
|- manage.py
|- .docker
|   |- commerce
|   |- docker
|   |- setup.py
|- docker-compose.yml
|- docker-compose.env
|- auth
|- posts
|- mycommerce
|   |- src
|   |   |- commerce
|   |   |   |- __init__.py
|   |   |   |- config.py
|   |   |   |- defaults.py
|   |- sandbox
|   |   |- __init__.py
|   |   |- manage.py
|   |- __init__.py
|- static
|- templates
|- .env
The __init__.py file inside the commerce folder in the project structure is what setup.py is trying to import while building the Docker container. My understanding is that this has to do with appending the right path for setup.py to execute successfully, but it's not working.
The __init__.py file in which get_version() is defined:
# Use 'alpha', 'beta', 'rc' or 'final' as the 4th element to indicate release type.
VERSION = (3, 2, 0, 'alpha', 2)


def get_short_version():
    return '%s.%s' % (VERSION[0], VERSION[1])


def get_version():
    version = '%s.%s' % (VERSION[0], VERSION[1])
    # Append 3rd digit if > 0
    if VERSION[2]:
        version = '%s.%s' % (version, VERSION[2])
    elif VERSION[3] != 'final':
        mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'c'}
        version = '%s%s' % (version, mapping[VERSION[3]])
        if len(VERSION) == 5:
            version = '%s%s' % (version, VERSION[4])
    return version
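As a quick sanity check of the logic above, here is a self-contained copy of the same function that can be run directly; with VERSION = (3, 2, 0, 'alpha', 2) it produces the string '3.2a2':

```python
VERSION = (3, 2, 0, 'alpha', 2)


def get_version():
    # Same logic as the __init__.py shown above.
    version = '%s.%s' % (VERSION[0], VERSION[1])
    if VERSION[2]:                      # append 3rd digit if > 0
        version = '%s.%s' % (version, VERSION[2])
    elif VERSION[3] != 'final':         # alpha/beta/rc marker
        mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'c'}
        version = '%s%s' % (version, mapping[VERSION[3]])
        if len(VERSION) == 5:           # release number after the marker
            version = '%s%s' % (version, VERSION[4])
    return version


print(get_version())  # -> 3.2a2
```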
The Dockerfile:
FROM python:3.8.14
ENV PYTHONUNBUFFERED 1
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
COPY requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt
RUN groupadd -r django && useradd -r -g django django
COPY . /opt/myapp/mycommerce
RUN chown -R django /opt/myapp/mycommerce
WORKDIR /opt/myapp/mycommerce
RUN make install
USER django
RUN make build_sandbox
RUN cp --remove-destination ./mycommerce/src/commerce/static/commerce/img/image_not_found.jpg ./mycommerce/sandbox/public/media/
VOLUME ["/opt/myapp/mycommerce"]
WORKDIR /opt/myapp/mycommerce/sandbox
CMD ["python", "manage.py", "runserver", "0.0.0.0:85","uwsgi --ini uwsgi.ini"]
EXPOSE 85
I'm using the following repo for the commerce aspect of my Django project:
https://github.com/django-oscar/django-oscar
However, I have moved the Dockerfile and other files like setup.py, the Makefile and the manifest file into my .docker folder (see project structure), which holds the files of other containers as well, to keep the Docker-related files in one place.
The core issue is that I have two manage.py files: one in the root folder myapp, and one inside the commerce folder, which is a django-oscar project in itself. I have copied the contents of django-oscar's settings.py into the core socialapp settings.py, and have done the same for the urls. However, other files are interlinked which I don't wish to move right away. I just need Docker to find the manage.py command to execute the script. I tried pointing to the root manage.py but it still doesn't work. I'm missing something which I can't figure out.
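One hedged guess, based purely on the project structure shown above: setup.py was moved into .docker/, but its sys.path.append(os.path.join(PROJECT_DIR, 'src')) still assumes a src/ folder sitting next to setup.py, which no longer exists there. A sketch of an adjusted append follows; the relative path ../mycommerce/src is an assumption read off the tree above, not verified against the repo:

```python
import os
import sys

# In the real setup.py PROJECT_DIR comes from __file__; fall back to cwd here.
PROJECT_DIR = os.path.dirname(os.path.abspath(__file__)) if '__file__' in globals() else os.getcwd()

# setup.py now lives in .docker/, so a plain 'src' sibling no longer exists.
# Climb out of .docker/ into mycommerce/src, where the commerce package lives
# (relative path guessed from the project structure above - adjust if yours differs).
SRC_DIR = os.path.normpath(os.path.join(PROJECT_DIR, '..', 'mycommerce', 'src'))
sys.path.insert(0, SRC_DIR)
```

Note that this only helps if setup.py is executed from a location where that relative path resolves; if the Makefile copies files around first, the path has to match the layout at build time, not the layout on disk.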

Related

docker-compose selectively thinks config.yml is a folder

I am running docker-compose 1.25.5 on an Ubuntu 20 box and I have a GitHub repo working "fine" in its home folder: I can docker-compose build and docker-compose up with no problem, and the container does what is expected. The GitHub repo is current with the on-disk files.
As a test, however, I created a new folder, pulled the repo, and ran docker-compose build with no problem, but when I tried to run docker-compose up, I got the following error:
Starting live_evidently_1 ... done
Attaching to live_evidently_1
evidently_1 | Traceback (most recent call last):
evidently_1 | File "app.py", line 14, in <module>
evidently_1 | with open('config.yml') as f:
evidently_1 | IsADirectoryError: [Errno 21] Is a directory: 'config.yml'
live_evidently_1 exited with code 1
config.yml on my host is a file (of course) and the docker-compose.yml file is unremarkable:
version: "3"
services:
  evidently:
    build: ../
    volumes:
      - ./data:/data
      - ./config.yml:/app/config.yml
    etc...
...
So, I am left with two inter-related problems. 1) Why does the test version of the repo fail and the original version is fine (git status is unremarkable, all the files I want on github are up to date), and 2) Why does docker-compose think that config.yml is a folder when it is clearly a file? I would welcome suggestions.
You need to use the bind mount type. To do this you have to use the long syntax, like this:
volumes:
  - type: bind
    source: ./config.yml
    target: /app/config.yml
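For what it's worth, the exception itself is easy to reproduce outside Docker: open() on a directory raises exactly this error, which is what the app sees when Docker has created a directory named config.yml inside the container (it typically does that when the short-syntax bind-mount source is missing on the host at mount time). A minimal reproduction:

```python
import os
import tempfile

# Simulate what the container sees: a directory where a file was expected.
root = tempfile.mkdtemp()
path = os.path.join(root, 'config.yml')
os.mkdir(path)  # Docker creates a directory when the bind source does not exist

try:
    with open(path) as f:
        f.read()
except IsADirectoryError as e:
    print(e)  # Errno 21, Is a directory
```

So before reaching for the long syntax, it is also worth checking that ./config.yml actually exists in the new folder relative to where docker-compose is run.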

How to install postgresql in my docker image?

I am trying to fetch data from PostgreSQL in my Spark application, but now I am confused about how to install the PostgreSQL driver in my Docker image. I also tried to install postgresql with an apt-get install command, as shown below (Dockerfile).
Dockerfile:
FROM python:3
ENV SPARK_VERSION 2.3.2
ENV SPARK_HADOOP_PROFILE 2.7
ENV SPARK_SRC_URL https://www.apache.org/dist/spark/spark-$SPARK_VERSION/spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
ENV SPARK_HOME=/opt/spark
ENV PATH $PATH:$SPARK_HOME/bin
RUN wget ${SPARK_SRC_URL}
RUN tar -xzf spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
RUN mv spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE} /opt/spark
RUN rm -f spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
RUN apt-get update && \
apt-get install -y openjdk-8-jdk-headless \
postgresql && \
rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY my_script.py ./
CMD [ "python", "./my_script.py" ]
requirements.txt :
pyspark==2.3.2
numpy
my_script.py :
from pyspark import SparkContext
from pyspark import SparkConf
#spark conf
conf1 = SparkConf()
conf1.setMaster("local[*]")
conf1.setAppName('hamza')
print(conf1)
sc = SparkContext(conf = conf1)
print('hahahha')
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
print(sqlContext)
from pyspark.sql import DataFrameReader
url = 'postgresql://IP:PORT/INSTANCE'
properties = {'user': 'user', 'password': 'pass'}
df = DataFrameReader(sqlContext).jdbc(
    url='jdbc:%s' % url, table=query, properties=properties
)
Getting this error :
Traceback (most recent call last):
File "./my_script.py", line 26, in <module>
, properties=properties
File "/usr/local/lib/python3.7/site-packages/pyspark/sql/readwriter.py", line 527, in jdbc
return self._df(self._jreader.jdbc(url, table, jprop))
File "/usr/local/lib/python3.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/lib/python3.7/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/local/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o28.jdbc.
: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
Kindly guide me on how to set up this driver.
Thanks
Adding these lines to the Dockerfile solved the issue:
ENV POST_URL https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
RUN wget ${POST_URL}
RUN mv postgresql-42.2.5.jar /opt/spark/jars
Thanks everyone
This is not the Docker way of doing things. The Docker approach is not to have all services inside one container, but to split them into several, where each container has one main process, like the database, your application, etc.
Also, when using separate containers, you don't need to worry about installing all the necessary stuff in your Dockerfile - you simply pick ready-to-use containers for the desired database types. By the way, if you are using the python:3 Docker image, how do you know the maintainers won't change the set of installed services, or even the OS type? They can do that easily, because they only promise to provide the 'Python' service; everything else is not defined.
So, what I recommend is:
Split your project into different containers (Dockerfiles)
Use the standard postgres image for your database - all services and drivers are already on board
Use docker-compose (or whatever) to launch both containers and link them together in one network.
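A minimal compose file along the lines of that recommendation might look like this (a sketch only; service names, image tag and credentials are placeholders, not taken from the question):

```yaml
version: "3"
services:
  db:
    image: postgres:11        # the standard postgres image recommended above
    environment:
      POSTGRES_USER: user     # placeholder credentials
      POSTGRES_PASSWORD: pass
  app:
    build: .                  # the Spark application image from the Dockerfile above
    depends_on:
      - db                    # the app then reaches the database at hostname "db"
```

The JDBC connection URL in the script would then point at host db instead of an IP address.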

Unable to use jsonpath_rw_ext with pyspark

These are the steps I am following:
mkdir spark_lib; cd spark_lib
pip install jsonpath_rw_ext==1.1.3 -t .
zip -r9 ../spark_lib.zip *
Initialize spark context variable sc
sc.addPyFile('spark_lib.zip')
def f(x):
    import jsonpath_rw_ext
    return jsonpath_rw_ext.match('$.a.b', x)
sc.parallelize([{"a":{"b":10}}]).map(f).collect()
This gives me a pbr versioning error
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "<ipython-input-11-2619fc011e24>", line 6, in f
File "./spark_lib.zip/jsonpath_rw_ext/__init__.py", line 18, in <module>
File "./spark_lib.zip/pbr/version.py", line 467, in version_string
return self.semantic_version().brief_string()
File "./spark_lib.zip/pbr/version.py", line 462, in semantic_version
self._semantic = self._get_version_from_pkg_resources()
File "./spark_lib.zip/pbr/version.py", line 449, in _get_version_from_pkg_resources
result_string = packaging.get_version(self.package)
File "./spark_lib.zip/pbr/packaging.py", line 824, in get_version
name=package_name))
Exception: Versioning for this project requires either an sdist tarball,
or access to an upstream git repository. It's also possible that
there is a mismatch between the package name in setup.cfg
and the argument given to pbr.version.VersionInfo.
Project name jsonpath_rw_ext was given, but was not able to be found.
After reading through another similar bug, https://bugs.launchpad.net/python-swiftclient/+bug/1379579, I found out that pbr needs an updated setuptools.
I included setuptools in my pip installation command and everything worked fine.
pip install jsonpath_rw_ext setuptools -t .
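As an aside, the zip -r9 step in the recipe above can also be done from Python with the stdlib, which is convenient when building the archive inside a script. This is a hedged sketch (zip_dir is a made-up helper name, not part of any library); it stores paths relative to the source directory, which is what sc.addPyFile expects:

```python
import os
import zipfile


def zip_dir(src_dir, zip_path):
    """Zip the contents of src_dir, storing paths relative to it."""
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # arcname strips the leading src_dir so imports resolve in the zip
                zf.write(full, os.path.relpath(full, src_dir))
```

Usage mirrors the shell commands: zip_dir('spark_lib', 'spark_lib.zip') followed by sc.addPyFile('spark_lib.zip').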

Pytest "Error: could not load path/to/conftest.py"

I get the following error when I try to run pytest repo/tests/test_file.py:
$ pytest repo/tests/test_file.py
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 329, in _getconftestmodules
return self._path2confmods[path]
KeyError: local('/Users/marlo/repo/tests/test_file.py')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 329, in _getconftestmodules
return self._path2confmods[path]
KeyError: local('/Users/marlo/repo/tests')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 362, in _importconftest
return self._conftestpath2mod[conftestpath]
KeyError: local('/Users/marlo/repo/conftest.py')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/_pytest/config.py", line 368, in _importconftest
mod = conftestpath.pyimport()
File "/Users/marlo/anaconda3/envs/venv/lib/python3.6/site-packages/py/_path/local.py", line 686, in pyimport
raise self.ImportMismatchError(modname, modfile, self)
py._path.local.LocalPath.ImportMismatchError: ('conftest', '/home/venvuser/venv/conftest.py', local('/Users/marlo/repo/conftest.py'))
ERROR: could not load /Users/marlo/repo/conftest.py
My repo structure is
lib/
|- tests/
|   |- test_file.py
|- app/
|   |- test_settings.py
|- pytest.ini
|- conftest.py
...
Other people have run this code fine, and according to this question (and this one), my structure is good and I am not missing any files. I can only conclude that something about my computer or project set-up is not right. If you have any suggestions or insights that I may be missing, please send them my way!
-------------------------------MORE DETAILS------------------------------
test_file.py:
def func(x):
    return x + 1


def test_answer():
    assert func(3) == 5
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = app.test_settings
python_files = tests.py test_* *_tests.py *test.py
I use Docker as well, and run pytest outside of Docker too; for me a much lower-impact fix whenever this crops up is to delete all the compiled Python files:
find . -name \*.pyc -delete
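The same cleanup can be done from Python on machines without find; this sketch (clean_bytecode is a made-up helper name) also removes __pycache__ directories, which can hold the same stale bytecode:

```python
import shutil
from pathlib import Path


def clean_bytecode(root='.'):
    """Delete stale .pyc files and __pycache__ directories under root."""
    root = Path(root)
    for pyc in root.rglob('*.pyc'):
        pyc.unlink()                              # remove orphaned bytecode files
    for cache in root.rglob('__pycache__'):
        shutil.rmtree(cache, ignore_errors=True)  # remove bytecode cache dirs
```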
I figured it out and I'll answer in case others have the same issue:
I didn't even take into consideration that I had a docker container (of the same app) in the repo directory and, although I was not running the docker container, it was influencing the filepaths somehow.
To fix this:
I re-cloned the repo from the remote source into a new folder so that nothing from the old repo could "contaminate" it.
Updated my virtual environment with the .yml specifications of the clean repo
$ conda env update --name project --file project.yml
My project uses a postgres database, so I dropped it and created a new one
$ dropdb projectdb
$ createdb projectdb
Since my project uses mongo, I also dropped that database
$ mongo projectdb --eval "db.dropDatabase()"
Installed a clean pytest
$ pip uninstall pytest
$ pip install pytest
...and voilà! I could run pytest.
Many thanks to @hoefling and others who helped me debug.
I was running Docker as well, but it seems my problem was different.
I was using an old version of pytest:
platform linux -- Python 3.9.7, pytest-3.7.2, py-1.10.0, pluggy-1.0.0
which stopped working after my Ubuntu image started pulling Python 3.10 by default.
My solution was to update (and pin) the Dockerfile image to use:
FROM python:3.10
instead of python:latest, and to update the pytest version as well.

Ansible error due to GMP package version on Centos6

I have a Dockerfile that builds an image based on CentOS (tag: centos6):
FROM centos
RUN rpm -iUvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
RUN yum update -y
RUN yum install ansible -y
ADD ./ansible /home/root/ansible
RUN cd /home/root/ansible;ansible-playbook -v -i hosts site.yml
Everything works fine until Docker hits the last line, then I get the following errors:
[WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should update
it (ie. yum update gmp).
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 317, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/bin/ansible-playbook", line 257, in main
pb.run()
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 319, in run
if not self._run_play(play):
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 620, in _run_play
self._do_setup_step(play)
File "/usr/lib/python2.6/site-packages/ansible/playbook/__init__.py", line 565, in _do_setup_step
accelerate_port=play.accelerate_port,
File "/usr/lib/python2.6/site-packages/ansible/runner/__init__.py", line 204, in __init__
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
File "/usr/lib64/python2.6/subprocess.py", line 642, in __init__
errread, errwrite)
File "/usr/lib64/python2.6/subprocess.py", line 1234, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Stderr from the command:
package epel-release-6-8.noarch is already installed
I imagine that the cause of the error is the gmp package not being up to date.
There is a related issue on GitHub: https://github.com/ansible/ansible/issues/6941
But there doesn't seem to be any solution at the moment...
Any ideas?
Thanks in advance!
My site.yml playbook:
- hosts: all
  pre_tasks:
    - shell: echo 'hello'
Make sure that the files site.yml and hosts are present in the directory you're adding to /home/root/ansible.
Side note, you can simplify your Dockerfile by using WORKDIR:
WORKDIR /home/root/ansible
RUN ansible-playbook -v -i hosts site.yml