I followed this tutorial to submit a machine learning job to Google ML Engine. I then hit the error ImportError: No module named matplotlib.pyplot, which I solved by adding Matplotlib to REQUIRED_PACKAGES in setup.py. Then I faced another error: ImportError: No module named _tkinter, please install the python-tk package. I found this solution and this solution, but they do not help and instead produce another error:
Failed building wheel for my-package
Command "/usr/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-T6kjZl-build/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-4LpzWh-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-T6kjZl-build/
Command '['pip', 'install', '--user', '--upgrade', '--force-reinstall', '--no-deps', u'my-package-0.1.1.tar.gz']' returned non-zero exit status 1
My setup.py:
"""Setup script for object_detection."""
import logging
import subprocess
from setuptools import find_packages
from setuptools import setup
from setuptools.command.install import install
class CustomCommands(install):
def RunCustomCommand(self, command_list):
p = subprocess.Popen(
command_list,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
# Can use communicate(input='y\n'.encode()) if the command run requires
# some confirmation.
stdout_data, _ = p.communicate()
logging.info('Log command output: %s', stdout_data)
if p.returncode != 0:
raise RuntimeError('Command %s failed: exit code: %s', (command_list, p.returncode))
def run(self):
self.RunCustomCommand(['apt-get', 'update'])
self.RunCustomCommand(
['apt-get', 'install', '-y', 'python-tk'])
install.run(self)
REQUIRED_PACKAGES = ['Pillow>=1.0', 'Matplotlib>=2.1']
setup(
name='object_detection',
version='0.1',
install_requires=REQUIRED_PACKAGES,
include_package_data=True,
packages=[p for p in find_packages() if
p.startswith('object_detection')],
description='Tensorflow Object Detection Library',
cmdclass={
'install': CustomCommands,
})
The tutorial blog link that you are following is outdated, I think. I followed this one.
You need to change the runtime to 1.9 instead of 1.8 in the Cloud ML training command:
From tensorflow/models/research/
gcloud ml-engine jobs submit training `whoami`_object_detection_pets_`date +%m_%d_%Y_%H_%M_%S` \
--runtime-version 1.9 \
--job-dir=gs://${YOUR_GCS_BUCKET}/model_dir \
--packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz \
--module-name object_detection.model_main \
--region us-central1 \
--config object_detection/samples/cloud/cloud.yml \
-- \
--model_dir=gs://${YOUR_GCS_BUCKET}/model_dir \
--pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/faster_rcnn_resnet101_pets.config
After changing the runtime, everything worked perfectly. No need to change any setup file, and no library errors.
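As a side note on the earlier _tkinter error: Matplotlib only needs python-tk when it loads its default Tk backend. Selecting a non-interactive backend before pyplot is first imported sidesteps the system package entirely; a minimal sketch, not something the tutorial itself prescribes:
import matplotlib
# Pick the non-interactive Agg backend before pyplot is imported,
# so Matplotlib never tries to load the Tk (_tkinter) backend.
matplotlib.use('Agg')
import matplotlib.pyplot as plt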
I am trying to install ruamel.yaml on a Raspberry Pi system without a C compiler and encounter a build error installing ruamel.yaml.clib (pasted below).
I see this was previously addressed for ruamel.yaml>=0.15.41, <0.16.0 (How to install ruamel.yaml w/o native extension).
Note in the output below that the gcc path from buildroot is not valid on the device running pip install; it comes from the device that built the image.
$ pip install ruamel.yaml~=0.16
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting ruamel.yaml~=0.16
Using cached ruamel.yaml-0.17.10-py3-none-any.whl (108 kB)
Collecting ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.10"
Using cached ruamel.yaml.clib-0.2.6.tar.gz (180 kB)
ERROR: Command errored out with exit status 1:
command: /usr/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-2t0ptfu4/ruamel-yaml-clib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-2t0ptfu4/ruamel-yaml-clib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-zdtfb19x
cwd: /tmp/pip-install-2t0ptfu4/ruamel-yaml-clib/
Complete output (3 lines):
sys.argv ['/tmp/pip-install-2t0ptfu4/ruamel-yaml-clib/setup.py', 'egg_info', '--egg-base', '/tmp/pip-pip-egg-info-zdtfb19x']
test compiling /tmp/tmp_ruamel_5les1064/test_ruamel_yaml.c -> test_ruamel_yaml compile error: /tmp/tmp_ruamel_5les1064/test_ruamel_yaml.c
Exception: command '{path from buildroot}/aarch64-buildroot-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
A preferred solution would be to have a ruamel.yaml.clib wheel for ARM architectures, or to make the clib dependency optional (pip install ruamel.yaml[clib]).
I'm not sure why this broke after 0.16, but I'll try to have a look at why this fails again (it might be that setuptools now throws a different Exception that is not caught).
Wheels for ruamel.yaml.clib in aarch64 architecture are available on piwheels.
You should be able to install those after adding:
[global]
extra-index-url=https://www.piwheels.org/simple
to /etc/pip.conf.
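Alternatively, the extra index can be passed directly on the command line for a one-off install, which is equivalent to the pip.conf entry above:
pip install --extra-index-url https://www.piwheels.org/simple "ruamel.yaml~=0.16"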
Disclaimer: I have no control over how wheels on piwheels are generated.
I use debhelper with python setuptools in order to build my packages.
I recently updated the compatibility level from 9 to 11 in order to use systemd timers.
Since then, every time I upgrade the package, the service it contains is restarted.
I tried to build with the following rules:
#! /usr/bin/make -f
#export DH_VERBOSE = 1
export PYBUILD_NAME=my_pkg
export DH_ALWAYS_EXCLUDE=CVS:.svn:.git:.vscode*
export PYBUILD_INTERPRETERS=python3

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_installinit:
	dh_installinit --no-stop-on-upgrade --no-restart-on-upgrade --no-restart-after-upgrade --no-start

override_dh_systemd_enable:
	dh_systemd_enable --name=my_pkg

override_dh_systemd_start:
	dh_systemd_start --no-stop-on-upgrade --no-restart-on-upgrade --no-restart-after-upgrade --no-start
	python3 setup.py clean --all
According to the documentation, those flags should do what I'm looking for, but there is probably something I'm missing:
dh_systemd_start
dh_installinit
Every time I upgrade the package, the service it contains is restarted.
The service runs the update itself, so when it is restarted the update is left incomplete.
As expected, I was looking in the wrong direction.
The correct helper to override is dh_installsystemd: starting with debhelper compat 11, dh_installsystemd takes over systemd unit handling from dh_systemd_enable and dh_systemd_start.
Change the rules file into:
#! /usr/bin/make -f
#export DH_VERBOSE = 1
export PYBUILD_NAME=my_pkg
export DH_ALWAYS_EXCLUDE=CVS:.svn:.git:.vscode*
export PYBUILD_INTERPRETERS=python3

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_systemd_enable:
	dh_systemd_enable --name=my_pkg

override_dh_installsystemd:
	dh_installsystemd --no-restart-after-upgrade

override_dh_systemd_start:
	python3 setup.py clean --all
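One way to verify the fix (a sketch; my_pkg.service is the assumed unit name): record when the unit last entered the active state, upgrade the package, and check that the timestamp is unchanged.
systemctl show -p ActiveEnterTimestamp my_pkg.service
sudo apt install ./my_pkg_*.deb
systemctl show -p ActiveEnterTimestamp my_pkg.service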
I hope this will help someone else in the same situation.
I tried setting up a virtual environment for Python 3.9 using virtualenvwrapper and got this error:
➜ mkvirtualenv env --python=$(which python3.9)
RuntimeError: failed to query /usr/bin/python3.9 with code 1 err: 'Traceback (most recent call last):\n File "/home/sid/.local/lib/python3.6/site-packages/virtualenv/discovery/py_info.py", line 16, in <module>\n from distutils import dist\nImportError: cannot import name \'dist\' from \'distutils\' (/usr/lib/python3.9/distutils/__init__.py)\n'
What am I missing?
apt-get install -y python3.9-distutils
Then it should work fine.
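With distutils available, the original command should now succeed:
mkvirtualenv env --python=$(which python3.9)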
I am trying to fetch data from PostgreSQL in my Spark application, but I am confused about how to install the PostgreSQL driver in my Docker image. I also tried to install postgresql with an apt-get install command, as shown below (Dockerfile).
Dockerfile:
FROM python:3
ENV SPARK_VERSION 2.3.2
ENV SPARK_HADOOP_PROFILE 2.7
ENV SPARK_SRC_URL https://www.apache.org/dist/spark/spark-$SPARK_VERSION/spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
ENV SPARK_HOME=/opt/spark
ENV PATH $PATH:$SPARK_HOME/bin
RUN wget ${SPARK_SRC_URL}
RUN tar -xzf spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
RUN mv spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE} /opt/spark
RUN rm -f spark-${SPARK_VERSION}-bin-hadoop${SPARK_HADOOP_PROFILE}.tgz
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk-headless \
    postgresql && \
    rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY my_script.py ./
CMD [ "python", "./my_script.py" ]
requirements.txt:
pyspark==2.3.2
numpy
my_script.py:
from pyspark import SparkContext
from pyspark import SparkConf
#spark conf
conf1 = SparkConf()
conf1.setMaster("local[*]")
conf1.setAppName('hamza')
print(conf1)
sc = SparkContext(conf = conf1)
print('hahahha')
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
print(sqlContext)
from pyspark.sql import DataFrameReader
url = 'postgresql://IP:PORT/INSTANCE'
properties = {'user': 'user', 'password': 'pass'}
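# 'query' (a table name or a '(SELECT ...) tmp' subquery string) is
# defined elsewhere in the full script; it is elided from this snippet.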
df = DataFrameReader(sqlContext).jdbc(
url='jdbc:%s' % url, table=query, properties=properties
)
Getting this error:
Traceback (most recent call last):
File "./my_script.py", line 26, in <module>
, properties=properties
File "/usr/local/lib/python3.7/site-packages/pyspark/sql/readwriter.py", line 527, in jdbc
return self._df(self._jreader.jdbc(url, table, jprop))
File "/usr/local/lib/python3.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/lib/python3.7/site-packages/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/usr/local/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o28.jdbc.
: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
Kindly guide me on how to set up this driver.
Thanks
Adding these lines to the Dockerfile solved the issue:
ENV POST_URL https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
RUN wget ${POST_URL}
RUN mv postgresql-42.2.5.jar /opt/spark/jars
Thanks everyone
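An alternative that avoids baking the jar into the image is to let Spark fetch the driver from Maven Central when the context starts, via the standard spark.jars.packages setting. A sketch, assuming the container has network access at startup (the coordinates match the same 42.2.5 driver):
from pyspark import SparkConf, SparkContext

conf1 = SparkConf()
conf1.setMaster("local[*]")
conf1.setAppName('hamza')
# Resolve the PostgreSQL JDBC driver at startup instead of copying
# the jar into /opt/spark/jars at build time.
conf1.set("spark.jars.packages", "org.postgresql:postgresql:42.2.5")
sc = SparkContext(conf=conf1)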
This is not the Docker way of doing things. The Docker approach is not to put all services inside one container but to split them into several, where each container has one main process: the database, your application, and so on.
Also, when using separate containers, you don't have to worry about installing all the necessary pieces in your Dockerfile: you simply select ready-to-use images for the database types you want. By the way, if you are using the python:3 Docker image, how do you know the maintainers won't change the set of installed services, or even the OS? They can do it easily, because they only promise the Python service; everything else is undefined.
So, what I recommend is:
Split your project into different containers (Dockerfiles).
Use the standard postgres image for your database: all services and drivers are already on board.
Use docker-compose (or whatever) to launch both containers and link them together in one network (see the sketch below).
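A minimal sketch of that layout; the service names, credentials, and Postgres version here are illustrative assumptions, not taken from the question:
# docker-compose.yml
version: "3"
services:
  db:
    image: postgres:11            # stock image, server ready to use
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
  app:
    build: .                      # the existing Spark/Python Dockerfile
    depends_on:
      - db
Note that the Spark application still needs the JDBC driver jar on its classpath; splitting the containers replaces the apt-get installed postgresql server, not the driver.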
This problem regards OpenEmbedded/Yocto.
I have source code which needs to be compiled by a custom python3 script. That means a python3 script has to run during the do_compile() task.
The script imports setuptools, so I added DEPENDS += "python3-setuptools-native" to the recipe. As far as I understand the documentation, this should make the setuptools module available for the build process (native).
But when bitbake executes do_compile(), I get this error: no module named 'setuptools'.
Let me break it down to a minimal (non-)working example:
FILE: test.bb
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://test/LICENSE;md5=d41d8cd98f00b204e9800998ecf8427e"
DEPENDS += "python3-setuptools-native"
SRC_URI = "file://test.py \
file://LICENSE"
do_compile() {
python3 ${S}/../test.py
}
FILE: test.py
import setuptools
print("HELLO")
bitbaking:
$ bitbake test
ERROR: test-1.0-r0 do_compile: Function failed: do_compile (log file is located at /path/to/test/1.0-r0/temp/log.do_compile.8532)
ERROR: Logfile of failure stored in: /path/to/test/1.0-r0/temp/log.do_compile.8532
Log data follows:
| DEBUG: Executing shell function do_compile
| Traceback (most recent call last):
| File "/path/to/test-1.0/../test.py", line 1, in <module>
| import setuptools
| ImportError: No module named 'setuptools'
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_compile (log file is located at /path/to/test/1.0-r0/temp/log.do_compile.8532)
ERROR: Task (/path/to/test.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 400 tasks of which 398 didn't need to be rerun and 1 failed.
NOTE: Writing buildhistory
Summary: 1 task failed:
/path/to/test.bb:do_compile
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
Is my expectation wrong that DEPENDS += "python3-setuptools-native" makes the python3 module setuptools available to the python3 script in do_compile()? How can I accomplish this?
Under the hood quite a bit more is needed to get working setuptools support. Luckily there's a class to handle that:
inherit setuptools3
This should be all that's needed to package a setuptools-based project with OE-Core. As long as your project has a standard setup.py, you don't need to write any do_compile() or do_install() functions.
If you do need to look at the details, meta/classes/setuptools3.bbclass and meta/classes/distutils3.bbclass should contain what you need (including the rather unobvious way to call native python from a recipe).
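Adapted to the minimal example above, a recipe for a source tree that ships a standard setup.py shrinks to something like this (a sketch; the local my-project directory and its contents are assumptions):
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://LICENSE;md5=d41d8cd98f00b204e9800998ecf8427e"

SRC_URI = "file://my-project/"
S = "${WORKDIR}/my-project"

# setuptools3 adds python3-setuptools-native to DEPENDS and provides
# do_compile()/do_install() tasks that run setup.py with the native Python.
inherit setuptools3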