How to dynamically pass an env variable to supervisorctl? - supervisord

I mean not in the supervisord.conf file, but rather when you start/restart a process via supervisorctl restart procname. I've tried ENVTEST=something supervisorctl start env-test, but it didn't work.
Here are some pieces of what I have:
supervisord.conf:
[program:env-test]
command=python env_test.py
stdout_logfile=logs/env_test.log
autostart=false
env_test.py:
import os
print('envtest:', os.environ.get('ENVTEST'))
The command I tried: ENVTEST=something supervisorctl start env-test
The solution that comes to my mind is to make my programs use some env file and change it before restarting.
Big Thanks!

So far I went with:
pip install python-dotenv
env_test.py:
import os
from dotenv import load_dotenv
load_dotenv()
print('envtest:', os.environ.get('ENVTEST'))
.env file:
ENVTEST=something
and start it as usual: supervisorctl start env-test
This way the variable is available to the python code.
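For completeness, this is roughly how the "change the env file before restarting" idea can be scripted. Variables set in the shell in front of supervisorctl never reach the program, because supervisorctl only sends a command to the already-running supervisord daemon, which spawns the child with its own environment. The sketch below is my own; the helper name set_env_and_restart and the assumption that .env sits next to env_test.py (and that supervisorctl is on PATH with permission to talk to supervisord) are not part of the original setup:
# set_env_and_restart.py - hypothetical helper, not part of supervisord itself.
# It rewrites the .env file that env_test.py loads via python-dotenv, then asks
# supervisord to restart the program so the new value is picked up on the next start.
import subprocess

def set_env_and_restart(value, env_path=".env", program="env-test"):
    # overwrite the .env file with the new value
    with open(env_path, "w") as f:
        f.write("ENVTEST={}\n".format(value))
    # restart the supervised program; assumes supervisorctl is on PATH
    # and the caller may talk to supervisord
    subprocess.check_call(["supervisorctl", "restart", program])

if __name__ == "__main__":
    set_env_and_restart("something")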

Related

Jupyter ImportError: No module named py4j.protocol despite py4j is installed

I read some posts regarding the error I am seeing when importing pyspark; some suggest installing py4j, which I already did, and yet I am still seeing the error.
I am using a conda environment; here are the steps:
1. create a yml file and include the needed packages (including py4j)
2. create an env based on the yml
3. create a kernel pointing to the env
4. start the kernel in Jupyter
5. running `import pyspark` throws error: ImportError: No module named py4j.protocol
The issue was resolved by adding an environment section to kernel.json and explicitly specifying the following variables:
"env": {
"HADOOP_CONF_DIR": "/etc/spark2/conf/yarn-conf",
"PYSPARK_PYTHON":"/opt/cloudera/parcels/Anaconda/bin/python",
"SPARK_HOME": "/opt/cloudera/parcels/SPARK2",
"PYTHONPATH": "/opt/cloudera/parcels/SPARK2/lib/spark2/python/lib/py4j-0.10.7-src.zip:/opt/cloudera/parcels/SPARK2/lib/spark2/python/",
"PYTHONSTARTUP": "/opt/cloudera/parcels/SPARK2/lib/spark2/python/pyspark/shell.py",
"PYSPARK_SUBMIT_ARGS": " --master yarn --deploy-mode client pyspark-shell"
}
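As a quick sanity check (my addition, not part of the original answer), a cell like the following, run in a notebook started with the patched kernel, confirms that the kernel actually received those variables before pyspark is imported:
# run in a notebook cell using the patched kernel
import os
for name in ("SPARK_HOME", "PYTHONPATH", "PYSPARK_SUBMIT_ARGS"):
    # print each variable the kernel.json env section should have set
    print(name + " = " + str(os.environ.get(name)))
import pyspark  # should no longer fail with "No module named py4j.protocol"
print(pyspark.__version__)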

How to solve numpy import error when calling Anaconda env from Matlab

I want to execute a Python script from Matlab (on a Windows 7 machine). The necessary libraries are installed in an Anaconda virtual environment. When running the script from the command line, it runs flawlessly.
When calling the script from Matlab as follows:
[status, commandOut] = system('C:/Users/user/AppData/Local/Continuum/anaconda3/envs/tf/python.exe test.py');
or with shell commands, I get an Import Error:
commandOut =
'Traceback (most recent call last):
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\core\__init__.py", line 16, in <module>
from . import multiarray
ImportError: DLL load failed: The specified path is invalid.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 2, in <module>
import numpy as np
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\lib\site-packages\numpy\core\__init__.py", line 26, in <module>
raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed. Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control). Otherwise reinstall numpy.
Original error was: DLL load failed: The specified path is invalid.
I already changed the default Matlab Python version to the Anaconda env, but no change:
version: '3.5'
executable: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\python.exe'
library: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf\python35.dll'
home: 'C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf'
isloaded: 1
Just running my test script without importing numpy works. Reloading numpy (py.importlib.import_module('numpy');) didn't work either; it threw the same error as before.
Does anyone have an idea how to fix this?
After corresponding with Matlab support, I found out that Matlab depends on the PATH environment variable (paths that are deliberately not set when using a virtual environment), and therefore numpy fails to find the necessary paths when called from within Matlab (even if the call contains the path to the virtual environment).
The solution is either to call Matlab from within the virtual environment (via the command line) or to add the missing paths manually to the PATH environment variable.
Maybe this information can help someone else.
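One way to "add the missing paths manually" without touching the system-wide environment is to do it at the top of the script itself, before numpy is imported. This is only a sketch using the env path from the question; the exact folders a given environment needs may differ:
# top of test.py - workaround sketch, env path taken from the question.
# Prepend the conda env's DLL folders to PATH before numpy is imported, so the
# MKL/runtime DLLs can be found even when Matlab launches python.exe without an
# activated environment (DLL lookup uses PATH on this Python 3.5 setup).
import os
env_root = r"C:\Users\user\AppData\Local\Continuum\anaconda3\envs\tf"
os.environ["PATH"] = os.pathsep.join([
    os.path.join(env_root, "Library", "bin"),
    os.path.join(env_root, "Scripts"),
    env_root,
    os.environ.get("PATH", ""),
])
import numpy as np  # should now load its compiled extensions
print(np.__version__)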
First Method
You can change the python interpreter with:
pyversion("/home/nibalysc/Programs/anaconda3/bin/python");
And check it with:
pyversion();
You could also do this in a startup.m file in your project folder; every time you start MATLAB from this folder, the Python interpreter will be changed automatically.
Now you can try to use:
py.importlib.import_module('numpy');
Read up on the documentation for using the integrated Python in MATLAB:
Call user defined custom module
Call modified python module
Alternative Method
An alternative method would be to create a matlab_shell.sh file with the following content; this is basically the code that Anaconda appends to .bashrc when the installer asks whether it should modify the .bashrc file:
#!/bin/bash
__conda_setup="$(CONDA_REPORT_ERRORS=false '$HOME/path/to/anaconda3/bin/conda' shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
\eval "$__conda_setup"
else
if [ -f "$HOME/path/to/anaconda3/etc/profile.d/conda.sh" ]; then
CONDA_CHANGEPS1=false conda activate base
else
\export PATH="$HOME/path/to/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda init <<<
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('$HOME/path/to/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "$HOME/path/to/anaconda3/etc/profile.d/conda.sh" ]; then
. "$HOME/path/to/anaconda3/etc/profile.d/conda.sh"
else
export PATH="$HOME/path/to/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
conda activate base
eval $2
Then you need to set the MATLAB_SHELL environment variable either before running MATLAB or in MATLAB itself. The best thing in my opinion would be to do this in the startup.m file as well, like this:
setenv("MATLAB_SHELL", "/path/to/matlab_shell.sh");
Afterwards you can use the system(...) function to run conda Python with all your modules installed, like this...
String notation:
system("python -c ""python code goes here""");
Char notation:
system('python -c "python code goes here"');
Hope this helps!
Firstly, if you execute your Python script like a regular system command ([status, commandOut] = system('...python.exe test.py')),
then pyversion (and pyenv, since R2019b) has no effect at all. It only matters if you use the py. integration, as in the code below (and, in most cases, that is a much better approach).
Currently (I use R2019b update 5) there are a number of pitfalls that might cause issues similar to yours. I'd recommend starting with the following:
Create a new clean conda environment:
conda create -n test_py36 python=3.6 numpy
Create the following dummy demo1.py:
def dummy_py_method(x):
    return x+1
Create the following run_py_code.m:
function run_py_code()
% explicit module import sometimes shows more detailed error messages
py.importlib.import_module('numpy');
% reload in case there are any changes:
pymodule = py.importlib.import_module('demo1');
py.importlib.reload(pymodule);
% passing data back and forth
x = rand([3 3]);
x_np = py.numpy.array(x);
y_np=pymodule.dummy_py_method(x_np);
y = double(y_np);
disp(y-x);
Create the following before_first_run.m:
setenv('PYTHONUNBUFFERED','1');
setenv('path',['C:\Users\username\Anaconda3\envs\test_py36\Library\bin;'...
getenv('path')]);
pe=pyenv('Version','C:\users\username\Anaconda3\envs\test_py36\pythonw.exe',...
'ExecutionMode','InProcess'...
);
% add "demo1.py" to path
py_file_path = 'W:\tests\Matlab\python_demos\call_pycode\pycode';
if count(py.sys.path,py_file_path) == 0
insert(py.sys.path,int32(0),py_file_path);
end
Run the before_first_run.m first and run the run_py_code.m next.
Notes:
As already mentioned in this answer, one key point is to add the folder containing the necessary DLL files to the %PATH% before starting Python. This can be achieved with setenv from within Matlab. Usually, Library\bin is what should be added.
It might be a good idea to try a clean, officially supported CPython distribution (e.g. CPython 3.6.8). Only install numpy (python -m pip install numpy). In my experience, the setenv step is not necessary in this case.
For me, OutOfProcess mode proved to be buggy. Thus, I'd recommend explicitly setting InProcess mode (for versions before R2019b, the OutOfProcess option is not present, nor is pyenv).
Do not concatenate the two .m files above into one - the py.importlib statements seem to be pre-executed and thus conflict with pyenv.

Raspberry Pi - picamera not working

I have installed the picamera package, and when I try to run the following program, I end up with an error.
Code:
import time
import picamera
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.stop_preview()
Error:
AttributeError: 'module' object has no attribute 'Picamera'
Please suggest how to resolve the above error.
Regards,
Richi
In this case, you should update your installation with the commands
sudo apt-get update
sudo apt-get upgrade
and enable the camera port using
sudo raspi-config
It happened to me too. My script was as simple as this:
import picamera
camera = picamera.PiCamera()
camera.capture('test.jpg')
I tried everything, from updating to rewriting the script. I always got
AttributeError: 'module' object has no attribute 'PiCamera'
The Raspberry Pi camera itself was functioning fine (raspistill -o testshot.jpg worked fine).
I tried the Python shell and got the same error!?! What was surprising was that the traceback mentioned a filename (camera.py) even though I was not running a file - I was in the Python shell!
I just deleted that file lest I go insane - it worked perfectly fine then. Sometimes these machines get brain farts.
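A quick way to diagnose this kind of problem (my addition, just a general module-shadowing check rather than something from the answers above) is to ask Python where it actually loaded the module from:
# generic shadowing diagnostic: when a module "loses" an attribute such as
# PiCamera, print where Python actually loaded it from - if the path points
# into your own working directory instead of the installed package, a local
# file is shadowing the real module
import picamera
print(picamera.__file__)              # expected: something under .../dist-packages/picamera/
print(hasattr(picamera, "PiCamera"))  # expected: True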

uWSGI + virtualenv 'No module named site'

So this seems to be a really common problem with this setup, but I can't find any solutions that work on SO. I've setup a very new Ubuntu 15.04 server, then installed nginx, virtualenv (and -wrapper), and uWSGI (via apt-get, so globally, not inside the virtualenv).
My virtualenv is located at /root/Env/example. Inside of the virtualenv, I installed Django, then at /srv/www/example/app ran Django's startproject command with the project name example, so I have vaguely this structure:
-root
    -Env
        -example
            -bin
            -lib
-srv
    -www
        -example
            -app
                -example
                    manage.py
                    -example
                        wsgi.py
                        ...
My example.ini file for uWSGI looks like this:
[uwsgi]
project = example
plugin = python
chdir = /srv/www/example/app/example
home = /root/Env/example
module = example.wsgi:application
master = true
processes = 5
socket = /run/uwsgi/app/example/example.socket
chmod-socket = 664
uid = www-data
gid = www-data
vacuum = true
But no matter whether I run this via uwsgi --ini /etc/uwsgi/apps-enabled/example.ini or via daemon, I get the exact same error:
Python version: 2.7.9 (default, Apr 2 2015, 15:37:21) [GCC 4.9.2]
Set PythonHome to /root/Env/example
ImportError: No module named site
I should note that the Django project works via the built-in development server ./manage.py runserver, and that when I remove home = /root/Env/example the thing works (but is obviously using the global Python and Django rather than the virtualenv versions, which means it's useless for a proper virtualenv setup).
Can anyone see some obvious path error that I'm not seeing? As far as I can tell, home is entirely correct based on my directory structure, and everything else in the ini too, so why is it not working with this ImportError?
In my case, I was seeing this issue because the django app I was trying to run was written in python 3 whereas uwsgi was configured for python 2. I fixed the problem by:
recompiling uwsgi to support both python 2 and python 3 apps
(I followed this guide)
adding this to my mydjangoproject_uwsgi.ini:
plugins = python35 # or whatever you specified while compiling uwsgi
For other folks using Django, you should also make sure you are correctly specifying the following:
# Django dir that contains manage.py
chdir = /var/www/project/myprojectname
# Django wsgi (myprojectname is the name of your top-level project)
module = myprojectname.wsgi:application
# the virtualenv you are using (full path)
home = /home/ubuntu/Env/mydjangovenv
plugins = python35
As @Freek said, site refers to a Python module.
The error means that Python cannot find that module, which is because you have pointed the python home (the home directive) to the wrong location.
I encountered the same problem, and my uwsgi.ini looked like this:
[uwsgi]
# variable
base = /home/xx/
# project settings
chdir = %(base)/
module = botservice.uwsgi:application
home = %(base)/env/bin
With this configuration uwsgi can find the python executable in env/bin, but no packages can be found under that folder. So I changed home to
home = %(base)/env/
and it worked for me.
In your case, I suggest looking into the home directive and pointing it to a location that contains both the python executable and the installed packages.
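A quick way to see what home should be (my own sanity check, not part of the answer) is to ask the virtualenv's interpreter directly; home should be the prefix, not the bin directory:
# run this with the virtualenv's own interpreter, e.g. /root/Env/example/bin/python,
# to see the layout that `home` expects: the interpreter and site-packages
# live under the same prefix, and that prefix is what `home` should point to
import sys
print(sys.executable)   # e.g. /root/Env/example/bin/python
print(sys.prefix)       # this is the value to give to `home`
print([p for p in sys.path if "site-packages" in p])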
The site module is part of the Python standard library; the interpreter imports it at startup, so this error usually means uWSGI is pointed at a broken or missing Python home.
The first check is to activate the virtualenv manually (source /root/Env/example/bin/activate, start python and import site). If django is missing there, pip install django.
Assuming that django is correctly installed in the virtualenv, make sure that uWSGI activates the virtualenv. Relevant uWSGI configuration directives:
plugins = python
virtualenv = /root/Env/example
and in case you get an error importing example.wsgi:
pythonpath = /srv/www/example/app/example

How to import modules in IPython Clusters

I am trying to import some of my personal modules into my IPython Clusters. I am using Anaconda on Windows Vista 64-bit.
from IPython.parallel import Client
rc = Client()
dview = rc[:]
with dview.sync_imports():
    import lib.rf
It is giving me this error:
No module named 'lib.rf'
I can import the module in the rest of my IPython notebook, as I have this .bat file to start ipython notebook:
cd C:\Users\Jon\workspace\bf
set PYTHONPATH=%PYTHONPATH%;C:\Users\Jon\workspace\bf
C:\Anaconda\envs\p33\scripts\ipython notebook
I am using similar code to start my IPython clusters:
cd C:\Users\Jon\workspace\bf
set PYTHONPATH=%PYTHONPATH%;C:\Users\Jon\workspace\bf
C:\Anaconda\envs\p33\Scripts\ipcluster start --n=7
Why is this not working?
More info:
If I print out sys.path, I get a list that contains C:\Users\Jon\workspace\bf
If I print out the paths of my clusters, I get the same list:
%px sys.path
['',
'',
'',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\distribute-0.6.28-py3.3.egg',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\pykalman-0.9.5-py3.3.egg',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\patsy-0.2.1-py3.3.egg',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\joblib-0.8.3_r1-py3.3.egg',
'C:\\Users\\Jon\\workspace\\bf',
'C:\\Users\\Jon\\workspace\\bf\\my_numba',
'C:\\Anaconda\\envs\\p33\\python33.zip',
'C:\\Anaconda\\envs\\p33\\DLLs',
'C:\\Anaconda\\envs\\p33\\lib',
'C:\\Anaconda\\envs\\p33',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\Sphinx-1.2.3-py3.3.egg',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\win32',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\win32\\lib',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\Pythonwin',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\runipy-0.1.1-py3.3.egg',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\setuptools-7.0-py3.3.egg',
'C:\\Anaconda\\envs\\p33\\lib\\site-packages\\IPython\\extensions']
Further analysis:
%px lib.__path__
Out[0:11]: _NamespacePath(['C:\\Anaconda\\envs\\p33\\lib\\site-packages\\win32\\lib'])
lib.__path__
Out[57]: ['.\\lib']
Looks like the ipcluster and notebook are looking at lib in different places. I have tried renaming lib to mylib. It has not helped.
It seems that the with dview.sync_imports() block is being run someplace other than your IPython Notebook environment and is therefore relying on a different PYTHONPATH. It is definitely not being run on one of the cluster engines, so I wouldn't expect it to leverage your cluster's PYTHONPATH settings.
I'm thinking you'll need to have that directory in your PYTHONPATH (not your PATH) for the calling python environment because that is the location from which you are importing the modules.
The impact of the bit about setting the PYTHONPATH in the DOS shell from which you invoke ipcluster isn't clear to me. I can see that one might expect this to let the engines know about your directory, but I'm wondering if that PYTHONPATH gets initialized to the environment from which you call IPython.parallel.Client.
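One way to sidestep the PYTHONPATH uncertainty (a sketch using the paths from the question, not a confirmed fix) is to push the project directory onto sys.path on the engines explicitly before importing:
# instead of relying on PYTHONPATH being inherited by the engines,
# insert the project directory into sys.path on every engine first
from IPython.parallel import Client
rc = Client()
dview = rc[:]
project_dir = r"C:\Users\Jon\workspace\bf"
dview.execute("import sys; sys.path.insert(0, %r)" % project_dir, block=True)
with dview.sync_imports():
    import lib.rf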