I want to monitor a Django server using monit. However, monit won't let me run "python manage.py runserver" because that command is specific to the virtualenv I use.
So I want to do
workon myvirtualenv
and then run
python manage.py runserver
However, how can I achieve that?
Can you string the commands together for your start command?
"workon myvirtualenv && python manage.py runserver"
I've been stuck with this problem for a couple of days now. I am trying to dockerize a Django REST API + React (create-react-app) application.
problem: $ docker-compose run --rm app sh -c "flake8"
sh: flake8: not found
I am trying to run flake8 from Git Bash with the command $ docker-compose run --rm app sh -c "flake8", but it fails with:
sh: flake8: not found
I tried running docker-compose build, but the same problem persists. How can I solve it?
Use $ which flake8 to understand how your Python virtual environment is supplying that dependency. Adjust the PATH environment variable, perhaps via $ conda activate myproject or $ poetry run flake8, to let the dockerized container access that dependency.
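A hedged sketch of that diagnostic flow, reusing the docker-compose command from the question (the service name app comes from the question; the poetry line only applies if the image actually uses Poetry):
# Check whether flake8 is on PATH inside the container at all
docker-compose run --rm app sh -c "which flake8 || echo 'flake8 is not on PATH'"
# If an environment manager supplies it, invoke it through that manager, for example:
docker-compose run --rm app sh -c "poetry run flake8"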
I am testing AWS for launching a web service.
I am stuck with pg_config. The error log is:
/app/requirements.txt (line 1))
Using cached psycopg2-2.6.2.tar.gz
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
There are many solutions on Stack Overflow, but none of them work for me.
  packages:
    yum:
      python-devel: []
      postgresql95-devel: []
      libjpeg-devel: '6b'
container_commands:
  01_migrate:
    command: "python manage.py migrate"
  02_collectstatic:
    command: "python manage.py collectstatic --noinput"
  03_createsu:
    command: "python manage.py createsu"
    leader_only: true
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "onreview.settings"
    PYTHONPATH: "$PYTHONPATH"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "onreview/wsgi.py"
This is the content of my .ebextensions/python.config file, and I'm uploading my source code as a zip.
I changed postgresql95-devel to postgresql-devel, 93, and 94, and I use a 9.5 database right now.
I think the --pg-config path is the problem, but I can't change it.
Is there any solution?
P.S. I do not want to set things up inside the EC2 instance through SSH or anything like that.
You have a syntax error in your original post: packages: should not be indented. I don't know why you have python-devel when Python is included in the install, so I can't say that it isn't interfering. Likewise with the line setting the Python path.
packages:
  yum:
    python-devel: []
    postgresql95-devel: []
    libjpeg-devel: '6b'
container_commands:
  01_migrate:
    command: "python manage.py migrate"
  02_collectstatic:
    command: "python manage.py collectstatic --noinput"
  03_createsu:
    command: "python manage.py createsu"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: "onreview.settings"
    PYTHONPATH: "$PYTHONPATH"
  aws:elasticbeanstalk:container:python:
    WSGIPath: "onreview/wsgi.py"
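Since the root cause was YAML indentation, a quick hedged sanity check you can run locally before zipping the source (assumes PyYAML is installed; this only validates syntax, not Elastic Beanstalk semantics):
python -c "import yaml; yaml.safe_load(open('.ebextensions/python.config')); print('python.config parses cleanly')"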
I love the terminal feature, and it works very well for our use case, where I would like students to do some work directly from a terminal so they experience that environment. The shell that launches automatically is sh and does not pick up all of my bash defaults. I can type "bash" and everything works perfectly. How can I make bash the default?
Jupyter uses the environment variable $SHELL to decide which shell to launch. If you are running jupyter using init then this will be set to dash on Ubuntu systems. My solution is to export SHELL=/bin/bash in the script that launches jupyter.
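A minimal sketch of such a launch script, assuming a hypothetical install path and port (adjust both to your setup):
#!/bin/sh
# Make sure Jupyter's terminal spawns bash instead of the init-provided dash
export SHELL=/bin/bash
exec /usr/local/bin/jupyter notebook --port=8888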
I have tried the ultimate way of switching the system-wide SHELL environment variable: adding the following line to the file /etc/environment:
SHELL=/bin/bash
This works in an Ubuntu environment: after a reboot, the SHELL variable points to /bin/bash instead of /bin/sh in the Terminal.
Also, setting up a cron job to launch jupyter notebook at system startup triggered the same issue in Jupyter Notebook's Terminal.
It turns out that I needed to include the variable setting and a statement sourcing the Bash init file (e.g. ~/.bashrc) in the cron job, added via the command $ crontab -e :
@reboot source /home/USERNAME/.bashrc && export SHELL=/bin/bash && /SOMEWHERE/jupyter notebook --port=8888
This way, I can log in to the Ubuntu server from a remote web browser (http://server-ip-address:8888/) and open Jupyter Notebook's Terminal with Bash as the default, just as in the local environment.
You can add this to your jupyter_notebook_config.py
c.NotebookApp.terminado_settings = {'shell_command': ['/bin/bash']}
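If that file does not exist yet, a hedged way to create it and append the setting (the path is Jupyter's default config location, ~/.jupyter/):
jupyter notebook --generate-config
echo "c.NotebookApp.terminado_settings = {'shell_command': ['/bin/bash']}" >> ~/.jupyter/jupyter_notebook_config.py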
With Jupyter running on Ubuntu 15.10, the Jupyter shell defaults to /bin/sh, which is a symlink to /bin/dash.
rm /bin/sh
ln -s /bin/bash /bin/sh
That fix got Jupyter terminal booting into bash for me.
I have been using Fabric to deploy an app with virtualenv. I was using Fabric 1.4 and upgraded to 1.5.1 last week. My script stopped working.
It can't install the requirements. It seems it's not activating the virtualenv. In my code, I have:
with cd('%(path)s' % env):
    with prefix('source bin/activate'):
        run('pip install -U distribute')
I'm getting a permission denied error: error: could not delete '/usr/local/lib/python2.7/dist-packages/pkg_resources.py': Permission denied
The command being executed is:
Executed: /bin/bash -l -c "cd /var/www/myproject && source bin/activate && export PATH=\"\\$PATH:\\"/var/www/myproject\\" \" && pip install -U distribute"
If I ssh to the remote machine and run cd /var/www/myproject && source bin/activate && pip install -U distribute, it works just fine.
Why is my fabric script not working?
Thanks in advance
Instead of the serial approach of...
source bin/activate
pip install -U distribute
...directly use the virtualenv's pip executable:
myenv/bin/pip install -U distribute
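Applied to the path from the question, that is a single hedged line (assuming /var/www/myproject is the virtualenv root, which the source bin/activate in the question suggests):
/var/www/myproject/bin/pip install -U distribute
In the Fabric script, the same string can be passed to run() in place of the prefix block.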
Although not exactly a solution, fabtools has a number of functions related to virtualenvs that are very handy. They do (almost) all of the hard work for you and are probably worth using to check that it isn't something else going wrong.
# Cut (and modified) from the fabtools documentation
from fabric.api import *
from fabtools import require
import fabtools

@task
def setup():
    # Require a Python package
    with fabtools.python.virtualenv('/home/myuser/env'):
        require.python.package('pyramid')
Something strange and unexpected is happening with the sys.path of any virtual environment I set up. For example, a clean env:
$ virtualenv test
$ source test/bin/activate
(test) $
This is the expected sys.path:
(test) $ python
>>> import sys
>>> print '\n'.join(sys.path)
/home/user/test/local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg
/home/user/test/local/lib/python2.7/site-packages/pip-1.1-py2.7.egg
/home/user/test/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg
/home/user/test/lib/python2.7/site-packages/pip-1.1-py2.7.egg
/home/user/test/lib/python2.7
/home/user/test/lib/python2.7/plat-linux2
/home/user/test/lib/python2.7/lib-tk
/home/user/test/lib/python2.7/lib-old
/home/user/test/lib/python2.7/lib-dynload
/usr/lib/python2.7
/usr/lib/python2.7/plat-linux2
/usr/lib/python2.7/lib-tk
/home/user/test/local/lib/python2.7/site-packages
/home/user/test/lib/python2.7/site-packages
But this is the one I really get:
(test) $ bpython
>>> import sys
>>> print '\n'.join(sys.path)
/usr/bin
/usr/lib/python2.7
/usr/lib/python2.7/plat-linux2
/usr/lib/python2.7/lib-tk
/usr/lib/python2.7/lib-old
/usr/lib/python2.7/lib-dynload
/usr/local/lib/python2.7/dist-packages
/usr/lib/python2.7/dist-packages
I can't figure out the reason for the two different sys.path values.
Because of that, no pip installation works!
I'm using virtualenv 1.7.2, Ubuntu 12.04, Python 2.7.3.
Any help will be appreciated.
Rather than installing one copy of bpython per virtualenv, I've added this function to my shell profile (for example ~/.bashrc or ~/.zshrc). It wraps the bpython command with some logic to load the virtual environment's python path (if you have an active virtual environment).
bpython() {
    if test -n "$VIRTUAL_ENV"
    then
        PYTHONPATH="$(python -c 'import sys; print ":".join(sys.path)')" \
            command bpython "$@"
    else
        command bpython "$@"
    fi
}
I found that I needed to deactivate and reactivate my virtualenv after installing bpython for it to work.
pip install bpython
deactivate
. bin/activate # or your equivalent activation command
My hypothesis is that you did not install bpython after activating the new virtualenv.
I followed the steps exactly as you mentioned:
mkvirtualenv bpython
(bpython)~ $ pip install bpython
(bpython)~ $ bpython
and then ran the commands:
>>> import sys
>>> print '\n'.join(sys.path)
/Users/xxxx/.virtualenvs/bpython/bin
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/site-packages/pip-1.1-py2.7.egg
/Users/xxxx/.virtualenvs/bpython/lib/python27.zip
/Users/xxxx/.virtualenvs/bpython/lib/python2.7
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/plat-darwin
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/plat-mac
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/plat-mac/lib-scriptpackages
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/lib-tk
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/lib-old
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/lib-dynload
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/site-packages
and then did the same thing again by running python under the activated virtualenv:
(bpython)~ $ python
.....
>>> import sys
>>> print '\n'.join(sys.path)
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/site-packages/pip-1.1-py2.7.egg
/Users/xxxx/.virtualenvs/bpython/lib/python27.zip
/Users/xxxx/.virtualenvs/bpython/lib/python2.7
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/plat-darwin
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/plat-mac
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/plat-mac/lib-scriptpackages
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/lib-tk
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/lib-old
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/lib-dynload
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/Users/xxxx/.virtualenvs/bpython/lib/python2.7/site-packages
I saw no difference between the two results.
I also discovered that if you have bpython installed locally, you need to create your virtualenv with --no-site-packages for it to work properly. If you created your virtualenv without that flag, you can create an empty file named no-global-site-packages.txt in ~/.virtualenvs/<env-name>/lib/python2.7/ as noted in this Stack Exchange answer.
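A hedged sketch of both options; <env-name> is the placeholder from the answer above, and --no-site-packages is the virtualenv 1.x flag (it became the default behaviour in later releases):
# New, isolated environment ('myenv' is a placeholder name)
virtualenv --no-site-packages myenv
# Existing virtualenvwrapper environment: opt out of the global site-packages
touch ~/.virtualenvs/<env-name>/lib/python2.7/no-global-site-packages.txt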