$ python manage.py reset
Unknown command: 'reset'
Type 'manage.py help' for usage.
Has this command been removed in Django 1.6? What should I use instead?
This one worked for me:
./manage.py sqlclear AppName | ./manage.py dbshell
Found here.
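For context, sqlclear only prints the DROP TABLE statements for the app; piping them into dbshell executes them against the configured database. A minimal sketch of a full reset, assuming Django ≤ 1.6 where syncdb still recreates missing tables:
# drop the app's tables, then recreate them from the models
./manage.py sqlclear AppName | ./manage.py dbshell
./manage.py syncdb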
I'm working with Django 1.6 and the previous answer didn't work for me. I've been looking for a way to easily update my database tables, and the best way I've found so far is to use South migrations.
1- Install south via pip pip install south.
2- Add south to your INSTALLED_APPS.
3- Then run python manage.py convert_to_south Appname.
4- And finally run python manage.py migrate Appname.
You'll do the first three steps only once; after that, all you have to do to update your changes is step 4 (see the sketch below).
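Note that for later model changes, South typically needs a schema migration to be generated before the migrate of step 4 will pick anything up. A minimal sketch of the update cycle, assuming the app is called Appname:
# after editing the models in Appname:
python manage.py schemamigration Appname --auto
python manage.py migrate Appname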
I am trying to push my SQLAlchemy models to create tables in my Heroku Postgres database. I use this command:
heroku run alembic upgrade head
It starts to run as expected, but after a while I just get an error:
bash: alembic: command not found
How can I resolve it?
I had the same problem and solved it; not sure it will work for everyone. I noticed that my requirements.txt file did not have the alembic package listed, so alembic was not installed when loading the files onto Heroku! I tried heroku run pip install alembic, which installed the package successfully but still didn't solve the problem! I tried heroku run alembic --version, still the same.
The way I solved it was by deleting the local requirements.txt file and regenerating it using pip freeze > requirements.txt; this time alembic was there, of course! After pushing my changes to Heroku everything worked fine!
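As a sketch, the fix boils down to regenerating the requirements file from the working local environment and redeploying (the commit message and the master branch are assumptions):
# regenerate requirements.txt from the local virtualenv
pip freeze > requirements.txt
git add requirements.txt
git commit -m "Add alembic to requirements"
git push heroku master
# now the command should be found
heroku run alembic upgrade head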
I think you are using quotes in the command. If so, remove the quotation marks.
I am installing bench from https://github.com/frappe/bench. I have opted for the easy installation option. However, every time the installation gets stuck at [TASK] init bench. Any solutions?
It takes some time at that step.
If it shows an error, try running the command using sudo:
sudo python install.py --develop --user frappe
Use the ERPNext forum for more help: ERPNext Forum
I'm trying to imagine a workflow that could be applied in a scientific work environment. My work involves scientific coding, basically with Python, pandas, numpy and friends. Sometimes I have to use modules that are not common standards in the scientific community, and sometimes I have to integrate compiled code into my chain of simulations. The code I run is most of the time parallelised with the IPython notebook.
What do I find interesting about Docker?
The fact that I could create a Docker image containing my code and its working environment. I could then send the image to my colleagues without asking them to change their work environment, e.g., install an outdated version of a module, so that they can run my code.
A rough draft of the workflow I have in mind goes something like this:
Develop locally until I have a version I want to share with somebody.
Build a Docker image, possibly with a build hook from a git repo.
Share the image.
Can somebody give me some pointers on what I should take into account to develop this workflow further? A point that intrigues me: can code running in a Docker container launch parallel processes on the several cores of the machine, e.g., an IPython notebook connected to a cluster?
Docker can launch multiple processes/threads on multiple cores. Running multiple processes in one container may require a supervisor (see: https://docs.docker.com/articles/using_supervisord/).
You should probably build an image that contains the things you always use and use it as a base for all your projects. (That would save you the pain of writing a complete Dockerfile each time.)
Why not develop directly in a container and use the commit command to save your progress to a local Docker registry? Then share the final image with your colleagues.
How to make a local registry: https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
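A minimal sketch of that commit-based workflow (the container ID, image name, and registry address are hypothetical):
# start an interactive container and work inside it
docker run -it ubuntu:14.04 /bin/bash
# ...install packages, add code, then exit...
# snapshot the stopped container into an image
docker commit <container_id> myanalysis:v1
# tag and push it to a local registry listening on port 5000
docker tag myanalysis:v1 localhost:5000/myanalysis:v1
docker push localhost:5000/myanalysis:v1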
Even though you'll have a full container, I think a package manager like conda can still be a solid part of the base image for your workflow.
FROM ubuntu:14.04
RUN apt-get update && apt-get install curl -y
# Install miniconda
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
RUN bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b
RUN rm Miniconda-latest-Linux-x86_64.sh
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
* From a nice example showing Docker + Miniconda + Flask.
With regard to doing source activate <env> in a Dockerfile, you need to:
RUN /bin/bash -c "source activate <env> && <do something in the env>"
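For example, a minimal sketch continuing the Dockerfile above (the environment name science and its package list are hypothetical):
# create a conda environment with the scientific stack
RUN conda create -y -n science python=2.7 numpy pandas
# RUN uses /bin/sh by default, so wrap "source activate" in bash
RUN /bin/bash -c "source activate science && pip install ipython"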
I have a relatively big project with many dependencies, and I would like to distribute it, but installing these dependencies was a bit of a pain and takes a very long time (pip install takes quite some time). So I was wondering if it is possible to migrate a whole virtualenv to another machine and have it running.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path to a new one?
I tried sed -i 's/sshum/dev1/g' * in the bin directory and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
from within the virtual environment:
pip freeze > requirements.txt
Once on the new machine and environment, copy the requirements.txt into the new project folder on the new machine and run the terminal command:
sudo pip install -r requirements.txt
Then you should have all the packages that were available in the old virtual environment.
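Putting it together, a minimal sketch of the whole transfer (the environment name backend is taken from the question and otherwise hypothetical):
# on the old machine, inside the activated virtualenv
pip freeze > requirements.txt

# on the new machine
virtualenv backend
source backend/bin/activate
pip install -r requirements.txt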
When you create a new virtualenv it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you choose to do so, you must also edit the shebang line (#!) of bin/pserve, which points at the interpreter path.
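As a sketch, using the old and new paths from the question (both hypothetical here):
# fix the VIRTUAL_ENV path recorded in the activate script
sed -i 's|/home/sshum/backend|/home/dev1/backend|' backend/bin/activate
# fix the interpreter path in the script's shebang line
sed -i '1s|/home/sshum/backend|/home/dev1/backend|' backend/bin/pserve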
Running on Mac OS 10.6.8
with PostgreSQL installed, as well as Django, using Python 2.7.
Also installed psycopg2 and dj-database-url using pip in my virtualenv,
and added these two lines to my settings.py:
import dj_database_url
DATABASES = {'default': dj_database_url.config(default='postgres://localhost')}
Based on instructions for Heroku in:
https://devcenter.heroku.com/articles/django#database_settings
When running:
python manage.py runserver
I am getting this error:
ImportError: dlopen(/Users.... venv/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _PQbackendPID
Referenced from: /Users.... venv/lib/python2.7/site-packages/psycopg2/_psycopg.so
Expected in: dynamic lookup
I kept searching for hours and tried all kinds of things, including the advice in:
Mac OS X Lion Psycopg2: Symbol not found: _PQbackendPID
to no avail.
Wonder if anyone had such an issue and had any luck.
I had the same problem. Instead of installing the dependencies as Heroku suggests, using
pip install Django psycopg2 dj-database-url
clone whatever repo you're hoping to run in venv, keeping its original settings.py. Then:
source venv/bin/activate
to activate the new environment, cd into your new repo, and run python manage.py runserver. You should be set.
Alternatively, you could rebuild PostgreSQL and run again, but that's a bit more of a task; it would work for psycopg2, though. As far as I can tell, the issue comes from using a 64-bit or i386 build when you should be using a 32-bit build, but I'm not sure about this, and the above solution works well to solve the problem and use venv for what you're actually going to be using it for.
I had the same problem as you guys, and I read many pages and couldn't find the answer in any of them. Many solutions were about installing from source and didn't relate to the virtual environment.
I found and tested the following solution and it solved my problem.
1- Make sure your Postgres is NOT higher than version 9.4, according to psycopg2. Check your Python version as well. I use Postgres 9.3.9.
2- The problem comes from a mismatched Python build (32 vs. 64 bit). It should match your operating system's architecture, which is 64-bit. Uninstall all versions of Python and pip from your system. Instructions can be found here, but do NOT remove the Python 2.7 that is the Apple-supplied system Python.
3- Download the "Mac OS X 64-bit/32-bit" installer from the official Python website and install it.
After that, install pip. Note that you should use the command python3.5 to use Python version 3.5. You might install virtualenv from the new pip as well.
4- After all that, you can go into your virtualenv and type pip3 install -r requirements.txt to install all dependencies on your local machine.
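A minimal sketch of steps 3 and 4, assuming the python.org installer provides the python3.5 command and the new environment is called venv (hypothetical):
# install virtualenv with the freshly installed pip
python3.5 -m pip install virtualenv
# create and activate a fresh environment
python3.5 -m virtualenv venv
source venv/bin/activate
# reinstall the project's dependencies
pip3 install -r requirements.txt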
Hope this can help you.