I've been using pip with virtualenv, and I'm really liking it. I keep all my requirements in a requirements.txt file obtained with this command:
`pip freeze > requirements.txt`
The only thing that I've really been trying to figure out is this:
How can I remove packages that aren't in my requirements file?
This would be really helpful for when I am moving between different branches.
One possible solution is to do the following:
virtualenv --clear PATH_TO_VENV
pip install -r requirements.txt
However, it's a bit of a drastic solution...
Whenever I want to install a repository, the standard instructions are to import a public key with "rpm --import" and then run "rpm -Uvh" on the actual rpm file, as with ELRepo here: http://elrepo.org/tiki/tiki-index.php
However, why can't I just use yum to install the rpm file? So basically instead of:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
I just type:
yum install http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
And everything works fine! It's much nicer to use yum (or even better, dnf!), since it's easy to keep track of what I've installed, remove things, etc. Surely that's what a package manager is for, right? Why use the rpm command at all?
There are many ways to do it, and yes, you can install the package using yum without any problem. If you did not import the GPG key beforehand, yum will ask you to confirm the import when it first needs the key.
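For illustration, a hedged sketch of what happens after that yum install (the repo file name and key path match ELRepo's conventions but may differ for other repositories):
# The release package ships a repo definition referencing the GPG key,
# so yum can verify signatures and prompts before importing the key:
cat /etc/yum.repos.d/elrepo.repo
# expected to contain something like:
#   gpgcheck=1
#   gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org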
I have a hard time understanding where the right place is to put code that installs the needed packages for a given docker container managed by dokku.
We have a Scala application and, unfortunately, we need one shell call that depends on the environment. I would like to install the required package for the container using "apt-get install". Right now I am using a custom plugin with a file named "post-release-build". However, I don't have permission to install anything during that phase.
Basically, the script that should be invoked looks like this (based on a Dockerfile that is available online):
apt-get update
apt-get install -y build-essential xorg libssl-dev libxrender-dev wget gdebi
wget http://download.gna.org/wkhtmltopdf/0.12/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
gdebi -n wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
echo "-----> wkhtmltox installed!"
Is there a way to make this work? I would also prefer to keep such a file somewhere in the application, so that I don't need to set up the environment before pushing the app (in the future).
EDIT:
I have found a plugin that should be capable of installing packages using apt-get (https://github.com/F4-Group/dokku-apt); however, I am a little bit unlucky, because it downloads a package that does not work properly.
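If I read the dokku-apt README correctly, you list the packages in an apt-packages file at the repository root, which would also keep the setup inside the application; a sketch (the package list here is assumed):
# Create an apt-packages file in the app root; dokku-apt installs what it lists.
cat > apt-packages <<'EOF'
build-essential
xorg
libssl-dev
libxrender-dev
EOF
git add apt-packages    # shipped with the app, so no per-server setup needed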
Since installing it with apt-get alone downloads a package that fails, I dug deeper into dokku and came up with a new plugin that installs the package for you.
I have created a script, documented how to use it, and licensed it under the MIT license, so feel free to use it. Hopefully it will save you the time I had to spend figuring out what was going on.
URL: https://github.com/mbriskar/dokku-wkhtmltopdf
I want to install a Python package from GitHub. It seems that pip install https://github.com/codelucas/newspaper/archive/python-2-head.zip is the way to go. However, this only installs the Python files, without the other folders, and the package breaks because of this.
If I run pip install newspaper (which refers to the same code), the other folders are correctly installed.
I could not figure out whether the problem comes from pip or from the package I'm trying to install (I'm kind of new to Python packaging).
The reason I don't want to use pip install newspaper is that I'm working on a fork of that code that I want to pull from GitHub to my server directly. I have the same problem with my fork.
You can install the latest snapshot from the GitHub repository with the following command:
pip install git+https://github.com/codelucas/newspaper.git
You can find further information in the corresponding section in pip's documentation.
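Since you mention working on a fork, the same VCS syntax lets you point pip at your own repository and branch (the user and branch names below are placeholders):
# Install straight from a fork; @<branch> pins a specific branch or tag.
pip install git+https://github.com/YOUR_USER/newspaper.git@YOUR_BRANCH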
I'm using Fabric to automate my deployment routines for my projects.
One of these routines concerns virtualenv replication.
Automating the installation of new packages is pretty straightforward:
local $ pip freeze > requirements.txt
remote $ pip install -r requirements.txt
Now if I don't need a package anymore, I can simply
local $ pip uninstall unused_package
But pip install won't remove packages that are no longer present in the requirements file, so how can I automate their removal from the virtualenv?
I'd like to have a command like:
remote $ pip flush -r requirements.txt
Another approach could be - and I know this is not answering your question perfectly - to use the power of the virtualenv you already have:
It is convenient to have known stable package and application environments, identified for example by revision-control tags, so you can roll back to a known working combination (this is no replacement for testing or a staging environment, though).
So you could simply set up a new virtual environment ("workon your-tag"), populate it again with "pip install -r", leave the old one behind for some time (e.g. until the new your-tag release is considered stable), and finally remove the old virtualenv(s).
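A minimal sketch with virtualenvwrapper, which provides the workon command (the environment names are placeholders):
mkvirtualenv myapp-1.2.0          # one environment per release tag
workon myapp-1.2.0
pip install -r requirements.txt
# once the next tag is considered stable, drop the old environment entirely:
rmvirtualenv myapp-1.1.0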
In your fabfile do something like:
with cd(stage_dir):
    run("./verify_virtual_env.sh %s" % your_tag)
and have the "verify_virtual_env.sh" script update the given environment via pip.
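The script itself is not shown here; a hypothetical sketch of what it could do (all paths and file names are assumptions):
#!/bin/sh
# verify_virtual_env.sh <tag>: create the env for a tag if missing, then sync it.
TAG="$1"
ENV_DIR="envs/$TAG"
[ -d "$ENV_DIR" ] || virtualenv "$ENV_DIR"
"$ENV_DIR/bin/pip" install -r "requirements-$TAG.txt"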
Why not just a diff with sets? It might require using a get operation, though, if you're operating on a remote box.
On remote
from fabric.api import get, run
run("pip freeze > existing_pkgs.txt")
get("/path/to/existing_pkgs.txt")
So now existing_pkgs.txt is on your local machine. Assuming you have a new requirements file...
with open("requirements.txt", "r") as req_file:
    req_pkgs = set(req_file.readlines())
with open("existing_pkgs.txt", "r") as existing_pkgs:
    existing = set(existing_pkgs.readlines())
Take the set difference (note that difference returns a new set, whereas difference_update mutates in place and returns None):
uninstall_these = existing.difference(req_pkgs)
Then uninstall the packages on your remote host:
for pkg in uninstall_these:
    # strip() drops the newline kept by readlines(); -y skips the prompt
    run("pip uninstall -y {}".format(pkg.strip()))
I ended up keeping the install and uninstall jobs separate.
Install:
pip install -r requirements.txt
Uninstall:
pip freeze | grep -v -f requirements.txt - | xargs pip uninstall -y
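Here the trailing - makes grep read the pip freeze output from stdin, while -f requirements.txt supplies the lines to filter out as patterns. With GNU xargs you can also guard against the empty-input case:
pip freeze | grep -v -f requirements.txt - | xargs -r pip uninstall -y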
I have a relatively big project with many dependencies, and I would like to distribute it around, but installing these dependencies is a bit of a pain and takes a very long time (pip install takes quite some time). So I was wondering whether it is possible to migrate a whole virtualenv to another machine and have it running.
I tried copying the whole virtualenv, but whenever I try running something, this virtualenv still uses the path of my old machine. For instance when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory, so is there a way to have virtualenv reconfigure this path to the new one?
I tried sed -i 's/sshum/dev1/g' * in the bin directory and it solved that issue. However, I'm now getting a different issue, and my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend, but no luck.
Export the virtual environment
From within the virtual environment:
pip freeze > requirements.txt
Once on the new machine, copy requirements.txt into the new project folder and run the terminal command:
sudo pip install -r requirements.txt
Then you should have all the packages that were previously available in the old virtual environment.
When you create a new virtualenv it is configured for the computer it is running on, and I even think for the specific directory it is created in. So you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
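A hedged example (the flag below exists in recent pip versions; older releases used a separate download-cache option instead):
# Reuse downloaded packages across installs by pointing pip at a shared cache.
pip install --cache-dir=/path/to/pip-cache -r requirements.txt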
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you do so, you must also edit the first line (the #! shebang) of bin/pserve, which records the interpreter path.
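For instance, with the paths from the question (old home sshum, new home dev1, taken from the error message above, so treat them as placeholders):
# Point the activate script at the new location...
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' backend/bin/activate
# ...and rewrite the shebang of each console script, e.g. pserve:
sed -i '1s|.*|#!/home/dev1/backend/bin/python|' backend/bin/pserve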