I'm using Fabric to automate my deployment routines for my projects.
One of them concerns virtualenv replication.
Automating the installation of new packages is pretty straightforward with
local $ pip freeze > requirements.txt
remote $ pip install -r requirements.txt
Now if I don't need a package anymore, I can simply
local $ pip uninstall unused_package
But as pip install won't remove packages that are no longer listed in the requirements file,
how can I automate removing packages from the virtualenv that aren't in the requirements?
I'd like to have a command like:
remote $ pip flush -r requirements.txt
Another approach could be - and I know this is not answering your question perfectly - to use the power of the virtualenv you already have:
It is convenient to have known stable package and application environments, let's say identified by revision control tags, to be able to roll back to a known working combination (this is no replacement for testing or a staging environment, though).
So you could simply set up a new virtual environment ("workon your-tag"), populate it again with "pip install -r", leave the old one behind for some time (e.g. until the new your-tag release is considered stable), and finally remove the old virtualenv(s).
In your fabfile, do something like:
from fabric.api import cd, run

with cd(stage_dir):
    run("./verify_virtual_env.sh %s" % your_tag)
and the "verify_virtual_env.sh" script updates via pip for the given environment.
Why not just a diff with sets? It might require using a get operation, though, if you're operating on a remote box.
On remote
from fabric.api import get, run
run("pip freeze > existing_pkgs.txt")
get("/path/to/existing_pkgs.txt")
So now existing_pkgs.txt is on your local machine. Assuming you have a new requirements file...
# strip newlines so the entries compare (and uninstall) cleanly
with open("requirements.txt", "r") as req_file:
    req_pkgs = set(line.strip() for line in req_file)
with open("existing_pkgs.txt", "r") as existing_pkgs:
    existing = set(line.strip() for line in existing_pkgs)
Do a set operation that gives you the difference (note that difference returns a new set, whereas difference_update modifies the set in place and returns None, which would leave uninstall_these empty):
uninstall_these = existing.difference(req_pkgs)
Then uninstall the pkgs from your remote host
for pkg in uninstall_these:
    # -y answers the uninstall confirmation prompt automatically
    run("pip uninstall -y {}".format(pkg))
I ended up keeping the install and uninstall jobs separate.
Install:
pip install -r requirements.txt
Uninstall:
pip freeze | grep -v -f requirements.txt - | xargs pip uninstall -y
Here pip freeze lists the installed packages, grep -v -f requirements.txt - drops the ones still present in requirements.txt, and xargs feeds the remainder to pip uninstall -y.
pandoc-crossref must match the pandoc version, and only the 0.3.10.0 release works on macOS Big Sur. Thus it is not possible to get pandoc and pandoc-crossref running in a conda environment from the official channel or from conda-forge.
I could easily download the matching binaries from https://github.com/lierdakil/pandoc-crossref/releases/tag/v0.3.10.0 and copy them, e.g. to the bin path:
$ which pandoc-crossref
/usr/local/bin/pandoc-crossref
$ curl -OL https://github.com/lierdakil/pandoc-crossref/releases/download/v0.3.10.0/pandoc-crossref-macOS.tar.xz
$ tar -xJvf pandoc-crossref-macOS.tar.xz
$ mv pandoc-crossref /usr/local/bin/pandoc-crossref
But I think that is not a clean approach, because conda will not know that I updated the version for pandoc-crossref.
What is a clean approach for updating a package managed by conda from a binary available on Github?
Update Feedstock
I updated it on the Conda Forge feedstock, which is what I regard as the "cleanest" solution.
How does one do that? First, OP had posted a comment on the feedstock in the PR that they wanted merged. This was the appropriate first step and hopefully in future cases that should be sufficient to prompt maintainers to act. In this case, it was not sufficient. So, as a follow up, I chatted on the Conda Forge Gitter to point out that the feedstock had gone stale and had non-responding maintainer(s). One of the core Conda Forge members suggested I make a PR bumping the version and adding myself as maintainer, and they merged it for me. In all, this took about 10 mins of work and ~2 hours from start to having an updated package on Anaconda Cloud.
Custom Conda Build
Otherwise, there isn't really a clean solution for non-Python packages outside of building a Conda package. That is, clone the feedstock or write a new recipe, modify it to build from the GitHub reference, then install that build into your environment. It may also be worth uploading to an Anaconda Cloud user account, so there is some non-local reference for it.
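As a rough sketch of that route (the recipe directory is whatever you cloned or wrote; the package name is just the one from this question):
# build the recipe cloned from the feedstock (or written from scratch)
conda build recipe/
# install the resulting local build into the active environment
conda install --use-local pandoc-crossref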
Pip Install (Python Packages Only)
In the special case that it is a Python package, one could dump the environment to YAML, edit to install the package through pip, then recreate the environment.
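A minimal sketch of that round trip, assuming an environment named myenv (a placeholder):
# dump the current environment to a file
conda env export -n myenv > environment.yml
# edit environment.yml so the package is listed under its pip: section,
# e.g. pointing at a GitHub archive of the desired release,
# then recreate the environment from the edited file
conda env remove -n myenv
conda env create -f environment.yml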
This started as a question, but I think I've figured out most of the parts, so am posting it here for reference. It is relatively involved, but I think it may be useful to others contemplating this scenario.
I'm a newb in some of these areas, so if mistakes are made with regard to security issues in Apache or other bad practices, please correct me.
Also note that, as it stands, the local development version that is produced from following the steps below no longer has git enabled on it due to changes between it and the production code. So I will keep the local git repo in another location.
Desired Behaviour
Option One:
Replicate my current Python 2.7, Bottle, MongoDB OpenShift application locally to speed up development time (during git push etc).
Option Two (if significantly easier):
Replicate my current Python 2.7, Bottle, MongoDB OpenShift application locally *without the OpenShift platform* to speed up development time.
Current Behaviour
I have a Python 2.7, Bottle, MongoDB application on OpenShift.
My current workflow is:
Edit locally.
git add --all
git commit -m "here is a message"
git push origin master (this updates the live site on OpenShift)
git push github master (this updates github repo)
Obviously this is not ideal for developing due to the time each push takes before I can see the results.
Directory Structure
This is the structure of my app now that it is running locally:
Environment
Linux Mint 17 Cinnamon
Steps To Replicate Locally
01) MongoDB 2.4.9 - DONE
Install instructions for MongoDB 2.4.9 on Linux Mint 17:
http://docs.mongodb.org/v2.4/tutorial/install-mongodb-on-ubuntu
02) RockMongo 1.1 (which requires Apache, PHP and MongoDB Driver) - DONE
sudo apt-get install apache2 php5
sudo apt-get install php5-dev php5-cli
sudo apt-get install php-pear
pear version
pecl version
sudo pecl install mongo
At this point, I was prompted with something that included [no] and I just pressed Enter.
cd /etc/php5/apache2
sudo vi php.ini
Add this to the end of the file:
extension=mongo.so
Then restart Apache:
sudo /etc/init.d/apache2 restart
Then install RockMongo:
cd /var/www/html
wget https://github.com/iwind/rockmongo/archive/1.1.7.zip
unzip 1.1.7.zip
mv rockmongo-1.1.7 rockmongo
rm 1.1.7.zip
03) Create a clean virtualenv and install packages into it - DONE
virtualenv is a Python package that lets you create independent virtual environments containing their own Python installation and packages.
Install virtualenv through Synaptic Package Manager.
How To Create
https://code.google.com/p/modwsgi/wiki/VirtualEnvironments
By the author of mod_wsgi, Graham Dumpleton.
Why To Create
http://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/
This article is so brilliant it almost makes me want to cry, kudos, kudos.
Commands
Before doing the following, install python2.7-dev, libxml2-dev, libxslt1-dev and apache2-dev via Synaptic Package Manager to resolve errors when doing pip installs later.
# change to your html folder
cd /var/www/html
# this will create a folder called ENV that contains its own instance
# of python without inheriting your system's installed python packages.
# it will also install independent instances of pip and setuptools.
# the --no-site-packages option is the default setting in recent versions
# however I added it just to be sure.
virtualenv --no-site-packages ENV
New python executable in ENV/bin/python
Installing setuptools, pip...done.
# you can 'activate' the virtual environment so that each time you use
# pip it automatically installs packages in the virtual environment.
# change to your virtual environment folder
cd /var/www/html/ENV
# activate the virtual environment
source bin/activate
# you can deactivate this by typing 'deactivate' and it is also
# automatically deactivated each time you close the terminal.
deactivate
# from time to time you can save the names of the packages you have
# installed to your virtual environment via pip to a text file with:
pip freeze > requirements.txt
# note, after installing virtualenv as shown above, you will have some
# packages installed by default.
pip freeze
argparse==1.2.1
wsgiref==0.1.2
# requirements.txt would allow installation of all required packages via:
pip install -r requirements.txt
# install packages, whilst virtualenv is activated
pip install bottle
pip install https://github.com/FedericoCeratto/bottle-cork/archive/master.zip
pip install requests
pip install pymongo==2.6.2
pip install beautifulsoup4
pip install lxml
pip install Beaker
pip install pycrypto
pip install pillow
pip install tldextract
04) Copy existing application files to new location - DONE
cp -r path/to/open_shift_apps/my-app/. /var/www/html
05) Remove files and folders unnecessary for the *local* version from /var/www/html - DONE
rm -r data
rm -r libs
rm -r .openshift
rm -r .git
rm setup.py
rm setup.pyc
rm setup.pyo
06) mod_wsgi - DONE
Through Synaptic Package Manager.
Apache wouldn't work for me unless mod_wsgi was installed at system level, i.e. it didn't work when mod_wsgi was installed within the virtualenv.
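For reference, the same install from the command line; libapache2-mod-wsgi is the package name on Ubuntu 14.04 / Mint 17-era systems:
sudo apt-get install libapache2-mod-wsgi
sudo service apache2 restart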
07) Understand the relationship between the Apache server, mod_wsgi and your application - DONE
Apache
To run a dynamic website locally you need a server; in this case, Apache is used.
mod_wsgi
mod_wsgi is an Apache module that extends Apache so that rules can be added to its configuration pointing to your Python code, which is then run when a user visits a particular path.
08) Configure Apache rules
/etc/apache2/sites-available/000-default.conf
WSGIPythonHome /var/www/html/ENV
WSGIPythonPath /var/www/html:/var/www/html/ENV/lib/python2.7/site-packages:/var/www/html/wsgi
<VirtualHost *:80>
    # for all content in static folder - css, js, img, fonts
    Alias /static/ /var/www/html/wsgi/static/
    # for rockmongo
    Alias /rockmongo /var/www/html/rockmongo
    <Directory /var/www/html/rockmongo>
        Order deny,allow
        Allow from all
    </Directory>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    WSGIScriptAlias / /var/www/html/wsgi/application
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
When a user visits a particular path, Apache looks for an application object which contains code which will run your Python program.
In this case, the object is located at wsgi/application and is triggered when the user goes to localhost.
/var/www/html/wsgi/application
from mybottleapp import application
09) Check file ownership and permissions
If things don't work at any stage of the process, be sure to look at the permissions of your local files. Not having the right permissions could mean that your application is not imported.
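As a hedged example of what to check on a Debian-based system (assuming Apache runs as the www-data user, which is the Ubuntu/Mint default):
# let Apache's user own the application tree
sudo chown -R www-data:www-data /var/www/html
# directories traversable, files readable
sudo chmod -R u=rwX,go=rX /var/www/html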
10) mongodump from OpenShift and mongorestore locally
How to mongodump from OpenShift and mongorestore locally on MongoDB 2.4.9?
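One possible approach, sketched here under the assumption that the OpenShift v2 rhc client tools are installed (the app name, database name and credentials below are placeholders):
# forward the gear's MongoDB port to localhost (prints the forwarded ports)
rhc port-forward -a myapp
# dump the remote database through the forwarded port
mongodump --host 127.0.0.1 --port 27017 -u admin -p password -d mydb -o dump/
# restore the dump into the local MongoDB
mongorestore -d mydb dump/mydb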
Further Reading
How Python web frameworks, WSGI and CGI fit together
https://docs.python.org/2/howto/webservers.html
http://wsgi.readthedocs.org/en/latest/servers.html
https://code.google.com/p/modwsgi/
https://www.python.org/dev/peps/pep-0333
http://bottlepy.org/docs/dev/deployment.html#apache-mod-wsgi
I have a relatively big project that has many dependencies, and I would like to distribute this project around, but installing these dependencies was a bit of a pain and took a very long time (pip install takes quite some time). So I was wondering if it was possible to migrate a whole virtualenv to another machine and have it running.
I tried copying the whole virtualenv, but whenever I try running something, this virtualenv still uses the path of my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory and it solved that issue. However, I'm getting a different issue now, my guess is that this sed changed something.
I've confirmed that I have libssl-dev installed but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl and I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
from within the virtual environment:
pip freeze > requirements.txt
Once on the new machine, copy requirements.txt into the new project folder and run:
pip install -r requirements.txt
then you should have all the packages previously available in the old virtual environment.
When you create a new virtualenv it is configured for the computer it is running on. I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying the lib/pythonX.X/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
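For instance, a sketch of pip's offline workflow (the wheelhouse directory name is arbitrary; on the older pip versions current at the time, the first step was spelled pip install --download instead):
# on a machine with network access, fetch everything once
pip download -r requirements.txt -d ./wheelhouse
# on the target machine, install from the local directory only
pip install --no-index --find-links=./wheelhouse -r requirements.txt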
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the changes. If you choose to do so, you must also edit the first line (the #! shebang) of bin/pserve, which points at the interpreter path.
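For example, reusing the paths from this question (old home /home/sshum, new user dev1):
# rewrite the old interpreter/venv path in both files at once
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' bin/activate bin/pserve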
I've been using pip with virtualenv, and I'm really liking it. I keep all my requirements in a requirements.txt file obtained with this command:
pip freeze > requirements.txt
The only thing that I've really been trying to figure out is this:
How can I remove packages that aren't in my requirements file?
This would be really helpful for when I am moving between different branches.
One possible solution is to do the following:
virtualenv --clear PATH_TO_VENV
pip install -r requirements.txt
However, it's a bit of a drastic solution...
Is there a tool in the Cygwin package similar to apt-get on Debian or yum on Red Hat that allows me to install components from the command line?
Cygwin's setup accepts command-line arguments to install packages from the command-line.
e.g. setup-x86.exe -q -P packagename1,packagename2 to install packages without any GUI interaction ('unattended setup mode').
(Note that you need to use setup-x86.exe or setup-x86_64.exe as appropriate.)
See https://cygwin.com/packages/ for the package list.
For a more convenient installer, you may want to use apt-cyg as your package manager. Its syntax is similar to apt-get, which is a plus. For this, follow the above steps and then use Cygwin Bash for the following steps:
wget https://raw.githubusercontent.com/transcode-open/apt-cyg/master/apt-cyg
chmod +x apt-cyg
mv apt-cyg /usr/local/bin
Now that apt-cyg is installed, here are a few examples of installing some packages:
apt-cyg install nano
apt-cyg install git
apt-cyg install ca-certificates
There is no tool specifically in the 'setup.exe' installer that offers the
functionality of apt-get. There is, however, a command-line package installer
for Cygwin that can be downloaded separately, but it is not entirely stable and
relies on workarounds.
apt-cyg: http://github.com/transcode-open/apt-cyg
Check out the issues tab for the project to see the known problems.
There exist some scripts which can be used as simple package managers for Cygwin. But it's important to know that they will always be quite limited, because of...ehm...Windows.
Installing or removing packages is fine; each package manager for Cygwin can do that. But updating is a pain, since Windows doesn't allow you to overwrite an executable which is currently running. So you can't update e.g. the Cygwin DLL, or any package containing the currently running executable, from Cygwin itself. There is also this note on the Cygwin Installation page:
"The basic reason for not having a more full-featured package manager is that
such a program would need full access to all of Cygwin’s POSIX functionality.
That is, however, difficult to provide in a Cygwin-free environment, such as
exists on first installation. Additionally, Windows does not easily allow
overwriting of in-use executables so installing a new version of the Cygwin
DLL while a package manager is using the DLL is problematic."
Cygwin's setup uses the Windows registry to overwrite executables which are in use,
and this method requires a reboot of Windows. Therefore, it's better to close
all Cygwin processes before updating packages, so you don't have to reboot
your computer to actually apply the changes. Installation of a new package
should be completely without any hassles. I don't think any of the package managers
except Cygwin's setup.exe implements any method to overwrite files in use,
so they would simply fail if they cannot overwrite them.
Some package managers for Cygwin:
apt-cyg
Update: the repository was disabled recently due to copyright issues (DMCA takedown). It looks like the owner of the repository issued the DMCA takedown on his own repository and created a new project called Sage (see below).
The best one for me, simply because it's one of the most recent. It doesn't use Cygwin's setup.exe; rather, it re-implements what setup.exe does. It works correctly on both platforms, x86 as well as x86_64. There are a lot of forks with more or less additional features. For example, the kou1okada fork is one of the improved versions, which is really great.
apt-cyg is just a shell script; there is no installation. Just download it (or clone the repository), make it executable and copy it somewhere on the PATH:
chmod +x apt-cyg # set executable bit
mv apt-cyg /usr/local/bin # move somewhere to PATH
# ...and use it:
apt-cyg install vim
There is also a bunch of forks with different features.
sage
Another package manager implemented as a shell script. I didn't try it but it actually looks good.
It can search for packages in a repository, list packages in a category, check dependencies, list package files, and more. It has features which other package managers don't have.
cyg-apt
Fork of the abandoned original cyg-apt with improvements and bugfixes. It has quite a lot of features and is implemented in Python. Installation is done with make.
Chocolatey’s cyg-get
If you used Chocolatey to install Cygwin, you can install the package cyg-get, which is actually a simple wrapper around Cygwin’s setup.exe written in PowerShell.
Cygwin’s setup.exe
It also has a command-line mode. Moreover, it allows you to upgrade all installed packages at once (as apt-get upgrade does on Debian-based Linux).
Example use:
setup-x86_64.exe -q --packages=bash,vim
You can create an alias for easier use, for example:
alias cyg-get="/cygdrive/d/path/to/cygwin/setup-x86_64.exe -q -P"
Then you can, for example, install Vim package with:
cyg-get vim
First, download the installer at: https://cygwin.com/setup-x86_64.exe (Windows 64-bit), then:
# move installer to cygwin folder
mv C:/Users/<you>/Downloads/setup-x86_64.exe C:/cygwin64/
# add alias to bash_aliases
echo "alias cygwin='C:/cygwin64/setup-x86_64.exe -q -P'" >> ~/.bash_aliases
source ~/.bash_aliases
# make sure bash_aliases gets sourced at login (here via ~/.profile)
echo "source ~/.bash_aliases" >> ~/.profile
e.g.
# install vim
cygwin vim
# see other options
cygwin --help
I wanted a solution for this similar to apt-get --print-uris, but unfortunately apt-cyg doesn't do this. The following is a solution that allowed me to download only the packages I needed, with their dependencies, and copy them to the target for installation. Here is a bash script that parses the output of apt-cyg into a list of URIs:
#!/usr/bin/bash
# Print the mirror-relative download paths for a package and its
# dependencies, skipping anything already recorded as installed.
package=$1
# collect the dependency tree reported by apt-cyg into a unique list
depends=$( \
    apt-cyg depends $package \
    | perl -ne 'while ($x = /> ([^>\s]+)/g) { print "$1\n"; }' \
    | sort \
    | uniq)
# include the requested package itself
depends=$(echo -e "$depends\n$package")
for curpkg in $depends; do
    # skip packages already listed in Cygwin's installed database
    if ! grep -q "^$curpkg " /etc/setup/installed.db; then
        # print the path from the "install:" line of the current
        # version, stopping before any [prev] (older version) entries
        apt-cyg show $curpkg \
        | perl -ne '
            if ($x = /install: ([^\s]+)/) {
                print "$1\n";
            }
            if (/\[prev\]/) {
                exit;
            }'
    fi
done
The above will print out the paths of the packages that need downloading, relative to the cygwin mirror root, omitting any packages that are already installed. To download them, I wrote the output to a file cygwin-packages-list and then used wget:
mirror=http://cygwin.mirror.constant.com/
uris=$(for line in $(cat cygwin-packages-list); do echo "$mirror$line"; done)
wget -x $uris
The installer can then be used to install from a local cache directory. Note that for this to work I needed to copy setup.ini from a previous cygwin package cache to the directory with the downloaded files (otherwise the installer doesn't know what's what).
Old question, but still relevant. Here is what worked for me today (6/26/16).
From the bash shell:
lynx -source rawgit.com/transcode-open/apt-cyg/master/apt-cyg > apt-cyg
install apt-cyg /bin
Dawid Ferenczy's answer is pretty complete, but after trying almost all of his options I found that Chocolatey's cyg-get was the best (at least the only one that I could get to work).
I wanted to install wget; the steps were:
choco install cyg-get
Then:
cyg-get wget
Usually before installing a package one has to know its exact name:
# define a string to search
export to_srch=perl
# get html output of search and pick only the cygwin package names
wget -qO- "https://cygwin.com/cgi-bin2/package-grep.cgi?grep=$to_srch&arch=x86_64" | \
perl -l -ne 'm!(.*?)<\/a>\s+\-(.*?)\:(.*?)<\/li>!;print $2'
# and install; multiple packages can be given at once
# as a comma-separated, quoted list
setup-x86_64.exe -q -s http://cygwin.mirror.constant.com -P "<<chosen_package_name>>"