I have a relatively big project with many dependencies, and I would like to distribute this project around, but installing those dependencies is a bit of a pain and takes a very long time (pip install alone takes quite a while). So I was wondering if it is possible to migrate a whole virtualenv to another machine and have it running there.
I tried copying the whole virtualenv, but whenever I try to run something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure these paths for the new machine?
I tried sed -i 's/sshum/dev1/g' * in the bin directory, and it solved that issue. However, I'm now getting a different issue; my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
From within the virtual environment (for example, the myproject virtual environment), run:
pip freeze > requirements.txt
Once on the new machine, copy requirements.txt into the new project folder and run the terminal command:
sudo pip install -r requirements.txt
Then you should have all the packages that were available in the old virtual environment.
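Putting it together, a minimal sketch for the new machine (the env name backend is taken from the question; sudo is unnecessary once the virtualenv is active):
virtualenv backend   # create a fresh env on the new machine
source backend/bin/activate
pip install -r requirements.txt   # reinstall the pinned packages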
When you create a new virtualenv it is configured for the computer it is running on, and I believe even for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonx.x/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
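If your pip is recent enough, a minimal sketch of that idea (the ./pip-cache directory name is just an example; very old pip versions used a --download-cache flag instead of pip download):
pip download -r requirements.txt -d ./pip-cache   # fetch all packages once
pip install --no-index --find-links=./pip-cache -r requirements.txt   # install offline from the local cache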
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you do so, you must also edit the first line (the #! shebang) of bin/pserve, which points at the interpreter.
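For instance, a hedged sketch assuming the env moved from /home/sshum/backend to /home/dev1/backend (the names come from the question; adjust to your case):
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' backend/bin/*   # rewrite VIRTUAL_ENV and every shebang in bin/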
I updated npm and Node before installing NativeScript (without errors, I might add), but when I attempt to create a new project using tns create MyProjectName, I get the error tns command not found.
After much reading, I'm getting the feeling it has something to do with my PATH.
This is what is outputted in terminal during the NativeScript install regarding TNS:
sudo npm install -g nativescript --unsafe-perm
/Users/martingeldart/.npm-global/bin/tns -> /Users/martingeldart/.npm-global/lib/node_modules/nativescript/bin/tns
/Users/martingeldart/.npm-global/bin/nativescript -> /Users/martingeldart/.npm-global/lib/node_modules/nativescript/bin/tns
...
If I run echo $PATH, this is what outputs:
echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/usr/local/bin/lib/node_modules/nativescript/bin
Which looks really odd to me but I'm no command line expert by any means. In fact, I'm incredibly inexperienced with the whole command line system.
Why am I not able to access the tns command? What is going on with that PATH I echoed?
macOS
Between npm versions, the location where global packages are installed changed. Since the install location moved, the terminal no longer knows where to find the command. The PATH variable tells the shell which directories to search for commands, and in your case it does not include the directory npm installed tns into (/Users/martingeldart/.npm-global/bin).
Now the best way to run a command from an installed package is npx, which is included by default with recent installations of npm.
https://docs.npmjs.com/downloading-and-installing-packages-globally
npx tns
# In your case
npx tns create MyProjectName
There are two other ways to resolve this:
Either fix your global package folder setup:
https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally
Or add the bin folder where nativescript was installed to your PATH manually (usually in .bash_profile) and open a new terminal, as sketched below.
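A hedged sketch of that second option, reusing the ~/.npm-global prefix from the install output in the question:
echo 'export PATH="$HOME/.npm-global/bin:$PATH"' >> ~/.bash_profile   # expose npm's global bin directory
source ~/.bash_profile   # or open a new terminal
tns --version   # the command should now resolve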
I use NativeScript for various projects and have a suggestion for package management. I usually avoid installing global packages because, with multiple projects, versions can conflict if some projects are updated and others are not.
Instead, I create a folder per version I am installing, go into the folder, run npm init, and install the package locally.
mkdir nativescript-project-6-0
cd nativescript-project-6-0
npm init
npm i --save nativescript
Now I have a fixed version to work with and can create other projects with the same version, even if other projects use a newer version of the tool or lib. Inside nativescript-project-6-0 I create my project.
npx tns create MyProjectName
This should create the folder nativescript-project-6-0/MyProjectName. All set and ready to go. Remember that it is always a good idea to use npx in this case, since we want to use the local package.
I have a hard time understanding where the right place is to put code that installs the needed packages for a docker container managed by dokku.
We have a Scala application and, unfortunately, we need one shell call that depends on the environment. I would like to install the required package for the container using "apt-get install". Right now I am using a custom plugin with a file named "post-release-build". However, I don't have permission to install anything in that phase.
Basically, my script that should be invoked looks like this (based on a dockerfile that is available online):
apt-get update
apt-get install -y build-essential xorg libssl-dev libxrender-dev wget gdebi
wget http://download.gna.org/wkhtmltopdf/0.12/0.12.2.1/wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
gdebi -n wkhtmltox-0.12.2.1_linux-trusty-amd64.deb
echo "-----> wkhtmltox installed!"
Is there a way to make this work? I would also prefer to have such a file somewhere in the application so I don't need to set up the environment before pushing the app (in the future).
EDIT:
I have found a plugin that should be capable of installing packages using apt-get (https://github.com/F4-Group/dokku-apt); however, I am a little unlucky because it downloads a package that does not work properly.
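For reference, a hedged sketch of how that plugin is meant to be driven, based on my reading of its README: you commit a plain apt-packages file to the app root listing the packages to install before the build, e.g.:
build-essential xorg libssl-dev libxrender-dev wget gdebi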
Since installing with apt-get downloads a package that fails, I dug deeper into dokku and came up with a new plugin that installs the package for you.
I have created the script, documented how to use it, and licensed it under the MIT license, so feel free to use it. Hopefully it will save you the time I had to spend figuring out what was going on.
URL: https://github.com/mbriskar/dokku-wkhtmltopdf
I have a fresh Julia installation on a machine that I want to use as a number-crunching server for various people in a lab. There seems to be this nice package called JupyterHub, which makes the Jupyter Notebook interface available to several clients simultaneously. A web page which I am unable to find again suggested something like "first install IJulia globally, then install JupyterHub...".
I cannot seem to find a nice way to install ONE package globally.
Update
In Julia v0.7+, we need to use JULIA_DEPOT_PATH instead of JULIA_PKGDIR, and LOAD_PATH looks something like this:
julia> LOAD_PATH
3-element Array{Any,1}:
Base.CurrentEnv()
Any[Base.NamedEnv("v0.7.0"), Base.NamedEnv("v0.7"), Base.NamedEnv("v0"), Base.NamedEnv("default"), Base.NamedEnv("v0.7", create=true)]
"/Users/gnimuc/Codes/julia/usr/share/julia/stdlib/v0.7"
Old Post
"first install IJulia globally, then install JupyterHub..."
I don't know whether that ordering is required, but by following the steps below you can install IJulia after you have installed JupyterHub.
Install packages system-wide/globally for every user
This question has already been answered here by Stefan Karpinski, so all we need to do is use that method to install the IJulia.jl package.
There's a Julia variable called LOAD_PATH that is arranged to point at two system directories under your julia installation. E.g.:
julia> LOAD_PATH
2-element Array{Union(ASCIIString,UTF8String),1}:
"/opt/julia-0.3.3/usr/local/share/julia/site/v0.3"
"/opt/julia-0.3.3/usr/share/julia/site/v0.3"
If you install packages under either of those directories, then everyone using that Julia will see them. One way to do this is to run julia as a user who can write to those directories after doing export JULIA_PKGDIR=/opt/julia-0.3.3/usr/share/julia/site in the shell. That way Julia will use that as its package directory and normal package commands will allow you to install packages for everyone....
Make IJulia work with JupyterHub
In order to make IJulia and JupyterHub work with each other for all users, you should copy the folder your/user/.local/share/jupyter/kernels/julia/ to /usr/local/share/jupyter/kernels/. I wrote down some of the steps that I used in my test Dockerfile. The code is ugly, but it works.
Steps (after you have successfully installed JupyterHub):
Note that you should do the following steps as root, and I assume that your Julia was globally installed at /opt/julia_0.4.0/.
Make our global package directory and set up JULIA_PKGDIR:
mkdir /opt/global-packages
echo 'push!(LOAD_PATH, "/opt/global-packages/.julia/v0.4/")' >> /opt/julia_0.4.0/etc/julia/juliarc.jl
export JULIA_PKGDIR=/opt/global-packages/.julia/
install "IJulia" using package manager:
julia -e 'Pkg.init()'
julia -e 'Pkg.add("IJulia")'
Copy the kernelspecs to /usr/local/share/jupyter/kernels/ so they can be used by any new user added by JupyterHub:
jupyter kernelspec list
cd /usr/local/share/ && mkdir -p jupyter/kernels/
cp -r /home/your-user-name/.local/share/jupyter/kernels/julia-0.4-your-julia-version /usr/local/share/jupyter/kernels/
I'm using Fabric to automate my deployment routines for my projects.
One of them concerns virtualenv replication.
Automating the installation of new packages is straightforward with:
local $ pip freeze > requirements.txt
remote $ pip install -r requirements.txt
Now if I don't need a package anymore, I can simply
local $ pip uninstall unused_package
But as pip install won't remove packages that are no longer listed in the requirements,
how can I automate removing packages from the virtualenv that are no longer present in the requirements?
I'd like to have a command like:
remote $ pip flush -r requirements.txt
Another approach could be - and I know this is not answering your question perfectly - to use the power of the virtualenv you already have:
It is convenient to have known stable package and application environments, let's say identified by revision control tags, to be able to roll back to a known working combination (this is no replacement for testing or a staging environment, though).
So you could simply set up a new virtual environment ("workon your-tag"), populate it again with "pip install -r", leave the old one behind for some time (e.g. until the new your-tag release is considered stable), and finally remove the old virtualenv(s).
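A hedged sketch of that flow with virtualenvwrapper (the environment names are illustrative):
mkvirtualenv myproject-new-tag   # create and activate a fresh env for the new release
pip install -r requirements.txt   # populate it from the pinned requirements
rmvirtualenv myproject-old-tag   # later, once the new release is considered stable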
In your fabfile, do something like:
with cd(stage_dir):
    run("./verify_virtual_env.sh %s" % your_tag)
and the "verify_virtual_env.sh" script updates via pip for the given environment.
Why not just a diff with sets? It might require a get operation, though, if you're operating on a remote box.
On remote
from fabric.api import get, run
run("pip freeze > existing_pkgs.txt")
get("/path/to/existing_pkgs.txt")
So now existing_pkgs is on your local machine. Assuming you have a new requirements file...
with open("requirements.txt", "r") as req_file:
req_pkgs = set(req_file.readlines())
with open("existing_pkgs.txt", "r") as existing_pkgs:
existing = set(existing_pkgs.readlines())
Do an operation that gives you the difference between the sets (note that difference_update mutates the set in place and returns None, so use difference here):
uninstall_these = existing.difference(req_pkgs)
Then uninstall the pkgs from your remote host
for pkg in uninstall_these:
    run("pip uninstall -y {}".format(pkg))   # -y skips the confirmation prompt, which would hang a non-interactive run
I ended up by keeping the install/uninstall jobs separated.
Install:
pip install -r requirements.txt
Uninstall:
pip freeze | grep -v -f requirements.txt - | xargs pip uninstall -y
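(Here grep -v -f requirements.txt - keeps only the lines of pip freeze that do not match any line in requirements.txt; the trailing - tells grep to read the freeze output from stdin.)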
I am trying to install ZODB on a new machine. I would like to match the installation I have on another machine (the newest ZODB does not install correctly with easy_install). I have the easy-install.pth from the original machine that I would like to reproduce on the new one:
import sys; sys.__plen = len(sys.path)
./zodb3-3.10.0b1-py2.5-win32.egg
./zope.interface-3.8.0-py2.5-win32.egg
./zope.event-3.5.1-py2.5.egg
./zdaemon-2.0.4-py2.5.egg
./zconfig-2.9.0-py2.5.egg
./zc.lockfile-1.0.0-py2.5.egg
./transaction-1.1.1-py2.5.egg
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
Is there a way to install these exact files on the new machine? I tried copying the folders onto the new machine, but Python does not see the modules.
pip supports a requirements manifest, or you can use zc.buildout. pip is probably the least friction if you are comfortable with easy_install.
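A hedged sketch of such a manifest, with versions read off the easy-install.pth above (the PyPI project names are my best guess for those eggs):
ZODB3==3.10.0b1
zope.interface==3.8.0
zope.event==3.5.1
zdaemon==2.0.4
ZConfig==2.9.0
zc.lockfile==1.0.0
transaction==1.1.1
Then pip install -r requirements.txt on the new machine should reproduce the set.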
I ended up just copying the .egg folders over to the new machine, as well as the easy-install.pth (which I renamed to zodb.pth), and everything worked great. Not perfect, but exactly what I was looking to do.