Make virtualenvwrapper work with different Python versions

Fix: sorry, all is fine. The error was caused by a module (jinja2) not being installed in this new environment.
This is my first time using virtualenvwrapper, so I am a little confused.
Setup went fine and I read the docs, but I still don't understand a few things.
In my .bashrc file I've set:
# virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Snakepit
source /usr/bin/virtualenvwrapper.sh
I already have my project files, so I thought I should do the following:
Go into the ~/Snakepit/ directory and run mkvirtualenv -p /usr/bin/python2 [ envname ]
(I need this specific version for my project). I then saw the new environment created in the
~/.virtualenvs/ directory.
My command prompt changes, showing me that my new environment is [ envname ].
When I now run python -V, it shows that I am using Python 2.7, so
all is well!
But when I move my project files into the Snakepit directory and try
running my program with python myprogram.py, I get errors as if it
were still trying to run my program with Python 3.
How is that possible when python -V shows version 2.7?

The error was not about which Python version was being run, but about a module missing from the newly created environment. I will leave the question here for future reference.
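
For anyone who hits something similar, a quick sanity check from inside the environment (envname is the placeholder used above):
# Activate the environment created above
workon envname
# The interpreter should live under ~/.virtualenvs/envname/bin/
which python
# Install the module that was actually missing, into this environment only
pip install jinja2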

Related

Debug Error Occurred in Eclipse

I'm trying to debug an open source package called libprotoident in Eclipse (Kepler version) on Debian. Since it has a Makefile, I chose to create an empty Makefile project and then add all the sources into the workspace. After that, the source compiled and ran successfully, just as it does on the command line using the Makefile.
As it provides 4 apps you can use, I chose to run the lpi_protoident tool in the Run Configuration window, as shown in the following image.
The program ran successfully. But when I try to debug it, it generates the following error.
How can I solve this error and debug the project?
The file you are trying to debug is most likely a shell script created by automake that acts as a wrapper around the real executable, which has been built in a hidden directory.
Instead of telling Eclipse that tools/protoident/lpi_protoident is your application, try using tools/protoident/.libs/lpi_protoident instead.
General Answer about the error you are getting
What the "not in executable format: File format not recognized" error means is that lpi_protoident is not an executable on the platform you are working on.
Are you sure it is an executable you can run (e.g. from the command line)?
There is also the small chance that the GDB you are using is somehow incompatible with the executable, but that is less likely.
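One quick way to check which file is the real binary, assuming the paths from the question, is the file utility:
# The wrapper script is typically reported as a shell script
file tools/protoident/lpi_protoident
# The real binary in .libs should be reported as an ELF executable
file tools/protoident/.libs/lpi_protoident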
Building libprotoident from source
(Assuming you are trying to build https://github.com/wanduow/libprotoident)
You are trying to build an automake project. The normal way to do that is to run the configure script, which generates the Makefile; you shouldn't be writing your own Makefile. Please refer to the README in the project, but the key steps are:
Installation
After installing the required libraries, running the following series of commands should install libprotoident:
./bootstrap.sh (only if you've cloned the source from GitHub)
./configure
make
make install
By default, libprotoident installs to /usr/local; this can be changed by appending the --prefix= option to ./configure.
The libprotoident tools are built by default; this can be changed by passing the --with-tools=no option to ./configure.
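For example, combining both options mentioned above (the install prefix here is illustrative):
# Install under /opt/libprotoident and skip building the bundled tools
./configure --prefix=/opt/libprotoident --with-tools=no
make
sudo make install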

Install just one package globally on Julia

I have a fresh Julia installation on a machine that I want to use as a number-crunching server for various people in a lab. There seems to be this nice package called JupyterHub which makes the Jupyter Notebook interface available to various clients simultaneously. A web page, which I am unable to find again, suggested something like "first install IJulia globally, then install JupyterHub..."
I cannot seem to find a nice way to install ONE package globally.
Update
In Julia v0.7+, we need to use JULIA_DEPOT_PATH instead of JULIA_PKGDIR, and the LOAD_PATH looks something like this:
julia> LOAD_PATH
3-element Array{Any,1}:
Base.CurrentEnv()
Any[Base.NamedEnv("v0.7.0"), Base.NamedEnv("v0.7"), Base.NamedEnv("v0"), Base.NamedEnv("default"), Base.NamedEnv("v0.7", create=true)]
"/Users/gnimuc/Codes/julia/usr/share/julia/stdlib/v0.7"
Old Post
"first install IJulia globally, then install JupyterHub..."
I don't know whether this is true or not, but by following the steps below you can install IJulia after you have installed JupyterHub.
Install packages system-wide/globally for every user
This question has already been answered here by Stefan Karpinski, so all we need to do is use his method to install the IJulia.jl package.
There's a Julia variable called LOAD_PATH that is arranged to point at two system directories under your julia installation. E.g.:
julia> LOAD_PATH
2-element Array{Union(ASCIIString,UTF8String),1}:
"/opt/julia-0.3.3/usr/local/share/julia/site/v0.3"
"/opt/julia-0.3.3/usr/share/julia/site/v0.3"
If you install packages under either of those directories, then everyone using that Julia will see them. One way to do this is to run julia as a user who can write to those directories after doing export JULIA_PKGDIR=/opt/julia-0.3.3/usr/share/julia/site in the shell. That way Julia will use that as its package directory and normal package commands will allow you to install packages for everyone....
Make IJulia work with JupyterHub
To make IJulia and JupyterHub work with each other for all users, you should copy the folder your/user/.local/share/jupyter/kernels/julia/ to /usr/local/share/jupyter/kernels/. I wrote down some of the steps that I used in my test Dockerfile below. The code is ugly, but it works.
Steps (after you have successfully installed JupyterHub):
Note that you should do the following steps as root; I assume that your Julia was globally installed at /opt/julia_0.4.0/.
Make the global package directory and set up JULIA_PKGDIR:
mkdir /opt/global-packages
echo 'push!(LOAD_PATH, "/opt/global-packages/.julia/v0.4/")' >> /opt/julia_0.4.0/etc/julia/juliarc.jl
export JULIA_PKGDIR=/opt/global-packages/.julia/
install "IJulia" using package manager:
julia -e 'Pkg.init()'
julia -e 'Pkg.add("IJulia")'
Copy the kernelspecs to /usr/local/share/jupyter/kernels/, where they can be used by any new user added by JupyterHub:
jupyter kernelspec list
cd /usr/local/share/ && mkdir -p jupyter/kernels/
cp -r /home/your-user-name/.local/share/jupyter/kernels/julia-0.4-your-julia-version /usr/local/share/jupyter/kernels/

MATLAB 2014a (8.3) Compiler Runtime Errors libmwlaunchermain.so

MATLAB Compiler Runtime (MCR) 2014a (8.3) errors when trying to launch a deployed application (built with the deploy tool) on Ubuntu 13.04.
Right after installing the MCR, if one runs the deployed application, the following error appears:
error while loading shared libraries: libmwlaunchermain.so: cannot open shared object file: No such file or directory.
Since I have already found a solution to this problem after wasting a day on it, I just want to share it:
This seems to be a problem with the MATLAB MCR installation script that MathWorks designed for Linux, combined with a known Ubuntu bug. To fix it, add your MCR to the $PATH as shown below:
First, make sure to add the missing files to the right folder. In a terminal:
sudo cp /usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/runtime/glnxa64/* /usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/bin/glnxa64
Add the proper library folder to your .profile so that this change persists after logout. On Ubuntu:
gedit ~/.profile
At the end of the file, add the following lines:
#MATLAB MCR
export LD_LIBRARY_PATH=/usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/bin/glnxa64
export XAPPLRESDIR=/usr/local/MATLAB/MATLAB_Compiler_Runtime/v83/X11/app-defaults
export PATH=$PATH:$LD_LIBRARY_PATH
export PATH=$PATH:$XAPPLRESDIR
Invoke the following command in the terminal to make sure that the Ubuntu bug doesn't overwrite your variable:
echo STARTUP=\"/usr/bin/env LD_LIBRARY_PATH=\${LD_LIBRARY_PATH} \${STARTUP}\" | sudo tee /etc/X11/Xsession.d/90preserve_ld_library_path
Reboot
If this solution doesn't work, try reinstalling MATLAB MCR 8.3 from the MathWorks website and repeating the steps.
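After rebooting, a quick sanity check (the v83 paths are those used above; the application name is a placeholder):
# The variable should contain the runtime folder added above
echo $LD_LIBRARY_PATH
# The previously missing library should now resolve for your deployed binary
ldd ./your_deployed_app | grep libmwlaunchermain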
In my case (MATLAB R2016b = v91), the binary generated by MATLAB was accompanied by a shell script which sets up the LD_LIBRARY_PATH for me. If I just run
./run_scriptname.sh
it complains about a missing <deployedMCRroot>. But when I ran the script with
./run_scriptname.sh /home/user/MatlabMCR/v91
it worked out of the box.
For me, it was not obvious that the path shown above is the <deployedMCRroot>, because I had chosen /home/user/MatlabMCR as the installation directory; with the wrong path specified, I got the same error message.

How to migrate virtualenv

I have a relatively big project with many dependencies that I would like to distribute, but installing those dependencies is a bit of a pain and takes a very long time (pip install takes quite a while). So I was wondering whether it is possible to migrate a whole virtualenv to another machine and have it run there.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
That is my old directory. So is there a way to have virtualenv reconfigure this path to the new one?
I tried sed -i 's/sshum/dev1/g' * in the bin directory, and it solved that issue. However, I'm now getting a different problem, and my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl, I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
From within the virtual environment, run:
pip freeze > requirements.txt
For example, for a myproject virtual environment this produces a requirements.txt listing its packages. Once on the new machine, copy the requirements.txt into the new project folder and run:
sudo pip install -r requirements.txt
Then you should have all the packages that were previously available in the old virtual environment.
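A common variant is to recreate a fresh virtualenv rather than installing system-wide (the name backend is taken from the paths in the question):
# On the new machine: create a fresh environment and reinstall from the freeze file
virtualenv backend
source backend/bin/activate
pip install -r requirements.txt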
When you create a new virtualenv, it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you choose to do so, you must also edit the first line (the #! shebang) of bin/pserve, which indicates the interpreter path.
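A sketch of that manual approach, assuming the old and new home directories mentioned in the question:
# Rewrite the old interpreter prefix in the activation script and the console script
sed -i 's|/home/sshum/backend|/home/dev1/backend|g' backend/bin/activate backend/bin/pserve
# The shebang should now point at the new interpreter
head -1 backend/bin/pserve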

Why does Capistrano need modifications to use something like pythonbrew?

As I understand it, all that Capistrano does is ssh into the server and execute the commands we want it to (mostly).
I've used rvm in a couple of past projects and had to install the rvm-capistrano gem; otherwise it failed to find the executables (or so I recall), even though we had a proper .rvmrc file (with the correct ruby and the correct gemset) in the repository.
Similarly, today I was setting up deployment for a project for which I'm using pythonbrew, and a simple "cd #{deploy_to}/current && pythonbrew venv use myenv && gunicorn_django -c gunicorn.py" gave me an error message saying "cannot find the executable gunicorn_django". I suppose this is because the virtualenv was not activated correctly. But didn't we activate the environment when we did "pythonbrew venv use myenv"? The complete command works fine if I ssh into the server and execute it in the shell, but it doesn't when I run it via Capistrano.
My question is: why does Capistrano need modifications to play along with programs like rvm and pythonbrew, even though all it's doing is executing a couple of commands over ssh?
That's because its ssh'ing in doesn't activate your shell's environment, so it isn't picking up the source statements that enable the magic. Just do an rvm use ... before running commands instead of assuming the cd will pick that up automatically; it should be fine then. If you had been using Fabric, there is the env() context manager that you could use to be sure that's run before each command.
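To make this concrete, a hedged sketch of the difference (the paths assume a default pythonbrew install under ~/.pythonbrew; the deploy path is a placeholder):
# An interactive login shell sources your profile, which loads pythonbrew,
# so this works when you type it at the prompt after ssh'ing in:
pythonbrew venv use myenv && gunicorn_django -c gunicorn.py

# Capistrano runs a non-interactive shell, so load the environment explicitly:
ssh user@server 'source "$HOME/.pythonbrew/etc/bashrc" && pythonbrew venv use myenv && cd /path/to/current && gunicorn_django -c gunicorn.py'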