Ubuntu 14.04 upgrade broke all my virtualenvs - virtualenv

I've seen a couple of fixes for this, but none have worked for me; I gather that it's my virtualenvs that got broken. I just upgraded from 12.04 to 14.04, and now none of my Pyramid applications work.
When I run ../bin/pserve development.ini, I get the following error:
ImportError: No module named _ctypes
When I run ../bin/python setup.py develop (and also when I try to run pshell), I get:
ImportError: No module named _io
I've fixed one project (each Pyramid app is in a separate virtualenv) by removing the old project folder, recreating the virtualenv, and then copying my scripts back into it. But this is time-consuming, and I have several projects.
Is there a quick fix for this?
I've seen suggestions to remove duplicate Python installations or to simply reinstall virtualenv, but removing duplicates is not a good option, and the reinstall didn't work for me. Then again, maybe I did something wrong there.
I really think that there should be a quick fix for this. Surely reinstalling all virtualenvs cannot be the only solution?

You can simply copy the upgraded system interpreter over the stale one in the virtualenv:
cp /usr/bin/python2 /path/to/my-virtualenv/bin/python2
or
cp /usr/bin/python3 /path/to/my-virtualenv/bin/python3
(There's no need to make a new virtualenv.)
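A quick way to confirm the copy worked is to import the modules that were failing; the path below is the same placeholder as in the commands above:
/path/to/my-virtualenv/bin/python -c "import _ctypes, _io"
If that exits silently, the interpreter and its standard library are back in sync.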

A quick fix that works is to create a new virtualenv and copy its bin/python to the broken virtualenvs. Five simple steps:
mkvirtualenv lero
cd ~/.virtualenvs
for d in */; do [ "$d" = "lero/" ] || cp lero/bin/python "$d/bin/python"; done
deactivate
rmvirtualenv lero
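The steps above assume virtualenvwrapper (mkvirtualenv, rmvirtualenv, and the ~/.virtualenvs layout). With plain virtualenv the same idea looks roughly like this, where ~/envs is an assumed location for your environments:
virtualenv /tmp/fresh
for d in ~/envs/*/; do cp /tmp/fresh/bin/python "$d/bin/python"; done
rm -rf /tmp/fresh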

Related

fix setuptools version when installing libraries using setup.py

This question is somewhat similar to the one here, but I cannot make it work.
So suppose that I have a set of packages (say two) to install and I want to use pipenv. If I run pipenv install in the directory with a suitable Pipfile, the installation fails because of a metadata issue when installing one of the libraries (say libX) listed in the install_requires of one of the packages. It seems the problem can be fixed by downgrading setuptools to <=58.0.0.
OK. Now, if I first install setuptools<=58.0.0 in the venv and then install my packages, everything works fine. The issue is that the Pipfile does not respect ordering when installing, so something like
[packages]
setuptools = "<=58.0.0"
pckg1 = {<github path 1>}
pckg2 = {<github path 2>}
is not guaranteed to work. Also, by default, the seed packages added to the venv include setuptools==65.6.3.
So the idea is to restrict the version of setuptools used to generate the metadata of libraries like libX, mimicking the scenario above in which setuptools was installed first. Is there a way to do that?
I have tried placing setuptools<=58.0.0 at the top of the requirements.txt that defines the install_requires of the problematic package, but it does not work.
I have also tried pinning or restricting the version of libX in that requirements.txt file but, surprisingly, pipenv does not seem to care: a verbose install shows that it keeps downgrading libX well below the restriction ("using cached libX-vX.X.X") until it hits a version whose metadata generation fails (why on earth does it do that, even when I call it with pipenv --clear install?).
I am a bit lost about what could be the best solution here. Any help would be very appreciated.
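One workaround worth trying (my suggestion, not something from the question) is pip's constraints mechanism: with reasonably recent pip versions, a constraints file exported through the PIP_CONSTRAINT environment variable is reportedly also honoured inside the isolated build environments where the build-time setuptools is installed. Assuming pipenv passes its environment through to the pip it invokes, a sketch looks like:
echo "setuptools<=58.0.0" > constraints.txt
PIP_CONSTRAINT=$PWD/constraints.txt pipenv install
If pipenv ignores the variable, running the same install with plain pip under the same variable at least tells you whether the constraint by itself cures the metadata failure.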

Cannot use Swift on Ubuntu 18.04

After conscientiously following the install instructions for Linux from swift.org, I cannot compile anything on an Ubuntu 18.04 machine. The REPL seems to work, but during compilation (when calling swift build) the following error appears:
/usr/bin/ld: cannot find -lstdc++
There are more details in the full bug report [SR-9093]. I have no idea how to solve this; similar problems are mentioned in other bug reports, for instance this really old one [SR-35].
What should I do?
Thank you
I am assuming that you already installed libstdc++ and set the permissions properly, but I suspect the installation is corrupted in some way, most likely because libstdc++ was not installed through the package manager. That can leave the package manager's database inconsistent and confuse the rest of the system; copying a file into a library directory by hand only works if nothing else relies on the package manager's view of that directory. So, for now, try installing libstdc++ again. Below is a link to the correct package, compatible with amd64:
http://security.ubuntu.com/ubuntu/pool/main/g/gcc-5/libstdc++6_5.4.0-6ubuntu1~16.04.10_amd64.deb
And below are some links that may help:
https://ubuntuforums.org/showthread.php?t=1425470
https://ubuntuforums.org/showthread.php?t=808045
https://packages.ubuntu.com/search?keywords=libstdc%2B%2B
https://packages.ubuntu.com/xenial/amd64/libstdc++6
Install libstdc++
sudo apt install libstdc++6
It seems possible that the apt install did not run the ldconfig program, which should be run to add the library to the list of those which ld.so knows about.
It looks like you can do it manually:
sudo ldconfig
IMPORTANT CAVEAT: I don't have Ubuntu and haven't been able to test this. And it's a sudo command. Run at your own risk, YMMV, etc.
If this does not work, it's possible that a file called /etc/ld.so.conf is not set up to search the directory where libstdc++ ended up. I wouldn't dare try to describe how to fix that.
sudo apt install -f
The command above should install any missing dependencies.
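To check whether the linker can actually see the library, two quick probes help. Note that ld searches for the unversioned libstdc++.so symlink, which on Ubuntu ships in the -dev packages rather than in libstdc++6; the package name below assumes 18.04's default GCC 7 toolchain:
ldconfig -p | grep libstdc++
find /usr -name 'libstdc++.so*'
sudo apt install libstdc++-7-dev
If find only turns up versioned files such as libstdc++.so.6, installing the -dev package is what adds the plain libstdc++.so that ld is complaining about.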

How do I install the hg-git plugin on Debian Stretch?

Debian Jessie, as well as sid, has a mercurial-git package which contains the hg-git plugin. However, this package was (auto-)removed from Debian Stretch due to a release-critical bug.
But - I need it installed and running. Surely this should be possible, right?
Well, I followed the installation instructions on the plugin page:
I ran apt-get install python-setuptools python-setuptools-git python3-setuptools python3-setuptools-git
I ran easy_install hg-git and it seemed to work
But still, when I run various mercurial operations I get, as the first line, the error message:
*** failed to import extension hgext.git: No module named git
(regardless of whether I'm doing anything git-related or not.)
My questions:
Why is this happening?
What do I need to do in order to make the error message go away while having hggit working?
Apparently, that critical bug doesn't always manifest (perhaps only under very specific circumstances), so you can try installing the Debian sid version of the mercurial-git package (version 0.8.11-1 at the time of writing). Here's a guide on how to do this:
https://linuxaria.com/howto/how-to-install-a-single-package-from-debian-sid-or-debian-testing
My personal preference in this case is to simply install the .deb file, which you can get from here (it's not platform-specific; at the link you'll need to choose a mirror). That makes the error message go away, at least assuming you have:
[extensions]
hgext.bookmarks =
hggit =
in your ~/.hgrc file.
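For reference, installing a single downloaded .deb usually looks like the sketch below; the filename is an assumption based on the sid version mentioned above:
sudo dpkg -i mercurial-git_0.8.11-1_all.deb
sudo apt-get install -f
The second command pulls in any dependencies that dpkg itself could not resolve.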

problems installing multiple versions of perl including latest

I have perl 5.8.8 installed in /usr/bin/perl
I need to use a later version, so am trying to install another version in a different place.
(NB: I started out trying to install perlbrew on the Linux server I'm using, but that's not working; I'm getting all sorts of certificate problems.)
I logged in as root and followed the instructions here to install perl from source:
http://www.cpan.org/src/
This gave me an install of perl in
/root/localperl/bin/perl
That didn't look right to me, so I copied the localperl directory to /usr:
cp -r localperl/ /usr/
Now I can run a script in my /home/myusername/ directory by using
/usr/localperl/bin/perl
So I guess that looks more normal for an alternate install of perl, though:
a) I am not sure this is correct. So the question is: if I stick #!/usr/localperl/bin/perl as the first line of every script, will all be fine?
b) I have no idea how to install modules for this new version. So:
i) How do I build the latest versions of modules for this perl?
ii) Can I copy across all my existing modules that work with 5.8.8?
(Yes, I did attempt to read the docs and saw there were lots of options for configuring the install, but trying one or two of them only made things more confusing.) Any specific help on the above is appreciated.
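For what it's worth, the source build can be pointed at a custom prefix directly instead of being copied around afterwards; the cpan.org instructions use -Dprefix=$HOME/localperl, which is presumably why a build run as root landed in /root/localperl. A sketch with the /usr/localperl prefix from the question, and Some::Module as a placeholder module name:
sh Configure -des -Dprefix=/usr/localperl
make
make test
make install
/usr/localperl/bin/perl -MCPAN -e 'install Some::Module'
Note that modules built for 5.8.8 (XS modules in particular) are generally not binary-compatible with a newer perl, so reinstalling them under the new interpreter is the safer route.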

How to migrate virtualenv

I have a relatively big project with many dependencies that I would like to distribute, but installing those dependencies is a bit of a pain and takes a very long time (pip install takes quite a while). So I was wondering whether it's possible to migrate a whole virtualenv to another machine and have it run there.
I tried copying the whole virtualenv, but whenever I try running something, the virtualenv still uses the paths from my old machine. For instance, when I run
source activate
pserve development.ini
I get
bash: ../bin/pserve: /home/sshum/backend/bin/python: bad interpreter: No such file or directory
This is my old directory. So is there a way to have virtualenv reconfigure this path with a new path?
I tried sed -i 's/sshum/dev1/g' * in the bin directory and that solved the issue. However, I'm now getting a different problem, and my guess is that the sed changed something else.
I've confirmed that I have libssl-dev installed, but when I run python I get:
E: Unable to locate package libssl.so.1.0.0
E: Couldn't find any package by regex 'libssl.so.1.0.0'
But when I run aptitude search libssl I see:
i A libssl-dev - SSL development libraries, header files and documentation
I also tried virtualenv --relocatable backend but no go.
Export the virtual environment
from within the virtual environment:
pip freeze > requirements.txt
Once on the new machine, copy requirements.txt into the new project folder, activate the new virtual environment, and run:
pip install -r requirements.txt
Then you should have all the packages previously available in the old virtual environment. (sudo isn't needed here; run pip from inside the activated virtualenv.)
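Put together, the whole round trip looks roughly like this; the paths and environment name are assumptions, not taken from the question:
# on the old machine, inside the old environment
pip freeze > requirements.txt
# on the new machine
virtualenv ~/backend
source ~/backend/bin/activate
pip install -r requirements.txt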
When you create a new virtualenv, it is configured for the computer it is running on; I even think it is configured for the specific directory it is created in. So I think you should always create a fresh virtualenv when you move your code. What might work is copying lib/pythonX.X/site-packages from your virtualenv directory, but I don't think that is a particularly good solution.
What may be a better solution is using the pip download cache. This will at least speed up the download part of pip install. Have a look at this thread: How do I install from a local cache with pip?
The clean way seems to be with virtualenv --relocatable.
Alternatively, you can do it manually by editing the VIRTUAL_ENV path in bin/activate to reflect the change. If you do, you must also edit the first line (the #! shebang) of bin/pserve, which points at the interpreter path.
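A sketch of that manual fix, using the old path from the error message; the new location under dev1's home directory is an assumption based on the sed command in the question:
OLD=/home/sshum/backend
NEW=/home/dev1/backend
# rewrite every occurrence of the old path (activate, pserve, and the other scripts)
grep -rl "$OLD" "$NEW/bin" | xargs sed -i "s|$OLD|$NEW|g"
Unlike a blanket sed over bin/*, grep -l limits the edit to files that actually contain the old path, so the python binary itself is left untouched.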