How do I prepare a Raspberry Pi with Raspbian so I can cross compile Qt5 programs from a Linux host? - raspberry-pi

I want to set up a cross-compile environment on Linux for the Raspberry Pi 1.
In particular, I want to try a bleeding-edge setup, i.e. Raspbian testing + the Qt5 dev branch.
This question:
How can I create a modern cross compile toolchain for the Raspberry Pi 1?
...explains how to get a gcc cross compiler that can create code for the Raspberry Pi 1. Are any changes necessary on Raspbian itself to use it? If so, which ones?

A full toolchain is what you need
A toolchain is a set of tools working together to generate binaries for your system. Depending on how you build your toolchain, it may end up being functional only for your own image. That is not actually a problem: you just clone your image and upgrade it at will.
First, understand what you need:
Functional flagship system. This is your reference board and your reference distro bundle: your packages and your stuff. You might want a standard Raspbian, you might want extra stuff like OpenCV, or less stuff, like removing Xorg. You say you want bleeding edge, so fit it to your taste.
Sysroot. Ideally this is a copy of your Functional Flagship System with the added development headers. In my case it is exactly the same; for Raspbian this is an image of your second partition, the one that hosts /.
Cross compiler. This is a compiler that generates code for ARM while running on x86 or x86_64. This is generally a specialized gcc.
Cross compiled qmake. For Qt you need a cross qmake: a qmake that runs on your host and knows everything needed to generate your ARM Qt software.
ARM Qt libraries. This is part of your Functional Flagship System, I just enumerate it here for the sake of clarity. They will get compiled by you using your sysroot and your cross compiler.
Qt Libraries for Cross Compiling. These are a product of the steps you will follow when generating your cross compiling qmake and ARM Qt libraries. They will be installed on your host x86 system.
So how do you get all of this?
Gather Your Very Own Toolchain
Functional Flagship System (FFF). Just get your Raspbian image and install your additional software at will; whatever you want included, install it on a live Raspberry Pi.
Sysroot. Once you have your FFF, use dd to generate an image of your second Raspbian partition: take the card out, insert it into an x86 system and use dd (see the sketch after this list). There are other ways using mount and offsets, but this is a lot simpler.
Cross compiler. Unless you really know what you are doing, just refrain from creating it yourself. There are functional cross compilers.
Qmake for cross compiling, ARM Qt and Qt Libraries. This is the interesting part...
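A minimal sketch of the dd step, assuming the SD card shows up as /dev/sdb on the host and the second partition holds / (check with lsblk first; device names and paths are assumptions):
mkdir -p ~/raspi
sudo dd if=/dev/sdb2 of=$HOME/raspi/sysroot.img bs=4M
sudo mkdir -p /mnt/raspi-sysroot
sudo mount -o loop $HOME/raspi/sysroot.img /mnt/raspi-sysroot
Writing a modified image back later is the same dd with if= and of= swapped.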
Cross Compiling Qt 5
You can go as bleeding edge as you please with Qt as you get it from git. As this is not really a Wiki I will just enumerate the steps. This guide explains it with a lot more detail.
Get your FFF, image and cross compiler working.
git clone your Qt, pick a tag (version)
mount your sysroot
Get ia32-libs if you are under x64
Compile qtbase, then make install. IMPORTANT: building qtbase generates its own qmake; use that one from now on (see the configure sketch after these steps).
Use the generated and installed qmake from qtbase to build any other Qt module you want.
Remember to use make install on all Qt modules you build. All these 'installs' will copy those binaries to your sysroot.
Get your Qt into your FFF. Either copy the folder (and avoid messing with permissions), or, more easily, just umount your sysroot and use dd to dump the modified image back to the very same physical partition you got it from. These are the ARM Qt Libraries.
When building qtbase it will install some stuff into your own x86 system. This is the qmake for cross compiling; use it in Qt Creator, together with your cross compiler, to generate cross compiled binaries.
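As an illustration of the qtbase step, a configure call for a Pi 1 might look roughly like the sketch below. The linux-rasp-pi-g++ device mkspec is the stock Qt one for the Pi 1; the compiler path, the sysroot mount point and the prefixes are assumptions (they follow the dd sketch above), so adapt them to your setup:
git clone git://code.qt.io/qt/qtbase.git -b dev
cd qtbase
./configure -release -opengl es2 \
    -device linux-rasp-pi-g++ \
    -device-option CROSS_COMPILE=/opt/cross-pi-gcc/bin/arm-linux-gnueabihf- \
    -sysroot /mnt/raspi-sysroot \
    -prefix /usr/local/qt5 \
    -hostprefix ~/raspi/qt5-host
make -j4
make install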
Some notes nobody tells you
There seem to be no toolchains ready to download. This is because they depend a lot on your specific setup.
Do not use the system or regular qmake to cross compile. Use your generated qmake, as it fits perfectly with your FFF; it has paths and other specific stuff baked in (see the example after these notes).
I repeat: do not bother creating your cross compiler yourself.
What if you need additional development files? Install them on your FFF, then copy your partition to have your new sysroot.
Yes, you can auto-deploy with Qt and even debug remotely on a live Pi.
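For example, once everything is in place, building and deploying a project from the host could look like this (the qmake path matches the -hostprefix used above; the project path and the Pi's address are assumptions):
~/raspi/qt5-host/bin/qmake ~/src/myapp/myapp.pro
make -j4
scp myapp pi@raspberrypi:/home/pi/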

Installing a bleeding edge development system/toolchain is a bit of a problem... it is a moving target. The following steps worked for me in March 2015; whether they still work 100%, or for how long they will keep working, I cannot say. But if one has read and understood the following walkthrough, it should not be difficult to adjust the process for future Raspbian or Qt5 versions.
The first step should be to update Raspbian. I upgraded to testing. To do this, change the repository in /etc/apt/sources.list to:
deb http://mirrordirector.raspbian.org/raspbian/ testing main contrib non-free rpi
Follow this with the usual apt-get update, apt-get upgrade and apt-get dist-upgrade, or the analogous aptitude commands. After this step one has upgraded to the most recent Raspbian, with all the risks and benefits of a testing release.
Next, a couple of packages need to be installed. Probably not all of them are necessary, e.g. xcb does not work on a RPi, and the RPi has its own set of OpenGL files, but some Raspbian packages don't know this and might pull them in anyway. The packages below allowed me to compile a Qt5 including QtMultimedia:
apt-get install -y "^libxcb.*" libx11-xcb-dev libglu1-mesa-dev libxrender-dev libxi-dev libicu-dev libxslt1-dev
apt-get install -y libssl-dev libxcursor-dev libxrandr-dev libfontconfig1-dev libcap-dev libbz2-dev libgcrypt11-dev
apt-get install -y libpci-dev libnss3-dev libxtst-dev libasound2-dev libcups2-dev libpulse-dev libudev-dev
apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libproxy-dev libmtdev-dev libts-dev
apt-get install -y libxkbcommon-x11-dev libxkbcommon-dev libinput-dev libgbm-dev libjpeg8-dev libgif-dev libopenjpeg-dev
apt-get install -y libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev sqlite3 libsqlite3-dev libwayland-dev
apt-get install -y libdirectfb-dev libegl1-mesa-dev libsystemd-journal-dev libharfbuzz-dev xutils-dev libcairo2-dev
apt-get install -y libffi-dev libpam0g-dev
The next and most important step is also the most unpleasant one. A couple of libraries in Raspbian are symbolic links with absolute paths. This is bad, since those libraries are later not found when Qt5 is compiled. All symlinks of the relevant libs must be turned into symlinks with relative paths. With Google's help a script can be found which does this almost automatically, but for some reason it did not work for me, so I did it manually. If I have to do this more often, I will certainly write my own. This is also the step which is most likely to break: library versions change, so don't blindly copy/paste the commands below.
Not all of the libs below are necessary to compile Qt5, but all of them could become a problem eventually. After this step the Raspberry Pi is ready to be used; the next step is to compile and install Qt5.
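The original list of ln commands depends on the exact library versions installed at the time, so here is only an illustrative sketch of the pattern for a single library (the library name and version are placeholders; check your own links with ls -l):
cd /usr/lib/arm-linux-gnueabihf
ls -l libdbus-1.so     # e.g. libdbus-1.so -> /lib/arm-linux-gnueabihf/libdbus-1.so.3
rm libdbus-1.so
ln -s ../../../lib/arm-linux-gnueabihf/libdbus-1.so.3 libdbus-1.so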
EDIT: One of the side effects of writing such a mini-tutorial is that one thinks again about certain things one has done. There is a much easier way to convert absolute links into relative links: symlinks.
So:
apt-get install symlinks
And then in /usr/lib/ on the Raspberry Pi:
symlinks -cr .

Related

Trouble installing SUMO 0.30.0 in Ubuntu 16.04 from source code

I need to install SUMO 0.30.0 to be used with the VEINS_INET subproject in veins 4.6. I have tried following the instructions here and suggestions from forums, but haven't had any luck installing SUMO. I run ./configure (trying various tool/library options) and then run sudo make, but all I get is "target marouter failed" or "nothing to be done for 'install-exec-am' 'install-data-am'".
Does anyone know how to install sumo-0.30.0 from source and/or make the veins_inet subproject work with the latest version, sumo-0.32.0?
Don't run sudo make.
Your problem is probably related to a dependency/packaging change in 16.04, which is explicitly pointed out in the veins tutorial:
Note that Ubuntu 16.04 no longer includes libproj0; this can be worked around by temporarily adding the package repository of, e.g., Ubuntu Vivid when installing this package.
Short answer: Unfortunately this means that, long-term, you're either going to have to package SUMO yourself, use the versions someone else compiled (see this launchpad for example), or rely on an old version.
Long answer:
In general, I would recommend building SUMO from source by also building its dependencies from source, since I've encountered this problem on various distributions. In particular, the fox, proj and gdal libraries tend to be packaged in different versions, which have to match up with changes in the SUMO source code. I currently use this script (with the package versions downloaded) to compile SUMO -- but this is for 0.30.0, and it breaks if any of the referenced source packages are moved (which happens quite often). My general recommendation would be to either use a completely isolated version of SUMO (i.e., compiling by hand as much as possible) or rely on a pre-packaged version (see above), as long as that version is recent enough to work with VEINS.
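As a rough sketch of the no-sudo approach for a plain source build (the tarball name and prefix are assumptions, and the configure flags for locally built fox/proj/gdal are omitted because they vary between versions):
tar xzf sumo-src-0.30.0.tar.gz
cd sumo-0.30.0
./configure --prefix=$HOME/opt/sumo-0.30.0
make -j$(nproc)
make install    # installs under $HOME, so no sudo is needed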

Swift toolchain location on Linux

I'm looking into running Swift on an Ubuntu 16.04 server. However, I want to be certain about where I should install the toolchain.
From swift.org:
If you installed the Swift toolchain on Linux to a directory other than the system root, you will need to run the following command, using the actual path of your Swift installation...
Then from Kitura's Setting Up instructions:
After extracting the .tar.gz file, update your PATH environment variable so that it includes the extracted tools:
$ export PATH=<path to uncompressed tar contents>/usr/bin:$PATH
Where is the best place to install these type of things? In the past I would rely on apt-get or installation scripts provided by maintainers but this doesn't seem to be the case with Swift.
Are there any benefits or disadvantages to not installing it at the system root?
Note: This question borders on "best practices", which I believe is frowned upon here. I'm sorry about that; I've googled around and this seems to be something that people know implicitly. However, I don't yet, and need some guidance.
The versions of the software in your system root - in /usr/bin, /usr/share, /usr/lib, etc. - are carefully coordinated by the maintainers of your distribution to handle all reasonable dependencies. The maintainers also keep the software up-to-date with bug fixes.
When you need to install software that isn't supplied by your distribution, it's best to install it in a separate directory, such as /opt (in your case, one possibility is /opt/swift-3.1.1). This will avoid overwriting existing installed software (in your case, /usr/bin/lldb and /usr/lib/lldb) with something that's possibly incompatible with other software. And it will make it easy to uninstall (just rm -r /opt/swift-3.1.1 rather than having to get a list of files from the original tarball that are potentially strewn all over /usr).
There is some extra effort: you'll need to add /opt/swift-3.1.1/usr/bin to your PATH [1]. With some software, you'll need to add the directory containing dynamic library files to LD_LIBRARY_PATH. The software's installation instructions typically explain what you need to do.
[1] An alternative to changing PATH is to add a symlink to each new executable, in a directory that's already in your PATH. GNU Stow can help you do this.
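Putting this together, an install into /opt might look roughly like this. The tarball name follows the swift.org naming scheme for Swift 3.1.1 on Ubuntu 16.04, but treat the exact file name and paths as assumptions:
sudo mkdir -p /opt/swift-3.1.1
sudo tar xzf swift-3.1.1-RELEASE-ubuntu16.04.tar.gz -C /opt/swift-3.1.1 --strip-components=1
echo 'export PATH=/opt/swift-3.1.1/usr/bin:$PATH' >> ~/.profile
swift --version    # in a new shell, to confirm the toolchain is found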

How to install python libxml2 in solaris?

I'm good at installing packages in a Linux environment, but I'm a newbie to the Solaris OS. I need to install the Python libxml2 package for my project. Does the command below also work on a Solaris server?
sudo apt-get install libxml2 libxml2-dev
I have tried googling, but unfortunately was not able to find anything.
What you proposed is specific to Debian-based Linux distributions.
IMHO, the fastest way would be to download the libxml2 source code in order to compile and install it yourself.
If you're running Solaris 11, then pkg install libxml2 with sufficient privilege would be the right invocation. Determining the right package name is as simple as pkg search with a reasonable query (assuming that you're still connected to the repository from which you installed the system).
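For instance, on Solaris 11 the lookup and install might look like this (run with sufficient privilege, e.g. via sudo or pfexec; the exact package name reported by the search may differ):
pkg search -r libxml2
sudo pkg install libxml2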
If you're running Solaris 10 or older, then you'll need the original install media, plus whatever patches have been issued that intersect SUNWlxml. But frankly, installing from source is probably easier at that point.

How to build gstreamer ugly plugins from source

I would like to change some code in one element X in the gstreamer ugly plugins, then rebuild and use it. How can I do it?
I have gstreamer-0.10 and the gstreamer-ugly plugins installed.
I would like to download only the gstreamer0.10 ugly plugins code, change it, and use the new lib file. How can I do it?
Unfortunately gstreamer-ugly depends on a lot of stuff in at least libgstreamer and plugins-base (if you're using Linux and your distro provides *-dev packages, as Debian/Ubuntu do).
If you're on Debian you could use dpkg-buildpackage after checking out the source using apt-get source. The big advantage here is that all the build dependencies can be easily installed.
The manual way will probably require you to first build all the other gstreamer packages; have a close look at what ./configure tells you.
I'm working on Debian and have already built gstreamer+plugins to backport the recent ones to Ubuntu (although I'm not sure if I did it in a best-practice way ;) )
/edit: I'll try to cover the basic steps for ubuntu here:
add the source repositories to apt (check the "source code" checkbox in the Ubuntu Software Center's "software sources" tool)
sudo apt-get install dpkg-dev devscripts
sudo apt-get build-dep gst-plugins-ugly0.10
apt-get source gst-plugins-ugly0.10
change to the newly created gst-plugins-ugly* folder
dpkg-buildpackage (and make sure it works)
change the source to your needs
you can rebuild it any time using dpkg-buildpackage (to simply see if it compiles make might be faster though). This creates a .deb file in the parent folder that you can simply install using dpkg -i
If it's a useful change you might want to get in touch with the gstreamer-devs ;)
On a Debian system, run apt-get build-dep gstreamer0.10-plugins-ugly to get all the build dependencies for that package. After that you can build the package from git, a source tarball, or even rebuild the Debian package (using dpkg-buildpackage).
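As a rough sketch of that Debian flow (the source package unpacks into a gst-plugins-ugly0.10-* folder on Debian/Ubuntu, but folder and package names may differ slightly between releases):
sudo apt-get build-dep gstreamer0.10-plugins-ugly
apt-get source gstreamer0.10-plugins-ugly
cd gst-plugins-ugly0.10-*/
# edit the element you want to change, then rebuild:
dpkg-buildpackage -us -uc -b
sudo dpkg -i ../gstreamer0.10-plugins-ugly_*.deb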

Django OS X Wrong JPEG library version: library is 80, caller expects 62 sorl.thumbnail

I'm using sorl.thumbnail for Django locally on my Mac and have been having trouble with PIL, but today I finally managed to get it installed - there was some trouble with libjpeg.
I can now upload and use images - but I can't resize them using sorl.thumbnail.
When I try, I get the following error:
Wrong JPEG library version: library is 80, caller expects 62
Does anyone know a good solution for this?
I don't know whether whatever sorl uses requires an earlier version of libjpeg, or whether there is some ghost install of something still left behind from all of my tries with various methods.
I have:
PIL 1.1.7
libjpeg 8.
Anyone know an approach?
For the benefit of the people from the future who are encountering this error and don't know why, I'd like to post my findings. I hope to give a general understanding of what's gone wrong since the exact commands to fix it may be different on your machine than on my OSX Lion install.
First, since it's easy to get lost in the potential solutions, it's important to understand that the error message is correct when it says Wrong JPEG library version: library is 80, caller expects 62 or some other combination of 62, 70, and 80. These numbers correspond to the different incompatible versions of libjpeg. There are two moving pieces here, the dynamically loaded jpeg library, and the PIL (or Pillow) install. What the error message is saying is that your PIL install was compiled with headers from libjpeg version 6.2, but when it goes to load up the actual shared library, it's being linked to version 8.0.
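One way to see the mismatch for yourself (a sketch for OS X with classic PIL; the printed module path is machine specific) is to check which libjpeg the compiled PIL extension actually links against:
python -c "import _imaging; print(_imaging.__file__)"
otool -L /path/printed/by/the/previous/command/_imaging.so | grep -i jpeg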
The fix is to download, build, and install the libjpeg version you want (any will do, though the later versions build easier on OSX Lion):
wget http://www.ijg.org/files/jpegsrc.v8d.tar.gz
tar xzf jpegsrc*
cd jpeg-*
./configure
make
sudo make install
This should drop 2 files of note in '/usr/local/'. Namely /usr/local/lib/libjpeg.8.dylib and /usr/local/include/jpeglib.h. Now we just have to get PIL (or Pillow) to use these two files at install time, and we're home free. I know there's a better way to do this, but the hack (as recommended by the PIL docs) is to edit the setup.py file of the PIL distribution before you install it. You may get away with just setting JPEG_ROOT = libinclude('/usr/local') near the top of setup.py, though further directory manipulation may be necessary elsewhere in the file.
As you fiddle with the paths, you have to make sure PIL does a full rebuild before you test out whether it linked up to the right library or not. I used a command like rm -rf build && python setup.py install to make sure the library was always freshly linked to the current path I was testing.
I'm sorry this is a rambling answer, but it was very disheartening to have tried every other copy & paste solution out there and have none of them work. Hopefully this answer keeps at least a few folks from wasting numerous hours in search of a simplistic solution.
Good Luck!
If you have MacPorts installed, you should do:
$ sudo port selfupdate
$ sudo port install py27-pil
It's easier than the easy_install method, since MacPorts installs the right dependencies.
I had a slightly different problem than the OP, but I wanted to share my solution here to help someone in the future.
OS: OSX El Capitan
I installed libjpeg-turbo from the precompiled binaries on their website. However, I did not know that I already had a different version of libjpeg installed on my Mac. I was building my C file like this: gcc myfile.c -o myfile.out -L /opt/libjpeg-turbo/lib -ljpeg. This got the library from the correct location, but the compiler was getting the included header file jpeglib.h from the pre-installed location. I changed my build command to this: gcc myfile.c -o myfile.out -I/opt/libjpeg-turbo/include/ -L /opt/libjpeg-turbo/lib -ljpeg and it worked. No more library is 80, caller expects 62!
Like a previous answer, I had a slightly different problem than the OP, but I wanted to share my solution here to help someone in the future.
The only thing that worked for me was forcing pip to build Pillow from source after installing the dev versions of the needed libraries (my code was editing a JPG and adding a label using a custom font). This was on an ARM-based embedded device running Ubuntu Linux using Python 3.7.3.
apt-get install -y libjpeg-dev libfreetype6-dev
pip3 install pillow --global-option="build_ext" --global-option="--enable-jpeg" --global-option="--enable-freetype"