Bazel on CentOS for TensorFlow with CUDA

We had to use Bazel on CentOS 6 (no choice) in order to install TensorFlow, and it worked.
Unfortunately, we were not able to install TensorFlow with CUDA, so for now it runs only on CPUs.
We think this is because the link to the CUDA compiler is wrong.
How can we modify/tune Bazel so that it points to the proper compiler?
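For context, our current understanding (which may be wrong) is that the CUDA paths are not patched inside Bazel itself; TensorFlow's ./configure script records them and passes them to the Bazel build. A rough sketch of how that reconfiguration is usually driven, with placeholder paths that are assumptions rather than our actual layout:
cd tensorflow/                                   # TensorFlow source tree
export TF_NEED_CUDA=1                            # ask for a CUDA-enabled build
export CUDA_TOOLKIT_PATH=/usr/local/cuda         # placeholder: where the CUDA toolkit lives
export CUDNN_INSTALL_PATH=/usr/local/cuda        # placeholder: where cuDNN is installed
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc       # placeholder: the gcc that nvcc should invoke
./configure                                      # records these answers for Bazel
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package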

Related

How to cross-build a Debian package for Raspberry Pi OS 64-bit

I have a working Debian package that I'd like to backport to the current version of Raspberry Pi OS 64-bit (not 32-bit Raspbian).
Confusingly, while Debian itself seems to be robust about enabling cross-builds of its own packages, there is much less official documentation about how Raspberry Pi OS (64-bit) packages are built¹.
Since I'm relatively certain this should be possible, I ask:
How do I take a Debian .dsc / debian/rules and build, on an x86_64 machine, an image compatible with 64-bit Raspberry Pi OS,
without using QEMU to actually build the image on arm64, without access to an actual RPi,
using an existing Debian package that is known to work on sid on aarch64, and should be backportable,
making sure it's actually built against the correct set of Raspbian dependencies.
I'm guessing this is a rather standard thing, I just don't know how to do it. I'm happy with using containers and similar technology, as I can easily integrate that with CI.
I do not plan to use an Arm64 VM, as the software in question takes about an hour to build and test natively on an x86_64 server.
¹I've talked to plugwash of Raspbian fame, and as earlier versions of this question showed, there's significant confusion about the heritage of Raspberry Pi OS 64-bit: it is neither Raspbian nor based on it. But people, including Wikipedia and the RPi Foundation themselves, conflate Raspberry Pi OS and Raspbian ("Raspberry Pi OS, formerly Raspbian"), even though Raspbian is 32-bit only.
The Raspberry Pi documentation here explains how to cross-build the 64-bit kernel from source. What you want to do is, in a way, exactly like that.
Notice this line on the Kernel building page:
sudo apt install crossbuild-essential-arm64
This command on your Linux host machine installs a compiler that runs on an AMD64 machine but produces a binary that runs on an ARM machine.
And this line tells the compiler to actually build the source for that architecture:
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- Image modules dtbs
The targets Image modules dtbs are specific to this project; they may differ for yours.
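As a quick sanity check that the cross toolchain installed above works, you can compile a trivial program on the x86_64 host and inspect the result (the file names here are arbitrary):
aarch64-linux-gnu-gcc --version                  # the cross compiler from crossbuild-essential-arm64
echo 'int main(void) { return 0; }' > hello.c    # throwaway test program
aarch64-linux-gnu-gcc hello.c -o hello           # built on x86_64, targeting arm64
file hello                                       # should report an ELF 64-bit ARM aarch64 binary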
As for your Debian package, there is no way to transform an AMD64 package into an ARM one. If your package does not exist in an official or third-party repository for the Raspberry Pi, it must be built from source.
Find the source code of your package and build it very much like the Raspberry Pi OS kernel above.
If your package has dependencies, it gets a little more complicated. First, install the dependencies on your Raspberry Pi. Then set up a sysroot on your host machine, which is basically a mirror image of the packages installed on the Raspberry Pi. Finally, when compiling your package, pass the sysroot path to the cross compiler so that it can find the dependencies.
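A minimal sketch of that sysroot step, assuming the relevant parts of the Pi's filesystem are mirrored to ~/rpi-sysroot on the host; the host name, source file, and library name are placeholders:
rsync -avz pi@raspberrypi.local:/usr/include ~/rpi-sysroot/usr/    # mirror the Pi's headers
rsync -avz pi@raspberrypi.local:/usr/lib ~/rpi-sysroot/usr/        # mirror the Pi's libraries
aarch64-linux-gnu-gcc --sysroot=$HOME/rpi-sysroot myprog.c -o myprog -lexample   # deps resolved inside the sysroot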
There is another way too: you can put the source code of your package on your Raspberry Pi and build it locally, which can take a very long time depending on the source. Just to give a sense of scale, the Qt source code without the WebEngine module took 48 hours for me; but Qt is big.
In conclusion, if your package's binary is not in any repository, you must compile it from source.
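It is worth adding (this is not part of the answer above, just one standard route) that for software that is already Debian-packaged, Debian's own tooling can drive the cross build directly on a multiarch host; a rough sketch, assuming the build-dependencies are cross-satisfiable and using a placeholder source directory name:
sudo dpkg --add-architecture arm64               # enable arm64 as a foreign architecture
sudo apt-get update
sudo apt-get install crossbuild-essential-arm64
cd mypackage-1.0/                                # unpacked source from the .dsc (placeholder name)
sudo apt-get build-dep -a arm64 ./               # install build-dependencies for the arm64 host architecture
dpkg-buildpackage -a arm64 -Pcross,nocheck -us -uc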
Cross-compiling different projects and executables is a very similar process in each case. To get a clear understanding of it, it can be helpful to look at other projects that have been ported to Raspberry Pi OS, such as Qt, TagLib for Android, and so on.
First, I would take a look at https://github.com/Truelite/qt5custom for inspiration. I checked, and those scripts work. However, you might have problems going completely "QEMU-less"; e.g., in the case of Qt, some libraries needed to be added to the host machine's sysroot, and QEMU was simply the easiest way to add them properly. It seems to me that multiarch Debian has some deficiencies in the field of cross-compilation, and the simplest way to overcome them is to pretend the build is a native one.

Is it possible to run tensorflow-data-validation on MacOS with M1 chip?

Question: Is it possible to run tensorflow-data-validation on MacOS with M1 chip?
Steps taken: I have created a conda environment (tfdv38) in which I have installed the Mac-optimized TensorFlow.
I tried to install the package within the environment, but this didn't work:
(tfdv38) ... % pip install tensorflow-data-validation
ERROR: Could not find a version that satisfies the requirement tensorflow-data-validation
ERROR: No matching distribution found for tensorflow-data-validation
Any suggestions?
At the moment, unfortunately, we don't support TFT, TFX, and TFDV on M1 Macs. We are currently working on this issue and will have an update in the fairly near future. In the meantime, some users have reported success with Rosetta. Other options include using a VM. We understand that neither of those is ideal.
You can also check our TensorFlow Forum; the same discussion is going on here.
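As a concrete sketch of the Rosetta route (a user-reported workaround rather than official support; the environment name is arbitrary), you can create an Intel (osx-64) conda environment so that x86_64 wheels are installed and run under Rosetta 2:
CONDA_SUBDIR=osx-64 conda create -n tfdv38-x86 python=3.8    # force Intel packages for this env
conda activate tfdv38-x86
conda config --env --set subdir osx-64                       # keep the env on osx-64 packages
pip install tensorflow-data-validation                       # x86_64 wheels, executed via Rosetta 2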
TFDV is tested on the following 64-bit operating systems (supported platforms):
macOS 10.14.6 (Mojave) or later
Ubuntu 16.04 or later
Windows 7 or later

Up-to-date recommendations for yocto build host version

Because the documentation recommends Ubuntu 15.10 as a Yocto build host, we went to considerable effort to set this up, only to find that BitBake still tells us that this is not a supported version.
What is the latest recommended Ubuntu version, please? I'm thinking we may as well go with the latest LTS.
If you use the Warrior Yocto release, you can use Ubuntu 18.04, as stated here. For older Yocto releases, you'll need an older host distribution due to GCC support.
Anyway, I suggest you build Yocto within a Docker environment, for example this one.
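For example, the CROPS project publishes ready-made Yocto build containers; a minimal sketch of using one (the crops/poky image tag and a ~/yocto directory containing a poky checkout are assumptions):
docker run --rm -it -v ~/yocto:/workdir crops/poky:ubuntu-18.04 --workdir=/workdir
source poky/oe-init-build-env build              # inside the container, the usual workflow applies
bitbake core-image-minimal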

Open MPI stuck on an older version

I'm building a Raspberry Pi cluster and am using Open MPI to do some parallel processing... I was able to get it up and running with my Raspberry Pi 3 and a few Pi 1s, but when I tried to add another Pi 3 I started getting some errors (Error: unknown option "--hnp-topo-sig")
It's possible that the problem is that the Open MPI versions on my two Pis are different: my first Pi 3 has version 2.0.2 while the other has 1.6.5, which is odd considering I only installed it on that Pi today and on the first Pi about a week ago.
I've tried sudo apt-get update and upgrade, but my Pi keeps telling me that everything is up to date, even though it doesn't seem like it is. So my question is this: how can I update my Open MPI to a newer version so I can run my files? Thanks in advance!
As Gilles noted, Open MPI requires the version to be identical on all machines.
If your Linux distro is telling you that the packaged version of Open MPI is up to date, then you probably have different versions of Linux distros on your different RPi units.
You might want to try:
Installing the same exact Linux distro/version on all your RPi units, and/or
Downloading the latest Open MPI source code tarball from www.open-mpi.org and building/installing Open MPI from source on all your RPi units. That will definitely work, but be aware that Open MPI is a large software package -- compiling it on an RPi will take quite a while.
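A rough sketch of that second option, run on every node (the version, prefix, and job count are illustrative; use whichever matching tarball you download from www.open-mpi.org):
mpirun --version                                 # check what each node currently has; they must match
tar xjf openmpi-2.0.2.tar.bz2                    # tarball downloaded from www.open-mpi.org
cd openmpi-2.0.2
./configure --prefix=/usr/local
make -j2                                         # a low job count keeps memory use sane on a Pi
sudo make install
sudo ldconfig                                    # refresh the shared-library cache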

Installing PostGIS on a Mac

I am trying to install PostGIS on my Mac, but I am not sure whether I should compile it from source or install the binary. When I tried to install the binary, it said that I need to install PostgreSQL 9.1, which I already have. What should I do? Are there any clear instructions for installing it on a Mac?
When I run into trouble, I usually install from source. This gives the configure step and the compilers a better chance to tailor everything to your computer. However, you must have the PostgreSQL development libraries installed to make this work with PostGIS.
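For what it's worth, a sketch of what a PostGIS source build typically looks like (the pg_config path is a placeholder; you will also need the GEOS, PROJ, and libxml2 development libraries installed first):
./configure --with-pgconfig=/usr/local/pgsql/bin/pg_config   # run from the unpacked PostGIS source, pointing at your PostgreSQL install
make
sudo make install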