mongoc library & cross compile for ARM

I'm trying to compile the mongo C driver on an Ubuntu system, targeting the Raspbian ARM architecture.
I'm using the toolchains provided by Raspberry Pi, and here is what I do:
git clone https://github.com/mongodb/mongo-c-driver.git
cd mongo-c-driver
./autogen.sh
./configure --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf --with-libbson=bundled --enable-tests=no --enable-examples=no --enable-static --disable-shared
But it compiles for x86-64:
objdump -a ./.libs/libmongoc-1.0.a
File format elf64-x86-64
What can I do?

$ ./configure --help
describes the env vars that select your cross compiler:
$ export CC=[PATH]/arm-linux-gnueabihf-gcc
$ export CXX=[PATH]/arm-linux-gnueabihf-g++
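Putting it together, a full invocation might look like the sketch below. The toolchain path is illustrative (adjust it to wherever you extracted the Raspberry Pi tools), and note that CC must point at the gcc driver, not cpp, which is only the preprocessor:
$ export PATH=$HOME/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin:$PATH
$ export CC=arm-linux-gnueabihf-gcc
$ export CXX=arm-linux-gnueabihf-g++
$ ./configure --host=arm-linux-gnueabihf --with-libbson=bundled --enable-tests=no --enable-examples=no --enable-static --disable-shared
$ make
$ arm-linux-gnueabihf-objdump -a ./.libs/libmongoc-1.0.a   # should now report elf32-littlearm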

Related

How to install the mongodb-c driver in Ubuntu and use it in a coturn server

I am using coturn and I want to use MongoDB as a database.
When I run the turnserver it shows:
SQLite supported, default database location is /var/lib/turn/turndb
0: Redis supported
0: PostgreSQL supported
0: MySQL supported
0: MongoDB is not supported
0:
0: Default Net Engine version: 3 (UDP thread per CPU core)
I have installed coturn using this command:
sudo apt-get install coturn
and the coturn docs say:
mongo-c-driver packages are not available "automatically". MongoDB
support will not be compiled, unless you install it "manually" before
the TURN server compilation. Refer to
https://github.com/mongodb/mongo-c-driver for installation
instructions of the driver.
and I tried to install the mongo-c driver by following this guide:
Install libmongoc with a Package Manager
apt-get install libmongoc-1.0-0
Build environment on Unix
On Debian / Ubuntu:
$ sudo apt-get install cmake libssl-dev libsasl2-dev
Configuring the build
Preparing a build from a git repository clone
$ git clone https://github.com/mongodb/mongo-c-driver.git
$ cd mongo-c-driver
$ git checkout 1.17.0 # To build a particular release
$ python build/calc_release_version.py > VERSION_CURRENT
$ mkdir cmake-build
$ cd cmake-build
$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..
Executing a build
Building on Unix, macOS, and Windows (MinGW-W64 and MSYS2)
$ cmake --build .
$ sudo cmake --build . --target install
and
~/mongo-c-driver/cmake-build$ cmake --build . help
returned
Unknown argument help
Usage: cmake --build <dir> [options] [-- [native-options]]
Options:
<dir> = Project binary directory to be built.
--target <tgt> = Build <tgt> instead of default targets.
May only be specified once.
--config <cfg> = For multi-configuration tools, choose <cfg>.
--clean-first = Build target 'clean' first, then build.
(To clean only, use --target 'clean'.)
--use-stderr = Ignored. Behavior is default in CMake >= 3.0.
-- = Pass remaining options to the native tool.
and
Generating the documentation
cmake -DENABLE_MAN_PAGES=ON -DENABLE_HTML_DOCS=ON ..
returned
-- No CMAKE_BUILD_TYPE selected, defaulting to RelWithDebInfo
file VERSION_CURRENT contained BUILD_VERSION 1.17.0
-- Build and install static libraries
-- Using bundled libbson
libbson version (from VERSION_CURRENT file): 1.17.0
-- struct timespec found
Adding -fPIC to compilation of bson_static components
CMake Error at CMakeLists.txt:10 (_message):
Could NOT find Sphinx (missing: SPHINX_EXECUTABLE)
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:137 (message)
/usr/share/cmake-3.10/Modules/FindPackageHandleStandardArgs.cmake:378 (_FPHSA_FAILURE_MESSAGE)
build/cmake/FindSphinx.cmake:10 (find_package_handle_standard_args)
src/libbson/CMakeLists.txt:444 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/user/mongo-c-driver/cmake-build/CMakeFiles/CMakeOutput.log".
See also "/home/user/mongo-c-driver/cmake-build/CMakeFiles/CMakeError.log".
cmake --build . --target mongoc-doc
make: *** No rule to make target 'mongoc-doc'. Stop.
and when I restart the coturn server it still shows that MongoDB is not supported.
How can I resolve this issue?
Install Sphinx, which the documentation targets require:
sudo apt-get install python3-sphinx
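That fixes the documentation error, but the packaged coturn binary was built without MongoDB support, so installing the driver afterwards is not enough on its own: the TURN server has to be recompiled so its configure step can detect libmongoc. A rough sequence, assuming the driver installs to the default /usr/local prefix (the coturn URL is the project's GitHub repository; adjust versions as needed):
$ cd ~/mongo-c-driver/cmake-build
$ cmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..
$ cmake --build .
$ sudo cmake --build . --target install
$ # rebuild coturn from source so it picks up the freshly installed driver
$ git clone https://github.com/coturn/coturn.git && cd coturn
$ ./configure
$ make && sudo make install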

Prebuilt sparc bare metal cross compiler not working

I downloaded a prebuilt cross compiler, sparc-elf-4.4.2, and set PATH to sparc-elf-4.4.2/bin, after which I ran sparc-elf-gcc -o matrixmul matrixmul.c in the terminal, only to get the following response:
/home/root/sparc-elf-4.4.2/bin/sparc-elf-gcc: No such file or directory
I have no idea why I get this response.
I just ran into the same problem. It turned out that my OS is a 64-bit Ubuntu system and the compiler is a 32-bit program.
I followed the instructions given here https://askubuntu.com/questions/454253/how-to-run-32-bit-app-in-ubuntu-64-bit :
To run a 32-bit executable file on a 64-bit multi-architecture Ubuntu
system, you have to add the i386 architecture and install the three
library packages libc6:i386, libncurses5:i386, and libstdc++6:i386:
sudo dpkg --add-architecture i386
Or if you are using Ubuntu 12.04 LTS (Precise Pangolin) or below, use this:
echo "foreign-architecture i386" > /etc/dpkg/dpkg.cfg.d/multiarch
Then:
sudo apt-get update
sudo apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386
If that fails, also run:
sudo apt-get install multiarch-support
After these steps, you should be able to run the 32-bit application:
./example32bitprogram
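You can confirm the mismatch before installing anything: the file utility reports the binary's word size, and on a 64-bit system without i386 libraries a 32-bit executable fails with exactly this misleading "No such file or directory" error (the path below is illustrative):
$ file ~/sparc-elf-4.4.2/bin/sparc-elf-gcc
.../sparc-elf-gcc: ELF 32-bit LSB executable, Intel 80386, dynamically linked ...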

How to load or infer onnx models in edge devices like raspberry pi?

I just want to load ONNX models on a Raspberry Pi. How do I load ONNX models on edge devices?
You can use ONNX Runtime for ONNX model inference on a Raspberry Pi. It supports the ARM32v7 architecture. A pre-built binary is not provided as of 2020/1/14, so you need to build it from source. The instructions are described below.
https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/README.md#arm-32v7
Install Docker CE on your development machine by following the instructions here
Create an empty local directory
mkdir onnx-build
cd onnx-build
Save the Dockerfile to your new directory
Dockerfile.arm32v7
FROM balenalib/raspberrypi3-python:latest-stretch-build
ARG ONNXRUNTIME_REPO=https://github.com/Microsoft/onnxruntime
ARG ONNXRUNTIME_SERVER_BRANCH=master
# Enforces cross-compilation through QEMU
RUN [ "cross-build-start" ]
RUN install_packages \
sudo \
build-essential \
curl \
libcurl4-openssl-dev \
libssl-dev \
wget \
python3 \
python3-pip \
python3-dev \
git \
tar \
libatlas-base-dev
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --upgrade wheel
RUN pip3 install numpy
# Build the latest cmake
WORKDIR /code
RUN wget https://github.com/Kitware/CMake/releases/download/v3.14.3/cmake-3.14.3.tar.gz
RUN tar zxf cmake-3.14.3.tar.gz
WORKDIR /code/cmake-3.14.3
RUN ./configure --system-curl
RUN make
RUN sudo make install
# Set up build args
ARG BUILDTYPE=MinSizeRel
ARG BUILDARGS="--config ${BUILDTYPE} --arm"
# Prepare onnxruntime Repo
WORKDIR /code
RUN git clone --single-branch --branch ${ONNXRUNTIME_SERVER_BRANCH} --recursive ${ONNXRUNTIME_REPO} onnxruntime
# Start the basic build
WORKDIR /code/onnxruntime
RUN ./build.sh ${BUILDARGS} --update --build
# Build Shared Library
RUN ./build.sh ${BUILDARGS} --build_shared_lib
# Build Python Bindings and Wheel
RUN ./build.sh ${BUILDARGS} --enable_pybind --build_wheel
# Build Output
RUN ls -l /code/onnxruntime/build/Linux/${BUILDTYPE}/*.so
RUN ls -l /code/onnxruntime/build/Linux/${BUILDTYPE}/dist/*.whl
RUN [ "cross-build-end" ]
Run docker build
This will build all the dependencies first, then build ONNX Runtime and its Python bindings. This will take several hours.
docker build -t onnxruntime-arm32v7 -f Dockerfile.arm32v7 .
Note the full path of the .whl file
Reported at the end of the build, after the # Build Output line.
It should follow the format onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl, but the version number may have changed. You'll use this path to extract the wheel file later.
Check that the build succeeded
Upon completion, you should see an image tagged onnxruntime-arm32v7 in your list of docker images:
docker images
Extract the Python wheel file from the docker image
(Update the path/version of the .whl file with the one noted in step 5)
docker create -ti --name onnxruntime_temp onnxruntime-arm32v7 bash
docker cp onnxruntime_temp:/code/onnxruntime/build/Linux/MinSizeRel/dist/onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl .
docker rm -fv onnxruntime_temp
This will save a copy of the wheel file, onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl, to your working directory on your host machine.
Copy the wheel file (onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl) to your Raspberry Pi or other ARM device
On device, install the ONNX Runtime wheel file
sudo apt-get update
sudo apt-get install -y python3 python3-pip
pip3 install numpy
# Install ONNX Runtime
# Important: Update path/version to match the name and location of your .whl file
pip3 install onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl
Test installation by following the instructions here
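Once the wheel is installed, loading and running a model takes only a few lines. A minimal sketch using the onnxruntime Python API (model.onnx is a placeholder path; the random float32 input assumes a single-input model and is only there to prove the session runs):
$ python3 - <<'EOF'
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")   # placeholder: path to your exported model
inp = sess.get_inputs()[0]
# Replace dynamic dimensions (None or symbolic names) with 1 for a smoke test
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
data = np.random.rand(*shape).astype(np.float32)
print(sess.run(None, {inp.name: data}))
EOF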

--select option for linux ./configure command

I am new to Linux and wanted to install MonoDevelop on my CentOS 6 VM. I found this question: Install Mono and Monodevelop on CentOS 5.x/6.x and was following the instructions outlined there, but when I got to this step:
cd /usr/src
wget http://download.mono-project.com/sources/monodevelop/monodevelop-3.1.1.tar.bz2
tar -xvjf monodevelop-3.1.1.tar.bz2
cd monodevelop-3.1.1
PKG_CONFIG_PATH=/usr/lib/pkgconfig
export PKG_CONFIG_PATH
./configure --prefix=/usr --select
make && make install
Specifically, for the ./configure --prefix=/usr --select command, I got the following error on my system:
configure: error: unrecognized option: `--select'
Try `./configure --help' for more information
I typed ./configure --help in the terminal, but saw no --select option.
What does the --select option do?

Portable binaries with Rust

I have problems building a portable executable with Rust.
Running an executable simply built with cargo build on Ubuntu fails with
./test: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by ./test)
Building with rustc ... -C link-args=-static fails to link correctly (output of ld ./test):
ld: error in ./test(.eh_frame); no .eh_frame_hdr table will be created.
Is there a way around this except building on an older system with an old glibc version?
Glibc is not linked statically (much as we might have liked to, it goes out of its way to prevent this). As a result, the system libraries (libstd and such) are always dependent on the glibc version they were built against. This is why the buildbots in the Linux cluster Mozilla uses are/were old versions of CentOS.
See https://github.com/rust-lang/rust/issues/9545 and https://github.com/rust-lang/rust/issues/7283
Unfortunately at this time I believe there is no workaround aside from making sure you build on a system with an older glibc than you're going to deploy to.
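To see which glibc versions your binary actually requires (and therefore how old a build machine you need), you can inspect its versioned symbol references; the versions printed depend on your binary, something like:
$ objdump -T ./test | grep GLIBC_ | sed 's/.*GLIBC_\([.0-9]*\).*/GLIBC_\1/' | sort -Vu
GLIBC_2.2.5
GLIBC_2.14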
To avoid GLIBC errors, you can compile your own version of Rust against a static alternative libc, musl.
Get the latest stable release of musl and build it with option --disable-shared:
$ mkdir musldist
$ PREFIX=$(pwd)/musldist
$ ./configure --disable-shared --prefix=$PREFIX
then build Rust against musl:
$ ./configure --target=x86_64-unknown-linux-musl --musl-root=$PREFIX --prefix=$PREFIX
then build your project
$ echo 'fn main() { println!("Hello, world!"); }' > main.rs
$ rustc --target=x86_64-unknown-linux-musl main.rs
$ ldd main
not a dynamic executable
For more information, look at the advanced linking section of the documentation.
As reported in the original documentation:
However, you may need to recompile your native libraries against musl
before they can be linked against.
You can also use rustup.
Remove the old Rust installed by rustup.sh:
$ sudo /usr/local/lib/rustlib/uninstall.sh # only if you have an existing installation
$ rm $HOME/.rustup
Install rustup
$ curl https://sh.rustup.rs -sSf | sh
$ rustup default nightly # just for Ubuntu 14.04 (stable Rust 1.11.0 has a linking issue)
$ rustup target add x86_64-unknown-linux-musl
$ export PATH=$HOME/.cargo/bin:$PATH
$ cargo new --bin hello && cd hello
$ cargo run --target=x86_64-unknown-linux-musl
$ ldd target/x86_64-unknown-linux-musl/debug/hello
not a dynamic executable
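For a binary you plan to ship, build in release mode against the same musl target; the result is statically linked and runs on any x86-64 Linux box regardless of its glibc version:
$ cargo build --release --target=x86_64-unknown-linux-musl
$ file target/x86_64-unknown-linux-musl/release/hello
hello: ELF 64-bit LSB executable, x86-64, ... statically linked ...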