Buildroot packages not in busybox can't be run - buildroot

If I compile custom executables (binaries), or use executables from buildroot packages, the busybox shell keeps saying "applet not found". For instance, I can select coremark from the included packages list, but if I then try running coremark, I get:
~ # coremark
coremark: applet not found
Is there some setting within busybox or buildroot that tells busybox to be willing to load non-busybox executable files? I'm currently running 32-bit RISC-V (rv32) Linux 5.18 and the tip of master buildroot.

I found the issue! On my particular system, only flat binaries are usable. So you either need to add -Wl,-elf2flt=-r to the link invocation or use elf2flt to convert the output into executable (flat) files.
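As a minimal sketch of what that looks like (the cross-compiler name below is an assumption; substitute the one from your buildroot toolchain):

# Ask the elf2flt-aware linker to emit a flat (bFLT) binary,
# keeping relocations (-r) so the kernel loader can fix them up
riscv32-buildroot-linux-uclibc-gcc hello.c -o hello -Wl,-elf2flt=-r

# Sanity check: flat binaries start with the 4-byte magic "bFLT"
head -c 4 hello

For packages built by buildroot itself, the same thing is normally achieved by selecting the FLAT binary format (BR2_BINFMT_FLAT) in the buildroot configuration, if it is offered for your architecture, so that every target binary is linked this way automatically.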

Related

nvidia-docker - can cuda_runtime be available while building a container?

While attempting to compile darknet in the build step of a Docker container, I constantly run into the error include/darknet.h:11:30: fatal error: cuda_runtime.h: No such file or directory.
I am building the container from the instructions here: https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2. I have a simple Dockerfile I am testing with - the relevant parts:
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
...
WORKDIR /
RUN apt-get install -y git
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /darknet
# Set OpenCV makefile flag
RUN sed -i '/OPENCV=0/c\OPENCV=1' Makefile
RUN sed -i '/GPU=0/c\GPU=1' Makefile
#RUN ln -s /usr/local/cuda-9.2 /usr/local/cuda
# HERE I have been playing with commands to show me the state of the docker image to try to troubleshoot the problem
RUN find / -name "cuda_runtime.h"
RUN ls /usr/local/cuda/lib64/
RUN less /usr/local/cuda/README
RUN make
Most of the documentation I see covers using the NVIDIA libraries when running a container, but darknet compiles differently when built with GPU support, so I need cuda_runtime.h to be available at build time.
Perhaps I misunderstand what nvidia-docker is doing - I'm assuming that nvidia-docker exists because the NVIDIA code must be installed on the actual host machine and not inside the container, and that it uses some mechanism to share the "native" code with the containers so the GPU can be managed - is that correct?
Should I even be trying to build darknet when building my container or should I be installing it on the host machine, then making it available somehow to the container? This seems to go against the portability of the containers but I can live with some constraints to get access to the GPU.
FROM nvidia/cuda:9.2-runtime-ubuntu16.04
Your image only has the pieces of CUDA 9.2 needed to run a CUDA app, but it does not have the bits needed to build one.
You need to use the -devel variant.
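For example, swap the base image in the Dockerfile above (the exact tag is an assumption; check that it exists on Docker Hub for your CUDA version):

FROM nvidia/cuda:9.2-devel-ubuntu16.04

The -devel images ship the CUDA headers (including cuda_runtime.h) and nvcc, so the make step can compile darknet with GPU=1 during docker build; at run time the NVIDIA container runtime still mounts the driver libraries from the host, which is the sharing mechanism asked about above.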

-sh /usr/local/sbin/wpa_supplicant no such file or directory

I have built the TI WiLink utilities and integrated them into my rootfs. This was done using PetaLinux 2016.4, and I created an install template app in the Yocto build to copy all the tools and libraries into the rootfs.
When I boot with BOOT.bin and image.ub, I see the files and libraries, but when I try to run, for example, wpa_supplicant, it does not work;
even wpa_supplicant -h won't work.
It shows me the error:
-sh: /usr/local/sbin/wpa_supplicant: no such file or directory.
The file is present and also has executable permissions.
Do you have any idea why it is not able to run?
Thanks
Typically, this means that the executable file is built for the wrong architecture, i.e. there is a mismatch between the environment where you are running and the environment for which you are building. This is how you can check whether they match (execute on the target):
# file /usr/local/sbin/wpa_supplicant
...
# uname -m
...
If you see a mismatch, then it all boils down to how you are building the TI WiLink utilities.
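To illustrate what a mismatch might look like, here is hypothetical output (not taken from the asker's board):

# file /usr/local/sbin/wpa_supplicant
/usr/local/sbin/wpa_supplicant: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2
# uname -m
armv7l

Here the binary targets x86-64 while the board reports armv7l, so the target cannot run it. If the architectures do match, also check the interpreter path shown by file: when that dynamic loader is not present in the rootfs (for example a glibc loader on a rootfs built with a different C library), the shell prints exactly this "no such file or directory" message even though the binary itself exists.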

Debug Error Occurred in Eclipse

I'm trying to debug an open source package called libprotoident in Eclipse (Kepler) on Debian. As it already has a Makefile, I chose to create an empty Makefile project and then add all the sources into the workspace. After that the source compiled and ran successfully, just as it does from the command line using the Makefile.
As it provides four apps you can use, I chose to run lpi_protoident in the run configuration window, as shown in the following image.
The program ran successfully. Now I'm trying to debug it, but it generates the following error.
How can I solve this error and debug the Project?
The file you are trying to debug is most likely a shell script created by automake that acts as a wrapper around the real executable, which has been built in a hidden directory.
Instead of telling Eclipse that tools/protoident/lpi_protoident is your application, try using tools/protoident/.libs/lpi_protoident instead.
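A quick way to confirm this from a terminal (the paths are the ones used above; adjust them if your checkout is laid out differently):

# The top-level name is typically the generated wrapper script...
file tools/protoident/lpi_protoident
# ...while the real ELF binary lives in the hidden .libs directory
file tools/protoident/.libs/lpi_protoident

If the first command reports a shell script and the second an ELF executable, point the Eclipse debug configuration (or gdb) at the .libs binary.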
General Answer about the error you are getting
What the "not in executable format: File format not recognized" error means is that lpi_protoident is not an executable on the platform you are working on.
Are you sure that it is an executable you can run (e.g. from the command line)?
There is also the small chance that the GDB you are using is somehow incompatible with the executable, but that is less likely.
Building libprotoident from source
(Assuming you are trying to build https://github.com/wanduow/libprotoident)
You are trying to build an automake project. The normal way to do that is to run configure to generate the Makefile; you shouldn't be writing your own Makefile. Please refer to the README in the project, but the key steps are:
Installation
After having installed the required libraries, running the following series of commands should install libprotoident
./bootstrap.sh (only if you've cloned the source from GitHub)
./configure
make
make install
By default, libprotoident installs to /usr/local - this can be changed by appending the --prefix= option to ./configure.
The libprotoident tools are built by default - this can be changed by using the --with-tools=no option with ./configure.
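For instance, a build into a home directory (the prefix below is just an example; pick whatever location suits you) could look like this:

./bootstrap.sh                            # only needed for a git clone
./configure --prefix=$HOME/libprotoident  # example prefix instead of /usr/local
make
make install                              # no root needed for a home-directory prefix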

Preventing hardcode path in RPM SPEC file

I am creating an RPM for APC. While writing the spec file, I realized that some commands required at compile time may have paths that keep changing. For example, these commands need to be executed during the build:
$ /usr/local/php/bin/phpize
$ ./configure --with-php-config=/usr/local/php/bin/php-config
But the complete paths of phpize and php-config may change. So how can I handle these dependencies so that I don't have to hard-code the paths in my spec file?
Because these commands are used at build time, the ideal solution to this problem is:
1. Find the packages on your distribution which provide these commands or paths, e.g. php-config is provided by the php-devel package on Fedora. On Fedora you can find it using yum whatprovides "*/php-config", or, if the file is already installed on the system, using rpm -qf /path/to/command.
2. Once you know the packages, add them as BuildRequires tags in the spec file. This makes sure that the paths are always present whenever you build the package from the spec file, even if you use hard-coded paths (which of course isn't the best way to do it).
3. In place of /usr/ you can use %{_prefix}; which macros are available depends entirely on the distribution you are building this RPM on. Check the macro files for path macros. One link which has common macro definitions is here.
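A sketch of how the relevant spec file pieces could look after these changes (the php-devel package name and the %build commands are illustrative, not the exact APC spec):

# Pull in the package that owns phpize and php-config so they are
# guaranteed to exist in the build environment.
BuildRequires:  php-devel

%build
# php-devel puts the tools under %{_bindir} (/usr/bin on Fedora),
# so no hard-coded /usr/local/php/bin path is needed.
phpize
./configure --with-php-config=%{_bindir}/php-config
make %{?_smp_mflags}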

netbeans c++ deployment

I have developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
Abdul Khaliq
I have seen your code; you are probably missing the XML files in the current folder where the executable is located. Paste them there and then run it as ./your-executable
I recommend that you use a makefile to recompile on your target machine which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
Typically, once compiled, your executable will need several libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to copy the executable over and then run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
Of course, you also have the option to statically link all libraries into your executable. However, this is not recommended, since it makes the executable very large and complicates matters.
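A minimal sketch of that check (the binary name and the missing library are made-up examples):

# On the target system, after copying the program over:
ldd ./myprogram
#   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f...)
#   libfoo.so.1 => not found
# Every "not found" line is a library you still have to install
# through the target's package manager.

# Alternative: a fully static link at build time, at the cost of size
g++ -static -o myprogram main.cpp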
What type of package is your NetBeans build creating? deb, rpm? If you are moving the package to a different Linux install, you will need to use that distribution's package type. Ubuntu - deb
Fedora/Redhat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.
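For example, a quick way to see which package family the target system expects (the output values are illustrative):

# On the target machine:
grep '^ID=' /etc/os-release    # e.g. ID=ubuntu or ID=fedora
# Debian/Ubuntu family -> .deb packages; Fedora/Red Hat family -> .rpm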