Yocto: how to get a link map for the kernel?

I'd like to modify my Bitbake .bbappend file (for linux-imx) to have it generate a linker map file. I've tried things like the following with no success (though they do seem to work correctly for non-kernel recipes):
LDFLAGS += "-Map mymap.map"
LDFLAGS += "-Map=mymap.map"
LDFLAGS_append = " -Map mymap.map"
The end goal is to verify whether the on-chip RAM (which is at a fixed address) is being used.
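One possible direction, assuming the kernel is recent enough to offer CONFIG_VMLINUX_MAP (roughly v5.13+) and that your linux-imx recipe accepts kernel configuration fragments (i.e. it inherits kernel-yocto; not all versions do), is to enable the kernel's own map output through a .cfg fragment instead of injecting LDFLAGS. The file names below are illustrative:
# linux-imx_%.bbappend (sketch)
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://vmlinux-map.cfg"
# files/vmlinux-map.cfg
CONFIG_VMLINUX_MAP=y
Independently of that, the System.map that every kernel build produces already lists the final address of each symbol, which may be enough to check whether anything lands in the fixed on-chip RAM range.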


Yocto: how to configure an out-of-tree kernel module recipe that uses "inherit module"?

I have written a simple inherit module recipe to build a third-party out-of-tree kernel module called u-dma-buf:
SUMMARY = "User space mappable DMA Buffer"
DESCRIPTION = "${SUMMARY}"
LICENSE = "BSD-2-Clause"
LIC_FILES_CHKSUM = "file://LICENSE;md5=bebf0492502927bef0741aa04d1f35f5"
inherit module
SRC_URI = "git://github.com/ikwzm/udmabuf.git"
SRCREV = "9b943d49abc9c92a464e4c71e83d1c479ebbf80e"
S = "${WORKDIR}/git"
RPROVIDES_${PN} += "kernel-module-u-dma-buf"
This works correctly and generates the module file /lib/modules/[version]/extra/u-dma-buf.ko in the image.
However, looking at the docs there is an option, CONFIG_U_DMA_BUF_MGR, that is disabled by default. If I can somehow enable it, I would expect to also find /lib/modules/[version]/extra/u-dma-buf-mgr.ko in the image.
The project has a Kconfig file. Does bitbake have support for integrating Kbuild configuration outside of the kernel tree? If so, how do I hook into this and enable CONFIG_U_DMA_BUF_MGR? And if not, what's my best option, other than patching the Kconfig file to change the default to "y"? (EDIT: this probably won't work, as kernel sources need to be modified to incorporate Kbuild anyway - so probably a dead end unless it's a specific feature of bitbake I haven't encountered yet).
I see that the upstream Makefile also has the option CONFIG_U_DMA_BUF_MGR=m that could be used to enable this feature outside of Kbuild. I'm not sure how to pass that to the make command line though - would I need to write a custom do_compile task? Looking at the module.bbclass code, I can't see any provision for passing such an option to oe_runmake. Should I just copy/paste module_do_compile() from module.bbclass and add CONFIG_U_DMA_BUF_MGR=m? Is that the best way to do this?
So my question is, given I'm using inherit module, what is the proper way to enable this configuration option, given the recipe I have?
According to the Yocto kernel development docs:
If your module Makefile uses a different variable, you might want to override the do_compile step
That suggests to me that the correct and intended course of action in this case is to copy/paste module_do_compile() as a do_compile() override, and modify accordingly (add CONFIG_U_DMA_BUF_MGR=m):
do_compile() {
    unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
    oe_runmake KERNEL_PATH=${STAGING_KERNEL_DIR} \
               KERNEL_VERSION=${KERNEL_VERSION} \
               CC="${KERNEL_CC}" LD="${KERNEL_LD}" \
               AR="${KERNEL_AR}" \
               O=${STAGING_KERNEL_BUILDDIR} \
               KBUILD_EXTRA_SYMBOLS="${KBUILD_EXTRA_SYMBOLS}" \
               CONFIG_U_DMA_BUF_MGR=m \
               ${MAKE_TARGETS}
}
Additionally, add the new module to RPROVIDES_${PN}:
RPROVIDES_${PN} += "kernel-module-u-dma-buf kernel-module-u-dma-buf-mgr"
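A lighter-weight alternative (a sketch I have not verified against this recipe): oe_runmake already forwards EXTRA_OEMAKE to make, so the same variable can in principle be injected without overriding do_compile at all:
EXTRA_OEMAKE += "CONFIG_U_DMA_BUF_MGR=m"
Note that this passes the assignment to every oe_runmake call in the recipe, including do_install, which is likely what you want so that the extra module also gets installed.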

Yocto rust recipe also produces -native output that needs packaging

I tried this approach on hardknott but I couldn't get it to work. The recipe also produces -native output that needs packaging.
It is a Rust recipe that generates an x86_64 app which I would like to package the right way in the SDK, so that it can be used.
I can separate the main package into -native-bin, and I see it in the recipe-sysroot, but I can't get it to populate the recipe-sysroot of the workdir when building the -native-helper recipe. I suspect the reason is this error saying that the main recipe for x86_64 can't be found:
ERROR: Manifest xxxxxx.populate_sysroot not found in vs_imx8mp cortexa53 armv8a-crc armv8a aarch64 allarch x86_64_x86_64-nativesdk (variant '')?
So any helpful information would be appreciated!
Hacked like this:
Recipe.bb:
do_install_append() {
    # Set permission without the execute flag so that it doesn't fail on checks
    chmod 644 ${D}/usr/bin/#RECIPE#-compiler
}
# #RECIPE# generates a compiler during the target generation step
# Separate this into the -native-bin package, and skip the ARCH checks
# Also, in the image file for stations_sdk, move the app to the right dir and add the execute flag
PACKAGES_prepend = "${PN}-native-bin "
PROVIDES_prepend = "${PN}-native-bin "
INSANE_SKIP_${PN}-native-bin = "arch"
FILES_${PN}-native-bin = "/usr/bin/#RECIPE#-compiler"
SYSROOT_DIRS += "/"
Image.bb:
# #RECIPE# produces a compiler as part of the target generation step,
# so we use the recipe and hack it to supply the -compiler as part of the
# host binaries
TOOLCHAIN_TARGET_TASK_append = " #RECIPE#-native-bin"
do_fix_#RECIPE#() {
    mv ${SDK_OUTPUT}/${SDKTARGETSYSROOT}/usr/bin/#RECIPE#-compiler ${SDK_OUTPUT}/${SDKPATHNATIVE}/usr/bin/#RECIPE#-compiler
    chmod 755 ${SDK_OUTPUT}/${SDKPATHNATIVE}/usr/bin/#RECIPE#-compiler
}
SDK_POSTPROCESS_COMMAND_prepend = "do_fix_#RECIPE#; "
In the end, this produces the binary in the right directory.

Control mex link options with g++ from command line

I'm trying to link a library using mex from the command line or, more exactly, from a Makefile, which I post here:
BDDM_MATLAB = #matlabhome#
MEXCC = $(BDDM_MATLAB)/bin/mex
MEXFLAGS = -v -largeArrayDims -O
MEXEXT = mexa64
TDIR = $(abs_top_srcdir)/test
IDIR = $(abs_top_srcdir)/src
LDIR = $(abs_top_srcdir)/lib
LOP1 = $(CUDA_LDFLAGS) $(LIBS)
SOURCES := $(wildcard *.cpp)
OBJS = $(SOURCES:.cpp=.o)
mTESTS = $(addprefix $(TDIR)/, $(SOURCES:.cpp=.$(MEXEXT)))
all: $(TDIR) $(mTESTS)
$(OBJS) : %.o : %.cpp
	$(MEXCC) $(MEXFLAGS) -c -outdir ./ -output $@ $(CUDA_CFLAGS) -I$(IDIR) CFLAGS="\$$CFLAGS -std=c99" $^
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o
	$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ $(LOP1) -lmpdcm LDFLAGS="-lcudart -lcuda"
.PHONY = $(TDIR)
$(TDIR):
	$(MKDIR_P) $@
clean:
	$(RM) *.o
libmpdcm is a static library that includes calls to two shared libraries libcuda and libcudart. My environment has
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH:
My make rule produces
/usr/local/MATLAB/R2014a/bin/mex -v -largeArrayDims -O -L/home/eaponte/projects/test_cpp/lib -outdir /home/eaponte/projects/test_cpp/test test_LayeredEEG.o -L/usr/local/cuda/lib64 -lcudart -lcuda -lmpdcm LDFLAGS="-lcudart -lcuda"
This produces the following g++ command:
/usr/bin/gcc -lcudart -lcuda -shared -O -Wl,--version-script,"/usr/local/MATLAB/R2014a/extern/lib/glnxa64/mexFunction.map" test_LayeredEEG.o -lcudart -lcuda -lmpdcm -L/home/eaponte/projects/test_cpp/lib -L/usr/local/cuda/lib64 -L"/usr/local/MATLAB/R2014a/bin/glnxa64" -lmx -lmex -lmat -lm -lstdc++ -o /home/eaponte/projects/test_cpp/test/test_LayeredEEG.mexa64
The problem is that afterwards I get a linking error in Matlab:
Invalid MEX-file '/home/eaponte/projects/test_cpp/test/test_Fmri.mexa64': /home/eaponte/projects/test_cpp/test/test_Fmri.mexa64: undefined symbol: cudaFree
I know that the solution is simply to put the cuda libraries at the end of the g++ command
/usr/bin/gcc -lcudart -lcuda -shared -O -Wl,--version-script,"/usr/local/MATLAB/R2014a/extern/lib/glnxa64/mexFunction.map" test_LayeredEEG.o -lmpdcm -L/home/eaponte/projects/test_cpp/lib -L/usr/local/cuda/lib64 -L"/usr/local/MATLAB/R2014a/bin/glnxa64" -lmx -lmex -lmat -lm -lstdc++ -lcudart -lcuda -o /home/eaponte/projects/test_cpp/test/test_LayeredEEG.mexa64
How can I achieve that running mex from the command line (or from a Makefile)?
Just to illuminate the problem and solution and offer some help in avoiding the like:
The fundamental rule of linkage with the GNU linker
that your problem makefile transgressed is: In the commandline sequence of entities to be linked, ones that need symbol definitions
must appear before the ones that provide the definitions.
An object file (.o) in the linkage sequence will be incorporated in its entirety in the output executable,
regardless of whether or not it defines any symbols that the executable uses. A library,
on the other hand, is merely examined to see if it provides any definitions for symbols
that are thus-far undefined, and only such definitions as it provides are linked into
the executable (I am simplifying somewhat). Thus, linkage doesn't get started until some object file is seen,
and any library must appear after everything that needs definitions from it.
Breaches of this principle usually arise from inept bundling of some linker flag-options
and some library-options together into a make-variable and its placement in the linkage recipe,
with the result that the bundled options are interpolated at a position that is valid for
the flags but not valid for libraries. This was so in your problem makefile, with LOP1 the
bad bundle.
In the typical case, the bundling causes all of the libraries to be placed before all the object files,
and never mentioned again. So the object files yield undefined symbol errors, because the libraries
they require were seen by the linker before it had discovered any undefined symbols, and were ignored.
In your untypical case, it resulted in libcudart and libcuda being seen later than your only
object file test_LayeredEEG.o - which however required no symbols from them - but earlier than
the only thing that did require symbols from them, the library libmpdcm. So they were ignored,
and you built a .mex64 shared library that had not been linked with them.
Long ago - pre-GCC 4.5 - shared libraries (like libcudart and libcuda) were exempt
from the requirement that they should be needed, at the point when the linker sees them,
in order to be linked. They were linked regardless, like object files, and the belief that
this is so has not entirely died out. It is not so. By default, shared libraries and
static libraries alike are linked if and only if needed-when-seen.
To avoid such traps it is vastly helpful to understand the canonical nomenclature of
the make variables involved in compilation and linkage and their semantics, and
their canonical use in compilation and linkage recipes for make. Mex is a
manipulator of C/C++/Fortran compilers that adds some commandline options of its own:
for make purposes, it is another compiler. For the options that it inherits from and
passes to the underlying compiler, you want to adhere to the usage for that compiler in make recipes.
These are the make variables most likely to matter to you and their meanings:
CC = Your C compiler, e.g. gcc
FC = Your Fortran compiler, e.g. gfortran
CXX = Your C++ compiler, e.g. g++.
LD = Your linker, e.g. ld. But you should know that only for specialized uses
should the linker be directly invoked. Normally, the real linker is invoked on your
behalf by the compiler. It can deduce from the options that you pass it whether you
want compiling done or linking done, and will invoke the appropriate tool. When you
want linking done, it will quietly augment the linker options that you pass with
additional ones that it would be very tiresome to specify, but which ensure
that the linkage acquires all the correct flags and libraries for the language of the
program you are linking. Consequently almost always, specify your compiler as your
linker.
AR = Your archiving tool (static library builder)
CFLAGS = Options for C compilation
FFLAGS = Options for Fortran compilation
CXXFLAGS = Options for C++ compilation
CPPFLAGS = Options for the C preprocessor, for any compiler that uses it. Avoid the common mistake of writing CPPFLAGS when you mean CXXFLAGS
LDFLAGS = Options for linkage, N.B. excluding library options, -l<name>
LDLIBS = Library options for linkage, -l<name>
And the canonical make rules for compiling and linking:
C source file $< to object file $@:
$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<
Free-form Fortran file $< to object file $@, with preprocessing:
$(FC) $(CPPFLAGS) $(FFLAGS) -c -o $@ $<
(without preprocessing, remove $(CPPFLAGS))
C++ source file $< to object file $@:
$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<
Linking object files $^ into an executable $@:
$(<compiler>) $(LDFLAGS) -o $@ $^ $(LDLIBS)
If you can, as much as possible, write makefiles so that a) you assign the right options to the right variables from
this glossary, and b) you use the canonical make recipes, then your path will be much smoother.
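For instance, applied to the variables appearing in the question's makefile, a canonical skeleton might look roughly like this (the target name myprog and the flag values are illustrative; recipe lines must be indented with a tab):
CXX      = g++
CPPFLAGS = -I$(IDIR)
CXXFLAGS = -O2
LDFLAGS  = -L$(LDIR) -L/usr/local/cuda/lib64
LDLIBS   = -lmpdcm -lcudart -lcuda
myprog: $(OBJS)
	$(CXX) $(LDFLAGS) -o $@ $^ $(LDLIBS)
%.o: %.cpp
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<
Because $(LDLIBS) comes last, the libraries are seen only after the object files that need definitions from them, which is exactly the ordering rule stated at the top of this answer.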
And BTW...
Your makefile has the following bug:
.PHONY = $(TDIR)
This is apparently an attempt to make $(TDIR) a phony target,
but the syntax is wrong. It should be:
.PHONY: $(TDIR)
What the assignment does is simply create a make variable called .PHONY with the value of $(TDIR),
and does not make $(TDIR) a phony target.
Which is fortunate, because $(TDIR) is your output directory and not a phony
target.
You wish to ensure that make creates $(TDIR) before you need to output anything into
it, but you do not want it to be a normal prerequisite of those artefacts, which would oblige
make to rebuild them whenever the timestamp of $(TDIR) was touched. That is presumably
why you thought to make it a phony target.
What you actually want $(TDIR) to be is an order-only prerequisite
of the $(mTESTS) that will be output there. The way to do that is to amend the $(mTESTS) rule to be:
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o | $(TDIR)
This will cause $(TDIR) to be made, if needed, before $(mTESTS) is made, but
nevertheless $(TDIR) will not be considered in determining whether $(mTESTS) does
need to be made.
On the other hand, the targets all and clean are phony targets: no such artefacts
are to be made, so you should tell make so with:
.PHONY: all clean
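Putting these fixes together, the affected part of the makefile might look roughly like this (a sketch; it also moves the libraries after the object file that needs them, per the ordering rule discussed above):
.PHONY: all clean
all: $(mTESTS)
$(mTESTS): $(TDIR)/%.$(MEXEXT): %.o | $(TDIR)
	$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ -lmpdcm $(LOP1)
$(TDIR):
	$(MKDIR_P) $@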
As pointed out in the comments, the problem was in the order of the dynamic libraries in the compilation flags. After searching for the reason I found on SO that static libraries need to be linked taking into account the order of dependency. In my case, the library libmpdcm had dependencies on libcuda and libcudart but appeared to their left on the link line. The solution is to swap the order in the makefile from:
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o
	$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ $(LOP1) -lmpdcm LDFLAGS="-lcudart -lcuda"
to
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o
	$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ -lmpdcm $(LOP1)

Finding which file defines extern value

I am using a Raspberry Pi to run some assembler code on the GPU.
It works like this: you assemble the code into a binary file. Then you include it in C code which pushes data to the GPU. This binary file is defined as
extern uint32_t arrayOfCode[];
However, I cannot find where this code is included (it is not in any of the include files).
The whole code can be found here
Upon running the makefile, it works.
The problem comes when I try to build it as a MEX function in Matlab.
Matlab cannot find where the assembler binary code is defined. Thus I have a suspicion that it must be linked somehow in the Makefile, since that's the only difference in building it.
Does anyone have an idea how to find where this extern value is defined?
==EDIT 1==
I posted my own solution for building this into a Matlab library below.
But the question remains more or less the same: how can this makefile
CXX=g++
ASMSRCS := gemm_float.asm
ASMOBJS := $(subst .asm,.do,$(ASMSRCS))
CPPSRCS := $(shell find . -name '*.cpp' -not -name '._*')
CPPOBJS := $(subst .cpp,.o,$(CPPSRCS))
CPPFLAGS=-Ofast -DTARGET_PI -march=armv6 \
	-mfloat-abi=hard \
	-ftree-vectorize \
	-funroll-all-loops \
	-mfpu=vfp
%.cdat: %.asm helpers.asm
	m4 $< | qpu-asm -o $(basename $@).cdat -c g_$(basename $@)Code
%.do: %.cdat
	$(CXX) $(CPPFLAGS) -x c -c $< -o $(basename $@).do
%.o: %.cpp
	$(CXX) $(CPPFLAGS) -fPIC -c $< -o $(basename $@).o
gemm: $(CPPOBJS) $(ASMOBJS)
	g++ -g -O3 -o gemm $(CPPOBJS) $(ASMOBJS) -lblas
include the assembled "gemm_float.asm" code in the C array defined with the keyword extern? I read that these "%" rules in the makefile are rules for dependencies. Okay, that would mean that "gemm" would be built anew if I changed something in the files that "gemm" depends on. Or I might not understand makefiles well enough.
I was able to fix the issue by building the C code and the assembler code separately. This is for anyone who cannot use the original makefile for some reason (mine was trying to build this into a MEX Matlab library).
First, we build code.asm into a binary file:
m4 code.asm | ./qpu-asm -o code.bin
Now we have to adjust our C code in such a way that it loads the .bin code in its main function. I used a function from eman's tutorial which simply reads the assembler code from a bin file and saves it into an array for later use.
#include <stdio.h>

int loadQPUCode(const char *fname, unsigned int *buffer, int len)
{
    FILE *in = fopen(fname, "r");
    if (!in) {
        fprintf(stderr, "Failed to open %s.\n", fname);
        return -1;
    }
    size_t items = fread(buffer, sizeof(unsigned int), len, in);
    fclose(in);
    return items * sizeof(unsigned int);
}
Then it is enough to call this before using any GPU-related function.
loadQPUCode("asmCode.bin",arrayOfCode,CONSTANT_RELATIVE_TO_ASMCODE);
With this adjusted C code and the binary assembler code I was able to build the Matlab library using the Matlab mex function:
mex main.cpp include1.cpp include2.cpp

Automake, generated source files and VPATH builds

I'm doing VPATH builds with automake. I'm now also using generated source, with SWIG. I've got rules in Makefile.am like:
dist_noinst_DATA = whatever.swig
whatever.cpp: whatever.swig
	swig -c++ -php $^
Then the file gets used later:
myprogram_SOURCES = ... whatever.cpp
It works fine when $builddir == $srcdir. But when doing VPATH builds (e.g. mkdir build; cd build; ../configure; make), I get error messages about missing whatever.cpp.
Should generated source files go to $builddir or $srcdir? (I reckon probably $builddir.)
How should dependencies and rules be specified to put generated files in the right place?
Simple answer
You should assume that $srcdir is read-only, so you must not write anything there.
So, your generated source-code will end up in $(builddir).
By default, autotool-generated Makefiles will only look for source-files in $srcdir, so you have to tell it to check $builddir as well. Adding the following to your Makefile.am should help:
VPATH = $(srcdir) $(builddir)
After that you might end up with a no rule to make target ... error, which you should be able to fix by updating your source-generating rule as in:
$(builddir)/whatever.cpp: whatever.swig
# ...
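Put together, the relevant Makefile.am fragment might look roughly like this (a sketch under the assumptions above; -o just makes SWIG write its wrapper output explicitly into $(builddir)):
VPATH = $(srcdir) $(builddir)
dist_noinst_DATA = whatever.swig
myprogram_SOURCES = ... whatever.cpp
$(builddir)/whatever.cpp: whatever.swig
	swig -c++ -php -o $@ $^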
A better solution
You might notice that in your current setup, the release tarball (as created by make dist) will contain the whatever.cpp file as part of your sources, since you added this file to the myprogram_SOURCES.
If you don't want this (e.g. because it might mean that the build-process will really take the pregenerated file rather than generating it again), you might want to use something like the following.
It uses a wrapper source-file (whatever_includer.cpp) that simply includes the generated file, and it uses -I$(builddir) to then find the generated file.
Makefile.am:
dist_noinst_DATA = whatever.swig
whatever.cpp: whatever.swig
	swig -c++ -php $^
whatever_includer.cpp: whatever.cpp
myprogram_SOURCES = ... whatever_includer.cpp
myprogram_CPPFLAGS = ... -I$(builddir)
clean-local::
	rm -f $(builddir)/whatever.cpp
whatever_includer.cpp:
#include "whatever.cpp"
Usually, you want to keep $srcdir readonly, so that if for instance the source is distributed unpacked on a CDROM, you can still run /.../configure from some other part of the file-system.
However if you are using SWIG to generate source code for a wrapper library, you probably want to distribute that SWIG-generated code as well so that your users do not need to install SWIG to compile your code. Then you have indeed a choice: you can decide that the SWIG-generated code should end in $builddir (it's OK: make dist will collect it there and include it in the tarball), or you could decide to output SWIG-generated code in $srcdir since it is really a source from the point of view of the distributed package. An advantage of keeping it in $srcdir is that when make distcheck attempts to build your package from a read-only source directory, it will fail on any attempt to call SWIG to regenerate the wrapper source. If you have your wrapper source in $builddir, you might not notice you have some broken rule that cause SWIG to be run on the user's host; by generating in $srcdir you ensure that SWIG is not needed by your users.
So my preference is to output SWIG wrapper sources in $srcdir. My setup for Python wrappers looks as follows:
EXTRA_DIST = spot.i
python_PYTHON = $(srcdir)/spot.py # _PYTHON is distributed by default
pyexec_LTLIBRARIES = _spot.la
MAINTAINERCLEANFILES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot.py
_spot_la_SOURCES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot_wrap.h
_spot_la_LDFLAGS = -avoid-version -module
_spot_la_LIBADD = $(top_builddir)/src/libspot.la
$(srcdir)/spot_wrap.cxx: $(srcdir)/spot.i
	$(SWIG) -c++ -python -I$(srcdir) -I$(top_srcdir)/src $(srcdir)/spot.i
# Handle the multi-file output of SWIG.
$(srcdir)/spot.py: $(srcdir)/spot.i
	$(MAKE) $(AM_MAKEFLAGS) spot_wrap.cxx
Note that I use $(srcdir) for all targets, because of limitations of the VPATH feature on various flavors of make. My setup to deal with the multiple files output by SWIG could be improved, but as these rules are not run by users and it has never caused me any problem, I do not bother.