I compiled a package that includes binutils, linux-headers, linux-2.6.9, glibc 2.3.2, gcc, etc.
This eventually produces a binary image that is loaded onto a satellite signal receiver.
When I try to run an externally compiled program on it, I get a relocation error: a symbol is not defined in libc.so.6.
The symbol is bcopy. It is defined in the static library (libc.a) and also in the dynamic library libc-2.3.2.so. But how can I get it included in libc.so.6?
Is it possible to export this symbol from libc.so.6?
But how can I get it included in libc.so.6?
The libc.so.6 is (or should be) a symlink to libc-2.3.2.so that you've built.
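If you want to check that on the target image, something along these lines should show where the symlink points and whether the libc you built actually exports bcopy (the /lib paths are examples; adjust them to your root filesystem):
# Where does libc.so.6 point, and does the built libc export bcopy?
ls -l /lib/libc.so.6
nm -D /lib/libc-2.3.2.so | grep bcopy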
I have to merge one of my app's libs with the NVIDIA CUDA static lib, using this horrific, awful CMake code:
GET_TARGET_PROPERTY(OUTPUT_LIB ${LIBNAME} LOCATION)
add_custom_command (TARGET ${LIBNAME}
    POST_BUILD
    COMMAND mv ${OUTPUT_LIB} ${OUTPUT_LIB}.old
    COMMAND echo "create ${OUTPUT_LIB}" > combineLibs.mri
    COMMAND echo "addlib ${OUTPUT_LIB}.old" >> combineLibs.mri
    COMMAND echo "addlib ${CUDA_LOCATION}" >> combineLibs.mri
    COMMAND echo "save" >> combineLibs.mri
    COMMAND echo "end" >> combineLibs.mri
    COMMAND ar -M < combineLibs.mri
    COMMAND rm ${OUTPUT_LIB}.old
    COMMENT "Building merged library for ${LIBNAME} at ${OUTPUT_LIB}, including ${CUDA_LOCATION}"
)
target_link_libraries(${LIBNAME} -pthread -c)
This successfully produces a merged static library that has all the symbols in it. However, the NVIDIA CUDA static lib brought with it dependencies on libpthread and libc in the form of unresolved symbols. Now the merged library also has those unresolved symbols, and the target_link_libraries line doesn't seem to do what I think it does, because the symbols don't get resolved at link time. How do I get the merged static library to dynamically link against libpthread and libc?
The target_link_libraries line does indeed not do what you think.
target_link_libraries(target, options) can have the desired effect of
adding the linker options in options to the linkage of target only if target
is something that is produced by the linker. If no linkage happens in the
production of target then this directive will have no effect.
Your target is a static library. A static library - unlike a program, and unlike
a dynamic/shared library - is not produced by the linker. As your custom_command
in fact illustrates, a static library is produced by the GNU general purpose archiver,
ar. It is nothing but an archive of files which happen to be object files,
but as far as ar is concerned they might as well be the contents of your
Documents, Pictures and Music folders. Since no linkage is involved in the
production of a static library, nothing can be linked with a static library.
An ar archive can be used as a linker input in the linkage of something that
is produced by the linker - a program or a shared library. In that case the
linker will look into the archive to see if it contains any object files it needs
to carry on the linkage. If it finds any, it will extract them from the archive
and link them into the program. The linkage will be exactly the same as if
you had listed the required object files in the linker commandline and not
mentioned the archive at all.
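As a rough sketch (the file names here are made up), these two link commands end up doing the same thing if foo.o is the only archive member the program actually needs:
# linking against the archive extracts only the members needed to resolve symbols:
gcc -o app main.o libmerged.a
# ...which is equivalent to naming that member directly:
gcc -o app main.o foo.o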
But if any of the object files that the linker extracts from an archive bring
with them undefined references, then to get them resolved you must link some
library or libraries that define those symbols in the linkage of the
program or shared library that you want the linker to produce - just as you
must do to resolve undefined references in any other object files you
input to the linkage.
So,
How do I get the merged static library to dynamically link against libpthread and libc?
You can't. It doesn't make sense. Any library dependencies of object files
in a static library can be satisfied only in the linkage of a program or shared library
that has acquired those dependencies by linking those object files.
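In other words, the pthread and libc references carried in by the CUDA objects inside your merged archive get resolved only at the final link of whatever consumes that archive. A hedged sketch, with made-up file names:
# Final link of a program that uses the merged archive; -pthread pulls in
# POSIX threads support, and the GCC driver links libc by default:
gcc -o myprog main.o libmerged.a -pthread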
Finally, -c is not a GCC linkage option that will have the effect of requesting
linkage of libc. It is not a linkage option at all. It is an option that
directs the GCC frontend not to invoke the linker. It is passed to GCC to
request compilation without linkage, and the perverse effect of including it in a
CMake target_link_libraries directive will be to stop any linkage of the
target from happening.
If you want to explicitly request linkage of libc, use -lc, following
the linker usage protocol that -lname requests linkage of libname.
Perhaps you inferred that -c requests linkage of libc from the assumption
that -pthread requests linkage of libpthread. In fact, -lpthread would
request linkage of libpthread. The option -pthread is a more abstract GCC
option, for both compilation and linkage, that means do the right things, for this platform, to link with the Posix Threads
library - which might entail passing -lpthread to the linker, or possibly not.
Thus -pthread is OK as an argument of target_link_libraries that will
have the effect of requesting Posix Threads linkage, but see
answers to cmake and libpthread
for CMake-proper ways of doing this.
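To make the difference between -c and -lc concrete, a minimal sketch (file names are placeholders):
# -c means "compile only, do not link"; no libraries are involved at all:
gcc -c foo.c
# at link time, -lc explicitly requests libc and -pthread asks for whatever
# the platform needs for POSIX threads:
gcc -o foo foo.o -lc -pthread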
I'm having trouble using the CUDA Thrust library in MATLAB MEX code.
I have an example that runs fine externally, but if I compile and run it as a MEX file, it produces "missing symbol" errors at runtime.
It seems specific to the Thrust library. If instead of thrust::device_vector I use cudaMalloc with cudaMemcpy or cublasSetVector then everything is fine.
Minimal example
thrustDemo.cu:
#ifdef MATLAB_MEX_FILE
#include "mex.h"
#include "gpu/mxGPUArray.h"
#endif
#include <thrust/device_vector.h>
#include <vector>
void thrustDemo() {
std::vector<double> foo(65536, 3.14);
thrust::device_vector<double> device_foo(foo);
}
#ifdef MATLAB_MEX_FILE
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, mxArray const *prhs[]) {
thrustDemo();
}
#else
int main(void) { thrustDemo(); }
#endif
The problem
I can compile this from the command line (nvcc thrustDemo.cu) and run the resulting executable just fine.
When I try to build this as a MATLAB MEX file (mexcuda thrustDemo.cu from within MATLAB R2017a), it compiles and links just fine:
>> mexcuda thrustDemo.cu
Building with 'nvcc'.
MEX completed successfully.
But when I try to run it, I get the following error:
>> thrustDemo()
Invalid MEX-file '/home/kqs/thrustDemo.mexa64':
Missing symbol '_ZNKSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE5c_strEv' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNKSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE5emptyEv' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt12length_errorC1EPKc' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt13runtime_errorC2EPKc' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEaSEPKc' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEC1EPKcRKS3_' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEC1ERKS4_' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEC1Ev' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEED1Ev' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEpLEPKc' required by '/home/kqs/thrustDemo.mexa64'
Missing symbol '_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEpLERKS4_' required by '/home/kqs/thrustDemo.mexa64'.
This is pretty foreign to me; can somebody tell me what this means? These look like linker errors, but they're being generated at runtime. Also, I thought Thrust was a template library, so what is there to link to?
And finally, replacing thrust::device_vector with cudaMalloc and either cudaMemcpy or cublasSetVector works just fine. So for now I'm stuck with a bunch of cudaMalloc in my code, which seems...distasteful. I'd really like to be able to use Thrust.
Versions
MATLAB R2017a
nvcc V8.0.61, gcc 5.4.0, Ubuntu 16.04.2
NVidia driver 375.39, GTX 1060 graphics card (Compute Capability 6.1)
Update: ldd output
Per comments, I checked the dependencies of the MEX file using ldd thrustDemo.mexa64:
linux-vdso.so.1 => (0x00007ffdd35ea000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f097eccf000)
libcudart.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudart.so.8.0 (0x00007f097ea69000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f097e852000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f097e489000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f097e180000)
/lib64/ld-linux-x86-64.so.2 (0x0000562df178c000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f097df7b000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f097dd5e000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f097db56000)
I tried looking for one of these missing symbols, and was able to find it:
$ nm -D /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep "_ZNKSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE5c_strEv"
0000000000120be0 W _ZNKSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE5c_strEv
So it seems that MATLAB must be looking in the wrong place.
It turns out this has nothing to do with Thrust but is rather an issue with MATLAB having its own version of the C++ standard libraries.
Thanks to @Navan and @talonmies for their helpful comments.
Interpreting the error
First, MATLAB raises these errors when it loads the MEX file: the MEX file has external dependencies that MATLAB could not find.
After checking these dependencies with the Linux utility ldd and then using nm to list the symbols defined by these libraries, I found that the system version of the libstdc++ shared library actually contains these "missing symbols". That is why the externally compiled version works just fine.
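One way to confirm the mismatch is to compare the newest GLIBCXX version tag advertised by MATLAB's bundled libstdc++ against the system one (the MATLAB path below assumes a default R2017a install under /usr/local/MATLAB):
# newest C++ library ABI tag in MATLAB's bundled copy vs. the system copy:
strings /usr/local/MATLAB/R2017a/sys/os/glnxa64/libstdc++.so.6 | grep GLIBCXX | sort -V | tail -1
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX | sort -V | tail -1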
Solving the issue
The root problem, then, is that MATLAB ships with its own, older version of libstdc++ that is lacking these functions. Knowing the root cause, I found questions like these:
How to tell mex to link with the libstdc++.so.6 in /usr/lib instead of the one in the MATLAB directory?
Version GLIBCXX_3.4.11 not found (required by buildW.mexglx)
which describe workarounds that were indeed successful for my problem.
In particular, I used LD_PRELOAD when launching MATLAB to force MATLAB to use the system libstdc++ instead of its own copy:
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6 /usr/local/MATLAB/R2017a/bin/matlab
Update: a better solution
It turns out that the folks at GCC are well aware of this incompatibility and discuss it here:
In the GCC 5.1 release libstdc++ introduced a new library ABI that includes new implementations of std::string and std::list. These changes were necessary to conform to the 2011 C++ standard which forbids Copy-On-Write strings and requires lists to keep track of their size.
In order to maintain backwards compatibility for existing code linked to libstdc++ the library's soname has not changed and the old implementations are still supported in parallel with the new ones.
...
The _GLIBCXX_USE_CXX11_ABI macro (see Macros) controls whether the declarations in the library headers use the old or new ABI.
To tell gcc to use the older ABI, we just need to define _GLIBCXX_USE_CXX11_ABI as 0 before we include any library headers, e.g. by passing a -D option to the compiler:
-D_GLIBCXX_USE_CXX11_ABI=0
For the sake of completeness, I'll mention that my full mexcuda call looks like this:
nvcc_opts = [...
'-gencode=arch=compute_30,code=sm_30 ' ...
'-gencode=arch=compute_50,code=sm_50 ' ...
'-gencode=arch=compute_60,code=sm_60 ' ...
'-std=c++11 ' ...
'-D_GLIBCXX_USE_CXX11_ABI=0 ' % MATLAB's libstdc++ uses old library ABI
];
mexcuda_opts = {
'-lcublas' % Link to cuBLAS
'-lmwlapack' % Link to LAPACK
'-lcufft' % Link to cuFFT
['NVCCFLAGS="' nvcc_opts '"']
'-L/usr/local/cuda/lib64' % Location of CUDA libraries
};
mexcuda(mexcuda_opts{:}, src_file);
I'm trying to build a Swift Package Manager system package (a module.modulemap)
making available two system C libraries where one includes the other.
That is, one (say libcurl) is a base module and the other C library includes
it (like so: #include "libcurl.h"). On the regular C side this works because
the makefiles pass in proper -I flags and all is good (and I could presumably
do the same in SPM, but I'd like to avoid extra flags to SPM).
So what I came up with is this module map:
module CBase [system] {
header "/usr/include/curl.h"
link "curl"
export *
}
module CMyLib [system] {
use CBase
header "/usr/include/mylib.h"
link "mylib"
export *
}
I got importing CBase in a Swift package working fine.
But when I try to import CMyLib, the compiler complains:
error: 'curl.h' file not found
Which is kinda understandable because the compiler doesn't know where to look
(though I assumed that use CBase would help).
Is there a way to get this to work w/o having to add -Xcc -I flags to the
build process?
Update 1: To a degree this is covered in
Swift SR-145
and
SE-0063: SwiftPM System Module Search Paths.
The recommendation is to use the Package.swift pkgConfig setting. This seems to work OK for my specific setup. However, it is a chicken-and-egg problem if there is no .pc file. I tried embedding my own .pc file in the package, but the system package directory isn't added to PKG_CONFIG_PATH (and hence the file won't be considered during the compilation of a dependent module). So the question stands: how can this be accomplished in an environment where the libs are installed but there is no .pc file (just the header and the lib)?
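For completeness, the workarounds I'm aware of, both of which I'd like to avoid, look something like this (paths are hypothetical):
# either point pkg-config at a hand-written mylib.pc (assuming SPM's pkgConfig
# lookup honors PKG_CONFIG_PATH):
export PKG_CONFIG_PATH=/path/to/my/pcfiles:$PKG_CONFIG_PATH
swift build
# or pass the include and library paths explicitly:
swift build -Xcc -I/usr/include -Xlinker -L/usr/lib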
I have installed perl-5.8.9, mod_perl-2.0.7, Embperl-2.3.0 and httpd-2.2.22. When starting Apache, I get an error like the following (broken into multiple lines for readability):
httpd: Syntax error on line * of ../conf/httpd.conf:
Cannot load ../Apache2/mod_perl.so into server:
libperl.so: cannot open shared object file:
No such file or directory
And by running ldd ../Apache2/mod_perl.so, I got output like:
linux-gate.so.1 => (0x00735000)
libperl.so => not found
libnsl.so.1 => /lib/libnsl.so.1 (0x005e5000)
libdl.so.2 => /lib/libdl.so.2 (0x00fab000)
libm.so.6 => /lib/libm.so.6 (0x0041f000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x0084d000)
libutil.so.1 => /lib/libutil.so.1 (0x00110000)
libc.so.6 => /lib/libc.so.6 (0x00197000)
/lib/ld-linux.so.2 (0x00163000)
[The question could usefully be moved to the Unix & Linux SE.]
You appear to have installed not-quite-compatible packages, or one or more packages did not install properly, or the installation instructions are missing a step, or the dynamic linker defaults are not as expected.
libperl is (unsurprisingly) part of perl. On some platforms a normal perl default build will create only the libperl.a static library, though it is possible to create a dynamic library libperl.so instead, or both. Some people favor the .so approach, though it can have some performance overheads.
The most likely causes of your problem are:
1. the installed perl has only a static libperl.a, but mod_perl was built against a perl with a dynamic libperl.so
2. the perl package installed libperl.so somewhere the dynamic linker doesn't look
Addressing cause #1 means finding different, compatible packages (or perhaps a combined package).
If it's #2, you should be able to locate libperl.so (somewhere like /usr/local/lib/perl5/5.8.9/mach/CORE/), or just run ldd /usr/local/bin/perl (or wherever the new perl binary was installed) to see if it knows where it is.
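Concretely, something like this should reveal where (and whether) a shared libperl was installed; the paths are examples:
# search the perl install tree for a shared libperl, then ask the new perl
# binary which libraries it resolves and from where:
find /usr/local/lib/perl5 -name 'libperl.so*'
ldd /usr/local/bin/perl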
Before you try any of the changes suggested below, just run ldconfig as root then try again, in case that fixes it.
If you find libperl.so under the new perl installation directory, then you can copy that file to your Apache lib/ sub-directory as Apache should be built (with an ELF RPATH) to include that directory in the library search path. If you find it under /usr/local/lib then you should modify your default linker configuration (/etc/ld.so.conf) to include that directory, and refresh (run ldconfig as root).
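A sketch of that ld.so.conf route, assuming libperl.so turned out to be in the CORE directory mentioned above (run as root, and substitute the directory you actually found):
# add the directory holding libperl.so to the dynamic linker search path,
# then rebuild the linker cache:
echo '/usr/local/lib/perl5/5.8.9/mach/CORE' >> /etc/ld.so.conf
ldconfig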
Your platform is evidently Linux, but you don't state the type and origin of packages, or whether it's a source build, so I can't be more precise...
I'm slightly suspicious that something didn't install properly because the conventional place for Apache modules is within the modules/ sub-directory, not directly in the ServerRoot.
I have two files: Main.d and ImportMe.d. Their purposes should be self-explanatory. They are in the same directory, and have no explicit module declaration. When I try to compile Main.d, though, I get a "symbols not found" error!
$ dmd Main.d -I.
Undefined symbols:
"_D8ImportMe12__ModuleInfoZ", referenced from:
_D4Main12__ModuleInfoZ in Main.o
"_D8ImportMe8SayHelloFxAyaZv", referenced from:
__Dmain in Main.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
--- errorlevel 1
Compiling both files at the same time works fine.
$ dmd Main.d ImportMe.d
You don't have to do this with the standard library, though. What is it doing differently? Changing the include path via -I has no visible effect.
When you compile a module, dmd must have the .d or .di files for all of the modules that that module needs in its import path. -I allows you to add paths to the import path. However, that does not build those other modules. It just gives dmd what it needs to build the module that you requested it to build. And when you link, dmd needs either the object files or the library binaries for all of the modules being used in the program, otherwise it's going to complain about undefined symbols (-L can be used for linker flags if you want to link in libraries). The linking step uses the C linker, so it's not D-aware at all and doesn't know anything about modules.
So, if you compile and link in two steps, you first compile each module separately or together with other modules, generating either object files or library files, depending on the flags that you pass the compiler (object files are the default). You then link those object files and libraries together in the linking stage, generating the executable.
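A minimal sketch of that two-step build for the two files in the question:
# step 1: compile only (-c); produces Main.o and ImportMe.o, no linking yet
dmd -c Main.d ImportMe.d
# step 2: link the object files into the executable
dmd Main.o ImportMe.o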
When you use dmd without passing it -c or -lib, it's going to do both the compiling and the linking together, so you must provide it all of the modules that you intend to compile, or when it gets to the linking step, it's going to complain about undefined symbols. It doesn't magically go and compile all of the modules that the modules that you ask it to compile import. If you want that sort of behavior, you need to use a tool such as rdmd.
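For example, assuming rdmd is installed alongside dmd:
# rdmd follows Main.d's import graph and compiles ImportMe.d automatically
rdmd --build-only Main.d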
dmd is able to find druntime and Phobos without you having to specify them because of dmd.conf (on Posix) or sc.ini (on Windows). That configuration file adds the appropriate .d and .di files to the import path and adds libphobos.a or phobos.lib (depending on the platform) to DFLAGS so that dmd can find those modules when compiling your modules and can link in the library in the linking phase. It also adds in any other flags that the standard library needs to work (such as linking in librt on Linux). If you move any of those files to non-standard places, it's that configuration file that you need to change to make it so that dmd can still find them.
You don't have to specify modules from the standard library because the compiler implicitly passes the precompiled standard library .lib file to the linker. For your own projects, consider using rdmd or another build tool.