I have two files: Main.d and ImportMe.d. Their purposes should be self-explanatory. They are in the same directory, and have no explicit module declaration. When I try to compile Main.d, though, I get a "symbols not found" error!
$ dmd Main.d -I.
Undefined symbols:
"_D8ImportMe12__ModuleInfoZ", referenced from:
_D4Main12__ModuleInfoZ in Main.o
"_D8ImportMe8SayHelloFxAyaZv", referenced from:
__Dmain in Main.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
--- errorlevel 1
Compiling both files at the same time works fine.
$ dmd Main.d ImportMe.d
You don't have to do this with the standard library, though. What is it doing differently? Changing the include path via -I has no visible effect.
When you compile a module, dmd must have the .d or .di files for all of the modules that that module needs in its import path. -I allows you to add paths to the import path. However, that does not build those other modules. It just gives dmd what it needs to build the module that you requested it to build. And when you link, dmd needs either the object files or the library binaries for all of the modules being used in the program, otherwise it's going to complain about undefined symbols (-L can be used for linker flags if you want to link in libraries). The linking step uses the C linker, so it's not D-aware at all and doesn't know anything about modules.
So, if you compile and link in two steps, you first compile each module separately or together with other modules, generating either object files or library files, depending on the flags that you pass the compiler (object files are the default). You then link those object files and libraries together in the linking stage, generating the executable.
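For the files above, the two-step build would look like this (a minimal sketch, using the file names from the question):
$ dmd -c Main.d -I.       # compile only: produces Main.o
$ dmd -c ImportMe.d       # produces ImportMe.o
$ dmd Main.o ImportMe.o   # linking step: every object file must be supplied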
When you use dmd without passing it -c or -lib, it's going to do both the compiling and the linking together, so you must provide it all of the modules that you intend to compile, or when it gets to the linking step, it's going to complain about undefined symbols. It doesn't magically go and compile all of the modules that the modules that you ask it to compile import. If you want that sort of behavior, you need to use a tool such as rdmd.
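For example, rdmd follows the import graph and compiles everything in one go (by default it also runs the resulting binary; --build-only suppresses that):
$ rdmd --build-only Main.d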
dmd is able to find druntime and Phobos without you having to specify them because of dmd.conf (on Posix) or sc.ini (on Windows). That configuration file adds the appropriate .d and .di files to the import path and adds libphobos.a or phobos.lib (depending on the platform) to DFLAGS so that dmd can find those modules when compiling your modules and can link in the library in the linking phase. It also adds in any other flags that the standard library needs to work (such as linking in librt on Linux). If you move any of those files to non-standard places, it's that configuration file that you need to change to make it so that dmd can still find them.
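For illustration, a dmd.conf on a Linux system looks roughly like the following; the exact paths and flags vary by distribution and compiler version, so treat this as a sketch rather than your actual file:
$ cat /etc/dmd.conf
[Environment]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -L-L/usr/lib64 -L-lphobos2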
You don't have to specify modules from the standard library because the compiler implicitly passes the precompiled standard library .lib file to the linker. For your own projects, consider using rdmd or another build tool.
Related
I have to merge one of my app's libs with the NVIDIA CUDA static lib, using this horrific, awful CMake code:
GET_TARGET_PROPERTY(OUTPUT_LIB ${LIBNAME} LOCATION)
add_custom_command (TARGET ${LIBNAME}
    POST_BUILD
    COMMAND mv ${OUTPUT_LIB} ${OUTPUT_LIB}.old
    COMMAND echo "create ${OUTPUT_LIB}" > combineLibs.mri
    COMMAND echo "addlib ${OUTPUT_LIB}.old" >> combineLibs.mri
    COMMAND echo "addlib ${CUDA_LOCATION}" >> combineLibs.mri
    COMMAND echo "save" >> combineLibs.mri
    COMMAND echo "end" >> combineLibs.mri
    COMMAND ar -M < combineLibs.mri
    COMMAND rm ${OUTPUT_LIB}.old
    COMMENT "Building merged library for ${LIBNAME} at ${OUTPUT_LIB}, including ${CUDA_LOCATION}"
)

target_link_libraries(${LIBNAME} -pthread -c)
This successfully produces a merged static library that has all the symbols in it. However, the NVIDIA CUDA static lib brought with it dependencies on libpthread and libc in the form of unresolved symbols. Now the merged library also has those unresolved symbols, and the target_link_libraries line doesn't seem to do what I seem to think it does, because the symbols don't get resolved at link-time. How do I get the merged static library to dynamically link against libpthread and libc?
The target_link_libraries line does indeed not do what you think it does. target_link_libraries(target options) can have the desired effect of adding the linker options options to the linkage of target only if target is something that is produced by the linker. If no linkage happens in the production of target, then this directive has no effect.
Your target is a static library. A static library - unlike a program, and unlike a dynamic/shared library - is not produced by the linker. As your custom_command in fact illustrates, a static library is produced by the GNU general-purpose archiver, ar. It is nothing but an archive of files which happen to be object files, but as far as ar is concerned they might as well be the contents of your Documents, Pictures and Music folders. Since no linkage is involved in the production of a static library, nothing can be linked with a static library.
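You can see this from how a static library is made in the first place; no linker runs at any point (illustrative commands):
$ cc -c a.c b.c             # compile only: produces a.o and b.o
$ ar rcs libmine.a a.o b.o  # archive them; ld is never invoked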
An ar archive can be used as a linker input in the linkage of something that is produced by the linker - a program or a shared library. In that case the linker will look into the archive to see if it contains any object files it needs to carry on the linkage. If it finds any, it will extract them from the archive and link them into the program. The linkage will be exactly the same as if you had listed the required object files in the linker command line and not mentioned the archive at all.
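For example, with hypothetical members a.o and b.o that the program actually needs, these two linkages produce the same result:
$ cc -o prog main.o libmine.a   # linker extracts the needed members
$ cc -o prog main.o a.o b.o     # same linkage, members named directly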
But if any of the object files that the linker extracts from an archive bring with them undefined references, then to get those resolved you must link some library or libraries that define the missing symbols in the linkage of the program or shared library that you want the linker to produce - just as you must do to resolve undefined references in any other object files you input to the linkage.
So,
How do I get the merged static library to dynamically link against libpthread and libc?
You can't. It doesn't make sense. Any library dependencies of object files in a static library can be satisfied only in the linkage of a program or shared library that has acquired those dependencies by linking those object files.
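In other words, the dependencies are stated where the linker actually runs, i.e., when the final program or shared library is produced (hypothetical names):
$ cc -o myprog main.o libmerged.a -pthread   # libc is linked by default; -pthread pulls in the Posix Threads library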
Finally, -c is not a GCC linkage option that will have the effect of requesting linkage of libc. It is not a linkage option at all. It is an option that directs the GCC frontend not to invoke the linker. It is passed to GCC to request compilation without linkage, and the perverse effect of including it in a CMake target_link_libraries directive will be to stop any linkage of the target from happening.
If you want to explicitly request linkage of libc, use -lc, following the linker usage protocol that -lname requests linkage of libname.
Perhaps you inferred that -c requests linkage of libc from the assumption that -pthread requests linkage of libpthread. In fact, -lpthread would request linkage of libpthread. The option -pthread is a more abstract GCC option, for both compilation and linkage, that means do the right things, for this platform, to link with the Posix Threads library - which might entail passing -lpthread to the linker, and possibly not.
Thus -pthread is OK as an argument of target_link_libraries that will have the effect of requesting Posix Threads linkage, but see the answers to cmake and libpthread for CMake-proper ways of doing this.
I'm trying to build a Swift Package Manager system package (a module.modulemap) making available two system C libraries where one includes the other. That is, one (say libcurl) is a base module and the other C library includes it (like so: #include "libcurl.h"). On the regular C side this works, because the makefiles pass in proper -I flags and all is good (and I could presumably do the same in SPM, but I'd like to avoid extra flags to SPM).
So what I came up with is this module map:
module CBase [system] {
    header "/usr/include/curl.h"
    link "curl"
    export *
}

module CMyLib [system] {
    use CBase
    header "/usr/include/mylib.h"
    link "mylib"
    export *
}
I got importing CBase in a Swift package working fine.
But when I try to import CMyLib, the compiler complains:
error: 'curl.h' file not found
Which is kinda understandable, because the compiler doesn't know where to look (though I assumed that use CBase would help).
Is there a way to get this to work w/o having to add -Xcc -I flags to the build process?
Update 1: To a degree this is covered in Swift SR-145 and SE-0063: SwiftPM System Module Search Paths.
The recommendation is to use the Package.swift pkgConfig setting. This seems to work OK for my specific setup. However, it is a chicken-and-egg problem if there is no .pc file. I tried embedding my own .pc file in the package, but the system package directory isn't added to the PKG_CONFIG_PATH (and hence won't be considered during the compilation of a dependent module). So the question stands: how to accomplish this in an environment where the libs are installed but there is no .pc file (just header and lib)?
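One workaround, if you control the target machines, is to hand-write a .pc file and drop it somewhere pkg-config already searches. A minimal sketch, assuming the headers and library live under /usr and using the illustrative name mylib:
$ cat > /usr/lib/pkgconfig/mylib.pc <<'EOF'
prefix=/usr
Name: mylib
Description: mylib, hand-written because upstream ships no .pc file
Version: 1.0
Cflags: -I${prefix}/include
Libs: -L${prefix}/lib -lmylib
EOF
With that in place, pkgConfig: "mylib" in Package.swift should let the compiler resolve both the header and the library.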
I use Eclipse + the ARM plugin to build my projects. When I needed to use the STemWin library in my project, I configured my IDE to use an external library.
I set
Preferences -> C/C++ General -> Paths and Symbols
I added in "Library Paths" the link to my folder includes library.
I also added the name of my library in tab "Library".
I checked the settings in the compiler tab and I ascertained all should be good.
When I tried to build my project I got an error from linker:
cannot find -lMyLib.a Hello C/C++ Problem
I double-checked the name of my library and the path; both are correct. This is the output of my linker:
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=hard -L"C:\lib" -T"C:\arm_toolchain\stm32_workspace\Hello\LinkerScript.ld" -Wl,-Map=output.map -Wl,--gc-sections -o "Hello.elf" @"objects.list" -lMyLib.a
What should I do from here?
I faced the same problem before.
-l:STemWin526_CM4_GCC.a
-L"C:\Edu_Workspace\STM32F4\stm32f4_bsp_template\Drivers\Middlewares\ST\STemWin\Lib"
Above are my working settings.
With -l:<archive file name>, the colon is important: it makes the linker use the exact archive file name instead of the lib<name>.a convention.
And -L contains the library path.
Also, for STemWin make sure to compile with hardware floating point:
-mfloat-abi=hard -mfpu=fpv4-sp-d16
The convention for the -l option of the linker (say you give -lMyLib.a as a linker option) is to search for a library file with "lib" prepended to the given name and .a (or .so) appended, i.e. your command line searches for a file named libMyLib.a.a (or libMyLib.a.so), which is probably not how your file is named.
Either rename your library according to this convention, or give the file to the linker command line directly, omitting -l (provided your IDE allows you to do so).
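Concretely, either of these should resolve the error (illustrative paths):
$ mv MyLib.a libMyLib.a    # now -L"C:\lib" -lMyLib finds it
or pass the archive's path, e.g. "C:\lib\MyLib.a", at the end of the link command instead of -lMyLib.a.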
Looks like the problem is in -lMyLib.a which means you're trying to link a static library as a dynamic one.
To link a static lib you have to use its path as with ordinary .o files: ... /path/to/MyLib.a
and the resulting command line should look something like
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=hard -L"C:\lib" -T"C:\arm_toolchain\stm32_workspace\Hello\LinkerScript.ld" -Wl,-Map=output.map -Wl,--gc-sections -o "Hello.elf" @"objects.list" /path/to/MyLib.a
UPDATE:
Although it might fix the issue, it turns out the explanation above is not true:
-llibrary
-l library
...
Normally the files found this way are library files—archive files whose members are object files. The linker handles an archive file by scanning through it for members which define symbols that have so far been referenced but not defined. But if the file that is found is an ordinary object file, it is linked in the usual fashion. The only difference between using an -l option and specifying a file name is that -l surrounds library with ‘lib’ and ‘.a’ and searches several directories.
(https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html)
I'm trying to use CUDA code inside MATLAB mex, under Linux. With the "whole program compilation" mode, it works well for me. I take the following two steps inside Nsight:
(1) Add "-fPIC" as a compiler option to each .cpp or .cu file, then compile them separately, each producing a .o file.
(2) Set the linker command to be "mex" and add "-cxx" to indicate that all the .o input files come from C++ sources, and add the library path for CUDA. Also add a cpp file that contains the mexFunction entry as an additional input.
This works well and the resulting mex file runs fine under MATLAB. Later, when I needed to use dynamic parallelism, I had to switch to the "separate compilation" mode in Nsight. I tried the same approach as above, but the linker produced a lot of missing-reference errors which I wasn't able to resolve.
Then I checked the compilation and linking steps of the "separate compilation" mode, and I was confused by what it is doing. It seems that Nsight runs two compilation steps for each .cpp or .cu file, producing a .o file as well as a .d file, like this:
/usr/local/cuda-5.5/bin/nvcc -O3 -gencode arch=compute_35,code=sm_35 -odir "src" -M -o "src/tn_matrix.d" "../src/tn_matrix.cu"
/usr/local/cuda-5.5/bin/nvcc --device-c -O3 -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -x cu -o "src/tn_matrix.o" "../src/tn_matrix.cu"
The linking command is like this:
/usr/local/cuda-5.5/bin/nvcc --cudart static --relocatable-device-code=true -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o "test7" ./src/cu_base.o ./src/exp_bp_wsj_dev_mex.o ./src/tn_main.o ./src/tn_matlab_helper.o ./src/tn_matrix.o ./src/tn_matrix_lib_dev.o ./src/tn_matrix_lib_host.o ./src/tn_model_wsj_dev.o ./src/tn_model_wsj_host.o ./src/tn_utility.o -lcudadevrt -lmx -lcusparse -lcurand -lcublas
What's interesting is that the linker does not take the .d files as input, so I'm not sure what they are for, or how I should handle them with the "mex" command when linking.
Another problem is that the linking stage has a lot of options I don't understand (--cudart static --relocatable-device-code=true), which I guess is the reason why I cannot make it work like in the "whole program compilation" mode. So I tried the following:
(1) Compile in the same way as in the beginning of the post.
(2) Preserve the linking command as provided by Nsight but change to use "-shared" option, so that the linker produces a lib file.
(3) Invoke mex with input the lib file and another cpp file containing the mexFunction entry.
This way the mex compilation works and it produces a mex file as output. However, running the resulting mex file under MATLAB immediately produces a segmentation fault and crashes MATLAB.
I'm not sure if this way of linking would cause any problem. More strangely, I found that the mex linking step seems to finish trivially without even checking the completeness of the executable, because even if I miss a .cpp file for some function that the mexFunction will use, it still compiles.
EDIT:
I figured out how to manually link into a mex executable which can run correctly under MATLAB, but I haven't figured out how to do that automatically under Nsight, which I can in the "whole program compilation" mode. Here is my approach:
(1) Exclude from build the cpp file which contains the mexFunction entry. Manually compile it with the command "mex -c".
(2) Add "-fPIC" as a compiler option to each of the rest .cpp or .cu file, then compile them separately, each producing a .o file.
(3) Linking will fail because it cannot find the main function. We don't have it since we use mexFunction and it is excluded. This doesn't matter and I just leave it there.
(4) Follow the method in the post below to manually dlink the .o files into a device object file
cuda shared library linking: undefined reference to cudaRegisterLinkedBinary
For example, if step (2) produces a.o and b.o, here we do
nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler '-fPIC' -dlink a.o b.o -o mex_dev.o -lcudadevrt
Note that here the output file mex_dev.o should not exist, otherwise the above command will fail.
(5) Use mex command to link all the .o files produced in step (2) and step (4), with all necessary libraries supplied.
This works and produces a runnable mex file. The reason I cannot automate step (1) inside Nsight is that if I change the compilation command to "mex", Nsight will also use this command to generate a dependency file (the .d file mentioned in the question text). And the reason I cannot automate steps (4) and (5) in Nsight is that they involve two commands, which I don't know how to fit in. Please let me know if you know how to do these. Thanks!
OK, I figured out the solution. Here are the complete steps for compiling mex programs with "separate compilation mode" in Nsight:
Create a cuda project.
At the project level, change the following build options:
Switch on -fPIC in the compiler option of "NVCC compiler" at the project level.
Add -dlink -Xcompiler '-fPIC' to "Expert Settings" "Command Line Pattern" of the linker "NVCC Linker"
Add the letter o to "Build Artifact" -> "Artifact Extension", since with -dlink in the previous step we are making the output a .o file.
Add mex -cxx -o path_to_mex_bin/mex_bin_filename ./*.o ./src/*.o -lcudadevrt to "Post Build Steps", (add other necessary libs)
UPDATE: In my actual project I moved the last step to a .m file in MATLAB, because otherwise, if I do it while my mex program is running, it could crash MATLAB.
For files that need to be compiled with mex, change these build options for each of them:
Change the compiler to GCC C++ Compiler in Tool Chain Editor.
Go back to the compiler settings of GCC C++ Compiler and change Command to mex
Change command line pattern to ${COMMAND} -c -outdir "src" ${INPUTS}
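Put together, the build these settings produce boils down to roughly this sequence (file names are placeholders; the flags follow the commands shown earlier in this thread):
$ nvcc --device-c -O3 -gencode arch=compute_35,code=sm_35 -Xcompiler -fPIC -o src/kernels.o src/kernels.cu
$ mex -c -outdir src src/my_mexfunc.cpp
$ nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler -fPIC -dlink src/kernels.o -o mex_dev.o -lcudadevrt
$ mex -cxx -o my_mex mex_dev.o src/*.o -lcudadevrt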
Several additional notes:
(1) CUDA-specific details (such as kernel functions and calls to kernel functions) must be hidden from the mex compiler, so they should be put in the .cu files rather than the header files. Here is a trick to put templates involving CUDA details into .cu files.
In the header file (e.g., f.h), you put only the declaration of the function like this:
template<typename ValueType>
void func(ValueType x);
Add a new file named f.inc, which holds the definition
template<>
void func(ValueType x) {
    // possible kernel launches, which should be hidden from mex
}
In the source code file (e.g., f.cu), you put this
#define ValueType float
#include "f.inc"
#undef ValueType
#define ValueType double
#include "f.inc"
#undef ValueType
// Add other types you want.
This trick can be easily generalized for templated classes to hide details.
(2) mex-specific details should also be hidden from the CUDA source files, since mex.h alters the definitions of some system functions, such as printf. So "mex.h" should not be included in header files that can potentially be included in the CUDA source files.
(3) In the mex source code file containing the entry mexFunction, one can use the compiler macro MATLAB_MEX_FILE to selectively compile code sections. This way the source file can be compiled into both a mex file and an ordinary executable, allowing debugging under Nsight without MATLAB. Here is a trick for building multiple targets under Nsight: Building multiple binaries within one Eclipse project
First of all, it should be possible to set up Nsight to use a custom Makefile rather than generate it automatically. See Setting Nsight to run with existing Makefile project.
Once we have a custom Makefile, it may be possible to automate (1), (4), and (5). The advantage of a custom Makefile is that you know exactly what compilation commands will take place.
A bare-bones example:
all: mx.mexa64

mx.mexa64: mx.o
	mex -o mx.mexa64 mx.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt

mx.o: mxfunc.o helper.o
	nvcc -arch=sm_35 -Xcompiler -fPIC -o mx.o -dlink helper.o mxfunc.o -lcudadevrt

mxfunc.o: mxfunc.c
	mex -c -o mxfunc.o mxfunc.c

helper.o: helper.c
	nvcc -arch=sm_35 -Xcompiler -fPIC -c -o helper.o helper.c

clean:
	rm -fv mx.mexa64 *.o
... where mxfunc.c contains the mxFunction but helper.c does not.
EDIT: You may be able to achieve the same effect in the automatic build system. Right-click on each source file and select Properties, and you'll get a window where you can add compilation options for that individual file. For linking options, open the Properties of the project. Do some experiments and pay attention to the actual compilation commands that show up in the console. In my experience, custom options sometimes interact with the automatic system in a weird way. If this method proves too troublesome for you, I suggest making a custom Makefile; that way, at least you are not caught by unexpected side effects.
I am trying to program the Arduino Nano from Eclipse. It has the same processor as the Uno (ATmega328P). I have had this working before with the Uno, but have since gotten a new hard drive and had to reinstall/reconfigure everything. I am running Fedora 19 with Eclipse Kepler. I am getting an error in the build process that I don't even know where to start looking to solve, and Google hasn't been much help...
Here is the error:
make all
Building target: Arduino_Template.elf
Invoking: AVR C Linker
avr-gcc -Wl,-Map,Arduino_Template.map -mmcu=atmega328p -o "Arduino_Template.elf" ./Analog.o -l/usr/avr
/usr/lib/gcc/avr/4.8.2/../../../../avr/bin/ld: cannot find -l/usr/avr
collect2: error: ld returned 1 exit status
make: *** [Arduino_Template.elf] Error 1
17:29:38 Build Finished (took 124ms)
Has anyone encountered this before? Or does anyone have any suggestions?
Thanks.
The problem could be described like this:
First, the library should be specified only by its name, without the "lib" prefix and the ".a" suffix. The linker (this is where you get the error) will look for the library within the paths specified in the project and will add whatever is necessary to the library file name. So if the library you need is named mystuff, it will look for a file named libmystuff.a.
In your case this is specified by the -l/usr/avr, which I think is either a misconfiguration or you did not copy/paste the entire error output. With the -l option you specify the name only, not the entire path to the file.
Second, the paths where to look for the libraries should be specified in the project configuration; otherwise the linker will look for the library files only within your own project. Often libraries are part of another project, so you need to adjust the project configuration accordingly.
I had a similar problem and this is how I solved it...
Go to menu Project/Properties. On the left, choose "C/C++ Build". On the right, choose the tab "Tool Settings". In the tree view choose "AVR C Linker", then the "Libraries" sub-item. This is where you may need to make changes.
The "Libraries" list is where you add library names, such as mystuff.
The "Library Paths" list is where you specify the paths to the libraries, which may look like this: "${workspace_loc:/mystuff/Release}"
The result of this is that the linker will look for the file /mystuff/Release/libmystuff.a under your workspace root folder.
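With those settings, the generated link line should look something like this instead of the failing one (illustrative workspace path):
$ avr-gcc -Wl,-Map,Arduino_Template.map -mmcu=atmega328p -o "Arduino_Template.elf" ./Analog.o -L"/home/me/workspace/mystuff/Release" -lmystuff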