I use Eclipse + the ARM plugin to build my projects. When I needed to use the STemWin library in my project, I configured my IDE to use an external library.
I set
Preferences -> C/C++ General -> Paths and Symbols
In "Library Paths" I added the path to the folder that contains the library.
I also added the name of my library in the "Libraries" tab.
I checked the settings in the compiler tab and made sure everything looked correct.
When I tried to build my project I got an error from linker:
cannot find -lMyLib.a Hello C/C++ Problem
I double-checked the name of my library and the path; both are correct. This is the output of my linker:
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=hard -L"C:\lib"
-T"C:\arm_toolchain\stm32_workspace\Hello\LinkerScript.ld" -Wl,
-Map=output.map -Wl,--gc-sections -o "Hello.elf" @"objects.list" -lMyLib.a
What should I do from here?
I faced the same problem before.
-l:STemWin526_CM4_GCC.a
-L"C:\Edu_Workspace\STM32F4\stm32f4_bsp_template\Drivers\Middlewares\ST\STemWin\Lib"
Above are my working settings.
With -l:<archive file name>, the colon is important: it tells the linker to use the exact file name instead of adding the usual "lib" prefix and ".a" suffix.
And -L contains the library path.
Also, for STemWin, make sure to compile with hardware floating point:
-mfloat-abi=hard -mfpu=fpv4-sp-d16
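For reference, a full link line in this style would look roughly as follows. This is only a sketch: the library path comes from my settings above, while the linker script, map file, and object list are reused from the question, so adjust the names to your project.

arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=hard -mfpu=fpv4-sp-d16 \
  -L"C:\Edu_Workspace\STM32F4\stm32f4_bsp_template\Drivers\Middlewares\ST\STemWin\Lib" \
  -T"LinkerScript.ld" -Wl,-Map=output.map -Wl,--gc-sections \
  -o "Hello.elf" @"objects.list" -l:STemWin526_CM4_GCC.a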
The convention for the -l option of the linker (say you pass -lMyLib.a as a linker option) is to search for a library file with "lib" prepended to the given name and ".a" (or ".so") appended, i.e. your command line searches for a file named libMyLib.a.a or libMyLib.a.so, which is probably not how your library is named.
Either rename your library according to this convention, or give the full file name to the linker command line, omitting -l (provided your IDE allows you to do so).
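For example, on Windows (hypothetical paths, matching the question's C:\lib):

ren C:\lib\MyLib.a libMyLib.a

and then link with -L"C:\lib" -lMyLib (no .a suffix); or skip the rename entirely and pass C:\lib\MyLib.a to the linker as if it were an object file.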
Looks like the problem is in -lMyLib.a which means you're trying to link a static library as a dynamic one.
To link a static lib you have to use its path as with ordinary .o files: ... /path/to/MyLib.a
and the resulting command line should look something like
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -mfloat-abi=hard -L"C:\lib" -T"C:\arm_toolchain\stm32_workspace\Hello\LinkerScript.ld" -Wl,-Map=output.map -Wl,--gc-sections -o "Hello.elf" @"objects.list" /path/to/MyLib.a
UPDATE:
Although the change above might fix the issue, it turns out the explanation is not true:
-llibrary
-l library
...
Normally the files found this way are library files—archive files whose members are object files. The linker handles an archive file by scanning through it for members which define symbols that have so far been referenced but not defined. But if the file that is found is an ordinary object file, it is linked in the usual fashion. The only difference between using an -l option and specifying a file name is that -l surrounds library with ‘lib’ and ‘.a’ and searches several directories.
(https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html)
I have to merge one of my app's libs with the NVIDIA CUDA static lib using this horrific awful CMake code:
GET_TARGET_PROPERTY(OUTPUT_LIB ${LIBNAME} LOCATION)
add_custom_command(TARGET ${LIBNAME}
    POST_BUILD
    COMMAND mv ${OUTPUT_LIB} ${OUTPUT_LIB}.old
    COMMAND echo "create ${OUTPUT_LIB}" > combineLibs.mri
    COMMAND echo "addlib ${OUTPUT_LIB}.old" >> combineLibs.mri
    COMMAND echo "addlib ${CUDA_LOCATION}" >> combineLibs.mri
    COMMAND echo "save" >> combineLibs.mri
    COMMAND echo "end" >> combineLibs.mri
    COMMAND ar -M <combineLibs.mri
    COMMAND rm ${OUTPUT_LIB}.old
    COMMENT "Building merged library for ${LIBNAME} at ${OUTPUT_LIB}, including ${CUDA_LOCATION}"
)
target_link_libraries(${LIBNAME} -pthread -c)
This successfully produces a merged static library that has all the symbols in it. However, the NVIDIA CUDA static lib brought with it dependencies on libpthread and libc in the form of unresolved symbols. Now the merged library also has those unresolved symbols, and the target_link_libraries line doesn't seem to do what I think it does, because the symbols don't get resolved at link time. How do I get the merged static library to dynamically link against libpthread and libc?
The target_link_libraries line does indeed not do what you think.
target_link_libraries(target,options) can have the desired effect of
adding the linker options options to the linkage of target only if target
is something that is produced by the linker. If no linkage happens in the
production of target then this directive will have no effect.
Your target is a static library. A static library - unlike a program, and unlike
a dynamic/shared library - is not produced by the linker. As your custom_command
in fact illustrates, a static library is produced by the GNU general purpose archiver,
ar. It is nothing but an archive of files which happen to be object files,
but as far as ar is concerned they might as well be the contents of your
Documents, Pictures and Music folders. Since no linkage is involved in the
production of a static library, nothing can be linked with a static library.
An ar archive can be used as a linker input in the linkage of something that
is produced by the linker - a program or a shared library. In that case the
linker will look into the archive to see if it contains any object files it needs
to carry on the linkage. If it finds any, it will extract them from the archive
and link them into the program. The linkage will be exactly the same as if
you had listed the required object files in the linker commandline and not
mentioned the archive at all.
But if any of the object files that the linker extracts from an archive bring
with them undefined references, then to get them resolved you must link some
library or libraries that define those references in the linkage of the
program or shared library that you want the linker to produce - just as you
must do to resolve undefined references in any other object files you
input to the linkage.
So,
How do I get the merged static library to dynamically link against libpthread and libc?
You can't. It doesn't make sense. Any library dependencies of object files
in a static library can be satisfied only in the linkage of a program or shared library
that has acquired those dependencies by linking those object files.
Finally, -c is not a GCC linkage option that will have the effect of requesting
linkage of libc. It is not a linkage option at all. It is an option that
directs the GCC frontend not to invoke the linker. It is passed to GCC to
request compilation without linkage, and the perverse effect of including it in a
CMake target_link_libraries directive will be to stop any linkage of the
target from happening.
If you want to explicitly request linkage of libc, use -lc, following
the linker usage protocol that -lname requests linkage of libname.
Perhaps you inferred that -c requests linkage of libc from the assumption
that -pthread requests linkage of libpthread. In fact, -lpthread would
request linkage of libpthread. The option -pthread is a more abstract GCC
option, for both compilation and linkage, that means do the right things, for this platform, to link with the Posix Threads
library - which might entail passing -lpthread to the linker, and possibly not.
Thus -pthread is OK as an argument of target_link_libraries that will
have the effect of requesting Posix Threads linkage, but see
answers to cmake and libpthread
for CMake-proper ways of doing this.
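To illustrate, the pthread dependency is then stated where the final link happens. A minimal sketch of a consumer project (the target and archive names here are hypothetical; libc needs no explicit mention because the compiler driver links it by default):

find_package(Threads REQUIRED)
add_executable(app main.cpp)
# the merged archive is just linker input here; its unresolved pthread
# references are satisfied by Threads::Threads in this final link
target_link_libraries(app ${CMAKE_SOURCE_DIR}/lib/libmerged.a Threads::Threads)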
I am trying to compile a source file with the icc compiler and the MAGMAmic library. However, I get the following error:
icc -c -o direct.o direct.c -O3 -openmp -DADD_ -Wall -DHAVE_MIC -I/opt/intel/mic/coi/include -I/usr/include/intel-coi -I/opt/intel/compilers_and_libraries_2017.2.174/linux/mkl/include:/opt/intel/compilers_and_libraries_2017.2.174/linux/ipp/include:/opt/intel/compilers_and_libraries_2017.2.174/linux/mkl/include:/opt/intel/compilers_and_libraries_2017.2.174/linux/tbb/include:/opt/intel/compilers_and_libraries_2017.2.174/linux/daal/include -I/home/dslavchev/install/magmamic-1.4.0/include -I/home/dslavchev/install/magmamic-1.4.0/contol
icc: command line remark #10411: option '-openmp' is deprecated and will be removed in a future release. Please use the replacement option '-qopenmp'
In file included from /home/dslavchev/install/magmamic-1.4.0/include/magma_types.h(134),
from /home/dslavchev/install/magmamic-1.4.0/include/magmablas_z.h(17),
from /home/dslavchev/install/magmamic-1.4.0/include/magmablas.h(12),
from /home/dslavchev/install/magmamic-1.4.0/include/magma.h(17),
from direct.c(21):
/opt/intel/compilers_and_libraries_2017.2.174/linux/compiler/include/complex(30): catastrophic error: cannot open source file "complex"
#include_next <complex>
^
The MAGMAmic library compiled correctly and I can run its tests OK.
I have looked at the way the testing_dgesv_mic.cpp example compiles and used the same includes and link flags; however, in my case I get the above error.
I have added the following to my .bashrc file in order to get the Intel compilers' and libraries' environment variables:
#for MAGMA mic
export MAGMA_PATH=/home/dslavchev/install/magmamic-1.4.0
source /opt/intel/bin/compilervars.sh intel64
source /opt/intel/mkl/bin/mklvars.sh intel64
Any ideas what might cause icc to be unable to include the "complex" file?
The file complex really does exist at "/opt/intel/compilers_and_libraries_2017.2.174/linux/compiler/include/complex".
The icc version is:
[dslavchev@sl051 results]$ icc -v
icc version 17.0.2 (gcc version 4.4.7 compatibility)
magmamic version is magmamic-1.4.0
EDIT: Removed unnecessary code comment
EDIT2: Added version info.
MAGMAmic is a C++ library and it cannot be used with C code directly.
When icc detects that you want to compile a C++ file it automatically switches to icpc (the Intel C++ compiler); compiling direct.c as plain C cannot pull in the C++ header <complex>, which results in the above error.
Solution: either switch to icpc or rename your files to a C++ extension such as .c++.
This question was answered by mark on the MAGMA forums. Link:
http://icl.cs.utk.edu/magma/forum/viewtopic.php?f=2&t=1587&p=4442#p4442
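For illustration, the compile line from the question could then become something like this (a sketch: icpc replaces icc, -qopenmp replaces the deprecated -openmp as the remark suggests, and $MAGMA_PATH is the variable exported in the .bashrc above; the remaining include paths stay as in the question):

icpc -c -o direct.o direct.c -O3 -qopenmp -DADD_ -Wall -DHAVE_MIC \
    -I/opt/intel/mic/coi/include -I/usr/include/intel-coi \
    -I$MAGMA_PATH/include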
I'm trying to use CUDA code inside MATLAB mex files under Linux. With the "whole program compilation" mode, it works well for me. I take the following two steps inside Nsight:
(1) Add "-fPIC" as a compiler option to each .cpp or .cu file, then compile them separately, each producing a .o file.
(2) Set the linker command to "mex" and add "-cxx" to indicate that all the .o input files come from C++ files, and add the library path for CUDA. Also add a .cpp file that contains the mexFunction entry as an additional input.
This works well and the resulting mex file runs fine under MATLAB. After that, when I needed to use dynamic parallelism, I had to switch to the "separate compilation mode" in Nsight. I tried the same steps as above, but the linker produced a lot of missing-reference errors, which I wasn't able to resolve.
Then I checked the compilation and linking steps of the "separate compilation" mode. I got confused by what it is doing. It seems that Nsight does two compilation steps for each .cpp or .cu file and produces a .o file as well as a .d file. Like this:
/usr/local/cuda-5.5/bin/nvcc -O3 -gencode arch=compute_35,code=sm_35 -odir "src" -M -o "src/tn_matrix.d" "../src/tn_matrix.cu"
/usr/local/cuda-5.5/bin/nvcc --device-c -O3 -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -x cu -o "src/tn_matrix.o" "../src/tn_matrix.cu"
The linking command is like this:
/usr/local/cuda-5.5/bin/nvcc --cudart static --relocatable-device-code=true -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o "test7" ./src/cu_base.o ./src/exp_bp_wsj_dev_mex.o ./src/tn_main.o ./src/tn_matlab_helper.o ./src/tn_matrix.o ./src/tn_matrix_lib_dev.o ./src/tn_matrix_lib_host.o ./src/tn_model_wsj_dev.o ./src/tn_model_wsj_host.o ./src/tn_utility.o -lcudadevrt -lmx -lcusparse -lcurand -lcublas
What's interesting is that the linker does not take the .d files as input, so I'm not sure how it deals with these files, and how I should process them with the "mex" command when linking.
Another problem is that the linking stage has a lot of options I don't understand (--cudart static --relocatable-device-code=true), which I guess is the reason why I cannot make it work as in the "whole program compilation" mode. So I tried the following:
(1) Compile in the same way as in the beginning of the post.
(2) Preserve the linking command as provided by Nsight, but change it to use the "-shared" option, so that the linker produces a library file.
(3) Invoke mex with the library file and another .cpp file containing the mexFunction entry as inputs.
This way the mex compilation works and produces a mex executable as output. However, running the resulting mex executable under MATLAB immediately produces a segmentation fault and crashes MATLAB.
I'm not sure if this way of linking would cause any problem. More strangely, I found that the mex linking step seems to finish trivially without even checking the completeness of the executable: even if I omit a .cpp file for some function that the mexFunction uses, it still links without complaint.
EDIT:
I figured out how to manually link into a mex executable which runs correctly under MATLAB, but I haven't figured out how to do that automatically under Nsight, as I can in the "whole program compilation" mode. Here is my approach:
(1) Exclude from build the cpp file which contains the mexFunction entry. Manually compile it with the command "mex -c".
(2) Add "-fPIC" as a compiler option to each of the remaining .cpp or .cu files, then compile them separately, each producing a .o file.
(3) Linking will fail because it cannot find the main function. We don't have it since we use mexFunction and it is excluded. This doesn't matter and I just leave it there.
(4) Follow the method in the post below to manually device-link (nvcc -dlink) the .o files into a device object file:
cuda shared library linking: undefined reference to cudaRegisterLinkedBinary
For example, if step (2) produces a.o and b.o, here we do
nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler '-fPIC' -dlink a.o b.o -o mex_dev.o -lcudadevrt
Note that here the output file mex_dev.o should not exist, otherwise the above command will fail.
(5) Use mex command to link all the .o files produced in step (2) and step (4), with all necessary libraries supplied.
This works and produces a runnable mex executable. The reason I cannot automate step (1) inside Nsight is that if I change the compilation command to "mex", Nsight will also use this command to generate a dependency file (the .d file mentioned in the question text). And the reason I cannot automate steps (4) and (5) in Nsight is that they involve two commands, which I don't know how to put in. Please let me know if you know how to do these. Thanks!
OK, I figured out the solution. Here are the complete steps for compiling mex programs with "separate compilation mode" in Nsight:
Create a CUDA project.
At the project level, change the following build options:
Switch on -fPIC in the compiler option of "NVCC compiler" at the project level.
Add -dlink -Xcompiler '-fPIC' to "Expert Settings" "Command Line Pattern" of the linker "NVCC Linker"
Add the letter o to "Build Artifact" -> "Artifact Extension", since with -dlink in the previous step we are making the output a .o file.
Add mex -cxx -o path_to_mex_bin/mex_bin_filename ./*.o ./src/*.o -lcudadevrt to "Post Build Steps", (add other necessary libs)
UPDATE: In my actual project I moved the last step to a .m file in MATLAB, because otherwise, if I run it while my mex program is loaded, it could crash MATLAB.
For files that need to be compiled with mex, change these build options for each of them:
Change the compiler to GCC C++ Compiler in Tool Chain Editor.
Go back to compiler setting of GCC C++ Compiler and change Command to mex
Change command line pattern to ${COMMAND} -c -outdir "src" ${INPUTS}
Several additional notes:
(1) CUDA-specific details (such as kernel functions and kernel launches) must be hidden from the mex compiler, so they should be put in the .cu files rather than in the header files. Here is a trick to put templates involving CUDA details into .cu files.
In the header file (e.g., f.h), you put only the declaration of the function like this:
template<typename ValueType>
void func(ValueType x);
Add a new file named f.inc, which holds the definition
template<>
void func(ValueType x) {
    // possible kernel launches, which should be hidden from mex
}
In the source code file (e.g., f.cu), you put this
#define ValueType float
#include "f.inc"
#undef ValueType
#define ValueType double
#include "f.inc"
#undef ValueType
// Add other types you want.
This trick can be easily generalized for templated classes to hide details.
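On the calling side nothing special is needed; a sketch (assuming the header is f.h as above, and that f.cu is compiled and linked into the same binary):

#include "f.h"

void host_code() {
    func(1.0f);  // resolved by the float specialization compiled in f.cu
    func(2.0);   // resolved by the double specialization compiled in f.cu
}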
(2) mex-specific details should also be hidden from the CUDA source files, since mex.h alters the definitions of some system functions, such as printf. So the include of "mex.h" should not appear in header files that can potentially be included in the CUDA source files.
(3) In the mex source file containing the mexFunction entry, one can use the compiler macro MATLAB_MEX_FILE to selectively compile code sections. This way the source file can be compiled into either a mex executable or an ordinary executable, allowing debugging under Nsight without MATLAB. Here is a trick for building multiple targets under Nsight: Building multiple binaries within one Eclipse project
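A minimal sketch of such a guard (run_core is a hypothetical helper shared by both targets):

void run_core();  // hypothetical shared entry point

#ifdef MATLAB_MEX_FILE
#include "mex.h"
// built by mex: MATLAB calls this entry point
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    run_core();
}
#else
// built as an ordinary executable for debugging under Nsight without MATLAB
int main() {
    run_core();
    return 0;
}
#endif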
First of all, it should be possible to set up Nsight to use a custom Makefile rather than generating one automatically. See Setting Nsight to run with existing Makefile project.
Once we have a custom Makefile, it may be possible to automate (1), (4), and (5). The advantage of a custom Makefile is that you know exactly what compilation commands will take place.
A bare-bones example:
all: mx.mexa64

mx.mexa64: mx.o
	mex -o mx.mexa64 mx.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt

mx.o: mxfunc.o helper.o
	nvcc -arch=sm_35 -Xcompiler -fPIC -o mx.o -dlink helper.o mxfunc.o -lcudadevrt

mxfunc.o: mxfunc.c
	mex -c -o mxfunc.o mxfunc.c

helper.o: helper.c
	nvcc -arch=sm_35 -Xcompiler -fPIC -c -o helper.o helper.c

clean:
	rm -fv mx.mexa64 *.o
... where mxfunc.c contains the mexFunction but helper.c does not.
EDIT: You may be able to achieve the same effect in the automatic compilation system. Right-click on each source file and select Properties, and you'll get a window where you can add compilation options for that individual file. For linking options, open the Properties of the project. Do some experiments and pay attention to the actual compilation commands that show up in the console. In my experience, custom options sometimes interact with the automatic system in a weird way. If this method proves too troublesome for you, I suggest that you write a custom Makefile; this way, at least you are not caught by unexpected side effects.
I am trying to program the Arduino Nano from Eclipse. It has the same processor as the Uno (ATmega328P). I have had this working before with the Uno, but have since gotten a new hard drive and had to reinstall/reconfigure everything. I am running Fedora 19 with Eclipse Kepler. I am getting an error in the build process that I don't even know where to start looking to solve, and Google hasn't been much help...
Here is the error:
make all
Building target: Arduino_Template.elf
Invoking: AVR C Linker
avr-gcc -Wl,-Map,Arduino_Template.map -mmcu=atmega328p -o "Arduino_Template.elf" ./Analog.o -l/usr/avr
/usr/lib/gcc/avr/4.8.2/../../../../avr/bin/ld: cannot find -l/usr/avr
collect2: error: ld returned 1 exit status
make: *** [Arduino_Template.elf] Error 1
17:29:38 Build Finished (took 124ms)
Has anyone encountered this before? Or does anyone have any suggestions?
Thanks.
The problem could be described like this:
First, the library should be specified only by its name, without the "lib" prefix and the ".a" suffix. The linker (this is where you get the error) will look for the library within the paths specified in the project and will add whatever is necessary to the library file name. So if the library you need is named mystuff, it will look for a file named libmystuff.a.
In your case the -l/usr/avr is suspicious: I think it is either a misconfiguration, or you did not copy/paste the entire error output. With the -l option you specify the name only, not the entire path to the file.
Second, the paths where to look for the libraries should be specified in the project configuration; otherwise the linker will look for the library files only within your own project. Often libraries are part of another project, so you need to adjust the project configuration accordingly.
I had a similar problem and this is how I solved it...
Go to menu Project/Properties. On left - choose "C/C++ Build". On right - choose the tab "Tool Settings". On the tree view choose "AVR C Linker" then "Libraries" sub-item. You are where you may need to make changes.
The "Libraries" list is where you add library names, such as mystuff.
The "Library Paths" list is where you specify the paths to the libraries. This may look like: "${workspace_loc:/mystuff/Release}"
The result of this is that the linker will look for the file /mystuff/Release/libmystuff.a under your workspace root folder.
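With that configuration, the generated link command would look roughly like this (a sketch using the hypothetical library name mystuff; Eclipse expands the workspace variable to a real path at build time):

avr-gcc -Wl,-Map,Arduino_Template.map -mmcu=atmega328p -o "Arduino_Template.elf" ./Analog.o -L/path/to/workspace/mystuff/Release -lmystuff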
I'm working on adding some functionality to the storage manager module in Postgresql.
I have already added a few source files to the smgr folder, and I was able to have the Make system include them by adding their names to the OBJS list in the Makefile inside the smgr folder (i.e. when I add A.c, I add A.o to the OBJS list).
That was working fine. Now I'm trying to add a new file, hdfs_test.c, to the project. The problem with this file is that it requires some extra flags in its compilation command (-I and -L directives).
The gcc command is:
gcc hdfs_test.c -I/HDFS_HOME/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include -L/HDFS_HOME/hdfs/src/c++/libhdfs -L/HDFS_HOME/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs -o hdfs_test
Therefore, simply adding hdfs_test.o to the OBJS list doesn't work.
I tried editing the Makefile to look like this:
OBJS = md.o smgr.o smgrtype.o A.o B.o hdfs_test.o
MyRule1 : hdfs_test.c
	gcc tati.c -c -I/diskless/taljab1/Workspace/HDFS_Append/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include -L/diskless/taljab1/Workspace/HDFS_Append/hdfs/src/c++/libhdfs -L/diskless/taljab1/Workspace/HDFS_Append/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs
but it didn't work out; I kept getting error messages from Make trying to compile hdfs_test.c without including the flags.
How do I get Make to include my compilation flags for hdfs_test.c?
Thanks
You don't need to pass -l and -L at compile time, only at link time. At compile time only -I (include path) directives are required to help the compiler find any extra headers.
You should compile your source file to a .o file, same as all the others. Then add the -L and -l directives to the link command line when the linker is invoked to create the postgres executable. That means all you need to edit in src/backend/storage/smgr/Makefile is the OBJS line, to add your new object, as you've already done. Remove your custom rule; it's unnecessary as well as incorrect.
Just add your extra libraries to the $(LIBS) make variable and your -L paths to $(LDFLAGS) via src/Makefile.global. src/Makefile.global is generated by configure from src/Makefile.global.in, so you actually need to modify configure's behavior to add your includes, library paths, and libraries. Don't edit configure directly either; edit configure.in and re-generate it with autoconf.
Yes, GNU Autotools is sometimes referred to as autohell for a reason. It's a bit ... interesting ... to work with sometimes, and there can be a lot of indirection involved in doing simple things.
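As a rough sketch of the whole approach (the paths are the ones from the question; treat this as illustrative rather than exact autoconf code):

# src/backend/storage/smgr/Makefile: add the object, plus compile-time
# include flags for just this file (GNU make target-specific variable)
OBJS = md.o smgr.o smgrtype.o A.o B.o hdfs_test.o
hdfs_test.o: CPPFLAGS += -I/HDFS_HOME/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include

# configure.in: make the libraries and search paths reach the final link,
# then re-generate configure with autoconf
LIBS="$LIBS -lhdfs -ljvm"
LDFLAGS="$LDFLAGS -L/HDFS_HOME/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server"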