Normally with gcc you can specify the level of debugging information with -g, and -g3 includes preprocessor macro definitions in the executable, which debuggers like gdb can read and let you use during debugging. I would like to do the same with nvcc when debugging CUDA programs.
I am currently working by modifying the template program in the SDK, so I'm using the default Makefile and the common.mk it includes. In the 'ifeq ($(dbg), 1)' block of common.mk, I have tried the following:
put -g3 under COMMONFLAGS
put -g3 under NVCCFLAGS
put -g3 under CXXFLAGS and CFLAGS
put --compiler-options -g3 under NVCCFLAGS.
The first two give an unrecognized-option error. The last two do not seem to do the trick, because when I debug with cuda-gdb I don't get the macro information.
The reason I would like to do this is because I would like to inspect some memory using the same macros the program itself uses to access that memory. For example,
#define ARROW(state, arrow) ((c_arrow_t *)(&((state)->arrows) + (arrow) * sizeof(c_arrow_t)))
#define STATE(nfa, state) ((c_state_t *)(&((nfa)->states) + (state) * sizeof(c_state_t)))
are some macros I use to access states and arrows of a non-deterministic finite state automaton.
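If the macro information were available to the debugger, the goal would be to evaluate something like the following at the cuda-gdb prompt (d_nfa here is a hypothetical pointer to my NFA structure):
(cuda-gdb) print *STATE(d_nfa, 2)
(cuda-gdb) print *ARROW(STATE(d_nfa, 2), 0)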
Thank you for your help!
You probably need to pass -g to nvcc so that it builds with host debugging, and also pass -g3 through to the host compiler via -Xcompiler (or --compiler-options).
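For example, a sketch of the kind of compile line you're aiming for (the file name is just the SDK template project; adjust as needed):
nvcc -g -Xcompiler -g3 -o template template.cu
That is, -g so nvcc emits host debug info, plus -Xcompiler -g3 so the host compiler also emits the macro information.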
Just an observation, but you really shouldn't be using that SDK makefile for anything. It is truly evil - a crude broken hack on some autotools generated make statements, which is both unnecessarily complex and very inflexible. Even the NVIDIA developers I interact with warn not to use it.
Good old disconnect between what the compiler deems valid and what the IDE thinks... Before anyone points to duplicate questions/answers, I must stress that I have already tried everything available on this issue, here and elsewhere, and distilled it to this setup:
installed latest Eclipse CDT - Oxygen 4.7.2 / build id 20171218-0600
gcc 6.1 is the only compiler visible in the system path - it defaults to the C++14 standard without needing an explicit -std flag
Checking the log for compiler settings discovery, it's configured correctly for C++14:
03:51:39 **** Running scanner discovery: CDT GCC Built-in Compiler Settings MinGW ****
g++ -E -P -v -dD C:/dev/eclipse-oxy-cpp/.metadata/.plugins/org.eclipse.cdt.managedbuilder.core/spec.C
Using built-in specs.
COLLECT_GCC=g++
Target: x86_64-w64-mingw32
Configured with: ../../../src/gcc-6.1.0/configure --host=x86_64-w64-mingw32 --build=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --prefix=/mingw64 --with-sysroot=/c/mingw610/x86_64-610-posix-seh-rt_v5/mingw64 --enable-shared --enable-static --disable-multilib --enable-languages=c,c++,fortran,lto --enable-libstdcxx-time=yes --enable-threads=posix --enable-libgomp --enable-libatomic --enable-lto --enable-graphite --enable-checking=release --enable-fully-dynamic-string --enable-version-specific-runtime-libs --enable-libstdcxx-filesystem-ts=yes --disable-isl-version-check --disable-libstdcxx-pch --disable-libstdcxx-debug --enable-bootstrap --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-gnu-as --with-gnu-ld --with-arch=nocona --with-tune=core2 --with-libiconv --with-system-zlib --with-gmp=/c/mingw610/prerequisites/x86_64-w64-mingw32-static --with-mpfr=/c/mingw610/prerequisites/x86_64-w64-mingw32-static --with-mpc=/c/mingw610/prerequisites/x86_64-w64-mingw32-static --with-isl=/c/mingw610/prerequisites/x86_64-w64-mingw32-static --with-pkgversion='x86_64-posix-seh, Built by MinGW-W64 project' --with-bugurl=http://sourceforge.net/projects/mingw-w64 CFLAGS='-O2 -pipe -I/c/mingw610/x86_64-610-posix-seh-rt_v5/mingw64/opt/include -I/c/mingw610/prerequisites/x86_64-zlib-static/include -I/c/mingw610/prerequisites/x86_64-w64-mingw32-static/include' CXXFLAGS='-O2 -pipe -I/c/mingw610/x86_64-610-posix-seh-rt_v5/mingw64/opt/include -I/c/mingw610/prerequisites/x86_64-zlib-static/include -I/c/mingw610/prerequisites/x86_64-w64-mingw32-static/include' CPPFLAGS= LDFLAGS='-pipe -L/c/mingw610/x86_64-610-posix-seh-rt_v5/mingw64/opt/lib -L/c/mingw610/prerequisites/x86_64-zlib-static/lib -L/c/mingw610/prerequisites/x86_64-w64-mingw32-static/lib '
Thread model: posix
gcc version 6.1.0 (x86_64-posix-seh, Built by MinGW-W64 project)
COLLECT_GCC_OPTIONS='-E' '-P' '-v' '-dD' '-shared-libgcc' '-mtune=core2' '-march=nocona'
C:/mingw64/bin/../libexec/gcc/x86_64-w64-mingw32/6.1.0/cc1plus.exe -E -quiet -v -P -iprefix C:/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/6.1.0/ -D_REENTRANT C:/dev/eclipse-oxy-cpp/.metadata/.plugins/org.eclipse.cdt.managedbuilder.core/spec.C -mtune=core2 -march=nocona -dD
#define __STDC__ 1
#define __cplusplus 201402L
(etc)
The following snippet compiles (albeit with an unused-variable warning), but Eclipse highlights issues resolving test in main:
auto test (auto N) {return N;}
int main () {
auto z = test( 3U );
return 0;
}
The parser log gives:
Unresolved names:
Attempt to use symbol failed: test in file ...
The IDE shows this in problems:
Invalid arguments '
Candidates are:
? test(?)
'
For all intents and purposes Eclipse is in fact seeing the correct gcc binary and is using the correct C++ standard, but it still does not agree with the compiler, per the evidence above (unless I missed something?).
Any ideas how to make Eclipse behave with syntax parsing, given that it already has the right C++ standard version?
I get the feeling it's an issue with the indexer rather than the parser, since it has no trouble figuring out the number of arguments or the declaration but can't seem to make sense of functions with auto return type that depends on an auto-typed argument.
UPDATE
It may be related to this bug (looks to be fixed) and this followup bug (still ongoing, test case given is similar to mine).
However, this version of test does not give me problems, so I'm unsure if it's the same bug or something else...
template< typename T > auto test (T N) {return N;}
auto in the parameter type of a function (as opposed to a lambda) is not standard C++14.
It's supported by the Concepts TS (which is supported by GCC >= 6 with the -fconcepts flag), and accepted by GCC >= 4.9 even without -fconcepts as an extension, but it's not standard. (It may become standard in C++20, along with some other parts of the Concepts TS.)
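For what it's worth, if you want to stay within standard C++14 (and keep the CDT parser happy), a generic lambda gives you the equivalent behavior, since auto parameters are allowed in lambdas:
// Standard C++14: a generic lambda instead of an auto function parameter.
auto test = [](auto N) { return N; };

int main() {
    auto z = test(3U);   // N deduced as unsigned int
    return 0;
}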
For example, here's what Clang (which does not support this extension) says about your code in C++14 mode:
test.cpp:1:12: error: 'auto' not allowed in function prototype
auto test (auto N) {return N;}
^~~~
Eclipse CDT does not currently implement this extension either. I filed bug 532085 to track adding support for it; contributions are welcome!
If I give clang -O2 -O3 in the same command line, in that order, is the -O3 going to override the -O2 ? Does the later argument always override?
A build script which I can't change by default adds -O2 and I can only add things after it. Is that an acceptable thing to do?
Operation of the Clang driver is described in the manual page Driver Design & Internals § Driver stages. Note how you can use the -### option to get it to dump the result of each stage. This is not something you can exercise with your broken build system since the option must be listed first. But you can verify that the driver does in fact do what you hope it does:
clang -### foo.cpp -O2 -O3 # dumps yayayada "-O3" yadamore
clang -### foo.cpp -O3 -O2 # dumps yayayada "-O2" yadamore
Where “yada” is spew that I omitted since there’s too much of it. So, indeed, the last -O option you specify is the one that is effective. Which is the expected behavior for any compiler driver.
clang processes options left-to-right, thus the last -O option "wins". This is a feature, exactly for the reason you ask: so that there's a possibility to override defaults set by someone else (e.g. some build system, software developers, ...). Yes, it is totally acceptable, and you are in plenty of good company.
The ultimate reference would be the LLVM source code (option handling is implemented by cl::ParseCommandLineOptions() in file lib/Support/CommandLine.cpp).
Thinking outside the box: even if you cannot change the build script, you may influence it to do what you want. For example, the optimization option may be part of a variable that is taken from an option or from the environment. For example, if the build uses a Makefile, the variable could be called CFLAGS or COPTS and be set with
make CFLAGS=-O3
If the build uses a shell script, maybe something like
CFLAGS="-O3" ./configure
would work. There's no telling without seeing the build.
I'm trying to use CUDA code inside MATLAB mex, under Linux. With the "whole program compilation" mode, it works well for me. I take the following two steps inside Nsight:
(1) Add "-fPIC" as a compiler option to each .cpp or .cu file, then compile them separately, each producing a .o file.
(2) Set the linker command to "mex" and add "-cxx" to indicate that all the .o input files were compiled from C++ sources, and add the library path for CUDA. Also add, as an additional input, a cpp file that contains the mexFunction entry.
This works well and the resulting mex file runs fine under MATLAB. However, when I need to use dynamic parallelism, I have to switch to the "separate compilation" mode in Nsight. I tried the same thing as above, but the linker produces a lot of missing-reference errors, which I wasn't able to resolve.
Then I checked the compilation and linking steps of the "separate compilation" mode. I got confused by what it is doing. It seems that Nsight does two compilation steps for each .cpp or .cu file and produces a .o file as well as a .d file. Like this:
/usr/local/cuda-5.5/bin/nvcc -O3 -gencode arch=compute_35,code=sm_35 -odir "src" -M -o "src/tn_matrix.d" "../src/tn_matrix.cu"
/usr/local/cuda-5.5/bin/nvcc --device-c -O3 -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -x cu -o "src/tn_matrix.o" "../src/tn_matrix.cu"
The linking command is like this:
/usr/local/cuda-5.5/bin/nvcc --cudart static --relocatable-device-code=true -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o "test7" ./src/cu_base.o ./src/exp_bp_wsj_dev_mex.o ./src/tn_main.o ./src/tn_matlab_helper.o ./src/tn_matrix.o ./src/tn_matrix_lib_dev.o ./src/tn_matrix_lib_host.o ./src/tn_model_wsj_dev.o ./src/tn_model_wsj_host.o ./src/tn_utility.o -lcudadevrt -lmx -lcusparse -lcurand -lcublas
What's interesting is that the linker does not take the .d files as input. So I'm not sure how it deals with these files and how I should handle them with the "mex" command when linking.
Another problem is that the linking stage has a lot of options I don't understand (--cudart static --relocatable-device-code=true), which I guess is the reason why I cannot make it work like in the "whole program compilation" mode. So I tried the following:
(1) Compile in the same way as in the beginning of the post.
(2) Preserve the linking command as provided by Nsight but change to use "-shared" option, so that the linker produces a lib file.
(3) Invoke mex with input the lib file and another cpp file containing the mexFunction entry.
This way the mex compilation works and it produces a mex executable as output. However, running the resulting mex executable under MATLAB immediately produces a segmentation fault and crashes MATLAB.
I'm not sure if this way of linking would cause any problem. More strangely, I found that the mex linking step seems to finish trivially without even checking the completeness of the executable, because even if I miss a .cpp file for some function that the mexFunction will use, it still compiles.
EDIT:
I figured out how to manually link into a mex executable that runs correctly under MATLAB, but I haven't figured out how to do that automatically under Nsight, as I can in the "whole program compilation" mode. Here is my approach:
(1) Exclude from build the cpp file which contains the mexFunction entry. Manually compile it with the command "mex -c".
(2) Add "-fPIC" as a compiler option to each of the rest .cpp or .cu file, then compile them separately, each producing a .o file.
(3) Linking will fail because it cannot find the main function. We don't have it since we use mexFunction and it is excluded. This doesn't matter and I just leave it there.
(4) Follow the method in the post below to manually dlink the .o files into a device object file
cuda shared library linking: undefined reference to cudaRegisterLinkedBinary
For example, if step (2) produces a.o and b.o, here we do
nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler '-fPIC' -dlink a.o b.o -o mex_dev.o -lcudadevrt
Note that here the output file mex_dev.o should not exist, otherwise the above command will fail.
(5) Use mex command to link all the .o files produced in step (2) and step (4), with all necessary libraries supplied.
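For example (file names and the library path are illustrative), if step (1) produced mexfunc.o, step (2) produced a.o and b.o, and step (4) produced mex_dev.o, step (5) would be roughly:
mex -cxx -o my_mex mexfunc.o a.o b.o mex_dev.o -L/usr/local/cuda-5.5/lib64 -lcudadevrt
plus whatever other libraries your code needs (e.g. -lcublas, -lcusparse).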
This works and produces a runnable mex executable. The reason I cannot automate step (1) inside Nsight is that if I change the compilation command to "mex", Nsight will also use this command to generate the dependency file (the .d file mentioned in the question text). And the reason I cannot automate steps (4) and (5) in Nsight is that they involve two commands, which I don't know how to fit in. Please let me know if you know how to do this. Thanks!
OK, I figured out the solution. Here are the complete steps for compiling mex programs with "separate compilation mode" in Nsight:
Create a cuda project.
At the project level, change the following build options:
Switch on -fPIC in the compiler option of "NVCC compiler" at the project level.
Add -dlink -Xcompiler '-fPIC' to "Expert Settings" "Command Line Pattern" of the linker "NVCC Linker"
Add the letter o to "Build Artifact" -> "Artifact Extension", since with -dlink in the previous step we are making the output a .o file.
Add mex -cxx -o path_to_mex_bin/mex_bin_filename ./*.o ./src/*.o -lcudadevrt to "Post Build Steps" (add other necessary libs).
UPDATE: In my actual project I moved the last step to a .m file in MATLAB, because otherwise, if I do it while my mex program is running, it could cause MATLAB to crash.
For files that need to be compiled with mex, change these build options for each of them:
Change the compiler to GCC C++ Compiler in Tool Chain Editor.
Go back to compiler setting of GCC C++ Compiler and change Command to mex
Change command line pattern to ${COMMAND} -c -outdir "src" ${INPUTS}
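With those settings, the command Nsight ends up running for such a file should look roughly like this (the file name is illustrative, borrowed from the link line earlier in the question):
mex -c -outdir "src" ../src/exp_bp_wsj_dev_mex.cpp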
Several additional notes:
(1) CUDA-specific details (such as kernel functions and kernel launches) must be hidden from the mex compiler, so they should be put in the .cu files rather than in header files. Here is a trick to put templates involving CUDA details into .cu files.
In the header file (e.g., f.h), you put only the declaration of the function like this:
template<typename ValueType>
void func(ValueType x);
Add a new file named f.inc, which holds the definition
template<>
void func(ValueType x) {
// possible kernel launches which should be hidden from mex
}
In the source code file (e.g., f.cu), you put this
#define ValueType float
#include "f.inc"
#undef ValueType
#define ValueType double
#include "f.inc"
#undef ValueType
// Add other types you want.
This trick can be easily generalized for templated classes to hide details.
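For instance, the class version of the same idea might look like this (type and member names are purely illustrative):
// c.h -- visible to mex and CUDA code alike
template<typename ValueType>
class DeviceBuffer {
public:
    void fill(ValueType x);   // defined in c.inc, compiled in c.cu
};

// c.inc -- included once per concrete type from c.cu
template<>
void DeviceBuffer<ValueType>::fill(ValueType x) {
    // possible kernel launches hidden from mex here
}

// c.cu
#include "c.h"
#define ValueType float
#include "c.inc"
#undef ValueType
#define ValueType double
#include "c.inc"
#undef ValueType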
(2) mex-specific details should also be hidden from the CUDA source files, since mex.h alters the definitions of some system functions, such as printf. So "mex.h" should not be included in header files that might be included from the CUDA source files.
(3) In the mex source file containing the mexFunction entry, one can use the compiler macro MATLAB_MEX_FILE to selectively compile code sections. This way the source file can be compiled into either a mex executable or an ordinary executable, allowing debugging under Nsight without MATLAB (see the sketch below). Here is a trick for building multiple targets under Nsight: Building multiple binaries within one Eclipse project
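A minimal sketch of what that MATLAB_MEX_FILE split might look like (the macro is defined automatically when compiling with mex; the bodies are placeholders):
#ifdef MATLAB_MEX_FILE
#include "mex.h"
// Entry point used when the file is built with the mex command.
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
    // parse prhs, call into the CUDA code, fill plhs
}
#else
// Ordinary entry point used when building a standalone binary under Nsight.
int main(int argc, char **argv) {
    // drive the same CUDA code directly, without MATLAB
    return 0;
}
#endif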
First of all, it should be possible to set up Nsight to use a custom Makefile rather than generate it automatically. See Setting Nsight to run with existing Makefile project.
Once we have a custom Makefile, it may be possible to automate (1), (4), and (5). The advantage of a custom Makefile is that you know exactly what compilation commands will take place.
A bare-bones example:
all: mx.mexa64

mx.mexa64: mx.o
	mex -o mx.mexa64 mx.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt

mx.o: mxfunc.o helper.o
	nvcc -arch=sm_35 -Xcompiler -fPIC -o mx.o -dlink helper.o mxfunc.o -lcudadevrt

mxfunc.o: mxfunc.c
	mex -c -o mxfunc.o mxfunc.c

helper.o: helper.c
	nvcc -arch=sm_35 -Xcompiler -fPIC -c -o helper.o helper.c

clean:
	rm -fv mx.mexa64 *.o
... where mxfunc.c contains the mexFunction but helper.c does not.
EDIT: You may be able to achieve the same effect in the automatic compilation system. Right-click on each source file and select Properties, and you'll get a window where you can add some compilation options for that individual file. For linking options, open the Properties of the project. Do some experiments and pay attention to the actual compilation commands that show up in the console. In my experience, custom options sometimes interact with the automatic system in a weird way. If this method proves too troublesome for you, I suggest that you make a custom Makefile; this way, at least we are not caught by unexpected side-effects.
I am trying to get started using Rcpp and decided to use Eclipse as a development environment since I already use StatET for R. I am having trouble getting even a simple program to compile and run, though, and would appreciate some help!
Briefly, I tried to follow the instructions on the blog http://blog.fellstat.com/?p=170 exactly, for setting up Rcpp, RInside and Eclipse, and for the example program. I am running on Mountain Lion, and installed g++ using the Command Line Tools option in Xcode. I think I've faithfully followed all the steps in the blog, but cannot get the program to compile. I think the problem is in the way the header files are included, as indicated by the snippet of the output below. As far as I can tell, line 52 of /usr/include/c++/4.2.1/cstring is an include statement for <string.h>, and the compiler includes Rcpp/include/string.h instead of the string.h from the standard library elsewhere on the include path.
I am a novice in C++ so I'd really appreciate some pointers on how to proceed.
-Krishna
16:22:38 **** Incremental Build of configuration Debug for project MyTestRCppPackage ****
Info: Internal Builder is used for build
g++ -DINSIDE -I/Library/Frameworks/R.framework/Versions/2.15/Resources/include -I/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include -I/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp -I/Library/Frameworks/R.framework/Versions/2.15/Resources/library/RInside/include -O0 -g3 -Wall -c -fmessage-length=0 -arch x86_64 -v -o src/main.o ../src/main.cpp
Using built-in specs.
Target: i686-apple-darwin11
Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.11~182/src/configure --disable-checking --enable-werror --prefix=/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.11~182/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
/usr/llvm-gcc-4.2/bin/../libexec/gcc/i686-apple-darwin11/4.2.1/cc1plus -quiet -v -I/Library/Frameworks/R.framework/Versions/2.15/Resources/include -I/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include -I/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp -I/Library/Frameworks/R.framework/Versions/2.15/Resources/library/RInside/include -imultilib x86_64 -iprefix /usr/llvm-gcc-4.2/bin/../lib/gcc/i686-apple-darwin11/4.2.1/ -dD -D__DYNAMIC__ -DINSIDE ../src/main.cpp -fPIC -quiet -dumpbase main.cpp -mmacosx-version-min=10.8.3 -m64 -mtune=core2 -auxbase-strip src/main.o -g3 -O0 -Wall -version -fmessage-length=0 -D__private_extern__=extern -o /var/folders/hc/vqp48jt56_v332kc3dqyf5780000gn/T//ccqdmOKI.s
ignoring nonexistent directory "/usr/llvm-gcc-4.2/bin/../lib/gcc/i686-apple-darwin11/4.2.1/../../../../i686-apple-darwin11/include"
ignoring nonexistent directory "/usr/include/c++/4.2.1/i686-apple-darwin11/x86_64"
ignoring nonexistent directory "/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/../../../../i686-apple-darwin11/include"
#include "..." search starts here:
#include <...> search starts here:
/Library/Frameworks/R.framework/Versions/2.15/Resources/include
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/RInside/include
/usr/llvm-gcc-4.2/bin/../lib/gcc/i686-apple-darwin11/4.2.1/include
/usr/include/c++/4.2.1
/usr/include/c++/4.2.1/backward
/usr/local/include
/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2/lib/gcc/i686-apple-darwin11/4.2.1/include
/usr/include
/System/Library/Frameworks (framework directory)
/Library/Frameworks (framework directory)
End of search list.
GNU C++ version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) (i686-apple-darwin11)
compiled by GNU C version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00).
GGC heuristics: --param ggc-min-expand=150 --param ggc-min-heapsize=65536
Compiler executable checksum: b37fef824b01c0a99fb2679acf3b04f1
In file included from /usr/include/c++/4.2.1/cstring:52,
from /usr/include/c++/4.2.1/bits/stl_algobase.h:66,
from /usr/include/c++/4.2.1/memory:53,
from /usr/include/c++/4.2.1/tr1/hashtable:56,
from /usr/include/c++/4.2.1/tr1/unordered_map:37,
from /Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/platform/compiler.h:158,
from /Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/RcppCommon.h:26,
from /Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp.h:27,
from ../src/main.cpp:8:
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:52: error: 'internal' has not been declared
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:52: error: typedef name may not be a nested-name-specifier
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:52: error: expected ';' before '<' token
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:65: error: expected `)' before 'charsxp'
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:70: error: expected ',' or '...' before '&' token
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:75: error: expected unqualified-id before '&' token
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:75: error: expected ',' or '...' before '&' token
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:75: error: 'Rcpp::String::String()' cannot be overloaded
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:55: error: with 'Rcpp::String::String()'
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:85: error: 'Rcpp::String::String(int)' cannot be overloaded
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:70: error: with 'Rcpp::String::String(int)'
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:88: error: expected `)' before 'x'
/Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include/Rcpp/string.h:89: error: expected `)' before 'x'
There are two entirely separate issues here:
Get all you need for Rcpp installed. OS X aspects should be documented on the relevant page maintained by Simon. If you have the tools, and have Rcpp installed, then you should be able to do cppFunction('double nPi(int x) { return x*M_PI; }'), which uses functions supplied with Rcpp to create a callable C++ function accessible to you as nPi() -- and nPi(2) should return a value.
Your choice of IDE and its settings. This has little to do with 1., apart from requiring 1. to work too.
So I would work on 1. and see if I got that sorted out first, and only then turn to 2.
To summarize, the issue I faced was that include files in Rcpp with the same names as those in std were in conflict. In particular, string.h from Rcpp was being included at a point where string.h from std was the right choice, and, as far as I could tell, this was because paths specified via the -I directive are searched before the default paths.
I tried many different alternatives to solve this, including removing and re-installing Xcode and the associated Command Line Tools, as well as installing another g++ compiler using MacPorts. None of these resolved the issue. I then used the -idirafter directive instead of the -I directive for the include search paths for Rcpp and R. I got this hint from gcc include order broken?. This worked because these directories are now searched after the default paths, which precludes (at least so far!) the possibility that string.h from std and string.h from Rcpp come into conflict.
To get step 5 of http://blog.fellstat.com/?p=170 to work I had to set the -idirafter paths in PKG_CPPFLAGS in the file Makevars.
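For reference, a sketch of that Makevars entry (paths copied from the include search list in the log above; adjust for your R version):
PKG_CPPFLAGS = -idirafter /Library/Frameworks/R.framework/Versions/2.15/Resources/include \
               -idirafter /Library/Frameworks/R.framework/Versions/2.15/Resources/library/Rcpp/include \
               -idirafter /Library/Frameworks/R.framework/Versions/2.15/Resources/library/RInside/include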
Thanks to everyone for your suggestions.
You simply have to remove the include path
/Library/Frameworks/R.framework/Resources/library/Rcpp/include/Rcpp
because it:
is unnecessary, as all Rcpp headers are included in the form <Rcpp/XXX>
causes this issue, as the compiler looks for string.h in the Rcpp directory (when it shouldn't).
I have inherited code that I am trying to compile with gcc on Linux.
What library am I looking for that has __builtin_ia32_stmxcsr?
Apologies -- I was too fast to submit; I am running gcc inside NVIDIA's Eclipse. The actual error message is "Function . . . could not be resolved", so I jumped to the conclusion that I needed to reference some lib. As the offending lines have a #if defined(SSE), I take it to mean that the -msse2 switch is present, although I cannot seem to find a copy of the compile command line. [Just learning this Eclipse tool -- very new!]
You don't need to link with anything - the "builtin" in the name is a clue that it's a gcc built-in (intrinsic) compiler function.
However you do need to be compiling for an x86 target with SSE enabled for this to be recognised, e.g. gcc -msse2 ....
Note that you can use the _mm_getcsr intrinsic from <xmmintrin.h> instead of __builtin_ia32_stmxcsr - this would be a little more portable.
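For example, a small self-contained check of the portable route (plain C/C++; it just needs to be compiled for an SSE-enabled target, e.g. with -msse2):
#include <xmmintrin.h>
#include <stdio.h>

int main(void) {
    unsigned int csr = _mm_getcsr();   /* equivalent to __builtin_ia32_stmxcsr() */
    printf("MXCSR = 0x%08x\n", csr);   /* low six bits are the exception flags */
    return 0;
}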
This is a bug in Eclipse's indexer with gcc's __builtin* functions. The bug report is at https://bugs.eclipse.org/bugs/show_bug.cgi?id=352537
The problem is that even the glibc/gcc libraries themselves use these __builtin* functions, so Eclipse complains about a faulty xmmintrin.h etc., which is of course nonsense.
There is a workaround given in the bug report: you can add the function prototypes as user-defined macros for the indexer, but of course this becomes tedious if there are a few more, and some type-checking abilities are lost.