How to get a CMake variable from the command line?

Something like this:
cmake --get-variable=MY_CMAKE_VARIABLE
The variable may exist in an included CMake file.

If you have an existing cache file, you can do:
grep MY_CMAKE_VARIABLE CMakeCache.txt
If you do not yet have a cache file and you want to see what options there are in a CMakeLists.txt file, you can run (from a different directory, since this will write a cache file; note that cmake takes the directory containing CMakeLists.txt, not the file itself):
cmake -L /path/to/source-dir | grep MY_CMAKE_VARIABLE
which will return to you something like
<VARIABLE>:<TYPE>=<VALUE>
If it is an advanced variable, use -LA instead of -L and advanced variables will be included as well. Of course, if you only want the value, you can do:
cmake -L /path/to/source-dir | grep MY_CMAKE_VARIABLE | cut -d "=" -f2
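Note that a plain grep also matches any variable whose name merely contains MY_CMAKE_VARIABLE; anchoring the pattern avoids that:
cmake -L /path/to/source-dir | grep '^MY_CMAKE_VARIABLE:' | cut -d "=" -f2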
EDIT
For example, with a CMakeLists.txt that is:
cmake_minimum_required(VERSION 2.8)
project(test)
include(otherFile.txt)
set(MY_VAR "Testing" CACHE STRING "")
And where otherFile.txt is:
set(MY_OTHER_VAR "Hi" CACHE STRING "")
The command (run from another directory):
cmake -L ../cmaketest
Gives:
-- The C compiler identification is GNU
-- The CXX compiler identification is GNU
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Configuring done
-- Generating done
-- Build files have been written to: /home/tgallagher/cmaketest-build
-- Cache values
CMAKE_BUILD_TYPE:STRING=
CMAKE_INSTALL_PREFIX:PATH=/usr/local
MY_OTHER_VAR:STRING=Hi
MY_VAR:STRING=Testing
So it does show variables from included files; it parses the entire project. The caveats are that it will not show any variables that are not marked with CACHE, it will not show any that are cached INTERNAL, and it will only show ADVANCED ones if -LA is used instead of -L.
If your variables are marked as INTERNAL or are not CACHE'd at all, then there is no way within CMake to pull them out. But non-CACHE'd variables are meant to be transient, so I'm not sure why you would need them outside of a build environment anyway.

Use:
cmake -LA -N /path/to/project
to get a listing of all cache values. The -N is important; it prevents cmake from trying to generate any build files, and just shows you what's in the cache.
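For example, combining this with the grep/cut approach above to read a single cached value without touching the build:
cmake -LA -N /path/to/project | grep '^MY_CMAKE_VARIABLE:' | cut -d "=" -f2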

If the variable you want is not something you're setting, but something from the defaults, you can use
cmake --system-information
And grep that. Note that it takes a second or two to run, which is a bit slow.
If, for example, you're trying to do this in order to configure your cmake vars in the first place, it avoids putting the cart before the horse. :)
You can also pass this a file name. So you can generate the file only if it doesn't exist, and parse it if it does (to save that second or two).
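A minimal sketch of that idea (the file name sysinfo.txt is an arbitrary choice):
# generate the (slow) dump only once, then grep the saved file
test -f sysinfo.txt || cmake --system-information sysinfo.txt
grep MY_CMAKE_VARIABLE sysinfo.txt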

If you need to get a non-cached user variable but can't edit the original CMake script, you can resort to a trick. Create a new CMakeLists.txt file in another directory with the following content:
include(<Path to original CMakeLists.txt>)
message(STATUS "MY_VAR={${MY_VAR}}")
It is quite possible that cmake will produce a lot of errors while running in the new directory; relative paths, if used in the original script, are a definite cause of such errors. But cmake will still print the last value assigned to your variable. Then filter out all the errors and warnings using any well-known text processor (assuming a UNIX environment), for example:
cmake . 2>/dev/null | sed -n 's/-- MY_VAR={\(.*\)}/\1/p'
I use this approach in project maintenance scripts; it is reliable as long as the original CMakeLists.txt has no syntax errors.

-D <var>:<type>=<value>:
When cmake is first run in an empty build tree, it creates a CMakeCache.txt file and populates it with customizable settings for the project. This option may be used to specify a setting that takes priority over the project's default value. The option may be repeated for as many cache entries as desired.
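For example, to seed the cache at configure time and then confirm the value stuck (variable name is illustrative):
cmake -D MY_VAR:STRING=Testing /path/to/source-dir   # seed the cache entry
cmake -L -N . | grep '^MY_VAR:'                      # prints MY_VAR:STRING=Testing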

Related

Issue declaring no language in CMake

I'm trying to integrate CMake into a project that contains a Unity3D project. I want to bypass checks for languages, as the CMakeLists.txt file really just houses a custom command that calls Unity3D's build process. I'm aware of the LANGUAGES NONE setting in the project() command, and also of a similar setting in the set_target_properties() command. However, neither seems to be working. My complete CMakeLists.txt file is pasted below:
cmake_minimum_required (VERSION 3.8)
project (
    "Test-App"
    VERSION 1.0
    LANGUAGES NONE
)
# For the custom command to be marked as not actually creating a file,
# you need to set the symbolic property on it
#set_property (
#    SOURCE UNITY_OUTPUT
#    PROPERTY SYMBOLIC
#)
add_custom_command (
    OUTPUT UNITY_OUTPUT
    COMMAND /Applications/Unity/Unity.app/Contents/MacOS/Unity -quit -batchmode -buildOSXUniversalPlayer ${CMAKE_SOURCE_DIR}/Builds/Test-Build.app
    COMMENT "Building application..."
)
add_executable (
    Test-Build
    UNITY_OUTPUT
)
set_target_properties (
    Test-Build
    PROPERTIES
    LINKER_LANGUAGE NONE
)
However, in my terminal output CMake is still complaining that an internal variable has not been set. I get the following output:
-- Configuring done
CMake Error: Error required internal CMake variable not set, cmake may not be built correctly.
Missing variable is:
CMAKE_NONE_LINK_EXECUTABLE
-- Generating done
-- Build files have been written to: XXX
Note that the Makefile is still generated and I can still build stuff. I simply want to get rid of this annoying error, to be sure that I fully understand how CMake is operating.

how to make bitbake print options of do_configure

I'm having trouble cross-compiling Qt5 for the BeagleBone using OpenEmbedded with bitbake. I think that in the do_configure step not everything is passed from my *.bbappend, and no platform plugins get installed (I need 'linuxfb').
My question is: how do I make bitbake print the list of arguments it passes to ./configure?
There are a few ways to get that info; I would suggest looking in the recipe work directory:
temp/log.do_configure contains the configure task log which should list exact ./configure-command
build/ contains the project's own build-system artefacts
bitbake -e <recipe> | grep <VARIABLE> is very useful if you want to know what value a variable ends up with (check e.g. the PACKAGECONFIG and PACKAGECONFIG_CONFARGS values if you're modifying the package config).
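For example, assuming the recipe is named qtbase (substitute your actual recipe name):
bitbake -e qtbase | grep '^PACKAGECONFIG'
bitbake -e qtbase | grep '^WORKDIR='   # locates the recipe work directory mentioned above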

Suppress "program not found" errors in Eclipse CDT

Most of my team uses a .bat file to set paths and then run a build. The .bat file allows selection of multiple different compiler/target platforms, but all use some version of GCC/G++ or similar compiler.
I created an Eclipse project that simply uses the .bat file rather than re-inventing the wheel and tracking down all the paths needed for each build (which I'd need to update if anyone ever updated the .bat file anyway).
This works great for building, and I can even see compiler errors/warnings, but there are some extra errors always present:
Program "gcc" not found in PATH
Program "g++" not found in PATH
I've seen many questions about these and similar errors, but in those case the user couldn't build, and the solution was to install the tools and/or update their PATH or Eclipse environment settings. I don't want to do that; all the tools I need are installed, and the .bat file works just fine to set the PATH for building. Is there a way to suppress these errors, or have Eclipse not try to find the compiler executable, since the build succeeds anyway?
Edit: As suggested in the answer I've received so far, here is the console output after putting a full path to a compiler in the global discovery settings. That isn't exactly my favorite solution even if it worked, but I'll probably deal with it. Regardless, the errors don't go away:
15:27:24 **** Running scanner discovery: CDT GCC Built-in Compiler Settings MinGW ****
"C:\\redacted\\localapps\\MinGW5\\bin\\g++.exe" -E -P -v -dD C:/Project_Files/redacted/code_workspaces/redacted/.metadata/.plugins/org.eclipse.cdt.managedbuilder.core/spec.C
Reading specs from C:/redacted/localapps/MinGW5/bin/../lib/gcc/mingw32/3.4.2/specs
Configured with: ../gcc/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.2 (mingw-special)
C:/redacted/localapps/MinGW5/bin/../libexec/gcc/mingw32/3.4.2/cc1plus.exe -E -quiet -v -P -iprefix C:\redacted\localapps\MinGW5\bin\../lib/gcc/mingw32/3.4.2/ C:/Project_Files/redacted/code_workspaces/redacted/.metadata/.plugins/org.eclipse.cdt.managedbuilder.core/spec.C -dD
ignoring nonexistent directory "C:/redacted/localapps/MinGW5/bin/../lib/gcc/mingw32/3.4.2/../../../../mingw32/include"
#define __cplusplus 1
ignoring nonexistent directory "/mingw/lib/gcc/mingw32/../../../include/c++/3.4.2"
#define __STDC_HOSTED__ 1
ignoring nonexistent directory "/mingw/lib/gcc/mingw32/../../../include/c++/3.4.2/mingw32"
#define __GNUC__ 3
ignoring nonexistent directory "/mingw/lib/gcc/mingw32/../../../include/c++/3.4.2/backward"
...
And then a bunch of #defines
The command string I used in the discovery options for this output was C:\redacted\localapps\MinGW5\bin\${COMMAND}.exe ${FLAGS} -E -P -v -dD "${INPUTS}".
Based on the information provided, these errors are coming from the scanner discovery part of CDT.
On my machine the full error looks like this:
Description Location Type
Program "g++" not found in PATH Preferences, C++/Build/Settings/Discovery, [CDT GCC Built-in Compiler Settings MinGW] options C/C++ Scanner Discovery Problem
Program "gcc" not found in PATH Preferences, C++/Build/Settings/Discovery, [CDT GCC Built-in Compiler Settings MinGW] options C/C++ Scanner Discovery Problem
Or as a screenshot
What is going on here is Eclipse CDT is (attempting to) launch GCC and G++ to find out what the global settings are for things like include paths, etc.
To fix the problem, go to the Location specified in the error message and adjust the scanner settings. Here is the matching setting to go with the specific error I received.
Your error might be in the project or in the global settings.
To update the MinGW setting, you can provide the path to a batch file that looks like GCC/G++ but sets up your environment correctly first, or you can point directly at the GCC that Eclipse CDT did not find on its own.
For example you can have:
D:\path\to\my\compilers\${COMMAND}.exe ${FLAGS} -E -P -v -dD "${INPUTS}"
As the setting instead of the default.
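If you'd rather keep using the team's .bat file, one possibility (the wrapper name and the team_env.bat path here are hypothetical) is to point the discovery setting at a small wrapper batch script that sets up the environment first and then forwards all its arguments to the real compiler:
@echo off
rem hypothetical wrapper (e.g. gcc.bat): set up PATH the same way the team's
rem build script does, then forward all arguments to the real compiler
call C:\path\to\team_env.bat
C:\redacted\localapps\MinGW5\bin\gcc.exe %*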
To aid the debugging, check the Allocate console in the Console View to see exactly what is being run and what output is being generated.
And here is what you might see when it does not work. Hopefully the error messages in the console are sufficient to resolve the problem on your machine.
21:12:54 **** Running scanner discovery: CDT GCC Built-in Compiler Settings MinGW ****
"D:\\path\\to\\my\\compilers\\g++.exe" -E -P -v -dD C:/Temp/workspace/.metadata/.plugins/org.eclipse.cdt.managedbuilder.core/spec.C
Cannot run program "D:\path\to\my\compilers\g++.exe": Launching failed
Error: Program "D:\path\to\my\compilers\g++.exe" not found in PATH
PATH=[\bin;\bin; -- snip --]
21:12:54 Build Finished (took 37ms)
Here is a screenshot to match:
If it does work, you should see lots of #defines and the like showing the global state of your compiler.

mex linking of cuda code in separate compilation mode

I'm trying to use CUDA code inside MATLAB mex, under Linux. With the "whole program compilation" mode, it works well for me. I take the following two steps inside Nsight:
(1) Add "-fPIC" as a compiler option to each .cpp or .cu file, then compile them separately, each producing a .o file.
(2) Set the linker command to be "mex" and add "-cxx" to indicate that all the .o input files are to be treated as cpp files, and add the library path for CUDA. Also add a cpp file that contains the mexFunction entry as an additional input.
This works well and the resulting mex file runs fine under MATLAB. Later, when I need to use dynamic parallelism, I have to switch to the "separate compilation mode" in Nsight. I tried the same steps as above, but the linker produces a lot of missing-reference errors which I wasn't able to resolve.
Then I checked the compilation and linking steps of the "separate compilation" mode. I got confused by what it is doing. It seems that Nsight does two compilation steps for each .cpp or .cu file and produces a .o file as well as a .d file. Like this:
/usr/local/cuda-5.5/bin/nvcc -O3 -gencode arch=compute_35,code=sm_35 -odir "src" -M -o "src/tn_matrix.d" "../src/tn_matrix.cu"
/usr/local/cuda-5.5/bin/nvcc --device-c -O3 -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -x cu -o "src/tn_matrix.o" "../src/tn_matrix.cu"
The linking command is like this:
/usr/local/cuda-5.5/bin/nvcc --cudart static --relocatable-device-code=true -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o "test7" ./src/cu_base.o ./src/exp_bp_wsj_dev_mex.o ./src/tn_main.o ./src/tn_matlab_helper.o ./src/tn_matrix.o ./src/tn_matrix_lib_dev.o ./src/tn_matrix_lib_host.o ./src/tn_model_wsj_dev.o ./src/tn_model_wsj_host.o ./src/tn_utility.o -lcudadevrt -lmx -lcusparse -lcurand -lcublas
What's interesting is that the linker does not take the .d files as input, so I'm not sure how it deals with these files and how I should process them with the "mex" command when linking.
Another problem is that the linking stage has a lot of options I don't understand (--cudart static --relocatable-device-code=true), which I guess is the reason why I cannot make it work like in the "whole program compilation" mode. So I tried the following:
(1) Compile in the same way as in the beginning of the post.
(2) Preserve the linking command as provided by Nsight, but change it to use the "-shared" option, so that the linker produces a lib file.
(3) Invoke mex with input the lib file and another cpp file containing the mexFunction entry.
This way mex compilation works and produces a mex executable as output. However, running the resulting mex executable under MATLAB immediately produces a segmentation fault and crashes MATLAB.
I'm not sure if this way of linking would cause any problem. More strangely, I found that the mex linking step seems to finish trivially without even checking the completeness of the executable: even if I omit a .cpp file defining some function that the mexFunction will use, it still links.
EDIT:
I figured out how to manually link into a mex executable which runs correctly under MATLAB, but I haven't figured out how to do that automatically under Nsight, as I can in the "whole program compilation" mode. Here is my approach:
(1) Exclude from build the cpp file which contains the mexFunction entry. Manually compile it with the command "mex -c".
(2) Add "-fPIC" as a compiler option to each of the rest .cpp or .cu file, then compile them separately, each producing a .o file.
(3) Linking will fail because it cannot find the main function. We don't have it since we use mexFunction and it is excluded. This doesn't matter and I just leave it there.
(4) Follow the method in the post below to manually dlink the .o files into a device object file
cuda shared library linking: undefined reference to cudaRegisterLinkedBinary
For example, if step (2) produces a.o and b.o, here we do
nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler '-fPIC' -dlink a.o b.o -o mex_dev.o -lcudadevrt
Note that here the output file mex_dev.o should not exist, otherwise the above command will fail.
(5) Use mex command to link all the .o files produced in step (2) and step (4), with all necessary libraries supplied.
This works and produces a runnable mex executable. The reason I cannot automate step (1) inside Nsight is that if I change the compilation command to "mex", Nsight will also use this command to generate a dependency file (the .d file mentioned in the question text). And the reason I cannot automate steps (4) and (5) in Nsight is that they involve two commands, which I don't know how to fit in. Please let me know if you know how to do this. Thanks!
OK, I figured out the solution. Here are the complete steps for compiling mex programs with "separate compilation mode" in Nsight:
Create a cuda project.
At the project level, change the following build options:
Switch on -fPIC in the compiler option of "NVCC compiler" at the project level.
Add -dlink -Xcompiler '-fPIC' to "Expert Settings" "Command Line Pattern" of the linker "NVCC Linker"
Add the letter o to "Build Artifact" -> "Artifact Extension", since with -dlink in the previous step we are making the output a .o file.
Add mex -cxx -o path_to_mex_bin/mex_bin_filename ./*.o ./src/*.o -lcudadevrt to "Post Build Steps", (add other necessary libs)
UPDATE: In my actual project I moved the last step to a .m file in MATLAB, because otherwise, if I do it while my mex program is running, it could cause MATLAB to crash.
For files that need to be compiled with mex, change these build options for each of them:
Change the compiler to GCC C++ Compiler in Tool Chain Editor.
Go back to the compiler settings of GCC C++ Compiler and change the Command to mex
Change command line pattern to ${COMMAND} -c -outdir "src" ${INPUTS}
Several additional notes:
(1) CUDA-specific details (such as kernel functions and calls to kernel functions) must be hidden from the mex compiler, so they should be put in the .cu files rather than in the header files. Here is a trick to put templates involving CUDA details into .cu files.
In the header file (e.g., f.h), you put only the declaration of the function like this:
template<typename ValueType>
void func(ValueType x);
Add a new file named f.inc, which holds the definition
template<>
void func(ValueType x) {
// possible kernel launches which should be hidden from mex
}
In the source code file (e.g., f.cu), you put this
#define ValueType float
#include "f.inc"
#undef ValueType
#define ValueType double
#include "f.inc"
#undef ValueType
// Add other types you want.
This trick can be easily generalized for templated classes to hide details.
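For example, a minimal sketch of the class version (file and type names here are illustrative, following the same pattern): the header declares the member function, the .inc file holds the specialized definition, and the .cu file stamps it out per type:
// in the header (e.g., v.h): declaration only, no CUDA details
template<typename ValueType>
class Vec {
public:
    void scale(ValueType a); // definition lives in v.inc, compiled via the .cu file
};

// in v.inc: the definition, specialized for whatever ValueType expands to
template<>
void Vec<ValueType>::scale(ValueType a) {
    // kernel launches hidden from mex go here
}

// in v.cu
#include "v.h"
#define ValueType float
#include "v.inc"
#undef ValueType
#define ValueType double
#include "v.inc"
#undef ValueType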
(2) mex-specific details should likewise be hidden from the CUDA source files, since mex.h alters the definitions of some system functions, such as printf. So "mex.h" should not be included in header files that could potentially be included in the CUDA source files.
(3) In the mex source code file containing the entry mexFunction, one can use the compiler macro MATLAB_MEX_FILE to selectively compile code sections. This way the source file can be compiled into either a mex executable or an ordinary executable, allowing debugging under Nsight without MATLAB. Here is a trick for building multiple targets under Nsight: Building multiple binaries within one Eclipse project
First of all, it should be possible to set up Nsight to use a custom Makefile rather than generating it automatically. See Setting Nsight to run with existing Makefile project.
Once we have a custom Makefile, it may be possible to automate steps (1), (4), and (5). The advantage of a custom Makefile is that you know exactly what compilation commands will take place.
A bare-bones example:
all: mx.mexa64

mx.mexa64: mx.o
	mex -o mx.mexa64 mx.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt

mx.o: mxfunc.o helper.o
	nvcc -arch=sm_35 -Xcompiler -fPIC -o mx.o -dlink helper.o mxfunc.o -lcudadevrt

mxfunc.o: mxfunc.c
	mex -c -o mxfunc.o mxfunc.c

helper.o: helper.c
	nvcc -arch=sm_35 -Xcompiler -fPIC -c -o helper.o helper.c

clean:
	rm -fv mx.mexa64 *.o
... where mxfunc.c contains the mxFunction but helper.c does not.
EDIT: You may be able to achieve the same effect in the automatic compilation system. Right-click on each source file and select Properties, and you'll get a window where you can add compilation options for that individual file. For linking options, open the Properties of the project. Do some experiments and pay attention to the actual compilation commands that show up in the console. In my experience, custom options sometimes interact with the automatic system in a weird way. If this method proves too troublesome for you, I suggest you make a custom Makefile; that way, at least you are not caught by unexpected side-effects.

Using Makefile to add new source code files to Postgresql

I'm working on adding some functionality to the storage manager module in Postgresql.
I have already added a few source files to the smgr folder, and I was able to have the make system include them by adding their names to the OBJS list in the Makefile inside the smgr folder (i.e., when I add A.c, I add A.o to the OBJS list).
That was working fine. Now I'm trying to add a new file, hdfs_test.c, to the project. The problem with this file is that it requires some extra directives in its compilation command (-I and -L directives).
The gcc command is:
gcc hdfs_test.c -I/HDFS_HOME/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include -L/HDFS_HOME/hdfs/src/c++/libhdfs -L/HDFS_HOME/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs -o hdfs_test
Therefore, simply adding hdfs_test.o to the OBJS list doesn't work.
I tried editing the Makefile to look like this:
OBJS = md.o smgr.o smgrtype.o A.o B.o hdfs_test.o
MyRule1 : hdfs_test.c
	gcc tati.c -c -I/diskless/taljab1/Workspace/HDFS_Append/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include -L/diskless/taljab1/Workspace/HDFS_Append/hdfs/src/c++/libhdfs -L/diskless/taljab1/Workspace/HDFS_Append/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server -ljvm -lhdfs
but it didn't work out, and I kept getting error messages from make trying to compile hdfs_test.c without the extra directives.
How do I force make to include my compilation directives for hdfs_test.c?
Thanks
You don't need to pass -l and -L at compile time, only at link time. At compile time only -I (include path) directives are required to help the compiler find any extra headers.
You should compile your source file to a .o file, the same as all the others. Then add the -L and -l directives to the link command line when the linker is invoked to create the postgres executable. That means all you need to edit in src/backend/storage/smgr/Makefile is the OBJS line to add your output object, as you've already done. Remove your custom rule; it's unnecessary as well as incorrect.
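If the extra -I paths are needed only when compiling this one file, one possibility (a generic GNU make feature, not something specific to the PostgreSQL build system) is a target-specific variable in the smgr Makefile:
# extra include paths for hdfs_test.o only (GNU make target-specific variable)
hdfs_test.o: CPPFLAGS += -I/HDFS_HOME/hdfs/src/c++/libhdfs -I/usr/lib/jvm/default-java/include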
Just add your extra libraries to the $(LIBS) make variable and add your -L paths to $(LDFLAGS) via src/Makefile.global. src/Makefile.global is generated by configure from src/Makefile.global.in so you actually need to modify configure's behavior to add your includes, library paths and libraries. Don't edit configure directly either; edit configure.in and re-generate it with autoconf.
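As a rough sketch of the kind of additions meant here (library names and paths copied from the question; in a proper setup configure would fill these in):
# in src/Makefile.global.in (then re-generate configure / re-run it)
LDFLAGS += -L/HDFS_HOME/build/c++/Linux-i386-32/lib -L/usr/lib/jvm/default-java/jre/lib/i386/server
LIBS += -lhdfs -ljvm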
Yes, GNU Autotools is sometimes referred to as autohell for a reason. It's a bit ... interesting ... to work with sometimes, and there can be a lot of indirection involved in doing simple things.