Can a mex s-function component be recompiled only if it changed?

In the first phase of testing, my code has to run in a Matlab Simulink environment (2010b, 32-bit), compiled as an S-function. The compiler is the one that comes with Visual Studio 2010.
I currently have a script that compiles my code that looks like this:
mex -c -v foo_main.c
mex -c -v foo_1.c
mex -c -v foo_2.c
% etc...
mex -v foo_wrapper.c foo_main.obj foo_1.obj foo_2.obj % etc...
Over the years the number of files has grown considerably, to the point that compilation takes quite some time.
My question is: is there a way to check whether an *.obj file is out of date with respect to its *.c counterpart, and recompile it only when necessary?
I am wary of writing special-purpose scripts, because I would have to change them case by case, and I see that as a road prone to errors and unnecessary risk.
EDIT:
My current solution is to take the date property of the files and compare them:
c_file = dir(fullfile(pwd, 'foo.c'));
obj_file = dir(fullfile(pwd, 'foo.obj'));
if (datenum(c_file.date) > datenum(obj_file.date))
    mex -c -v foo.c
end
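In practice I would loop this check over the whole file list; a rough sketch (the file list below is only a placeholder for my real modules):
% Recompile only the sources whose .obj is missing or older than the .c.
% File names here are placeholders, not my actual module names.
sources = {'foo_main.c', 'foo_1.c', 'foo_2.c'}; % etc...
for k = 1:numel(sources)
    src = sources{k};
    obj = [src(1:end-2) '.obj'];
    src_info = dir(fullfile(pwd, src));
    obj_info = dir(fullfile(pwd, obj));
    if isempty(obj_info) || datenum(src_info.date) > datenum(obj_info.date)
        fprintf('Compiling %s...\n', src);
        mex('-c', '-v', src);
    end
end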
I understand that this is not the cleanest solution and, in the comments, m.s. suggests using an nmake file. I have never created one and I do not know how to use it from a Matlab script.
Can it be done? Which steps should I follow to create and use it?

Link to gfortran for mex file

I am trying to create mex files to use some C++ code in Matlab, but I am running into a compilation error which, I suspect, goes beyond Matlab itself. I am on Windows.
The example I'm struggling with is the MEX interface example in the Armadillo linear algebra library, available here. The Matlab file I'm running looks like this:
% Compile the demo as a mex file
mex -lgfortran armaMex_demo.cpp -I/path/to/armadillo
The error I get is:
Error using mex
MEX cannot find library 'gfortran' specified with the -l option.
MEX looks for a file with one of the names:
libgfortran.lib
gfortran.lib
Please specify the path to this library with the -L option.
Trying some things I found online, I installed MinGW and cygwin64 and added the paths to their bin folders to my PATH variable. Before that I also installed MinGW via Matlab.
My question: How do I get it to link with gfortran?
If I search C:\ for libgfortran I find some files, but none of them have the extension .lib (only e.g. .dll and .a), and if I search for gfortran I find some .exe files but again no .lib file. If I run gfortran -dumpversion in the terminal, I get 4.9.3 back, so it is clearly there somewhere. I am obviously missing something, and this is really not my forte, to say the least.
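For what it's worth, I understand the error to mean that mex wants an explicit library path, along these lines (the path below is only a placeholder, since I don't know which directory, if any, actually contains a gfortran.lib or libgfortran.lib on my system):
% Placeholder library path -- this shows the form mex asks for, not a working call
mex -lgfortran -LC:\path\to\gfortran\lib armaMex_demo.cpp -I/path/to/armadillo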

Matlab missing dependency MEX-file

I have a script in Matlab that calls other libraries. I use Matlab version 2012a on Linux. I get the error below and I don't know how to fix it.
The error is:
Invalid MEX-file '/home/XXX/nearest_neighbors.mexa64':
libflann.so.1.8: cannot open shared object file: No such file or
directory
Error in flann_search (line 82)
[indices,dists] = nearest_neighbors('find_nn', data, testset, n, params);
Error in MyScript (line 73)
[nresult, ndists] = flann_search(Ntraindata', Ntraindata', resu.KNN, struct('algorithm','composite',...
The library you are referring to - https://github.com/mariusmuja/flann/ - has the nearest_neighbors function written as MEX code. MEX code is C/C++ code used to interface with MATLAB. People usually move computationally heavy code into MEX because it handles loops and similar constructs faster. The inputs come from MATLAB and are sent to the MEX function, and the outputs come from the MEX function and are piped back to MATLAB. It's basically a nice black box that you can use just like any other MATLAB function. In fact, many of the functions that ship with MATLAB have MEX wrappers written to speed them up.
You are getting that error because you need to compile the nearest_neighbors function so that there is a MEX wrapper that can be called in MATLAB. That wrapper is missing because you haven't compiled the code.
First, you need to set up MEX. Make sure you have a compiler that is compatible with your version of MATLAB. You can do that by visiting this link:
http://www.mathworks.com/support/compilers/R20xxy/index.html
xx is the version number that belongs to your MATLAB and y is the character code that comes after it. For example, if you are using R2013a, you would visit:
http://www.mathworks.com/support/compilers/R2013a/index.html
Once you're there, find the section for your operating system and make sure you have one of the supported compilers installed. Once you have that installed, go into MATLAB and, at the command prompt, type:
mex -setup
This will allow you to set up MEX and choose the compiler you want. Given your error, you're using Linux 64-bit, so it should be very easy for you to get GCC. Just make sure that you choose a version of GCC that is compatible with your version of MATLAB. Once you choose the compiler, you can compile the code by doing this in the command prompt:
>> mex -v -O nearest_neighbors.cpp
This should generate the nearest_neighbors MEX file for you. Once that's done, you can now run the code.
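As a quick sanity check (a general MATLAB check, not something from the FLANN manual), you can confirm that MATLAB now resolves the name to the compiled MEX-file:
% exist returns 3 when the name resolves to a MEX-file on the path
exist('nearest_neighbors', 'file')
% which shows the full path of the file that will actually be called
which nearest_neighbors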
For more detailed instructions, check out FLANN's user manual - http://people.cs.ubc.ca/~mariusm/uploads/FLANN/flann_manual-1.8.4.pdf - It tells you how to build and compile it for MATLAB use.
Good luck!

Matlab: invalid mex file library not loaded

I have created a mex function (more specifically, using CUDA).
The compilation was successful and I got a mex file, zMul.mexmaci64, but on execution Matlab reported an error:
Invalid MEX-file '/Users/zlw/Documents/MATLAB/lowComplexity/cbased/matMulGPU/zMul.mexmaci64':
dlopen(/Users/zlw/Documents/MATLAB/lowComplexity/cbased/matMulGPU/zMul.mexmaci64, 1):
Library not loaded: #rpath/libcublas.6.0.dylib
Referenced from: /Users/zlw/Documents/MATLAB/lowComplexity/cbased/matMulGPU/zMul.mexmaci64
Reason: image not found
What should I do to solve it?
Additional info:
Setting the environment variables (PATH, LD_LIBRARY_PATH, DYLD_LIBRARY_PATH) in Matlab and in .bash_profile doesn't work for me.
I'm fairly sure the environment variables are set correctly, because when I created an alias to the dylib file, Matlab detected it and tried to load it, but failed with the message: no suitable image found.
Thanks!
Use otool -L in both Matlab and the UNIX console.
In Matlab:
!otool -L /path/to/zMul.mexmaci64
In UNIX console:
otool -L /path/to/zMul.mexmaci64
Try to find the difference between them. If the dependencies differ, that is probably what is breaking the MEX binary. You might need to apply the same technique recursively to the dependent dylib objects. Typically, forcing the version that appears in the UNIX console with DYLD_INSERT_LIBRARIES solves the problem.
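As an additional check (my own suggestion, separate from the otool comparison), you can confirm which loader paths the MATLAB process itself sees, since MATLAB is often launched with a different environment than your interactive shell:
% Inside MATLAB: show the dynamic loader search paths as MATLAB sees them
getenv('DYLD_LIBRARY_PATH')
getenv('DYLD_FALLBACK_LIBRARY_PATH')
% Compare against echo $DYLD_LIBRARY_PATH in the UNIX console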
Another possibility is C++ runtime compatibility. If you're using OS X Mavericks, check that your MEX command uses libc++ rather than libstdc++ in mexopts.sh. Below is my example configuration from mexopts.sh:
CC='clang'
CXX='clang++'
SDKROOT='/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/'
MACOSX_DEPLOYMENT_TARGET='10.9'
CFLAGS="$CFLAGS -Dchar16_t=uint16_t"
CXXFLAGS="$CXXFLAGS -std=c++11 -stdlib=libc++ -DCHAR16_T"
CXXLIBS="$MLIBS -lc++"
This post might help: http://www.seaandsailor.com/matlab-xcode6.html
It was easier than I thought. Just replace every 10.x with your OS X version and add -Dchar16_t=UINT16_T to CLIBS in the mexopts.sh file.

mex linking of cuda code in separate compilation mode

I'm trying to use CUDA code inside a MATLAB mex file, under Linux. In "whole program compilation" mode it works well for me. I take the following two steps inside Nsight:
(1) Add "-fPIC" as a compiler option to each .cpp or .cu file, then compile them separately, each producing a .o file.
(2) Set the linker command to "mex" and add "-cxx" to indicate that all the .o input files come from C++ sources, and add the CUDA library path. Also add, as an additional input, a cpp file that contains the mexFunction entry.
This works well and the resulting mex file runs fine under MATLAB. However, when I need to use dynamic parallelism, I have to switch to "separate compilation mode" in Nsight. I tried the same approach as above, but the linker produces a lot of missing-reference errors which I was not able to resolve.
Then I checked the compilation and linking steps of "separate compilation" mode and got confused by what it does. It seems that Nsight runs two compilation steps for each .cpp or .cu file, producing a .o file as well as a .d file, like this:
/usr/local/cuda-5.5/bin/nvcc -O3 -gencode arch=compute_35,code=sm_35 -odir "src" -M -o "src/tn_matrix.d" "../src/tn_matrix.cu"
/usr/local/cuda-5.5/bin/nvcc --device-c -O3 -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -x cu -o "src/tn_matrix.o" "../src/tn_matrix.cu"
The linking command is like this:
/usr/local/cuda-5.5/bin/nvcc --cudart static --relocatable-device-code=true -gencode arch=compute_35,code=compute_35 -gencode arch=compute_35,code=sm_35 -link -o "test7" ./src/cu_base.o ./src/exp_bp_wsj_dev_mex.o ./src/tn_main.o ./src/tn_matlab_helper.o ./src/tn_matrix.o ./src/tn_matrix_lib_dev.o ./src/tn_matrix_lib_host.o ./src/tn_model_wsj_dev.o ./src/tn_model_wsj_host.o ./src/tn_utility.o -lcudadevrt -lmx -lcusparse -lcurand -lcublas
What's interesting is that the linker does not take the .d files as input, so I'm not sure how they are used and how I should handle them with the "mex" command when linking.
Another problem is that the linking stage has a lot of options I don't understand (--cudart static --relocatable-device-code=true), which I suspect is why I cannot make it work as in "whole program compilation" mode. So I tried the following:
(1) Compile in the same way as in the beginning of the post.
(2) Keep the linking command as provided by Nsight, but switch to the "-shared" option so that the linker produces a library file.
(3) Invoke mex with input the lib file and another cpp file containing the mexFunction entry.
This way the mex compilation works and produces a mex executable as output. However, running the resulting mex executable under MATLAB immediately produces a segmentation fault and crashes MATLAB.
I'm not sure whether this way of linking causes any problem. More strangely, I found that the mex linking step seems to finish trivially without even checking the completeness of the executable: even if I omit a .cpp file for some function that the mexFunction will use, it still compiles.
EDIT:
I figured out how to manually link into a mex executable which runs correctly under MATLAB, but I haven't figured out how to do that automatically under Nsight, as I can in "whole program compilation" mode. Here is my approach:
(1) Exclude from the build the cpp file that contains the mexFunction entry. Manually compile it with the command "mex -c".
(2) Add "-fPIC" as a compiler option to each of the remaining .cpp or .cu files, then compile them separately, each producing a .o file.
(3) Linking will fail because it cannot find the main function. We don't have one, since we use mexFunction and its file is excluded. This doesn't matter, and I just leave it.
(4) Follow the method in the post below to manually dlink the .o files into a device object file
cuda shared library linking: undefined reference to cudaRegisterLinkedBinary
For example, if step (2) produces a.o and b.o, here we do
nvcc -gencode arch=compute_35,code=sm_35 -Xcompiler '-fPIC' -dlink a.o b.o -o mex_dev.o -lcudadevrt
Note that here the output file mex_dev.o should not exist, otherwise the above command will fail.
(5) Use mex command to link all the .o files produced in step (2) and step (4), with all necessary libraries supplied.
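For example, with a.o and b.o from step (2), mex_dev.o from step (4), and the object compiled from the mexFunction file in step (1) (call it mexentry.o), the final link is roughly as follows; the output name, mexentry.o, and the exact library list are placeholders:
% Link all objects into the final mex binary; the names here are placeholders
mex -cxx -o my_mex mexentry.o a.o b.o mex_dev.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt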
This works and produces a runnable mex executable. The reason I cannot automate step (1) inside Nsight is that if I change the compilation command to "mex", Nsight will also use that command to generate the dependency file (the .d file mentioned in the question text). And the reason I cannot automate steps (4) and (5) in Nsight is that they involve two commands, and I don't know how to add both. Please let me know if you know how to do this. Thanks!
OK, I figured out the solution. Here are the complete steps for compiling mex programs with "separate compilation mode" in Nsight:
Create a cuda project.
At the project level, change the build options as follows:
Switch on -fPIC in the compiler options of the "NVCC Compiler".
Add -dlink -Xcompiler '-fPIC' to "Expert Settings" -> "Command Line Pattern" of the "NVCC Linker".
Add the letter o to "Build Artifact" -> "Artifact Extension", since with -dlink in the previous step the output is now a .o file.
Add mex -cxx -o path_to_mex_bin/mex_bin_filename ./*.o ./src/*.o -lcudadevrt to "Post Build Steps" (plus any other necessary libs).
UPDATE: In my actual project I moved the last step into a .m file in MATLAB, because doing it while my mex program is running could cause MATLAB to crash.
For files that need to be compiled with mex, change these build options for each of them:
Change the compiler to GCC C++ Compiler in the Tool Chain Editor.
Go back to the compiler settings of GCC C++ Compiler and change Command to mex.
Change the command line pattern to ${COMMAND} -c -outdir "src" ${INPUTS}.
Several additional notes:
(1) CUDA-specific details (such as kernel functions and kernel launches) must be hidden from the mex compiler, so they should go in the .cu files rather than in header files. Here is a trick for putting templates that involve CUDA details into .cu files.
In the header file (e.g., f.h), you put only the declaration of the function like this:
template<typename ValueType>
void func(ValueType x);
Add a new file named f.inc, which holds the definition:
template<>
void func(ValueType x) {
    // possible kernel launches, which should be hidden from mex
}
In the source code file (e.g., f.cu), you put this
#define ValueType float
#include "f.inc"
#undef ValueType
#define ValueType double
#include "f.inc"
#undef ValueType
// Add other types you want.
This trick can be easily generalized for templated classes to hide details.
(2) mex-specific details should also be hidden from the CUDA source files, since mex.h alters the definitions of some system functions, such as printf. So "mex.h" should not be included in header files that might be included from the CUDA source files.
(3) In the mex source file containing the mexFunction entry, one can use the compiler macro MATLAB_MEX_FILE to selectively compile code sections. This way the source file can be compiled into either a mex executable or an ordinary executable, allowing debugging under Nsight without MATLAB. Here is a trick for building multiple targets under Nsight: Building multiple binaries within one Eclipse project
First of all, it should be possible to set up Nsight to use a custom Makefile rather than generate it automatically. See Setting Nsight to run with existing Makefile project.
Once we have a custom Makefile, it may be possible to automate (1), (4), and (5). The advantage of a custom Makefile is that you know exactly what compilation commands will take place.
A bare-bones example:
all: mx.mexa64

mx.mexa64: mx.o
	mex -o mx.mexa64 mx.o -L/usr/local/cuda/lib64 -lcudart -lcudadevrt

mx.o: mxfunc.o helper.o
	nvcc -arch=sm_35 -Xcompiler -fPIC -o mx.o -dlink helper.o mxfunc.o -lcudadevrt

mxfunc.o: mxfunc.c
	mex -c -o mxfunc.o mxfunc.c

helper.o: helper.c
	nvcc -arch=sm_35 -Xcompiler -fPIC -c -o helper.o helper.c

clean:
	rm -fv mx.mexa64 *.o
... where mxfunc.c contains the mexFunction but helper.c does not.
EDIT: You may be able to achieve the same effect in the automatic compilation system. Right-click on each source file and select Properties, and you'll get a window where you can add compilation options for that individual file. For linking options, open the project's Properties. Do some experiments and pay attention to the actual compilation commands that show up in the console. In my experience, custom options sometimes interact with the automatic system in odd ways. If this method proves too troublesome, I suggest you write a custom Makefile; that way, at least you are not caught out by unexpected side effects.

MATLAB Compiler getting stuck during build process

I built a GUI in MATLAB that makes use of the Instrument Control Toolbox, among other things. I tried to compile this GUI with deploytool but it's getting stuck during the compile:
ant:
<mkdir dir="Z:\My Path\MyApp\distrib" />
<mkdir dir="Z:\My Path\MyApp\src" />
mcc -o MyApp -W WinMain:MyApp -T link:exe -d 'Z:\My Path\MyApp\src' -N -p instrument -w enable:specified_file_mismatch -w enable:repeated_file -w enable:switch_ignored -w enable:missing_lib_sentinel -w enable:demo_license -v 'Z:\My Path\MyApp.m'
Compiler version: 4.18.1 (R2013a)
Processing C:\Program Files\MATLAB\R2013a\toolbox\matlab\mcc.enc
Processing C:\Program Files\MATLAB\R2013a\toolbox\instrument\mcc.enc
Processing C:\Program Files\MATLAB\R2013a\toolbox\shared\instrument\mcc.enc
Processing include files...
2 item(s) added.
Processing directories installed with MCR...
The file Z:\My Path\MyApp\src\mccExcludedFiles.log contains a list of functions excluded from the CTF archive.
0 item(s) added.
Generating MATLAB path for the compiled application...
Created 54 path items.
And that's all she wrote. Of note, the final statement "Created 54 path items" doesn't make much sense to me, since I don't have 54 dependencies.
I changed the path the code is on to Z:\My Path\MyApp to keep it generic. It may or may not be important, but there are spaces in the path; I mention this because maybe that's my problem. Z:\ is a network drive rather than a local drive. Some dependencies live on R:\, which is also a network drive. All dependencies are on the MATLAB path or in my local folder.
I'm using MATLAB R2013a x64 and the Windows SDK 7.1 (used mbuild -setup to set this).
The deploytool and MATLAB are both responsive - I can cancel the build process with no problem. I ran the mcc code verbatim as above and it's still stuck in the same place with no indication that it's working. Hitting CTRL-C to interrupt it gives no error messages or anything.
Does anyone know what's going on? It must be something to do with the mcc call, I'm sure of it.
It looks like this problem is due to my license file being located on a network server rather than locally. Working with MathWorks tech support really helped out here.
When using mcc, specify a local license file with the -Y flag. The compile then goes from 20 minutes or so to about 2. An example call:
mcc -m -v -N -Y alpha.lic myApp.m