I have this custom build step which invokes MATLAB to compile a .slx file into a .dll file.
function(BUILD_SIMULINK model)
    set(EXECUTE_COMMAND matlab -r "rtwbuild( ${model} )")
    add_custom_target(
        ${model} ALL
        COMMAND ${EXECUTE_COMMAND}
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/${model}.slx
        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${model}.dll
        WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
        COMMENT "Building: ${model}"
    )
endfunction(BUILD_SIMULINK)
However, my problem is that whenever I run cmake --build ., this command is always executed.
How can I prevent this command from executing when the DEPENDS haven't changed and the OUTPUT exists? What I'm looking for is similar to how CMake avoids re-compiling C/C++ files when the source hasn't changed and the appropriate object file exists.
See add_custom_target() command documentation:
The target has no output file and is always considered out of date even if the commands try to create a file with the name of the target. Use the add_custom_command() command to generate a file with dependencies.
There is no OUTPUT keyword for add_custom_target(). I think it's only accepted because CMake sees OUTPUT as a dependency. In fact, I get a CMake warning when I run your code:
...
This project specifies custom command DEPENDS on files in the build tree
that are not specified as the OUTPUT or BYPRODUCTS of any
add_custom_command or add_custom_target:
test_model.dll
You need to use add_custom_command():
cmake_minimum_required(VERSION 2.6)
project(TestCustomTargetWithDependency NONE)

function(BUILD_SIMULINK model)
    #set(EXECUTE_COMMAND matlab -r "rtwbuild( ${model} )")
    set(EXECUTE_COMMAND "${CMAKE_COMMAND}" -E touch "${model}.dll")
    add_custom_command(
        OUTPUT "${model}.dll"
        COMMAND ${EXECUTE_COMMAND}
        DEPENDS "${model}.slx"
        COMMENT "Building: ${model}"
    )
    add_custom_target(
        ${model} ALL
        DEPENDS "${model}.dll"
    )
endfunction(BUILD_SIMULINK)

file(WRITE "test_model.slx" "")
BUILD_SIMULINK(test_model)
Note: relative paths in DEPENDS default to CMAKE_CURRENT_SOURCE_DIR and relative paths in OUTPUT default to CMAKE_CURRENT_BINARY_DIR, so there is no need to prefix them explicitly.
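Putting it together, the original function could look roughly like this (a sketch, not tested: the quoting of the model name inside rtwbuild and the trailing exit so that MATLAB terminates after the build are my assumptions, not something from the question):

function(BUILD_SIMULINK model)
    add_custom_command(
        OUTPUT "${model}.dll"
        # Quote the model name so MATLAB sees a string, and exit afterwards
        # so the build does not hang in an open MATLAB session (assumption).
        COMMAND matlab -r "rtwbuild('${model}'); exit"
        DEPENDS "${model}.slx"
        WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
        COMMENT "Building: ${model}"
    )
    add_custom_target(${model} ALL DEPENDS "${model}.dll")
endfunction()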
OK, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output into different log files (so I can summarize errors/warnings at the end and get some statistics on the files built). I wanted to use the colorgcc Perl script (colorgcc 1.3.2) to colorize the output from gcc, but found elsewhere that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or its build process isn't really an option).
The problem is that these .d files have the following form:
filename.o filename.d : filename.c \
dependant_file1.h \
dependant_file2.h (and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but since it is close enough to a warning/error message, colorgcc inserts color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex to avoid matching this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR environment variable that I set and unset in the build script for these directories (this emulates the previous behavior of no color for non-tty output). This works great if I run the full script, but not if I cd into the directory and just run make (obviously setting and unsetting it manually would work, but that is a pain to do every time). Does anyone have any ideas how to prevent this script from writing color codes into dependency files?
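For reference, the replacement described above looks something like this inside colorgcc (a sketch; the $compiler variable name follows the answer below, and the exact location of the original tty test varies by colorgcc version):

# colorgcc originally falls back to the plain compiler when not on a tty:
#   if (! -t STDOUT) { exec $compiler, @ARGV; }
# Replaced with a check for a NO_COLOR environment variable instead:
if (defined $ENV{'NO_COLOR'})
{
    exec $compiler, @ARGV
        or die("Couldn't exec");
}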
Here's how I worked around this. I added the following to colorgcc to search the gcc arguments for the flags that generate the .d files, and to exec the compiler directly in that case. This was inserted in place of the original TTY check.
# If gcc is being asked to generate dependency files (-M/-MM),
# skip colorizing and run the real compiler directly.
foreach $argnum (0 .. $#ARGV)
{
    if ($ARGV[$argnum] =~ m/-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper Perl way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, while normal source-file builds still get colorized (both to the terminal and to my log files, as I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".
In Xcode, for any Objective-C header, we can view the Generated Interface, which shows how it is seen by Swift in interop.
I'd like to generate that from the command line. Any idea how to do it?
Bonus task: the header should be preprocessed first, so that all #imports are already resolved.
Invoke the interpreter command :type lookup on the module you are trying to inspect.
Suppose you have a header file named header.h. Put it into a separate directory, so that the interpreter will recognise it as a module, and create a modulemap in the same directory. Let's call this directory Mod:
./
./Mod/
./Mod/header.h
./Mod/module.modulemap
Fill in the modulemap with the following:
module Mod {
    header "./header.h"
    export *
}
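If it helps, this one-off shell setup reproduces the layout above (run from the parent directory; file names as in this answer):

mkdir -p Mod
cp header.h Mod/
cat > Mod/module.modulemap <<'EOF'
module Mod {
    header "./header.h"
    export *
}
EOF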
Once it's done, issue a command like this:
echo "import Mod\n:type lookup Mod" | swift -I./Mod | tail -n+2 >| generated-interface.swift
Alternatively, you might want to use a command like this, with equal effect:
echo "import Mod\n:print_module Mod" | swift -deprecated-integrated-repl -I./Mod >| generated-interface.swift
It's broken down as follows:
first we echo the script to be executed: import module and type-lookup it;
then we launch the interpreter and feed the script into it; the -I argument helps it find our module, which is crucial;
then we cut off the “Welcome to Swift” part with tail;
and write the result into generated-interface.swift.
While running the above commands, make sure your working directory is set to one level higher than the Mod directory.
Note that the output might not be exactly the same as from Xcode, but it's very similar.
Just for the record, if you want to produce the interface from a Swift file, then it's just this:
swiftc -print-ast file.swift
I have a confusing problem concerning MinGW make and the Windows command line (Win7):
I have a makefile which calls a vbs file to convert .vsd files to .png files. Here is the relevant part of the makefile (without the variable definitions; you can see the result in the screenshot below).
VSD2PNG: $(VISIO_OUTPUT)
	@echo *** converting visio files to png files finished

define vsd_rule
$(1): $(call FILTER_FUNCTION,$(basename $(notdir $(1))),$(VISIO_FILES))
	$(VSD_SCRIPT) $$< $(VISIO_OUTPUT_DIR)
endef

$(foreach file,$(VISIO_OUTPUT),$(eval $(call vsd_rule,$(file))))
This leads to the error in the screenshot: make aborts with error -1 when it runs the conversion command.
As you can see, the command should call .\tools\visio\convert(.vbs) with two arguments (input file & output directory). Surprisingly, the same command executed in the Windows command line works fine. I tried some modifications to solve the problem, unsuccessfully:
Adding the file extension to the vbs script leads to error 193, but I cannot find out what that means.
Calling the script without any arguments should lead to a runtime error in the script, but it leads to make error -1 again (or, with the file extension, to 193).
Using an absolute path for the script didn't help either.
Does anybody know more about the differences between calling a script directly from the command line and calling it from a makefile, which I thought simply hands the commands to the command line?
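For what it's worth, Windows error 193 is ERROR_BAD_EXE_FORMAT ("%1 is not a valid Win32 application"): make typically starts commands via CreateProcess, which can only launch real executables, whereas the interactive command line resolves .vbs through its file association. A hedged sketch of a workaround, reusing the variable names from the makefile above, is to invoke the script through cscript.exe explicitly:

# Run the VBScript via the console script host instead of relying on the
# .vbs file association (CreateProcess does not consult associations):
VSD_SCRIPT = cscript //nologo .\tools\visio\convert.vbs

# The generated rule then stays exactly as before:
#   $(VSD_SCRIPT) $$< $(VISIO_OUTPUT_DIR)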
Setup:
Windows 7 Enterprise.
Matlab 7.10.0 (R2010a).
mcc compiler: Microsoft Visual C++ 2008 Express.
What's happening:
My project runs fine when run from within MATLAB, but when I compile it with mcc and run the resulting .exe from the command prompt, I get an error.
The mcc command I issue is:
mcc -m -v STARTUP1.m -o EXE_REDUC
The error I receive in the command prompt is:
??? Error using ==> textscan
Invalid file identifier. Use fopen to generate a valid file identifier.
I have a file called LoadXLS.m that loads and reads a .csv file using:
fid = fopen(file,'r');
temp_data = textscan(fid,...args...);
And then I process temp_data.
The csv file I'm trying to load is called spec.csv. It is located two directories down from where STARTUP1.m is stored, which is also where the mcc-generated files go. I have used pathtool to "Add with subfolders" this location, but I am aware that those paths are not carried over to mbuild when compiling.
What I've Tried:
I have added print statements to check the value of fid and make sure it is valid. When I run in MATLAB it has a valid value, but when I run from the command prompt it is always -1 (invalid).
I have removed all addpath() calls, and I have tried adding the STARTUP1.m directory to the mcc CTF archive using:
mcc -m -v -a 'C:\Users\...path...\STARTUP1.m_location' STARTUP1.m -o EXE_REDUC;
However when I do this, I get a different error when running in the command prompt:
Cannot open CTF archive file
'C:\...path...\AppData\Local\Temp\mathworks_tmp_7532_28296'
or
'C:\...path...\AppData\Local\Temp\mathworks_tmp_7532_28296.zip'
??? Undefined function or variable 'matlabrc'.
To fix this, I've tried adding the pragma
%#function matlabrc
to the top of STARTUP1.m to try and enforce its inclusion, but had no success.
I also copied the spec.csv file to a new directory in the ctfroot and changed
fid = fopen(...)
to:
[tempFile, message] = fopen(fullfile(ctfroot, 'Added Config Files', ad.spec_file));
The message is:
message is: No such file or directory
Objective:
Rearranging file locations is a sufficient workaround while the executable only runs on my computer, but the idea is to take this standalone application and distribute it to many people on many different computers. I would like to have a top folder with a startup file and, within this folder, as many subfolders as the package requires. The startup file should be able to access all subfolders and the files within them as necessary.
I read something about the executable actually running from a "secret location" on the machine here: http://matlab.wikia.com/wiki/FAQ
I would just like to be able to group one entire folder tree with all its files into a package containing the executable and be able to run it anywhere.
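A sketch of the usual pattern for this (my assumption of how it would apply here, not tested against this project): bundle the data folder with -a, then resolve paths through ctfroot only when deployed. The subfolder names below are placeholders.

% Resolve a data file both inside MATLAB and in an mcc-deployed build.
% 'subdir1/subdir2' stands in for the real folder tree (placeholder).
if isdeployed
    % Files packaged with "mcc ... -a <folder>" are extracted beneath
    % the CTF root, keeping their relative layout.
    specPath = fullfile(ctfroot, 'subdir1', 'subdir2', 'spec.csv');
else
    specPath = fullfile(pwd, 'subdir1', 'subdir2', 'spec.csv');
end

[fid, message] = fopen(specPath, 'r');
if fid == -1
    error('Could not open %s: %s', specPath, message);
end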
More info:
When I put the spec.csv file in the same directory as STARTUP1.m, the program finds it fine, using mcc without the -a 'path' option and the following in the LoadXLS.m file:
[tempFile, message] = fopen(ad.spec_file,'r');
This project contains GUIs, generates PDFs, generates plots, and also creates a zip directory.
Thank you in advance.
I am trying to use diffstrings.py from Three20 on my iPhone project, and I can't find the proper format for the path arguments (as in "Usage: diffstrings.py [options] path1 path2 ...").
For example, when I run the script in my Xcode project directory like this
~/py/diffstrings.py -b
it analyzes just main.m and finds 0 strings to localize,
then it diffs against the existing fr.lproj and others, and finds that these contain "obsolete strings".
Can anyone post examples of successful command line invocations of diffstrings.py for the options -b, -d and -m?
Taking a quick look at the code here http://github.com/facebook/three20/blob/master/diffstrings.py I see that if you don't specify any paths, it assumes you mean the directory the script itself lives in. So the options are to either copy the .py file to where your .m files are, or simply use the command
~/py/diffstrings.py -b .
That is, give the current directory (.) as the path argument.
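For example, running from your Xcode project root as in the question (the project path here is hypothetical):

cd ~/MyiPhoneProject       # your Xcode project root (hypothetical)
~/py/diffstrings.py -b .   # scan the current tree instead of the script's own directory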