When building with CMake in two different codebases, the option -j 16 is used, which tends to cause problems where -j 4 doesn't. Is there a global option for this? I can't find where to change it in the Visual Studio Code CMake Tools extension.
-j[cpu_count] is an argument to the generator (e.g. make), and as such can't be set by CMake itself.
My advice is to either use e.g. Ninja (which handles this for you) or invoke CMake via a wrapper script that sets this depending on the codebase in question, e.g. in Python 3: f'-j{cpu_count()}'.
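A minimal sketch of such a wrapper, assuming a Makefile generator and CMake 3.13+ for the -S/-B flags; the codebase-to-job-count mapping is made up for illustration:

#!/usr/bin/env python3
# build.py - hypothetical wrapper that picks a job count per codebase
import subprocess
from multiprocessing import cpu_count
from pathlib import Path

JOB_LIMITS = {'big_project': 4}  # made-up cap for the codebase that chokes on -j 16

def build(source_dir='.', build_dir='build'):
    name = Path(source_dir).resolve().name
    jobs = min(cpu_count(), JOB_LIMITS.get(name, cpu_count()))
    subprocess.run(['cmake', '-S', source_dir, '-B', build_dir], check=True)
    # everything after '--' is passed through to the underlying generator
    subprocess.run(['cmake', '--build', build_dir, '--', f'-j{jobs}'], check=True)

if __name__ == '__main__':
    build()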
I am considering using the executable file generated by either Dymola (dymosim.exe) or OpenModelica (model_name.exe) to run parametric simulations on the same model.
I was wondering: is there any difference between the two .exe files and their related input files (dsin.txt for Dymola, model_name_init.xml for OpenModelica)?
Regarding file sizes, I can see that the Dymola files are smaller. But I was also wondering about execution speed and the flexibility of the input files for scripting.
Lastly, since Dymola is commercial software, is the dymosim.exe file publicly shareable?
I will write this for OpenModelica; the Dymola people can add their own.
I would suggest using FMUs instead of executables, together with some (co-)simulation framework like OMSimulator (via Python scripting) or some other one (PyFMI, etc.). See an example here:
https://www.openmodelica.org/doc/OMSimulator/master/html/OMSimulatorPython.html#example-pi
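As a sketch, a parametric sweep over an exported FMU could look like this with PyFMI; the FMU file name, parameter name, and output variable are placeholders:

# sweep_fmu.py - minimal parametric sweep over an FMU using PyFMI
from pyfmi import load_fmu

results = []
for gain in [0.5, 1.0, 2.0]:            # hypothetical parameter values
    model = load_fmu('model_name.fmu')  # placeholder FMU file
    model.set('controller.k', gain)     # placeholder parameter name
    res = model.simulate(final_time=10.0)
    results.append((gain, res['y'][-1]))  # 'y' is a placeholder output variable
print(results)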
Note that if you have resources such as tables, etc., these will be put inside the FMU if you use Modelica URIs: modelica://LibraryName/Resource/blah. For the generated executables, however, you would need to ship them with the exe, and they would need to be in a specific directory on the other machine. You would also need to ship dependent DLLs for the executables; for the FMUs that is mostly not needed, as they are statically compiled (not true if you have external DLLs that you call from your model).
Simulation speed depends on the model; sometimes one or the other is faster.
For which libraries are supported by OpenModelica, you can check the library coverage:
https://libraries.openmodelica.org/branches/overview-combined.html
If you still want to use executables, here is a list of command line parameters for them: https://www.openmodelica.org/doc/OpenModelicaUsersGuide/latest/simulationflags.html
How to do parameter sweeps via executables:
https://openmodelica.org/doc/OpenModelicaUsersGuide/latest/scripting_api.html#simulation-parameter-sweep
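As a sketch, driving the generated executable from Python could look like this, using the -override and -r simulation flags from the documentation linked above; the parameter name and values are placeholders, and the executable is assumed to sit in the current directory:

# sweep_exe.py - parameter sweep by re-running the OpenModelica executable
import subprocess

for i, gain in enumerate([0.5, 1.0, 2.0]):  # hypothetical values
    subprocess.run(['model_name.exe',
                    f'-override=controller.k={gain}',  # placeholder parameter
                    f'-r=result_{i}.mat'],             # one result file per run
                   check=True)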
For Dymola:
If you have the appropriate binary export license you can generate a dymosim.exe that can be distributed.
Parameter sweeps can be run inside Dymola (the scripts are automatically generated), or from Python, etc.
However, running a parameter sweep in that way does not only use dsin.txt, but also some additional files. There are two reasons:
Reduced overhead of starting/stopping dymosim.exe, especially for small models.
Automatic parallelization.
That part of dymosim is currently not well documented in the manual, but you can run:
dymosim -M (by default, sweeps based on two CSV files, multIn.csv and multOutHeader.csv, generating a third, multOut.csv)
dymosim -M -1 mIn.csv -2 mOutH.csv -3 mOut.csv (if you want different file names)
dymosim -M -n 45 (to generate normal trajectory files: dsres45.mat, dsres46.mat, ...)
dymosim -h (for help)
dymosim -s (normal simulation)
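As a sketch, the default -M sweep could be scripted from Python like this; the CSV layout (a header row of parameter names followed by one row of values per run) is an assumption, so check it against the multIn.csv/multOutHeader.csv files your dymosim build actually reads and writes:

# dymosim_sweep.py - drive a dymosim.exe multi-run sweep via CSV files
import csv
import subprocess

with open('multIn.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['controller.k'])  # assumed: header row of parameter names
    for gain in [0.5, 1.0, 2.0]:       # hypothetical values
        writer.writerow([gain])        # assumed: one row per simulation run

subprocess.run(['dymosim', '-M'], check=True)  # reads multIn.csv, writes multOut.csv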
And if you are really bold, you can pipe to/from dymosim.exe for parameter sweeps.
Another possibility is to use FMUs instead.
I am investigating how feasible it would be to switch to Visual Studio Code for larger and more complex C++ projects with CMake, and I have hit a wall regarding environment variables.
Some projects I'm dealing with require a complex structure of interconnected environment variables; currently this is handled by a mostly manual build in a bash prompt:
$ source envSetupFile
$ mkdir build && cd build
$ cmake .. && make -j $(nproc)
where the envSetupFile contains a lot of different variables, mostly related to the toolchain plus some specific settings. When those are just plain strings, everything works fine; however, there are many cases of variables depending on one another, for example something like (in bash):
export PATH=/opt/custom/lib:$PATH
export ROOT=/some/path
export BIN=$ROOT/bin
export LIB=$ROOT/lib
What I did:
I installed the C/C++ extension pack along with the CMake and CMake Tools extensions, plus EditorConfig for VS Code, and added custom lines to the Visual Studio Code config in the workspace, /projectpath/.vscode/cmake-kits.json, where I set the environment variables. While directly defined values work (and I can verify them), any value that references another variable fails completely.
example:
"environmentVariables": {
  "POTATO": "aaa",
  "CARROT": "bbb",
  "CABBAGE": "${POTATO}/${CARROT}"
}
I'd expect that afterwards, a CMake run which references ${CABBAGE} would resolve it to aaa/bbb, but instead it takes them as string literals and produces ${POTATO}/${CARROT} as output.
I must confess I'm a bit surprised to find no trace of such a use case in the documentation or examples I've seen so far. Perhaps someone could lend a hand? (I'm running Visual Studio Code version 1.36.1 on Ubuntu 16.04.)
I'm having trouble cross compiling PostgreSQL for my TI Sitara AM335x EVM SK. My host system is an i386 machine running Ubuntu 12.04.
My application is written in C++ using Qt. When I try to compile, I get an error that libpq.so is incompatible. I believe this is because the cross compiler is trying to use the host's libpq.so instead of one for the target system (which, as I have found out, doesn't exist).
I've downloaded the source for PostgreSQL with the intention of cross compiling that in order to give me the libpq.so library that will be compatible with my target system, however there is virtually no information on how to do this.
I have tried using the CC argument with the configure script to change my compiler to the following: CC=/home/tim/ti-sdk-am335x-evm-06.00.00.00/linux-devkit/sysroots/i686-arago-linux/usr/bin/arm-linux-gnueabihf-gcc, but the configure script gives me this error: configure: error: cannot run C compiled programs. If you meant to cross compile, use --host.
The configure script makes only a small reference to the --host option, and the only related information I could find in it concerns mingw and Windows, which isn't what I want. There is no explanation of what a valid host is, and I'm assuming that with the --host option there will be an associated --target.
What arguments can I give the configure script so that it will cross compile with the correct compiler to generate a library that my target device can use? Are there any resources out there that I haven't found in regards to how the --host/--target works or how to use them?
OK, so after fiddling around for a little while, I think I was actually able to cross compile PostgreSQL and answer my own question.
Before going any further, I realized I had forgotten to add the path to my cross compiler to the PATH environment variable. I used export PATH=/path/to/cross/compiler:$PATH to insert the compiler path into the PATH environment variable.
Next, I did some experimenting with the --host option. To start with, I tried running ./configure --host=arm-linux-gnueabihf, and the configure script seemed to accept this as the host argument. I then went on to run the makefile, which resulted in errors: selected processor does not support Thumb mode. A quick search for this error led me to this page: http://www.postgresql.org/message-id/E1Ra1sk-0000Pq-EL#wrigleys.postgresql.org.
This page gave me a bit more information, since the poster seemed to be trying something very similar. One of the responders mentioned that --disable-spinlocks is intended for processors that aren't supported by PostgreSQL by default. I emulated the arguments used on that page and ran ./configure --host=arm-linux CC=arm-linux-gnueabihf-gcc AR=arm-linux-gnueabihf-ar CPP=arm-linux-gnueabihf-cpp --without-readline --without-zlib --disable-spinlocks to generate my makefile. This makefile actually built everything, including the libpq.so library I needed.
Hope this helps somebody else in the future!
What is a command line compiler?
Nowadays, you tend to have environments in which you develop code. In other words, you get an IDE (integrated development environment) comprising an editor, compiler, linker, debugger, and many other wondrous tools (code analysis, refactoring, and so forth).
You never have to type in a command at all, preferring instead a key sequence like Ctrl+F5 which will build your entire project for you.
Not so in earlier days. We had to memorize all sorts of arcane commands to get our source code transformed into executables. Such beautiful constructs as:
cc -I/usr/include -c -o prog.o prog.c
cc -I/usr/include -c -o obj1.o obj1.c
as -o start.o start.s
ld -o prog -L/lib:/usr/lib prog.o obj1.o start.o -lm -lnet
Simple, no?
It was actually a great leap forward when we started using makefiles since we could hide all those arcane commands in a complex file and simply execute make from the command line. It would go away and perform all those commands for us, and only on files that needed it.
Of course, there's still a need for command-line compilers in today's world. The ability to run things like Eclipse in "headless" mode (no GUI) allow you to compile all your stuff in a batch way, without having to interact with the GUI itself.
In addition, both Borland (or whatever they're calling themselves this week) and Microsoft provide command-line compilers at no cost (Microsoft also has its free Express editions).
And gcc is also a command-line compiler. It does its one job very well and leaves it up to other applications to add a front end, if people need that sort of thing.
Don't get me wrong. I think the whole IDE thing is a wonderful idea for a quick code/debug cycle but I find that, once my applications have reached a certain level of maturity, I tend to prefer them in a form where I can edit the code with vim and just run make to produce the end product.
A command-line compiler is one that you run from the command line.
You type in gcc filename.c to compile a file (or something like that). Almost all compilers have a command-line version, and many have GUIs where you never see the command line, but the command line is still there. – Bill K
(Bill K provided a nice answer in the comments... copied here and lightly edited by Mark Harrison, set to community wiki so as not to get rep.)
I've created a deployment project which works rather well, and now I want to add it to a source control repository for others to use.
The main problem I'm facing is that the .prj file which deploytool creates contains absolute paths which will not work on other computers. So far I've tried the following:
Create the standalone exe using just mcc, without deploytool. This works great, but I couldn't find a way to create the final _pkg.exe which contains everything. mcc doesn't seem to be able to create this file, and there doesn't seem to be any other tool which does. Is this really the case?
Edit the .prj file to include relative paths instead of absolute paths. This only works partially, because the .prj file contains a section called MATLABPath which is always replaced with the current MATLAB path; anyone who uses this file will have to check it out, since it is changed when used.
Find a way to generate the .prj file. The mcc documentation says: "Project files created using either mcc or deploytool are eligible to use this option", suggesting there is a way to create a .prj file using mcc, but I wasn't able to find how this can be done.
Is there a solution to this situation?
We ran into the same thing with MATLAB Builder JA. Not only are the paths absolute; MATLAB also adds other toolboxes that it finds on the path, which is irritating, as the build machine doesn't have licenses for a lot of them.
So what we do is:
Edit the prj file by hand to get rid of the absolute paths
Check it into source control and build with mcc
NEVER EVER check in the prj file after it has been touched by deploytool (do all changes by hand in an editor)
Our revision control lets you modify files without an explicit checkout, so using deploytool is not a problem. But why would you want to?
Not sure what the final packager does, but if it just bundles the MCR with the compiled binary, I would just write a replacement.
I personally use a Visual Studio 2005 project to maintain my deployment projects and then convert the VCPROJ to PRJ on the fly using a build command step:
http://younsi.blogspot.com/2011/11/matlab-relative-path-issue-in-prj.html
Here's the mcc option documentation.
What I've found most useful is creating a standalone exe using mcc:
mcc -C -m <function.m> -a <fig> -a <dll> -a <etc> -d <outputPath>
The -C option tells mcc to generate the CTF file (the archive of all the compiled MATLAB code) as a separate file. I've had trouble on some target computers using a single exe with everything compiled in.
The -m option tells mcc to make an exe.
The -a options tell mcc to include the specified file in the package. For instance, if your script uses two fig files and a data file, you need a -a for each to make sure they get put in the output package.
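For instance, filling the template above with made-up file names, a build of a function that uses two figures and a data file might look like:

mcc -C -m myapp.m -a main_gui.fig -a results_gui.fig -a lookup_table.mat -d output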
To tell mcc to use a project file (I've never done this myself):
mcc -F <projectfile>
Also, if you're using R2009a on Windows, there's a known bug that requires some manifest manipulation.