Run Multiple Experiments in BehaviorSpace (NetLogo)

I have created several separate experiments in BehaviorSpace and would like to run all of them in one execution. I read a potential solution here that uses an experiment-number global variable to set variables according to a parameter set, and then uses BehaviorSpace to run each "experiment-number". However, this method would require me to set up over 100 individual experiments. Instead, I would like just a few experiments, each of which varies parameter values. I would like to somehow perform all experiments automatically in one execution instead of picking each experiment separately from the BehaviorSpace GUI. Does anybody have suggestions on how multiple experiments can be run automatically? I have read the documentation for headless mode, but it seems this still only offers a way to run a single experiment from the command line. I would ideally accomplish this in headless mode and be able to put the results of each experiment in different files. Your ideas are much appreciated.
Headless mode does run only one experiment per invocation, but you can invoke it once per experiment from a shell script, writing each experiment's results table to its own file:
#!/bin/sh
# One headless invocation per BehaviorSpace experiment;
# each experiment's table output goes to a separate CSV file.
for EXPERIMENT in experiment1 experiment2 experiment3 experiment4 experiment5
do
  java -Xmx1024m -Dfile.encoding=UTF-8 -cp NetLogo.jar \
    org.nlogo.headless.Main \
    --model MYMODEL.nlogo \
    --experiment "$EXPERIMENT" \
    --table "results-$EXPERIMENT.csv"
done

Passing CFLAGS to configure via bash variable

Just when I think I know how the shell works fairly well, something comes along and stumps me. The following commands were executed on GNU bash, version 3.2.25.
I have several ./configure scripts that all share a group of common configure options, one of them being CFLAGS.
To that end, I have two variables
CFLAGS="-fPIC -O3"
COMMON_CONFIGURE_OPTIONS="CFLAGS=\"$CFLAGS\" --enable-static --disable-shared --prefix=$PREFIX"
When this gets passed to `./configure', it is done like so:
"$FOO/configure" $COMMON_CONFIGURE_OPTIONS
For the life of me, I cannot seem to get this to expand correctly. I have tried manually substituting the value of $CFLAGS into $COMMON_CONFIGURE_OPTIONS. I have tried every combination of single and double quotes under the sun. I have even tried quoting the entire "CFLAGS=..." argument.
The version I gave above yields the following (when set -x is enabled)
../configure 'CFLAGS="-fPIC' '-O3"' --enable-static --disable-shared --prefix=../install
configure: error: unrecognized option: `-O3"'
Try `../configure --help' for more information
What I expected, and what I desire, is for configure to be invoked like
./configure CFLAGS="-fPIC -O3" --enable-static --disable-shared --prefix="$PREFIX"
How can I achieve what I want, and additionally, are there good resources/tips on how to avoid this problem in the future?
To achieve what you want, I think you want to fundamentally change your approach. Assuming your configure scripts are generated by autoconf, I would suggest using a config.site file. That is, simply do something like:
mkdir -p "$PREFIX/share"
echo 'CFLAGS="-fPIC -O3"' > "$PREFIX/share/config.site"
And then invoke configure as:
/path/to/configure --prefix="$PREFIX" --enable-static --disable-shared
Make sure that CONFIG_SITE is not set in the environment when you invoke configure, else the defaults will come from the file named there.
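As an aside, if you do want to keep the options in a shell variable, the root cause of the failure is that bash splits the expanded value into words but never re-parses the quotes inside it. A bash array avoids this, since each element stays a single argument; a minimal sketch reusing the names from the question:
# Each array element is exactly one configure argument;
# expanding with "${...[@]}" preserves the argument boundaries.
CFLAGS="-fPIC -O3"
COMMON_CONFIGURE_OPTIONS=("CFLAGS=$CFLAGS" --enable-static --disable-shared "--prefix=$PREFIX")
"$FOO/configure" "${COMMON_CONFIGURE_OPTIONS[@]}"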

How to pass arguments to memcheck with ctest?

I want to use ctest from the command line to run my tests with memcheck and pass in arguments for the memcheck command.
I can run ctest -R my_test to run my test, and I can even run ctest -R my_test -T memcheck to run it through memcheck.
But I can't seem to find a way to pass arguments to that memcheck command, like --leak-check=full or --suppressions=/path/to/file.
After reading ctest's documentation I've tried using the -D option with CTEST_MEMCHECK_COMMAND_OPTIONS and MEMCHECK_COMMAND_OPTIONS. I also tried setting these as environment variables. None of my attempts produced any different test command. It's always:
Memory check command: /path/to/valgrind "--log-file=/path/to/build/Testing/Temporary/MemoryChecker.7.log" "-q" "--tool=memcheck" "--leak-check=yes" "--show-reachable=yes" "--num-callers=50"
How can I control the memcheck command from the ctest command line?
TL;DR
ctest --overwrite MemoryCheckCommandOptions="--leak-check=full --error-exitcode=100" \
--overwrite MemoryCheckSuppressionFile=/path/to/valgrind.suppressions \
-T memcheck
Explanation
I finally found the right way to override such variables, but unfortunately it's not easy to understand this from the documentation.
So, to help the next poor soul that needs to deal with this, here is my understanding of the various ways to set options for memcheck.
In a CTestConfig.cmake in your top-level source dir, or in a CMakeLists.txt (before calling include(CTest)), you can set MEMORYCHECK_COMMAND_OPTIONS or MEMORYCHECK_SUPPRESSIONS_FILE.
When you include(CTest), CMake will generate a DartConfiguration.tcl in your build directory and setting the aforementioned variables will populate MemoryCheckCommandOptions and MemoryCheckSuppressionFile respectively in this file.
This is the file that ctest parses in your build directory to populate its internal variables for running the memcheck step.
So, if you'd like to set your project's options for memcheck during cmake configuration, this is the way to go.
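For example, a sketch of the configure-time route. The documented approach is set() before include(CTest) in CMakeLists.txt; passing the same variables as cache entries on the cmake command line is my assumption of a shortcut, since include(CTest) reads them when it generates DartConfiguration.tcl:
# Hypothetical paths and option strings; the -D definitions are picked up
# when include(CTest) writes DartConfiguration.tcl into the build directory.
cmake -DMEMORYCHECK_COMMAND_OPTIONS="--leak-check=full --error-exitcode=100" \
      -DMEMORYCHECK_SUPPRESSIONS_FILE=/path/to/valgrind.suppressions \
      /path/to/source
ctest -R my_test -T memcheck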
If instead you'd like to modify these options after you already have a properly configured build directory, you can either:
- Modify the DartConfiguration.tcl in the build directory directly (see the sketch below), but note that this will be overwritten if cmake runs again, since the file is regenerated each time cmake runs.
- Use the ctest --overwrite command-line option to set these memcheck options just for that run, as in the TL;DR above.
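A sketch of the direct-edit route; the file consists of simple "Key: value" lines, and the option string here is just an example:
# Rewrite the options line in the generated file (GNU sed; the change
# is lost the next time cmake regenerates DartConfiguration.tcl).
sed -i 's|^MemoryCheckCommandOptions:.*|MemoryCheckCommandOptions: --leak-check=full --error-exitcode=100|' \
  /path/to/build/DartConfiguration.tcl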
Notes
I've seen mentions online of a CMAKE_MEMORYCHECK_COMMAND_OPTIONS variable. I have no idea what this variable is and I don't think cmake is aware of it in any way.
Setting CTEST_MEMORYCHECK_COMMAND_OPTIONS (the variable that is actually documented in the cmake docs) in your CTestConfig.cmake or CMakeLists.txt has no effect. It seems this variable only works in "CTest Client Scripts", which I have never used.
Unfortunately, both MEMORYCHECK_COMMAND_OPTIONS and MEMORYCHECK_SUPPRESSIONS_FILE aren't documented explicitly in cmake, only indirectly, in ctest documentation and the Testing With CTest tutorial.
When ctest runs in the build directory, it parses this file to populate its internal variables; see https://cmake.org/cmake/help/latest/manual/ctest.1.html#dashboard-client-via-ctest-command-line

Is there a "speed" argument for running BehaviorSpace experiments from the command line?

I would like to run BehaviorSpace experiments from the command line. In the NetLogo interface, there is a speed slider (labelled "normal speed" at its midpoint). By moving this slider, it is possible to increase model speed. Is it possible to specify a "speed" argument on the command line to increase model speed?
java -Xmx1024m -Dfile.encoding=UTF-8 -cp NetLogo.jar \
org.nlogo.headless.Main \
--model Test_model.nlogo \
--experiment experiment1
Thanks in advance for your help.
Alas, you're running the experiment in headless mode, so the speed slider can't help you, because the slider affects view updates only. In headless mode there are no view updates, so your model always runs as quickly as it can.

Run pgTAP with Perl prove instead of pg_prove

I have a test suite as usual for Perl projects, containing a lib and a t directory. The tests in t are structured through subdirectories. So I run them using:
prove -Ilib -r t/
So far nothing special, and as far as I know quite a standard way of testing in Perl.
Since the assumption is that this is the standard way of testing, I'd like to make sure that the following holds:
"If you run prove -r on t, you have tested everything that is there to test".
This is very important, since otherwise you can never be sure that you really ran all the tests and everything is fine. Somebody calling the above would then maybe, without knowing it, call only a part of the available tests, leaving some behind. Quite annoying... tests that are not run are of no help. It should be as easy and predictable as possible for developers to call all the tests! It is a bad thing when you have to look up how to run the rest of the test suite. You might not know about it, or might not do it anyway.
So here comes my problem: I have to integrate some tests using pgTAP, which kindly provides the tool pg_prove. Now I have two commands to do the testing. In addition to running prove -Ilib -r, I also have to run something like e.g. pg_prove -S schema=customerX -U dbuser -d dbname t/pgTAP/*.sql. The problem is not that big if you call the tests automatically from cron or whatever. But it really decreases the chance that we lazy developers run all the tests during our busy days.
So I wonder what would be the best approach to implement the tests in such a way that prove will also include them. Do I have to create some .t files which wrap the whole thing (and if so, how)? Are there any tricks I can do with the whole Harness stuff on CPAN? Would a simple test_all.sh in the root dir, containing both commands, do the job, even if it breaks the assumption I made above?
So my question in short is: Can I run all tests, including pgTAP with prove? If not, is there a best practice for solving my problem?
Thanks a lot.
Yes. In fact, pg_prove just passes everything off to prove. Assuming your pgTAP tests end in .sql, you can run all your tests like this:
prove -lr --ext .sql --ext .t \
--source pgTAP \
--pgtap-option dbname=dbname \
--pgtap-option username=dbuser \
--pgtap-option suffix=.sql \
--pgtap-option set=schema=customerX
If you use Module::Build, you can also have ./Build test run all the tests, too, as I've done for circle.
See the TAP::Parser::SourceHandler::pgTAP documentation for details.
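To preserve the "prove runs everything" property, these options can also go into a .proverc file in the project root, which prove reads automatically (assuming a reasonably recent App::Prove), so developers can keep typing plain prove. A sketch, reusing the options above:
-l -r
--ext .sql --ext .t
--source pgTAP
--pgtap-option dbname=dbname
--pgtap-option username=dbuser
--pgtap-option suffix=.sql
--pgtap-option set=schema=customerX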

Can py.test support multiple -k options?

Can py.test support multiple -k options?
Each testcase belongs to a particular group such as _eventnotification or _interface, etc.
Is it possible to run test cases that belong to either one or both at the same time?
I.e., run test cases that have _eventnotification or _interface in the name at the same time.
I tried the following, and only the test cases with _interface were executed:
py.test -k "_eventnotification" -k "_interface"
If that is not supported, is there another way to do this?
The bad news: pytest-2.3.3 does not support it.
The good news: I took your question as an opportunity to finally enhance "-k" behaviour, so that you can use "not", "or", "and" etc.; see the extended -k example in the pytest docs. It now works like "-m" except that it matches on (substrings of) test names, not markers. You can use this in-development pytest version with "pip install -i http://pypi.testrun.org -U pytest".
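With that enhancement, the original goal becomes a single keyword expression:
# Runs every test whose name contains _eventnotification or _interface.
py.test -k "_eventnotification or _interface"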