Starting AnyLogic experiment programmatically

I need to run a large number of experiments and would like to do so overnight so as to waste as little time as possible. I have some output that I can export using PrintWriter, but I need to be able to start each experiment programmatically after the previous one finishes.
So something like
After experiment:
Experiment63.start().run();

If a parameter variation experiment doesn't do what you need and you really need to run multiple sensitivity analyses, try this:
Create a new Custom Experiment
Delete everything in the properties window
Use YourExperimentClass.main(new String[] {}) to start each experiment.
For example, let's say you have three sensitivity analyses to run:
SensitivityToHeatExperiment.main(new String[] {});
SensitivityToSpeedExperiment.main(new String[] {});
SensitivityToFrictionExperiment.main(new String[] {});
These calls bring up a window for each experiment. Since experiments don't start automatically, you'll need to add that logic if you don't want to click "run" a bunch of times! In each experiment's "Initial experiment setup" section, put run();. This automatically starts the simulation for you.
I haven't quite figured out how to close the windows automatically using this approach: System.exit(0) and experiment.close() shut all the windows the experiments opened, so you'll need a way to tell whether all experiments are done running. One option is to use a common file with a FileLock to ensure the simulations don't run into concurrency problems. FileLock is also handy if all the sensitivity experiments need to write to common files.
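As a rough illustration of that FileLock idea, here is a minimal sketch, assuming every experiment appends result lines to one shared file; the file name and format are placeholders, not part of the original answer:

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class SharedResults {
    // Appends one line to a shared file, serializing access across JVMs.
    public static void append(String line) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("results.csv", "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) { // blocks until no other process holds the lock
            raf.seek(raf.length());            // move to the end of the file to append
            raf.writeBytes(line + System.lineSeparator());
        } // lock and file are released automatically by try-with-resources
    }
}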

Related

minisat randomize variable selection is not working on gcloud

I want to get a different solution every time I run minisat on the same problem. I can do that using the "rnd-seed" parameter of minisat, which simply randomizes the variable selection so that I can get a different solution each time. Even though this parameter works perfectly on my machine (Ubuntu 16), it does not work on gcloud (Google Cloud) running on an Ubuntu machine.
I think I am missing a small part but I cannot figure out what that is.
Note: I do not want to feed the negation of a solution back to minisat to get a different solution. I actually need to randomize the variable selection.
Edit: Let me explain why I need randomized solutions. I solve lots of SAT problems, and these SAT problems usually look a lot like each other. So if I cannot randomize the variable selection, I get very similar solutions most of the time, which I do not want. Therefore, I do not actually run minisat on the same problem.
Edit 2: @sascha wanted me to explain what I mean by "works" and "not works". When I run a cnf file on my PC, I get a different solution each time. However, when I run the same cnf file on the gcloud machine, I always get the same solution.
The option -rnd-seed doesn't randomize branch variable selection. Rather, it allows you to set a seed for the pseudo-random number generator that Minisat uses.
Variable selection for branching does not involve randomness unless the -rnd-freq option is used. Pass in a floating-point value between 0 and 1: 0 means no randomness, 1 means try to use a random variable at every branch. The code only makes one attempt to choose a variable randomly, presumably because searching for an unset variable in an arbitrarily large priority queue can get pretty expensive. If that one attempt fails, Minisat branches using the normal priority queue.
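For instance, a minimal sketch of invoking Minisat with both options from Java; the "-option=value" flag syntax shown here follows Minisat 2.2, and the file names are placeholders, so check your build's help output:

import java.io.IOException;

public class RunMinisat {
    public static void main(String[] args) throws IOException, InterruptedException {
        // -rnd-seed sets the PRNG seed; -rnd-freq > 0 enables random branching.
        Process p = new ProcessBuilder(
                "minisat", "-rnd-seed=12345", "-rnd-freq=0.05",
                "problem.cnf", "result.out")
            .inheritIO()   // show Minisat's output in our console
            .start();
        System.exit(p.waitFor());
    }
}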

How to run GUI in my script

I am using the CVAP clustering toolbox, which is GUI-based. After loading my data, I use the Run Clustering and Run Validation commands, respectively, and then choose the error rate option from the tool menu. I need to repeat this process 20-30 times, and each time I need to save and open the result file to look at the clustering outputs. To avoid this manual process, is there any way to drive the GUI from my script? Basically, I just need to "click" the Run Clustering and Run Validation buttons and then choose Error rate from the tool menu in my script.
Here is a simple example of how to use a GUI from a script. Assuming you know about GUIs, this should make sense. If not, let me know.
First get a handle of the GUI
guihandle = guidata(GUINAME);
Then in your script you can press buttons (execute the button callback function) using this type of command:
GUINAME('callback_functionname',guihandle.callback_functionname,callback_inputs,guihandle);
This will run whatever the callback does. Just make sure that before this you have manipulated whatever inputs the button callback will need. You mentioned you'll need to choose an error rate option. Since I don't know your exact code it's hard to say exactly how to do this. But from a script you can set the tool menu value like this:
set(guihandle.tool_menu,'Value',value);
Maybe that isn't a helpful example but that's the idea. Let me know if this doesn't make sense.

Managing multiple anylogic simulations within an experiment

We are developing an ABM under AnyLogic 7 and are at the point where we want to run multiple simulations from a single experiment. Different parameters are to be set for each simulation run so as to generate results for a small suite of standard scenarios.
We have an experiment that auto-starts without the need to press the "Run" button. Subsequent presses of Run do increment the experiment counter and rerun the model.
What we'd like is a way to have the auto-run, or a single press of Run, launch a loop of simulations. Within that loop would be the programmatic adjustment of the variables linked to the passed parameters.
EDIT: One wrinkle is that some parameters are strings. The Optimization and Parameter Variation experiments don't lend themselves to enumerating a set of strings to be used across a set of simulation runs. You can only set one string per parameter for all the simulation runs within one experiment.
We've used the help sample "Running a Model from Outside Without Presentation Window" to add the auto-run capability to the experiment's initial setup block of code. What's still needed is a method to wait for Run 0 to complete, then dispatch Run 1, 2, and so on.
Pointers to tutorial models with such features, or to a snippet of code for the experiment's Java blocks, are much appreciated.
Maybe I don't understand your need, but this certainly sounds like a case for a "Parameter Variation" experiment. You can specify which parameters should be varied in which steps, and running the experiment automatically starts as many simulation runs as needed, all without animation.
Hope that helps.
Like you, I was confronted with this problem. My aim was to use parameter variation with a model where the variation was on non-numeric data, and I knew the number of runs to start.
I succeeded in this task with the help of a Custom Experiment.
First, I built an experiment typed as 'multiple run' and created my GUI (the user was able to select the string values used in each run).
Then I created a new Java class which inherits from the previous 'multiple run' experiment.
In this class (called MyMultipleRunClass) were present:
- an override of the getMaximumIterations method from the default experiment, to provide the default AnyLogic callback with the correct number of iterations (the iteration index was also used to retrieve my parameter values from an array),
- an implementation of the static method start:
public static void start() {
    prepareBeforeExperimentStart_xjal(MyMultipleRunClass.class);
    MyMultipleRunClass ex = new MyMultipleRunClass();
    ex.setCommandLineArguments_xjal(null);
    ex.setup(null);
}
The experiment to run is then the 'empty' Custom Experiment, which automatically starts the multiple-run experiment through the subclass presented above.
Maybe a shorter path exists, but from my point of view AnyLogic is used correctly here (no tricks with non-exposed interfaces) and it works as expected.
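To make the override concrete, here is a minimal sketch of what the rest of MyMultipleRunClass could look like; the parent class name, the parameter array, and its values are illustrative assumptions, not from the original answer, and the exact method signature may vary between AnyLogic versions:

public class MyMultipleRunClass extends MyMultipleRunExperiment { // hypothetical generated parent
    // Hypothetical string values, one per iteration, selected in the GUI.
    private static final String[] SCENARIOS = { "baseline", "highLoad", "failover" };

    @Override
    public int getMaximumIterations() {
        // Lets the default AnyLogic callback dispatch the right number of runs;
        // the current iteration index can then be used to look up SCENARIOS[index].
        return SCENARIOS.length;
    }
}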

How to halt the invocation of the mapper or reducer

I am trying to run my Hadoop map/reduce job inside Eclipse (not on a node or cluster) to debug my map/reduce logic. I want to be able to put a breakpoint on the mapper and reducer and make Eclipse stop on these breakpoints, however this is not happening and the mapper just gets stuck. I noticed that if I hit suspend and resume a couple of times, it will eventually break in the mapper and reducer. I am very new to Eclipse. What am I doing wrong?
I am literally running the word count code at http://wiki.apache.org/hadoop/WordCount and have breakpoints on lines 22 and 35.
Maybe you have disabled the breakpoints? If that is the case, the breakpoints will be displayed with a strike-through icon.
When not running locally, it is possible that your breakpoints will not be hit, because the tasks run in new, isolated JVMs. However, that does not seem to be the case here, because suspend would not work either in that case.
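As a side note, here is a minimal sketch of forcing a job to run in-process, so that breakpoints are hit in the same JVM as the driver; the property names shown are from Hadoop 2.x and may differ in older versions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class LocalDebugJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.framework.name", "local"); // run tasks in this JVM
        conf.set("fs.defaultFS", "file:///");          // use the local file system
        Job job = Job.getInstance(conf, "wordcount-debug");
        // ... set mapper, reducer, and input/output paths as in WordCount ...
    }
}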

Easy clock simulation for testing a project

Consider testing the project you've just implemented. If it uses the system's clock in any way, testing it becomes an issue. The first solution that comes to mind is simulation: manually manipulate the system's clock to fool all the components of your software into believing the time is ticking the way you want it to. How do you implement such a solution?
My solution is:
Using a virtual environment (e.g. VMware Player), installing a Linux distribution (I leave the choice to you), and manipulating the virtual system's clock to create the illusion of time passing. The only problem is that the clock keeps ticking as your code runs. I myself am looking for a solution where time actually stops and doesn't change unless I tell it to.
Constraints:
You can't restrict the list of components used in the project, as they might be anything. For instance, I used MySQL date/time functions and I want to fool them without amending MySQL's code in any way (that's too costly, since you might end up recompiling every single component of your project).
Write a small program that changes the system clock when you want, and by how much you want. For example, every second, advance the clock an extra 59 seconds.
The small program should either:
Keep track of what it did, so it can undo it, or
Use the Network Time Protocol to get the clock back to its old value (query the reference before, remember the difference, query again afterwards, apply the difference).
From your additional explanation in the comments (maybe you could add them to your question?), my thoughts are:
You may already have solved 1 & 2, but they relate to the problem, if not the question.
1) This is a web application, so you only need to concern yourself with your server's clock. Don't trust any clock that is controlled by the client.
2) You only seem to need elapsed time as opposed to absolute time. Therefore why not keep track of the time at which the server request starts and ends, then add the elapsed server time back on to the remaining 'time-bank' (or whatever the constraint is)?
3) As far as testing goes, you don't need to concern yourself with any actual 'clock' at all. As Gilbert Le Blanc suggests, write a wrapper around your system calls that you can then use to return dummy test data. So if you had a method getTime() which returned the current system time, you could wrap it in another method or overload it with a parameter that returns an arbitrary offset.
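As an illustration of that last suggestion, here is a minimal sketch of such a wrapper in Java; the class and method names are made up for the example:

public class TestableClock {
    // Test code sets this offset (in milliseconds) to shift "now" arbitrarily.
    private static long offsetMillis = 0;

    public static void setOffsetMillis(long offset) {
        offsetMillis = offset;
    }

    // Production code calls this instead of System.currentTimeMillis().
    public static long getTime() {
        return System.currentTimeMillis() + offsetMillis;
    }
}

In production the offset stays at zero; a test can call TestableClock.setOffsetMillis(86_400_000L) to pretend a day has passed without touching the real clock.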
Encapsulate your system calls in their own methods, and you can replace the system calls with simulation calls for testing.
Edited to show an example.
I write Java games. Here's a simple Java Font class that puts the font for the game in one place, in case I decide to change the font later.
package xxx.xxx.minesweeper.view;

import java.awt.Font;

public class MinesweeperFont {
    protected static final String FONT_NAME = "Comic Sans MS";

    public static Font getBoldFont(int pointSize) {
        return new Font(FONT_NAME, Font.BOLD, pointSize);
    }
}
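Any code that needs the font then goes through this class, so a later font change touches only one line; for example:

Font labelFont = MinesweeperFont.getBoldFont(16); // hypothetical usage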
Again, using Java, here's a simple method of encapsulating a System call.
public static void printConsole(String text) {
    System.out.println(text);
}
Replace every instance of System.out.println in your code with printConsole, and your system call exists in only one place.
By overriding or modifying the encapsulated methods, you can test them.
Another solution would be to use a debugger to manipulate the values returned by the time functions, setting them to anything you want.