I'm currently working on a project using AnyLogic: a system dynamics SIR model. I made a manual calculation of each stock in Excel (using the Euler method), but the results in Excel are different from the results in AnyLogic. I'm curious how AnyLogic calculates the model I built. Does anyone know how the calculation works in AnyLogic?
If your SD model is mixed with discrete-event or agent-based elements, the time step that you set up in your model's configuration is not used anymore; a different time step is chosen internally, and you have no access to it, unless you run the simulation in virtual mode (at least it's more likely to behave as you expect that way).
I have tested this extensively, and as long as your model is 100% system dynamics, your Euler equations should work as expected, in which case the discrepancy means your Excel calculation is incorrect.
On the other hand, the RK4 approximation in AnyLogic doesn't really work properly, so I don't even know why they still offer it as an option.
I suggest you try Vensim and run some tests to see the difference in results and to be sure you are calculating correctly in Excel.
In my course I talk about this topic in detail: noorjax.teachable.com
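For reference, here is a minimal sketch of Euler integration for a classic SIR model in Python (the parameter values, step size, and horizon are made up; substitute your own). Running this with the same dt as your Excel sheet and your AnyLogic configuration should make it clear which of the two diverges:

# Minimal Euler integration of a classic SIR model (illustrative values).
beta, gamma = 0.3, 0.1      # hypothetical infection and recovery rates
dt = 0.01                   # must match the time step in both Excel and AnyLogic
S, I, R = 999.0, 1.0, 0.0   # hypothetical initial stocks
N = S + I + R

for _ in range(int(100 / dt)):             # simulate 100 time units
    infection = beta * S * I / N           # flow: S -> I
    recovery = gamma * I                   # flow: I -> R
    # Euler step: stock += dt * (inflows - outflows), with all flows
    # evaluated at the current step's stock values
    S, I, R = (S - dt * infection,
               I + dt * (infection - recovery),
               R + dt * recovery)

print(S, I, R)

A common source of mismatch in hand-built Excel sheets is computing one flow from already-updated stock values within the same step; Euler requires all flows to be evaluated from the previous step's stocks.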
Related
I have been trying to use optimizers (SGD, Adagrad) from the BigDL library on TransE with Scala. My current implementation works with mini-batches in a sequential way. I followed this example to optimize the embeddings (as Tensors) without creating a layered model, and my code is quite similar to it. My current problem is that with some parameters my loss plateaus (at the value of the margin) no matter how many epochs I run, and as a result my hit@10 in testing is not that good. Can someone give any idea why the loss plateaus and whether this causes the bad test results?
P.S. I have checked my loss calculation and it is fine. The only place I have control over in my implementation is the optimizer.
Thanks in advance.
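For what it's worth, a loss that settles exactly at the margin is the signature of positive and negative triples scoring the same, e.g. because the embeddings have collapsed toward each other. Here is a minimal sketch of the standard TransE margin ranking loss in Python/NumPy (the function name and shapes are illustrative, not BigDL API):

import numpy as np

def transe_margin_loss(h, r, t, h_neg, t_neg, margin=1.0):
    # Margin ranking loss for one (positive, corrupted) pair of triples;
    # h, r, t, h_neg, t_neg are embedding vectors.
    pos = np.linalg.norm(h + r - t)          # score of the true triple
    neg = np.linalg.norm(h_neg + r - t_neg)  # score of the corrupted triple
    return max(0.0, margin + pos - neg)

# If every positive scores the same as its negative (pos == neg), the loss
# is exactly `margin` for every pair -- a plateau -- and ranking metrics
# such as hit@10 become no better than chance.
zero = np.zeros(50)
print(transe_margin_loss(zero, zero, zero, zero, zero))  # prints 1.0 (= margin)

Typical remedies, independent of the optimizer: re-normalize the entity embeddings after each update (as the TransE paper does), lower the learning rate, or sample harder negatives.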
Looking for advice on how to determine whether my model's output distribution is similar (and if so, how similar) to the observed dataset's distribution.
Basically I have a GBM model with mean reversion that provides seemingly good results when I compare its distribution to the observed data. You can see their PDFs side by side in the attached picture.
[Image: PDF of observed and model data]
Both datasets are huge (~6 million data points), and I'm starting to suspect that this is part of the problem...
I am looking for a way to verify that the datasets' distributions are similar. I tried the two-sample Kolmogorov-Smirnov test and the two-sample t-test, but for some reason both of them rejected the null hypothesis (always, even with different alphas). In some threads I've read that these tests are unreliable when applied to huge datasets, but there wasn't a consensus about this.
I am using Matlab currently, but I am open to others if necessary.
Any help would be appreciated! I'm primarily looking for a hypothesis test for verification, but if you have a different idea, don't hold it back!
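(One commonly suggested angle, sketched in Python/SciPy since you said you're open to other tools: with millions of points, a hypothesis test will detect even a practically negligible difference, so either treat the KS statistic itself as an effect size or run the test on repeated small subsamples. The distributions and sizes below are made up for illustration.)

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
observed = rng.normal(0.0, 1.0, 6_000_000)    # stand-ins for your two datasets
modelled = rng.normal(0.001, 1.0, 6_000_000)  # tiny shift: practically identical

# Full-sample test: p is essentially 0 despite the negligible difference.
d_full, p_full = stats.ks_2samp(observed, modelled)
print(f"full sample: D={d_full:.4f}, p={p_full:.3g}")

# Subsampled tests: at n=1000 the same comparison usually fails to reject.
ps = [stats.ks_2samp(rng.choice(observed, 1000),
                     rng.choice(modelled, 1000)).pvalue
      for _ in range(100)]
print(f"median subsample p-value: {np.median(ps):.3f}")

# Or report a distance as an effect size instead of a p-value.
d = stats.wasserstein_distance(observed[:100_000], modelled[:100_000])
print(f"Wasserstein distance: {d:.4f}")

In Matlab the subsampling variant would use kstest2 on randsample'd subsets.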
Some general Modelica advice?
We've built a model with ~2000 equations and three vectors of input from measured data. Using OpenModelica, attempts at simulation have begun to hang in the translation stage (which now runs for hours where it used to take less than a minute), and I regularly "lose connection to omc.exe." Is there perhaps something cumulative occurring that's degrading translation/compilation performance?
In general, are there any good rules of thumb for keeping simulations lighter and faster? I realize that, depending on the couplings, additional equations could exponentially increase the size of the resulting system of equations; could this be the problem?
Thanks for your thoughts!
It shouldn't take that long. Seems like a bug.
You can report this bug here:
https://trac.openmodelica.org/OpenModelica (New Ticket).
If your model is public you can post it there, if not you can contact the OpenModelica team privately.
I did some cleaning in the code and got the part that repeats 12 times (the module) down to ~180 equations. In the process I reduced the size of my input vectors (and also of a 2D look-up table the module refers to) by quite a bit; they're both down to a few hundred values. It's working now: simulations run in reasonable time, a few minutes each.
Since all these tables were defined within Modelica functions (as you pointed out, Mr. Tiller), perhaps shrinking them helped to improve the performance. I had assumed that all that data just got laid out in a memory array without going through any real processing, but maybe that's not the case. Time to learn more about what's going on under the hood in this environment (as always).
Thanks for the help!
Background
I'm working on a group project to simulate some consensus algorithms used by a group of independent robots to form an arbitrary shape on a 2D plane. The robots are modeled as unit disks, and all run the same algorithm. Basically, each robot can move, wait, or observe its local environment at any moment, but cannot communicate explicitly with any other robots. We'd like to find a simulation or even a 2D graphics library to help us avoid writing too much from scratch.
Question
Can anyone recommend a simulation library meeting the requirements below, which could be used for a multi-robot 2D simulation?
I've never coded a simulation before, so it's possible some of my concerns are readily addressed by many existing libraries. However, the MASON project is the only resource I've found that seems promising so far. Unfortunately, a few of our team members are not very proficient in Java, so I'd like to find something suitable in a different language, if possible.
Requirements
* Language preference (descending order): Python, C++, (maybe) Java
* Open-source/FOSS recommendations only
* Options/flags to disable visualization: We plan on running several thousand trials of randomly generated shapes against each algorithm, so for the bulk of trials we don't care about any visual representation, just data. So the simulation logic has to be decoupled from the graphics components, if that makes sense.
* Collision detection
* Customizable visual representations: Within a simulation, we'd like to have several views (or toggles for a single view) that present additional information about each robot, like its current state, the area it's currently observing, etc.
For such simple graphics you can surely get away with either PyQt or wxPython.
The simulation itself should be its own Python module; the GUI should just load the module and then call its "timestep" function at regular intervals (timer, GUI idle callback, etc.); the step function should evolve the robot system by one small time step.
The GUI should just display the simulation state. Avoid mixing everything (display and simulation) in one module; it'll get pretty messy. Plus, if your simulation engine is a separate module, you can also run it directly from the command line and look at the output file. A minimal sketch of that separation is below.
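(All names here are illustrative, not from any particular library.)

# sim.py -- simulation engine; no graphics imports anywhere in this module.
import random

class RobotSim:
    def __init__(self, n_robots=10, seed=None):
        self.rnd = random.Random(seed)
        # each robot: position (x, y) and a state label
        self.robots = [{"x": self.rnd.uniform(0, 100),
                        "y": self.rnd.uniform(0, 100),
                        "state": "wait"} for _ in range(n_robots)]

    def step(self, dt=0.1):
        # Advance the whole system by one small time step.
        for robot in self.robots:
            # placeholder dynamics; the consensus algorithm goes here
            robot["x"] += dt * self.rnd.uniform(-1, 1)
            robot["y"] += dt * self.rnd.uniform(-1, 1)

# Headless batch run from the command line -- no GUI involved:
if __name__ == "__main__":
    sim = RobotSim(n_robots=50, seed=42)
    for _ in range(1000):
        sim.step()
    print(sim.robots[0])  # or write the full trajectory to an output file

A PyQt or wxPython front end would then import RobotSim, call step() from a timer, and redraw the robots after each call.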
It would be pretty easy to write a Python script that reads such an output file and generates commands to represent it graphically in either Excel or PowerPoint using win32com, in which case you don't even need PyQt or wxPython.
For the collision detection, look at pybox2d.
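(Since the robots are unit disks, you can also sketch the collision check by hand if a physics engine feels heavy; this reuses the robot dictionaries from the sketch above.)

from itertools import combinations

def colliding_pairs(robots, radius=1.0):
    # Return index pairs of unit-disk robots whose disks overlap:
    # two disks of equal radius overlap when their centers are
    # closer than 2 * radius.
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(robots), 2):
        if (a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2 < (2 * radius) ** 2:
            pairs.append((i, j))
    return pairs

This brute-force check is O(n^2) per step, which is fine for tens of robots; for thousands, a spatial grid or a library like pybox2d pays off.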
I wonder if there is a way to "debug" Modelica code. I mean stepping through the code line by line and watching how the variables change, things like that.
I know that Modelica code is translated into C; I just want to know if there's a possibility to do that somehow. If there is, I believe it would be a great improvement for any of the simulation environments. Thanks.
Hi,
This is a good question and it comes up a lot. But first, let's step back for a second.
The idea of debugging "line by line" is something that comes from imperative programming languages. By "imperative" I mean that a program is simply a sequence of instructions to be carried out in the specified order.
When someone debugs Java or Python, this "line by line" approach makes sense because the statements are the fundamental way behavior is represented. This "line by line" approach could also be extended to modeling formalisms like block diagrams (e.g. Simulink) because, while graphical, they are also imperative (i.e. they constitute steps to be carried out in a specified order).
But Modelica is not an imperative language. There is no notion of steps, statements, or instructions; instead, we have equations that hold at all times. So thinking linearly about debugging doesn't work in Modelica. It is true that you could think about debugging the C code generated from the Modelica, but that is typically not very useful because it bears only a partial resemblance to the equations.
So how do you debug Modelica code? Well, debugging Modelica code really means debugging Modelica equations. Normally, Modelica models are composed of components. The equations that arise when components are connected are generated automatically, so let's stipulate that the Modelica compiler generates those correctly. What's left is the equations in the component models.
The simplest way to approach this is to test each component individually (or at least in the smallest possible models). I often say that trying to debug Modelica components by throwing them all together in a big model is like listening to an orchestra and trying to figure out the one instrument that is out of tune. The fact that equations in Modelica tend to form simultaneous systems means that errors, when they occur, can propagate immediately to a number of variables.
So your best bet is to go through and create tests for each individual component and verify the behavior of the component. My experience is that when you do this, you can track down and eliminate bugs pretty easily.
Update: You shouldn't need to add outputs to other people's component models to debug them. An output can be created at any level, e.g.
model SystemModel
  SomeoneElsesComponent a;
  SomeOtherGuysComponent b;
end SystemModel;

model SystemModel_Debug
  extends SystemModel;
  output Real someNestedSignalFromA = a.someSubsystem.someSubcomponent.someSignal;
  output Real someOtherNestedSignalFromB = b.anotherSubsystem.anotherSignal;
end SystemModel_Debug;
Of course, this becomes impractical if you have multiple instantiations of the same component. In those cases, I admit that it is easier to modify the underlying model. But if they make their models replaceable, you can use the same trick as above (extend their model, add a bunch of custom outputs, and then redeclare your model in place of the original).
There is a transformation debugger in OpenModelica now. With it you can find out which variable is evaluated from which equation.