Global measure to test when steady state of system is reached - modelica

As I stared at the little blue lines inching to the right on my screen I got to thinking that it would be nice to have a feature in Dymola/OpenModelica (if it doesn't exist already).
The feature I'm thinking of would monitor the behavior of the system and either report when steady state is reached or terminate the simulation at that point. I imagine it could monitor the derivatives of all the state variables and trigger when they all equal zero (within some user-defined tolerance). Clearly a user could do this by hand for simple models, but for complex models it would need to be an automated feature that happens "behind the scenes".
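Roughly what I have in mind, sketched in plain Java (not Modelica, with made-up names, purely to illustrate the idea of watching the state derivatives):

```java
// Hypothetical sketch: terminate a fixed-step simulation loop once every
// state derivative stays below a tolerance.
public class SteadyStateMonitor {

    public static void main(String[] args) {
        double tol = 1e-6;               // user-defined tolerance
        double[] x = {1.0, -2.0};        // state variables
        double dt = 0.01;                // step size
        double t = 0.0;

        while (t < 1000.0) {             // stopTime as a fallback
            double[] dx = derivatives(x);            // der(x) in Modelica terms
            boolean steady = true;
            for (double d : dx) {
                if (Math.abs(d) > tol) { steady = false; break; }
            }
            if (steady) {                            // all derivatives ~ 0
                System.out.println("Steady state reached at t = " + t);
                return;                              // i.e. terminate the simulation here
            }
            for (int i = 0; i < x.length; i++) x[i] += dt * dx[i];  // explicit Euler step
            t += dt;
        }
        System.out.println("No steady state within the simulated time");
    }

    // toy first-order dynamics dx/dt = -x, which decays toward zero
    static double[] derivatives(double[] x) {
        double[] dx = new double[x.length];
        for (int i = 0; i < x.length; i++) dx[i] = -x[i];
        return dx;
    }
}
```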
I can think of a couple use cases:
When you want to generate a steady-state solution for initializing another simulation, this would avoid having to simulate for a very long time and then assume that you simulated long enough.
If there were a built-in function/variable like time, the model could reference it to delay switching behaviors on/off, such as controller logic that you don't want to turn on until a steady-state condition is reached.
It seems that this would be a fairly simple feature to add but potentially quite useful.
Does a feature like this exist or can you think of good reasons why it doesn't/shouldn't?

As far as I know we don't have this feature in OpenModelica yet, but it sounds rather easy to implement. I opened a ticket about it and we'll see when we have time to implement it:
https://trac.openmodelica.org/OpenModelica/ticket/4301

Related

Anylogic forklift collision logging

I need to measure the time a forklift spends in collision; however, movement_log is not available for an agent type that is a forklift managed by a transporter fleet. I also cannot use statecharts because they cost too much performance.
Situation: I am simulating a warehouse with one-way aisles, each with a capacity of 2 vehicles. There are situations where a forklift (the yellow one) needs to wait behind another one in a one-way aisle. I currently have that modeled properly; I just don't know how to detect this situation and log it.
Thank you
I would do it as follows:
Create a new 2-dimensional variable called collisionLog.
Check the speed [getSpeed() function] and state [TransporterState getState() function] every 1 second.
Write these into the collisionLog.
Once the simulation is completed, remove the rows with idle status.
Then do the calculations based on the fact that when the speed is zero and the transporter is busy, you have a waiting vehicle.
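A plain-Java sketch of that bookkeeping (outside AnyLogic; in the model itself the sampling would live in a cyclic event firing every second and use the getSpeed()/getState() accessors mentioned above; all names here are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

// Collect one row per forklift per simulated second, then post-process.
public class CollisionLogSketch {

    static class Row {
        final double time; final double speed; final String state;
        Row(double time, double speed, String state) {
            this.time = time; this.speed = speed; this.state = state;
        }
    }

    static final List<Row> collisionLog = new ArrayList<>();

    // called once per simulated second for each forklift
    static void sample(double time, double speed, String state) {
        collisionLog.add(new Row(time, speed, state));
    }

    // after the run: zero speed while non-idle counts as a second spent waiting
    static long secondsWaiting() {
        return collisionLog.stream()
                .filter(r -> !r.state.equals("IDLE"))
                .filter(r -> r.speed == 0.0)
                .count();
    }

    public static void main(String[] args) {
        sample(1.0, 1.4, "BUSY");
        sample(2.0, 0.0, "BUSY");   // stuck behind another forklift
        sample(3.0, 0.0, "IDLE");   // parked and idle, not a wait
        System.out.println("Seconds waiting: " + secondsWaiting()); // prints 1
    }
}
```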
There is no accessible trigger point (typically an action of a block) to trap when transporters have collisions. Yes, that situation obviously has to be captured internally to enable the transporters to avoid collisions, but in this situation that is not exposed as a block action, or action anywhere else. (AnyLogic space markup elements never have actions, except for some of the newer Material Handling library ones like Station, because these effectively represent a process step.)
The Transporter Control block has all the settings for collision detection and avoidance, but no related actions.
So your alternatives are really:
'Scan' for the situation occurring: Yashar's answer, which infers that zero speed while non-idle means 'waiting due to collision' (which may or may not be 100% robust), is one way of doing this.
Explicitly break down the movement (from the process perspective) to define the potential 'conflicts' and the decision-making within the process flow (e.g., if you're trying to move to an aisle, move to an entrance node, reserve a space in the aisle using resource pools or similar, and only enter when free). Clearly that doesn't cover every possible case, but it may be useful in some situations.
The actions that do exist in the Transporter Control block could help a bit here (for both alternatives) since at least you have action points on entering paths and nodes. (You could also raise an enhancement request with AnyLogic to add collision-related actions here....)
I have a huge model with a large number of forklifts; checking an attribute every second would result in a huge performance loss.
I also cannot use statecharts because they cost too much performance.
Have you actually tried it though? Some things do not result in as much of a performance hit as you might think, and performance should not be an a priori 'that will be too slow' thing; ideally you have requirements for acceptable performance and you work around those. (There are always trade-offs between performance, functionality and maintainability.)
[You also don't say how you think using statecharts could have helped. Did you mean doing the 'scanning' approach via a statechart, say with cyclic entry/exit from a Scan state?]

How to implement deterministic single threaded network simulation

I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without any code generation (as they presumably do)?
Also: how can a simulation be repeated if it contains random events? Each run would require choosing new random values and thus would not be the same run as the one before. Maybe I am missing something here... I hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
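As a minimal Java illustration of point (1), with hypothetical names, all randomness flows from a single PRNG seeded with a known value:

```java
import java.util.Random;

// Minimal sketch: one seeded PRNG is the sole source of randomness, so the
// same seed reproduces exactly the same sequence of "random" events.
public class SeededRun {
    public static void main(String[] args) {
        long seed = 42L;                                 // record this with every test run
        Random rng = new Random(seed);
        for (int i = 0; i < 3; i++) {
            double latencyMs = 1 + 9 * rng.nextDouble(); // simulated network delay
            boolean dropped = rng.nextDouble() < 0.05;   // simulated packet loss
            System.out.printf("msg %d: latency=%.3f ms, dropped=%b%n", i, latencyMs, dropped);
        }
        // Re-running with seed 42 prints exactly the same lines; a different
        // seed explores a different schedule of "random" events.
    }
}
```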
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
In the sims I've built, the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java, you'd use a LinkedHashMap instead of a HashMap.
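A quick illustration of the difference (HashMap's iteration order is unspecified and can change across key sets and JVM versions, while LinkedHashMap iterates in insertion order every time):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// HashMap iteration order depends on hashing internals; LinkedHashMap
// preserves insertion order, which is the same on every run.
public class IterationOrder {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<>();
        Map<String, Integer> linked = new LinkedHashMap<>();
        for (String node : new String[] {"nodeC", "nodeA", "nodeB"}) {
            hash.put(node, 0);
            linked.put(node, 0);
        }
        System.out.println("HashMap order:       " + hash.keySet());   // unspecified
        System.out.println("LinkedHashMap order: " + linked.keySet()); // [nodeC, nodeA, nodeB]
    }
}
```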

In drools is there a way to detect endless loops and halt a session programmatically?

In short, my questions are:
Is there anything built-in into drools that allows/facilitates detection of endless loops?
Is there a way to programmatically halt sessions (e.g. for the case of a detected endless loop)?
More details:
I'm planning to have Drools (6.2 or higher) run within a server/platform where users will create and execute their own rules. One of the issues I'm facing is that careless/faulty rule design can easily result in endless loops, whether it's just a forgotten "no-loop true" attribute or a more complex cycle where rule1 triggers rule2, which triggers rule3, which (re)triggers rule1.
If this happens, Drools basically grinds my server/platform to a halt.
I'm currently looking into how to detect and/or terminate sessions that run in an endless loop.
Now, since a (seemingly) endless loop is not per se invalid, and in certain cases may even be desired, I can imagine there aren't many built-in detection mechanisms for this case (if any). But as I am not an expert, I'd be happy to know whether there is anything built in to detect endless loops.
In my use case I would be OK with declaring a session "endlessly looped" based on a threshold for how often any rule has been activated.
As I understand it, I could maybe use AgendaEventListeners that keep track of how often each rule has fired and, once a threshold is met, either insert a control fact or somehow trigger a rule that calls drools.halt() for this session.
I wonder (and couldn't find a lot of details) if it is possible to programmatically halt/terminate sessions.
I've only come across a fireUntilHalt() method, but that didn't seem like the way to go (or I didn't really understand it).
Also, at this point I was only planning to use stateless sessions (but if it's well encapsulated I could also work with stateful sessions if that makes my goal easier to achieve).
Any answers/ideas/feedback to my initial approach is highly welcome :)
Thanks!
A fundamental breaking point of any RBS (rule-based system) implementation is created when the design lets "users create and design their own rules". I don't know why some marketing hype opens the door for non-programmers to write what is effectively program code, without any safeguards.
Detecting whether a session halts is theoretically impossible. Google "Halting problem".
For certain contexts you might come up with a limit on the maximum number of rule firings, or something similar. And you can use listeners to count firings and raise an exception, and so on.
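A rough sketch of that listener approach in Java, assuming a stateful KieSession (the threshold value and the choice between halting and throwing are up to you):

```java
import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.kie.api.runtime.KieSession;

// Counts rule firings and halts the session once a threshold is exceeded.
public class FiringLimitListener extends DefaultAgendaEventListener {

    private final KieSession session;
    private final long maxFirings;
    private long firings = 0;

    public FiringLimitListener(KieSession session, long maxFirings) {
        this.session = session;
        this.maxFirings = maxFirings;
    }

    @Override
    public void afterMatchFired(AfterMatchFiredEvent event) {
        if (++firings > maxFirings) {
            session.halt();   // stops fireAllRules()/fireUntilHalt() for this session
        }
    }
}

// Usage with a stateful session:
//   KieSession session = kieContainer.newKieSession();
//   session.addEventListener(new FiringLimitListener(session, 10_000));
//   session.insert(someFact);
//   session.fireAllRules();          // fireAllRules(int) can also cap firings directly
```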
Basically you have very bad cards once you succumb to the execution of untested code created by amateurs.

How to handle two signals that depend on each other?

I read Deprecating the Observer Pattern with Scala.React and found reactive programming very interesting.
But there is a point I can't figure out: the author describes the signals as nodes in a DAG (directed acyclic graph). What if you have two signals (or event sources, or models, whatever) depending on each other, i.e. 'two-way binding', like a model and a view in web front-end programming?
Sometimes it's just inevitable, because the user can change the view, the back end (an asynchronous request, for example) can change the model, and you want the other side to reflect the change immediately.
Loop dependencies in a reactive programming language can be handled with a variety of semantics. The one that appears to have been chosen in scala.React is that of synchronous reactive languages, and specifically that of Esterel. You can find a good explanation of these semantics and their alternatives in the paper "The synchronous languages 12 years later" by A. Benveniste, P. Caspi, S. A. Edwards, N. Halbwachs, P. Le Guernic, and R. de Simone, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1173191&tag=1 or http://virtualhost.cs.columbia.edu/~sedwards/papers/benveniste2003synchronous.pdf.
Replying to @Matt Carkci here, because a comment wouldn't suffice.
In the paper, section 7.1 Change Propagation says:
Our change propagation implementation uses a push-based approach based on a topologically ordered dependency graph. When a propagation turn starts, the propagator puts all nodes that have been invalidated since the last turn into a priority queue which is sorted according to the topological order, briefly level, of the nodes. The propagator dequeues the node on the lowest level and validates it, potentially changing its state and putting its dependent nodes, which are on greater levels, on the queue. The propagator repeats this step until the queue is empty, always keeping track of the current level, which becomes important for level mismatches below. For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
and later at section 7.6 Level Mismatch
We therefore need to prepare for an opaque node n to access another node that is on a higher topological level. Every node that is read from during n’s evaluation, first checks whether the current propagation level which is maintained by the propagator is greater than the node’s level. If it is, it proceed as usual, otherwise it throws a level mismatch exception containing a reference to itself, which is caught only in the main propagation loop. The propagator then hoists n by first changing its level to a level above the node which threw the exception, reinserting n into the propagation queue (since it’s level has changed) for later evaluation in the same turn and then transitively hoisting all of n’s dependents.
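To make the quoted mechanism from 7.1 concrete, here is a rough sketch (not scala.React's actual code; duplicate suppression and the level-mismatch handling of 7.6 are omitted):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Push-based propagation: nodes carry a topological 'level' and a propagation
// turn drains a priority queue ordered by that level.
public class PropagationSketch {

    static class Node {
        final String name;
        final int level;                               // topological level in the dependency graph
        final List<Node> dependents = new ArrayList<>();
        Node(String name, int level) { this.name = name; this.level = level; }
        void validate() { System.out.println("validating " + name); }
    }

    static class Propagator {
        private final PriorityQueue<Node> queue =
                new PriorityQueue<>(Comparator.comparingInt((Node n) -> n.level));

        void invalidate(Node n) { queue.add(n); }

        // one propagation turn: validate nodes in increasing level order,
        // scheduling their dependents (which sit on strictly greater levels)
        void propagate() {
            while (!queue.isEmpty()) {
                Node n = queue.poll();
                n.validate();
                queue.addAll(n.dependents);
            }
        }
    }

    public static void main(String[] args) {
        Node source = new Node("source", 0);
        Node derived = new Node("derived", 1);         // depends on source
        source.dependents.add(derived);

        Propagator p = new Propagator();
        p.invalidate(source);
        p.propagate();                                 // validates source, then derived
    }
}
```

Note that if the graph contained a cycle (two nodes listing each other as dependents), the queue would never drain and the turn would never end.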
While there's no mention of any topological constraint (cyclic vs. acyclic), something is not clear, at least to me.
First, there is the question of how the topological order is defined.
Then the implementation suggests that mutually dependent nodes would loop forever in the evaluation, through the exception mechanism explained above.
What do you think?
After scanning the paper, I can't find where they mention that it must be acyclic. There's nothing stopping you from creating cyclic graphs in dataflow/reactive programming. Acyclic graphs only allow you to create Pipeline Dataflow (e.g. Unix command line pipes).
Feedback and cycles are a very powerful mechanism in dataflow. Without them you are restricted in the types of programs you can create. Take a look at Flow-Based Programming - Loop-Type Networks.
Edit after second post by pagoda_5b
One statement in the paper made me take notice...
For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
To me that says that loops are not allowed within the Scala.React framework. A cycle between two nodes would seem to cause the system to continually try to raise the level of both nodes forever.
But that doesn't mean that you have to encode the loops within their framework. It could be possible to have one path from the item you want to observe and another, separate path back to the GUI.
To me, it always seems that too much emphasis is placed on a programming system completing and giving one answer. Loops make it difficult to determine when to terminate. Libraries that use the term "reactive" tend to subscribe to this thought process. But that is just a result of the von Neumann architecture of computers... a focus on solving an equation and returning the answer. Libraries that shy away from loops seem to be worried about program termination.
Dataflow doesn't require a program to have one right answer or ever terminate. The answer is the answer at this moment of time due to the inputs at this moment. Feedback and loops are expected if not required. A dataflow system is basically just a big loop that constantly passes data between nodes. To terminate it, you just stop it.
Dataflow doesn't have to be so complicated. It is just a very different way to think about programming. I suggest you look at J. Paul Morrison's book "Flow-Based Programming" for a field-tested version of dataflow, or my book (once it's done).
Check your MVC knowledge. The view doesn't update the model, so it won't send signals to it. The controller updates the model. For a C/F converter, you would have two controllers (one for the F control, one for the C control). Both controllers would send signals to a single model (which stores the only real temperature, Kelvin, in a lossless format). The model sends signals to two separate views (one for the C view, one for the F view). No cycles.
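A minimal plain-Java sketch of that acyclic arrangement (no FRP library; all names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleConsumer;

// Two controllers write into one model; the model notifies two views.
// No signal ever flows from a view back to the model, so there are no cycles.
class TemperatureModel {
    private double kelvin;
    private final List<DoubleConsumer> views = new ArrayList<>();

    void addView(DoubleConsumer view) { views.add(view); }

    void setKelvin(double k) {
        kelvin = k;
        views.forEach(v -> v.accept(kelvin));   // model -> views, one direction only
    }
}

public class ConverterDemo {
    public static void main(String[] args) {
        TemperatureModel model = new TemperatureModel();
        model.addView(k -> System.out.printf("C view: %.1f%n", k - 273.15));
        model.addView(k -> System.out.printf("F view: %.1f%n", (k - 273.15) * 9 / 5 + 32));

        // "controllers": user edits arrive here and are pushed into the model
        DoubleConsumer celsiusController = c -> model.setKelvin(c + 273.15);
        DoubleConsumer fahrenheitController = f -> model.setKelvin((f - 32) * 5 / 9 + 273.15);

        celsiusController.accept(100.0);    // user typed 100 in the C control
        fahrenheitController.accept(32.0);  // user typed 32 in the F control
    }
}
```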
Based on the answer from @pagoda_5b, I'd say that you are likely allowed to have cycles (7.6 should handle it, at the cost of performance), but you must guarantee that there is no infinite regress. For example, you could have the controllers also receive signals from the model, as long as you guaranteed that receipt of such a signal never caused a signal to be sent back to the model.
I think the above is a good description, but it uses the word "signal" in a non-FRP style. "Signals" in the above are really messages. If the description in 7.1 is correct and complete, loops in the signal graph would always cause infinite regress, as processing the dependents of a node would cause the node to be processed again, and vice versa, ad infinitum.
As @Matt Carkci said, there are FRP frameworks that allow loops, at least to a limited extent. They will either not be push-based, use non-strictness in interesting ways, enforce monotonicity, or introduce "artificial" delays so that when the signal graph is expanded along the temporal dimension (turning it into a value graph) the cycles disappear.

Set custom production firing time in ACT-R

When defining a model in ACT-R, I would like to set a different firing time for each of my productions.
How could I do that?
Thanks!
Not too many ACT-R modelers here, huh?
First off, keep a copy of the ACT-R reference manual handy. This a great resource that answers 90% of the questions you will have.
You can set a production's action time using (spp <production-name> :at <time>), or you can set the default action time using (sgp :dat <time>). Times are in seconds; the default is .05.
That being said, you should modify these parameters very rarely, if at all. The whole point of production firing time is that it's supposed to represent a psychological constant. If you're tinkering with this, your model may fit the data but is less likely to be psychologically plausible. And if you don't care about psychological plausibility, then you shouldn't be using ACT-R! But there's an exception to every rule, so proceed with caution.
While this is a bit old, this question still comes up fairly high on Google when searching for ACT-R production firing times, so I feel it is acceptable to post a response.
As a published ACT-R modeler with 4 years under my belt, I would like to echo Jeff's statements. You very, very rarely modify most ACT-R parameters for the exact reason Jeff stated. All aspects of ACT-R and the amount of time certain modules take to fire are empirically backed by many studies. If you start changing these, then your model, like Jeff said, is completely implausible. While some modelers do change these values, they have empirical data to back up their reasons for changing any parameters.