I need to measure the time a forklift spends in collision; however, the movement_log is not available for an agent type that is a forklift managed by a transporter fleet. I also cannot use statecharts because they use too much performance.
Situation: I am simulating a warehouse with one-way aisles, and the capacity of these one-way aisles is 2 vehicles. There are situations where a forklift (the yellow one) needs to wait behind another one in a one-way aisle. I currently have that modeled properly; I just don't know how to detect this situation and log it.
Thank you
I would do it as follows:
Create a new 2-dimensional variable called collisionLog.
Check the speed [getSpeed() function] and state [TransporterState getState() function] every 1 second.
Write these into the collisionLog.
Once the simulation is completed, remove the rows with idle status.
Then do the calculations based on the fact that when the speed is zero and the transporter is busy, you have a waiting vehicle.
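A minimal post-processing sketch of that last step (plain Java, not AnyLogic API; the Sample record and the "BUSY" string are assumptions standing in for whatever getState() returns in your model):

    import java.util.ArrayList;
    import java.util.List;

    // Post-processing sketch: given 1-second samples of (time, speed, state),
    // sum the seconds where the transporter is busy but not moving, i.e. is
    // presumably waiting behind another vehicle.
    public class CollisionLogSketch {

        record Sample(double time, double speed, String state) { }

        static double waitingSeconds(List<Sample> collisionLog) {
            double waiting = 0;
            for (Sample s : collisionLog) {
                // "BUSY" stands in for the non-idle state values in your model.
                if (s.speed() == 0 && "BUSY".equals(s.state())) {
                    waiting += 1;          // one sample per second
                }
            }
            return waiting;
        }

        public static void main(String[] args) {
            List<Sample> log = new ArrayList<>();
            log.add(new Sample(0, 1.5, "BUSY"));   // moving
            log.add(new Sample(1, 0.0, "BUSY"));   // waiting behind another forklift
            log.add(new Sample(2, 0.0, "IDLE"));   // idle rows get filtered out anyway
            System.out.println("waiting time [s]: " + waitingSeconds(log));
        }
    }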
There is no accessible trigger point (typically an action of a block) to trap when transporters have collisions. Yes, that situation obviously has to be captured internally to enable the transporters to avoid collisions, but in this situation that is not exposed as a block action, or action anywhere else. (AnyLogic space markup elements never have actions, except for some of the newer Material Handling library ones like Station, because these effectively represent a process step.)
The Transporter Control block has all the settings for collision detection and avoidance, but no related actions.
So your alternatives are really:
'Scan' for this situation occurring: Yashar's answer, which infers that zero speed when non-idle means 'waiting due to collision' (which may or may not be 100% robust), is one way.
Explicitly break down the movement (from the process perspective) to define the potential 'conflicts' and decision-making within the process flow (e.g., if you're trying to move to an aisle, move to an entrance node, reserve a space in the aisle using resource pools or similar, and only enter when free). Clearly that doesn't cover every possible case, but may be useful in some situations.
The actions that do exist in the Transporter Control block could help a bit here (for both alternatives) since at least you have action points on entering paths and nodes. (You could also raise an enhancement request with AnyLogic to add collision-related actions here....)
I have a huge model with a large number of forklifts; checking any attribute every second would result in a huge performance loss.
I also cannot use statecharts because they use too much performance.
Have you actually tried it though? Some things do not result in as much of a performance hit as you might think, and performance should not be an a priori 'that will be too slow' thing; ideally you have requirements for acceptable performance and you work round that. (There are always trade-offs between performance, functionality and maintainability.)
[You also don't say how you think using statecharts could have helped. Did you mean doing the 'scanning' approach via a statechart, say with cyclic entry/exit from a Scan state?]
The current model I am working on involves equipment agents seizing resources (materials, operators) to perform taskings within the equipment. During the model run, the number of equipment units (a resource as well) will be varied using a slider. Upon reduction of the resource pool quantity, excess equipment units will be destroyed. How do I release all the resources that are still seized by this equipment after it is destroyed, so that the resources can continue to be utilised by the rest of the model?
As far as I know, when you reduce the number of resource units in a resource pool, they will only be destroyed once the current agents that are seizing them have released them. Thus you don't need to worry about releasing all the resources that are still seized by this equipment.
This looks like a design problem, but you don't really explain how your flows are linked (or when your equipment agents enter the top flow).
Your information suggests that your equipment agents go through the top flow (seizing materials and operators); your bottom flow is presumably triggered after an 'EQ run' to simulate operators doing something at the machine before unload can start/complete(?) by releasing the Hold in the top flow.
The problem is that you should never have resources (i.e., agents that are part of a resource pool) flowing through processes; agents are either flow-through-processes agents or resource agents. Changing resource pool sizes will do nothing to affect those agents if they're in a process flow because AnyLogic is only concerned about tasks those resources are doing as resources.
In your case, it depends what 'removing the machine' represents in real-life (given the machine is potentially processing things at the time). That will determine what 'sub-flow' you need to enact in that case to free resources, etc. This is likely to mean in your case:
Have some trigger (e.g., event, schedule action, button action, some specific state elsewhere in your model) which means an equipment is now 'removed'.
This then needs code to know what block that equipment is currently in (agents have a currentBlock function for this purpose), remove it from that block (typically using the remove function provided by most blocks, but this is not consistent across all blocks) and then probably send it into a 'clean-up' process flow (using an Enter block, with a Sink block at the end to delete it) to do whatever needs doing (like releasing all resources).
Some of the code needed here is slightly complex (removing agents from within a process flow is not something 'cleanly' supported by AnyLogic except in the Pedestrian library) and will involve casting (since currentBlock returns you the block as a generic FlowchartBlock supertype).
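A rough sketch of the code for that removal step, intended for the trigger's action (the block-level function names vary by block type and AnyLogic version, so treat them as assumptions to check; enterCleanup is a hypothetical Enter block at the start of the clean-up flow):

    // 'equipment' is the equipment agent being removed.
    FlowchartBlock block = equipment.currentBlock();   // where it currently sits
    if (block instanceof Queue) {
        ((Queue) block).remove(equipment);             // pull it out of a Queue block
    } else if (block instanceof Delay) {
        ((Delay) block).remove(equipment);             // pull it out of a Delay block
    }
    // ...cover the other block types your flow actually uses; not every block
    // exposes a remove function, hence the inconsistency mentioned above.
    enterCleanup.take(equipment);                      // hand it to the clean-up flow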
P.S. A more 'radical' design rework (but probably a better one for what you want) is to keep equipments as resource agents but redesign your top flow so that the agent represents, say, a 'request for machine processing (of something)'. Your top flow will then be seizing/releasing equipment resources along with everything else, and you can use dynamic resource pool capacity changes as you were trying to. (It is quite common in process modelling to need to 'reframe' the processes via agents which represent some abstract 'request', rather than a physical thing.)
(Image to illustrate the point of freezing)

Context:
Creating a scalable model for a production line to increase Man Machine Optimization ratio. Will be scaling the model for an operator (resource) to work on multiple machines (of the same type). During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
Problem:
Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
Thoughts:
Is there a way to create a list to which taskings are added whenever the resource is currently seized? The resource would then work through the list of taskings whenever it becomes idle. Any other methods to resolve this issue are also appreciated!
If this is going to become a complex model, you may want to consider using a pure agent-based approach.
Your resource has a LinkedList of JobRequest agents that are created and sent by the machines when necessary. They are sorted by some priority.
The resource then simply does one JobRequest after the next.
No ResourcePools or Seize elements required.
This is often the more powerful and flexible approach as you are not bound to the process blocks anymore. But obviously, it needs good control and testing from you :)
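A plain-Java sketch of that pattern, just to make the idea concrete (the class names, the priority field and the console output are all illustrative, not AnyLogic API):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Machines post JobRequests to the operator; the operator works through
    // them one at a time in priority order whenever it becomes free.
    public class JobRequestSketch {

        record JobRequest(String machine, String task, int priority) { }

        static class Operator {
            // Higher priority first; a LinkedList kept sorted on insert works just as well.
            private final PriorityQueue<JobRequest> pending =
                    new PriorityQueue<>(Comparator.comparingInt(JobRequest::priority).reversed());

            void post(JobRequest request) {      // called by a machine when it needs the operator
                pending.add(request);
            }

            void workUntilIdle() {               // called whenever the operator becomes free
                JobRequest next;
                while ((next = pending.poll()) != null) {
                    System.out.println("doing '" + next.task() + "' at " + next.machine());
                }
            }
        }

        public static void main(String[] args) {
            Operator operator = new Operator();
            operator.post(new JobRequest("machine A", "unload", 2));
            operator.post(new JobRequest("machine B", "load", 5));  // more urgent
            operator.workUntilIdle();            // load at B first, then unload at A
        }
    }

In a real model the println calls would become whatever the operator agent actually does, e.g. moving to the machine and delaying for the task duration.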
Problem: Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
You need to explain your problem better: it is not possible to "seize the same operator at multiple seize blocks concurrently" (unless you are using a resource choice condition or similar to try to 'force' seizing of a particular resource --- even then, this is more accurately framed as 'I've set up resource choice conditions which mean I end up having no valid resources available').
What does your model "freezing" represent? For example, it could just be a natural consequence of having no resources available, especially if you have long delay times or are using Delay blocks with "Until stopDelay() is called" set --- i.e., you are relying on events elsewhere in your model to free agents (and seized resources) from blocks, which an incorrect model design might mean never happen in some circumstances. (If your model is "freezing" because of no resources being available, it should 'unfreeze' when one does.)
During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
You can just do this bit by breaking down the actions at a machine into a number of Seize/Delay/Release actions with different characteristics (or a process flow that loops around a set of these driven by some data if you want it to be more flexible / data-driven).
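If you go the data-driven route, the 'set of these' can be as simple as a small task table that the looping flow walks through; a sketch (these names are assumptions, not AnyLogic API):

    import java.util.List;

    // One way to make a looped Seize/Delay/Release flow data-driven:
    // the flowchart loops once per TaskSpec entry.
    public class MachineTaskSpecs {

        record TaskSpec(String name, double delayMinutes, boolean needsOperator) { }

        // Example task list for one machine type: seize the operator only for the
        // load/unload steps, let the machine run unattended in between.
        static final List<TaskSpec> TASKS = List.of(
                new TaskSpec("load",   2.0, true),
                new TaskSpec("run",   15.0, false),
                new TaskSpec("unload", 3.0, true));

        public static void main(String[] args) {
            TASKS.forEach(t -> System.out.println(t.name() + ": " + t.delayMinutes() + " min"));
        }
    }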
As I stared at the little blue lines inching to the right on my screen I got to thinking that it would be nice to have a feature in Dymola/OpenModelica (if it doesn't exist already).
The feature I'm thinking of would monitor the behavior of the system and either report back when steady state is achieved or terminate the simulation at that point. I imagine this could be tied to monitoring the derivatives of all the state variables: when they all equal zero (within some user-defined tolerance), steady state has been reached. Clearly this could be done by the user for simple models, but for complex ones this would need to be an automated feature that occurs "behind the scenes".
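To be concrete about the check I have in mind, here is a toy, tool-agnostic sketch (plain Java, not Dymola/OpenModelica code): integrate, watch the largest state derivative, and stop once it falls below the tolerance.

    // Stop integrating once every state derivative is within a tolerance of zero.
    public class SteadyStateSketch {

        public static void main(String[] args) {
            double x = 10.0;              // single state variable: dx/dt = -x (decays to 0)
            double t = 0.0;
            double dt = 1e-3;             // fixed step size for this toy explicit-Euler loop
            double tol = 1e-6;            // user-defined steady-state tolerance

            while (t < 1e6) {             // safety bound on simulated time
                double dxdt = -x;         // the model's derivative evaluation
                if (Math.abs(dxdt) < tol) {
                    System.out.printf("steady state reached at t = %.3f, x = %.3e%n", t, x);
                    return;               // terminate the simulation early
                }
                x += dt * dxdt;           // explicit Euler step
                t += dt;
            }
            System.out.println("no steady state found within the time bound");
        }
    }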
I can think of a couple use cases:
When you want to generate a steady-state solution for restarting another simulation, this would avoid needing to simulate for very long times while just assuming that you simulated long enough.
If there were a function/variable, etc., like time, built into the solution, then the model could perhaps reference that variable to add a delay for switching on/off behaviors, such as controller logic that you don't want to turn on until a steady-state condition is reached.
It seems that this would be a fairly simple feature to add but potentially quite useful.
Does a feature like this exist or can you think of good reasons why it doesn't/shouldn't?
As far as I know we don't have this feature in OpenModelica yet, but sounds rather easy to implement. I opened a ticket about it and we'll see when we have time to implement it:
https://trac.openmodelica.org/OpenModelica/ticket/4301
I read Deprecating the Observer Pattern with Scala.React and found reactive programming very interesting.
But there is a point I can't figure out: the author described the signals as the nodes in a DAG (directed acyclic graph). Then what if you have two signals (or event sources, or models, whatever) depending on each other? I.e. the 'two-way binding', like a model and a view in web front-end programming.
Sometimes it's just inevitable, because the user can change the view, and the back-end (an asynchronous request, for example) can change the model, and you want the other side to reflect the change immediately.
Loop dependencies in a reactive programming language can be handled with a variety of semantics. The one that appears to have been chosen in scala.React is that of synchronous reactive languages, and specifically that of Esterel. You can find a good explanation of this semantics and its alternatives in the paper "The Synchronous Languages 12 Years Later" by Benveniste, A.; Caspi, P.; Edwards, S.A.; Halbwachs, N.; Le Guernic, P.; de Simone, R., available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1173191&tag=1 or http://virtualhost.cs.columbia.edu/~sedwards/papers/benveniste2003synchronous.pdf.
Replying to @Matt Carkci here, because a comment wouldn't suffice.
In the paper section 7.1 Change Propagation you have
Our change propagation implementation uses a push-based approach based on a topologically ordered dependency graph. When a propagation turn starts, the propagator puts all nodes that have been invalidated since the last turn into a priority queue which is sorted according to the topological order, briefly level, of the nodes. The propagator dequeues the node on the lowest level and validates it, potentially changing its state and putting its dependent nodes, which are on greater levels, on the queue. The propagator repeats this step until the queue is empty, always keeping track of the current level, which becomes important for level mismatches below. For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
and later at section 7.6 Level Mismatch
We therefore need to prepare for an opaque node n to access another node that is on a higher topological level. Every node that is read from during n’s evaluation, first checks whether the current propagation level which is maintained by the propagator is greater than the node’s level. If it is, it proceed as usual, otherwise it throws a level mismatch exception containing a reference to itself, which is caught only in the main propagation loop. The propagator then hoists n by first changing its level to a level above the node which threw the exception, reinserting n into the propagation queue (since it’s level has changed) for later evaluation in the same turn and then transitively hoisting all of n’s dependents.
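For intuition, here is a much-simplified sketch of the 7.1 propagation turn (plain Java; my own reading of the paper, not Scala.React's actual classes):

    import java.util.*;

    // Push-based, level-ordered propagation turn, boiled down to the essentials.
    class Node {
        final String name;
        int level;                         // topological level in the dependency graph
        final List<Node> dependents = new ArrayList<>();
        Node(String name, int level) { this.name = name; this.level = level; }
    }

    public class PropagationSketch {
        public static void main(String[] args) {
            Node a = new Node("a", 0);     // e.g. an event source / Var
            Node b = new Node("b", 1);     // depends on a
            Node c = new Node("c", 2);     // depends on b
            a.dependents.add(b);
            b.dependents.add(c);

            // One propagation turn: validate invalidated nodes in level order,
            // so every node is evaluated after all of its (lower-level) inputs.
            PriorityQueue<Node> queue =
                    new PriorityQueue<>(Comparator.comparingInt((Node n) -> n.level));
            queue.add(a);                  // a was invalidated since the last turn
            while (!queue.isEmpty()) {
                Node n = queue.poll();
                System.out.println("validating " + n.name + " at level " + n.level);
                queue.addAll(n.dependents);    // dependents sit on greater levels
            }
            // A cycle (b depends on c AND c depends on b) admits no such level
            // assignment; the 7.6 hoisting step would keep raising both levels.
        }
    }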
While there's no mention of any topological constraint (cyclic vs. acyclic), something is not clear (at least to me).
First, the question arises of how the topological order is defined.
And then the implementation suggests that mutually dependent nodes would loop forever during evaluation, through the exception mechanism explained above.
What do you think?
After scanning the paper, I can't find where they mention that it must be acyclic. There's nothing stopping you from creating cyclic graphs in dataflow/reactive programming. Acyclic graphs only allow you to create Pipeline Dataflow (e.g. Unix command line pipes).
Feedback and cycles are a very powerful mechanism in dataflow. Without them you are restricted to the types of programs you can create. Take a look at Flow-Based Programming - Loop-Type Networks.
Edit after second post by pagoda_5b
One statement in the paper made me take notice...
For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
To me that says that loops are not allowed within the Scala.React framework. A cycle between two nodes would seem to cause the system to continually try to raise the level of both nodes forever.
But that doesn't mean that you have to encode the loops within their framework. It could be possible to have one path from the item you want to observe and then another, separate, path back to the GUI.
To me, it always seems that too much emphasis is placed on a programming system completing and giving one answer. Loops make it difficult to determine when to terminate. Libraries that use the term "reactive" tend to subscribe to this thought process. But that is just a result of the Von Neumann architecture of computers... a focus of solving an equation and returning the answer. Libraries that shy away from loops seem to be worried about program termination.
Dataflow doesn't require a program to have one right answer or ever terminate. The answer is the answer at this moment of time due to the inputs at this moment. Feedback and loops are expected if not required. A dataflow system is basically just a big loop that constantly passes data between nodes. To terminate it, you just stop it.
Dataflow doesn't have to be so complicated. It is just a very different way to think about programming. I suggest you look at J. Paul Morrison's book "Flow-Based Programming" for a field-tested version of dataflow, or my book (once it's done).
Check your MVC knowledge. The view doesn't update the model, so it won't send signals to it. The controller updates the model. For a C/F converter, you would have two controllers (one for the F control, one for the C control). Both controllers would send signals to a single model (which stores the only real temperature, Kelvin, in a lossless format). The model sends signals to two separate views (one for the C view, one for the F view). No cycles.
Based on the answer from @pagoda_5b, I'd say that you are likely allowed to have cycles (7.6 should handle it, at the cost of performance), but you must guarantee that there is no infinite regress. For example, you could have the controllers also receive signals from the model, as long as you guaranteed that receipt of said signal never caused a signal to be sent back to the model.
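A tiny sketch of that unidirectional flow (plain observer-style callbacks rather than Scala.React signals; all names here are made up for illustration):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.DoubleConsumer;

    // Controllers write to the model, the model notifies the views, and
    // nothing ever flows back toward the model: no cycles.
    public class TemperatureMvcSketch {

        static class TemperatureModel {
            private final List<DoubleConsumer> views = new ArrayList<>();
            private double kelvin;                       // the single stored value

            void addView(DoubleConsumer view) { views.add(view); }

            void setKelvin(double k) {                   // called only by controllers
                kelvin = k;
                views.forEach(v -> v.accept(kelvin));    // model -> views, never back
            }
        }

        public static void main(String[] args) {
            TemperatureModel model = new TemperatureModel();
            model.addView(k -> System.out.printf("C view: %.1f%n", k - 273.15));
            model.addView(k -> System.out.printf("F view: %.1f%n", (k - 273.15) * 9 / 5 + 32));

            // Celsius controller: converts user input to Kelvin and updates the model.
            double userCelsius = 25.0;
            model.setKelvin(userCelsius + 273.15);
        }
    }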
I think the above is a good description, but it uses the word "signal" in a non-FRP style. "Signals" in the above are really messages. If the description in 7.1 is correct and complete, loops in the signal graph would always cause infinite regress as processing the dependents of a node would cause the node to be processed and vice-versa, ad inf.
As @Matt Carkci said, there are FRP frameworks that allow loops, at least to a limited extent. They will either not be push-based, use non-strictness in interesting ways, enforce monotonicity, or introduce "artificial" delays so that when the signal graph is expanded on the temporal dimension (turning it into a value graph) the cycles disappear.
I noticed that for problems such as Cloudbalancing, move factories exist to generate moves and swaps. A "move move" transfers a cloud process from one computer to another. A "swap move" swaps any two processes from their respective computers.
I am developing a timetabling application.
A subjectTeacherHour (a combination of subject and teacher) has only a subset of Periods to which it may be assigned. If Jane teaches 6 hours for a class, there are 6 subjectTeacherHours, each of which has to be allocated a Period from the 30 possible Periods of that class; unlike the cloud balance example, where a process can move to any computer.
Only one subjectTeacherHour may be allocated to a given Period (naturally).
The solver tries to place subjectTeacherHours into eligible Periods, till an optimal combination is found.
Pros
The manual seems to recommend it.
...However, as the traveling tournament example proves, if you can remove a hard constraint by using a certain set of big moves, you can win performance and scalability...

...The [version with big moves] evaluates a lot less unfeasible solutions, which enables it to outperform and outscale the simple version...

...It's generally a good idea to use several selectors, mixing fine grained moves and coarse grained moves...
While only one subjectTeacherHour may be allocated to a Period, the solver must temporarily break such a constraint to discover that swapping two particular Period allocations leads to a better solution. A swap move "removes this brick wall" between those two states.
So a swap move can help lead to better solutions much faster.
Cons
A subjectTeacherHour has only a subset of Periods to which it may be assigned. So finding intersecting (common) hours between any two subjectTeachers is a bit tough (but doable in an elegant way: Good algorithm/technique to find overlapping values from objects' properties?); see the small sketch after this list.
Will it give me only small gains in time and optimality?
I am also worried about the crazy interactions that having two kinds of moves may cause, leading to getting stuck at a bad solution.
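The intersection check mentioned above can stay simple; a plain-Java sketch (the period strings are placeholders for whatever type the model uses):

    import java.util.HashSet;
    import java.util.Set;

    // "Common eligible periods" between two lectures, via plain set intersection.
    public class EligiblePeriods {

        static <T> Set<T> intersection(Set<T> left, Set<T> right) {
            Set<T> common = new HashSet<>(left);   // copy so neither input is modified
            common.retainAll(right);               // keep only the elements present in both
            return common;
        }

        public static void main(String[] args) {
            Set<String> janeMaths  = Set.of("Mon1", "Mon2", "Tue3");
            Set<String> bobPhysics = Set.of("Mon2", "Tue3", "Fri5");
            // Periods where a swap between these two lectures could even be considered:
            System.out.println(intersection(janeMaths, bobPhysics));  // [Mon2, Tue3] (order may vary)
        }
    }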
Swap moves are crucial.
Consider 2 courses assigned to a room which is fully booked. Without swapping, the solver would have to break a hard constraint to move 1 course to a conflicted room and choose that move as the step (which is unlikely).
You can use the built-in generic swap MoveFactory. If you write your own, you can make the swap move's isDoable() return false when it would move either side to an ineligible period.
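A rough sketch of just that eligibility check (the names follow the question's wording; the rest of the Move implementation, i.e. doMove, the undo move, equals/hashCode, is omitted and its exact signatures depend on the Planner version):

    import java.util.Set;

    // Sketch of the doability test for a custom period-swap move.
    public class PeriodSwapSketch {

        record Period(String name) { }

        static class SubjectTeacherHour {
            Period period;                    // planning variable: currently assigned period
            Set<Period> eligiblePeriods;      // the subset this lecture may be assigned to
        }

        /** The swap is only doable if each lecture may take the other's period. */
        static boolean isDoable(SubjectTeacherHour left, SubjectTeacherHour right) {
            return !left.period.equals(right.period)
                    && left.eligiblePeriods.contains(right.period)
                    && right.eligiblePeriods.contains(left.period);
        }

        public static void main(String[] args) {
            SubjectTeacherHour maths = new SubjectTeacherHour();
            maths.period = new Period("Mon1");
            maths.eligiblePeriods = Set.of(new Period("Mon1"), new Period("Tue3"));
            SubjectTeacherHour physics = new SubjectTeacherHour();
            physics.period = new Period("Tue3");
            physics.eligiblePeriods = Set.of(new Period("Mon1"), new Period("Tue3"));
            System.out.println("swap doable: " + isDoable(maths, physics));   // true
        }
    }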