How to handle two signals that depend on each other? - scala

I read Deprecating the Observer Pattern with Scala.React and found reactive programming very interesting.
But there is a point I can't figure out: the author described the signals as the nodes in a DAG (directed acyclic graph). Then what if you have two signals (or event sources, or models, whatever) depending on each other, i.e., 'two-way binding', like a model and a view in web front-end programming?
Sometimes it's just inevitable, because the user can change the view, and the back-end (an asynchronous request, for example) can change the model, and you want each side to reflect the other's changes immediately.

Loop dependencies in a reactive programming language can be handled with a variety of semantics. The one that appears to have been chosen in scala.React is that of synchronous reactive languages, specifically that of Esterel. You can find a good explanation of this semantics and its alternatives in the paper "The Synchronous Languages 12 Years Later" by Benveniste, A.; Caspi, P.; Edwards, S. A.; Halbwachs, N.; Le Guernic, P.; de Simone, R., available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1173191&tag=1 or http://virtualhost.cs.columbia.edu/~sedwards/papers/benveniste2003synchronous.pdf.
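To make that concrete: in the synchronous style, an instantaneous cycle is typically rejected, and a two-way binding is expressed by letting one side read the previous turn's value of the other, so the dependency graph of any single turn stays acyclic. A minimal Scala sketch of the idea, with plain vars standing in for signals (this is not scala.React's API):

object TwoWaySketch extends App {
  var modelPrev = 0.0                      // model value from the previous turn

  def turn(userInput: Option[Double], backendInput: Option[Double]): Unit = {
    // the view reads last turn's model; the model reads this turn's view
    val view  = userInput.getOrElse(modelPrev)
    val model = backendInput.getOrElse(view)
    println(s"view=$view model=$model")
    modelPrev = model                      // the delay that breaks the cycle
  }

  turn(Some(42.0), None)  // user edits the view; the model follows
  turn(None, Some(7.0))   // backend updates the model; the view follows next turn
  turn(None, None)
}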

Replying to @Matt Carkci here, because a comment wouldn't suffice.
In the paper, section 7.1 Change Propagation says:
Our change propagation implementation uses a push-based approach based on a topologically ordered dependency graph. When a propagation turn starts, the propagator puts all nodes that have been invalidated since the last turn into a priority queue which is sorted according to the topological order, briefly level, of the nodes. The propagator dequeues the node on the lowest level and validates it, potentially changing its state and putting its dependent nodes, which are on greater levels, on the queue. The propagator repeats this step until the queue is empty, always keeping track of the current level, which becomes important for level mismatches below. For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
and later, in section 7.6 Level Mismatch:
We therefore need to prepare for an opaque node n to access another node that is on a higher topological level. Every node that is read from during n’s evaluation first checks whether the current propagation level which is maintained by the propagator is greater than the node’s level. If it is, it proceeds as usual, otherwise it throws a level mismatch exception containing a reference to itself, which is caught only in the main propagation loop. The propagator then hoists n by first changing its level to a level above the node which threw the exception, reinserting n into the propagation queue (since its level has changed) for later evaluation in the same turn and then transitively hoisting all of n’s dependents.
While there's no mention of any topological constraint (cyclic vs. acyclic), something is not clear, at least to me.
First, there's the question of how the topological order is defined.
Second, the implementation suggests that mutually dependent nodes would loop forever in evaluation, via the exception mechanism explained above.
What do you think?
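For concreteness, here is a rough Scala sketch of the propagation scheme those two sections describe (Node, Propagator and the rest are my own illustrative names, not the paper's implementation; deduplication and validity flags are omitted). It also makes the worry visible: with a cycle, each hoist puts one node above the other, so hoisting never reaches a fixed point:

import scala.collection.mutable

class Node(var level: Int, val validate: Node => Unit) {
  val dependents = mutable.Buffer.empty[Node]
}

class LevelMismatch(val culprit: Node) extends Exception

class Propagator {
  // priority queue ordered by topological level, lowest level first
  private val queue =
    mutable.PriorityQueue.empty[Node](Ordering.by((n: Node) => -n.level))
  var currentLevel = 0

  def invalidate(n: Node): Unit = queue.enqueue(n)

  def propagate(): Unit =
    while (queue.nonEmpty) {
      val n = queue.dequeue()
      currentLevel = n.level
      try {
        n.validate(n)                          // may read other nodes
        n.dependents.foreach(queue.enqueue(_)) // dependents sit on greater levels
      } catch {
        case lm: LevelMismatch =>
          // hoist n above the node it tried to read too early; retry this turn
          n.level = lm.culprit.level + 1
          queue.enqueue(n)
          // (the real scheme also transitively hoists n's dependents; if two
          //  nodes depend on each other, this hoisting can repeat forever)
      }
    }
}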

After scanning the paper, I can't find where they say the graph must be acyclic. There's nothing stopping you from creating cyclic graphs in dataflow/reactive programming. Acyclic graphs only allow you to create pipeline dataflow (e.g., Unix command-line pipes).
Feedback and cycles are a very powerful mechanism in dataflow. Without them, the types of programs you can create are restricted. Take a look at Flow-Based Programming - Loop-Type Networks.
Edit after second post by pagoda_5b
One statement in the paper made me take notice...
For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
To me that says that loops are not allowed within the Scala.React framework. A cycle between two nodes would seem to cause the system to continually try to raise the level of both nodes forever.
But that doesn't mean that you have to encode the loops within their framework. It could be possible to have one path from the item you want to observe and then another, separate, path back to the GUI.
To me, it always seems that too much emphasis is placed on a programming system completing and giving one answer. Loops make it difficult to determine when to terminate. Libraries that use the term "reactive" tend to subscribe to this thought process. But that is just a result of the von Neumann architecture of computers... a focus on solving an equation and returning the answer. Libraries that shy away from loops seem to be worried about program termination.
Dataflow doesn't require a program to have one right answer or ever terminate. The answer is the answer at this moment of time due to the inputs at this moment. Feedback and loops are expected if not required. A dataflow system is basically just a big loop that constantly passes data between nodes. To terminate it, you just stop it.
Dataflow doesn't have to be so complicated. It is just a very different way to think about programming. I suggest you look at J. Paul Morrison's book "Flow-Based Programming" for a field-tested version of dataflow, or my book (once it's done).

Check your MVC knowledge. The view doesn't update the model, so it won't send signals to it. The controller updates the model. For a C/F converter, you would have two controllers (one for the F control, one for the C control). Both controllers would send signals to a single model (which stores the one true temperature, in Kelvin, in a lossless format). The model sends signals to two separate views (one for the C view, one for the F view). No cycles.
Based on the answer from @pagoda_5b, I'd say that you are likely allowed to have cycles (7.6 should handle it, at the cost of performance), but you must guarantee that there is no infinite regress. For example, you could have the controllers also receive signals from the model, as long as you guaranteed that receipt of such a signal never caused a signal to be sent back to the model.
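A sketch of that acyclic shape in Scala, with plain listeners standing in for signals (KelvinModel and the controller functions are made-up names):

// The point is the shape of the graph: controllers -> model -> views,
// with no edge going back to the model.
class KelvinModel {
  private var listeners = List.empty[Double => Unit]
  def onChange(f: Double => Unit): Unit = listeners ::= f
  def set(k: Double): Unit = listeners.foreach(_(k))
}

object TempApp extends App {
  val model = new KelvinModel
  // two views, fed by the model
  model.onChange(k => println(f"C view: ${k - 273.15}%.2f"))
  model.onChange(k => println(f"F view: ${(k - 273.15) * 9 / 5 + 32}%.2f"))
  // two controllers, feeding the model (they never read from a view)
  def cControllerInput(c: Double): Unit = model.set(c + 273.15)
  def fControllerInput(f: Double): Unit = model.set((f - 32) * 5 / 9 + 273.15)

  cControllerInput(100.0) // user types 100 into the C control
  fControllerInput(32.0)  // user types 32 into the F control
}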
I think the above is a good description, but it uses the word "signal" in a non-FRP style. "Signals" in the above are really messages. If the description in 7.1 is correct and complete, loops in the signal graph would always cause infinite regress as processing the dependents of a node would cause the node to be processed and vice-versa, ad inf.
As @Matt Carkci said, there are FRP frameworks that allow loops, at least to a limited extent. They will either not be push-based, use non-strictness in interesting ways, enforce monotonicity, or introduce "artificial" delays so that when the signal graph is expanded along the temporal dimension (turning it into a value graph) the cycles disappear.

Related

AnyLogic forklift collision logging

I need to measure the time a forklift spends in collision; however, movement_log is not available for an agent type that is a forklift managed by a transporter fleet. I also cannot use statecharts, because they cost too much performance.
Situation: I am simulating a warehouse with one-way aisles whose capacity is 2 vehicles. There are situations where a forklift (the yellow one) needs to wait behind another one in a one-way aisle. I currently have that modeled properly; I just don't know how to detect this situation and log it.
Thank you
I would do it as follows (see the sketch after these steps):
Create a new two-dimensional variable called collisionLog.
Check the speed [getSpeed() function] and state [TransporterState getState() function] every 1 second.
Write these into the collisionLog.
Once the simulation is completed, remove the rows with idle status.
Then do the calculations, based on the fact that when the speed is zero and the transporter is busy, you have a waiting vehicle.
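A sketch of that scanning logic, in Scala for brevity (an actual AnyLogic model is Java; only getSpeed()/getState() above are real API functions, and the surrounding types here are made up):

sealed trait TransporterState
case object Idle extends TransporterState
case object Busy extends TransporterState

trait Forklift {
  def getSpeed(): Double
  def getState(): TransporterState
}

final case class LogRow(time: Double, speed: Double, state: TransporterState)

class CollisionLog {
  private val rows = scala.collection.mutable.Buffer.empty[LogRow]

  // call this from a recurring 1-second event, once per forklift
  def sample(now: Double, f: Forklift): Unit =
    rows += LogRow(now, f.getSpeed(), f.getState())

  // post-processing: waiting == zero speed while busy (idle rows don't count)
  def totalWaitingSeconds: Double =
    rows.count(r => r.state == Busy && r.speed == 0.0).toDouble // 1 sample ~ 1 s
}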
There is no accessible trigger point (typically an action of a block) to trap when transporters have collisions. Yes, that situation obviously has to be captured internally to enable the transporters to avoid collisions, but in this situation that is not exposed as a block action, or action anywhere else. (AnyLogic space markup elements never have actions, except for some of the newer Material Handling library ones like Station, because these effectively represent a process step.)
The Transporter Control block has all the settings for collision detection and avoidance, but no related actions.
So your alternatives are really
'Scan' for this situation occurring: Yashar's answer, inferring that zero speed when non-idle means 'waiting due to collision' (which may or may not be 100% robust), is one way.
Explicitly break down the movement (from the process perspective) to define the potential 'conflicts' and decision-making within the process flow (e.g., if you're trying to move to an aisle, move to an entrance node, reserve a space in the aisle using resource pools or similar, and only enter when free). Clearly that doesn't cover every possible case, but may be useful in some situations.
The actions that do exist in the Transporter Control block could help a bit here (for both alternatives) since at least you have action points on entering paths and nodes. (You could also raise an enhancement request with AnyLogic to add collision-related actions here....)
I have a huge model with a large number of forklifts; checking any attribute every second would result in a huge performance loss.
I also cannot use statecharts, because they cost too much performance
Have you actually tried it, though? Some things do not result in as much of a performance hit as you might think, and performance should not be an a priori 'that will be too slow' judgement; ideally you have requirements for acceptable performance and you work around them. (There are always trade-offs between performance, functionality and maintainability.)
[You also don't say how you think using statecharts could have helped. Did you mean doing the 'scanning' approach via a statechart, say with cyclic entry/exit from a Scan state?]

Method to create a list of taskings for a resource to work on when the resource becomes idle

[Image illustrating the freeze omitted]
Context:
I am creating a scalable model of a production line to increase the man-machine optimization ratio, and will be scaling the model so that an operator (resource) works on multiple machines (of the same type). During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
Problem:
The entire process freezes when the operator is seized at multiple Seize blocks concurrently.
Thoughts:
Is there a way to create a list to which taskings are added whenever the resource is currently seized? The resource would then work through the list of taskings whenever it becomes idle. Any other methods to resolve this issue are also appreciated!
If this is going to become a complex model, you may want to consider using a pure agent-based approach.
Your resource has a LinkedList of JobRequest agents that are created and sent by the machines when necessary. They are sorted by some priority.
The resource then simply does one JobRequest after the next.
No ResourcePools or Seize elements required.
This is often the more powerful and flexible approach as you are not bound to the process blocks anymore. But obviously, it needs good control and testing from you :)
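A bare-bones Scala sketch of that pattern (JobRequest is from the answer above; everything else is made up, and a real tasking would take simulated time rather than run instantly):

final case class JobRequest(machineId: String, priority: Int, work: () => Unit)

class Operator {
  // highest priority dequeued first; a LinkedList with sorted insert works too
  private val jobs = scala.collection.mutable.PriorityQueue.empty[JobRequest](
    Ordering.by((j: JobRequest) => j.priority))
  private var busy = false

  // machines call this instead of seizing the operator
  def request(job: JobRequest): Unit = {
    jobs.enqueue(job)
    if (!busy) workNext()
  }

  // drain the queue one job at a time; go idle when it is empty
  private def workNext(): Unit =
    if (jobs.nonEmpty) {
      busy = true
      val job = jobs.dequeue()
      job.work()
      workNext()
    } else busy = false
}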
Problem: Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
You need to explain your problem better: it is not possible to "seize the same operator at multiple seize blocks concurrently" (unless you are using a resource choice condition or similar to try to 'force' seizing of a particular resource --- even then, this is more accurately framed as 'I've set up resource choice conditions which mean I end up having no valid resources available').
What does your model "freezing" represent? For example, it could just be a natural consequence of having no resources available, especially if you have long delay times or are using Delay blocks with "Until stopDelay() is called" set --- i.e., you are relying on events elsewhere in your model to free agents (and seized resources) from blocks, which an incorrect model design might mean never happen in some circumstances. (If your model is "freezing" because of no resources being available, it should 'unfreeze' when one does.)
During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
You can just do this bit by breaking down the actions at a machine into a number of Seize/Delay/Release actions with different characteristics (or a process flow that loops around a set of these driven by some data if you want it to be more flexible / data-driven).

Multi-instance and Loop in BPMN

I am trying to model a certain behaviour where a couple of activities in different swimlanes are supposed to be processed in a loop. Now, BPMN uses tokens to illustrate the flow and the paths taken. I wonder how such tokens work in the case of loops. Does every activity iteration create a token which consequently travels through the connected activities?
E.g., let's say Activity1 will be performed in a loop 10 times. Will that create 10 tokens, each of which will travel through the remaining activities of the process? Such behaviour would be undesirable; however, if I am not mistaken, multi-instance activities work that way.
The only solution I have in mind that would comply with the BPMN specification is to create a Call Activity for the whole block of activities and then run the Call Activity in a loop.
Can anyone clarify for me the use of loops and multi-instances in BPMN from the view of tokens?
Thank you in advance!
Based upon my reading of the documentation (https://www.omg.org/spec/BPMN/2.0/PDF), the answer from @qwerty_so does not seem to conform to the standard, although in part this seems to be because the question itself is imprecise, or at least underspecified.
A token (see glossary) is simply an imaginary object that represents the flow unit in the process diagram. There are at least three different types of loops specified in the standard, which suggest different implications for the flow unit.
Sections 13.2.6 and 13.2.7 describe Loop Activities and Multiple Instance Activities respectively. While the latter, on its face, might not seem like a loop, the standard defines attributes of the activity that suggest otherwise, including MultiInstanceLoopCharacteristics and loopCardinality (an Expression).
In the former case, the operational semantics suggest a single flow unit that repeats multiple times according to some policy, or even without bound.
In the latter case, the activity has "multiple instances spawned," including a parallel variant.
That multiple instances can flow forward in parallel, on its face, suggests that the system must at least allow for the possibility of spawning multiple tokens (or conceptually splitting the original token) to support multiple threads proceeding simultaneously along different paths.
That said, the Loop Activity (13.2.6) appears to support the OP's desired semantics.
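A toy sketch of the difference in token terms (this is not a BPMN engine, just counting what reaches the downstream activities; per the above, the loop activity passes on one token while the multi-instance activity passes on one per instance):

object TokenDemo extends App {
  final case class Token(id: Int)

  // Loop activity (13.2.6): ONE token iterates; one token continues downstream.
  def loopActivity(t: Token, times: Int, body: Token => Unit): Seq[Token] = {
    (1 to times).foreach(_ => body(t))
    Seq(t)
  }

  // Multi-instance activity (13.2.7): N tokens spawned; all flow downstream.
  def multiInstance(t: Token, cardinality: Int, body: Token => Unit): Seq[Token] =
    (1 to cardinality).map(i => Token(t.id * 100 + i)).map { tok => body(tok); tok }

  println(loopActivity(Token(1), 10, _ => ()).size)  // 1 token reaches downstream
  println(multiInstance(Token(2), 10, _ => ()).size) // 10 tokens reach downstream
}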

Single function handling events or a function for each event?

Which should you prefer in which situation: a single function handling all events, or a function for each event?
Here is a basic code example:
Option 1
enum Notification {
    case A
    case B
    case C
}

protocol One {
    func consumer(consumer: Consumer, didReceiveNotification notification: Notification)
}
or
Option 2
protocol Two {
    func consumerDidReceiveA(consumer: Consumer)
    func consumerDidReceiveB(consumer: Consumer)
    func consumerDidReceiveC(consumer: Consumer)
}
Background
Apple uses both options. E.g., for NSStreamDelegate we have the first option, while in CoreBluetooth (e.g., CBCentralManagerDelegate) we see option two.
One big difference I see is that Swift does not support optional protocol methods nicely (you need an extension or the @objc keyword).
What would you prefer? What's the (dis)advantages?
In terms of achieving the loosest form of coupling and highest degree of cohesion, naturally the choice would sway towards individual events, not this kind of multi-event bundle of responsibilities.
Yet there are a lot of practical concerns that might move you towards favoring the opposite, coarser way of dealing with events instead of a separate function per granular event.
Here are some possible ones (not listed in any specific order).
Boilerplate
While it's not the biggest thing to worry about, writing a bunch of functions tends to take a bit more effort than writing a bunch of if/else statements or switch cases within one. More importantly than this, however, is the code needed to connect/disconnect event-handling slots to event-handling signals. Avoiding the need to write that subscription/unsubscription kind of code for every single teeny event handled can save considerably on the amount of code to maintain.
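A small Scala sketch of that subscription-boilerplate difference (the Bus type and the subscribeA/B/C names are hypothetical):

object SubscriptionDemo extends App {
  sealed trait Notification
  case object A extends Notification
  case object B extends Notification
  case object C extends Notification

  class Bus {
    private var subs = List.empty[Notification => Unit]
    def subscribe(f: Notification => Unit): Unit = subs ::= f
    def publish(n: Notification): Unit = subs.foreach(_(n))
  }

  val bus = new Bus

  // Coarse style: a single hookup, with the branching inside the handler.
  bus.subscribe {
    case A => println("handle A")
    case B => println("handle B")
    case C => () // not interested, but still reached
  }

  // The granular style would instead need three hookups (and three un-hookups):
  // bus.subscribeA(handleA); bus.subscribeB(handleB); bus.subscribeC(handleC)

  bus.publish(A)
}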
Performance
It might seem counter-intuitive that performance can favor the coarser multi-event handler. After all, the granular event-handler requires less branching (one dynamic dispatch to get to the precise event handler), while the coarser one requires twice as much (one dynamic dispatch to get to a coarse event-handling site, and another local series of branches to get to the precise event-handling code).
Yet the cost of dynamic dispatch leans heavily on branch prediction. If you're branching into coarser event handlers, then often you're branching more often into the same set of instructions, and that can be an optimization strategy. To have two sets of more predictable branches can often produce more optimal results than one less predictable branch.
Moreover, coarser event-handling typically implies fewer aggregates, fewer lists of functions to call on the side of those triggering events. And that can translate to reduced memory usage and improved locality of reference.
On the flip side, to branch into coarser event handlers often means branching more often. For example, some site might only be interested in the button-push kind of input events, not resize events. If we lump all of these together into a coarse event handler without some filtering mechanism on top, then typically we have to pay the cost of dynamic dispatch even for a resize event that isn't handled by that particular site.
Yet I've found that branching needlessly into the same coarse functions (most likely due to the branch predictor succeeding) is often actually better than branching into a wide variety of disparate functions only as needed.
So there's a balancing act here, and even performance doesn't clearly side with one strategy over another. It still varies case-by-case.
Nevertheless, lacking measurements and very detailed data about the critical code paths, it's typically safer from a performance perspective to err on the side of these coarser multi-event handlers. After all, even if that proves to be the wrong decision from a performance standpoint, it's easier to optimize from coarse to fine (we can even do so very non-intrusively by keeping the coarse and using fine-grained event-handling in cases that benefit most from it) than vice versa.
Event Subscription/Unsubscription
This can likewise swing one way or the other, but in my experience (from team settings), most of the human errors associated with event handling occur not within the event-handling code but outside it. The most common source of errors I see relates to failing to subscribe to events and, most commonly, failing to unsubscribe when the events are no longer of interest.
When events are handled at a coarser level, there's typically less of that error-prone subscription/unsubscription code involved (this relates to the boilerplate concerns above, but subscription code is an unusual kind of boilerplate in that it can be quite error-prone, not merely tedious to write).
This is also very case-by-case. In the systems I've often been involved in, there was a common need for entities to continue to exist after unsubscribing from certain events prematurely. Those premature cases often required the unsubscription code to be written manually, as it could not be tied to an entity's lifetime. That may have pointed to design issues elsewhere, but in that scenario, the number of mistakes made team-wide went down with coarser event handling.
Type Safety
While not shown in the examples here, coarser event-handling typically comes with a need to squeeze more disparate types of data through more generic parameters. In an extreme scenario, like in C, that might translate to squeezing more data through void pointers and more dangerous pointer type casts. With that, compile-time type safety is obliterated, and we could start seeing a whole new source of human error.
In higher-level languages, this might translate to more down casts or things of that sort when we cannot model the signature of a delegate to perfectly fit the parameters passed in when an event is triggered.
I've found typically that this isn't the biggest source of confusions and bugs provided that there is at least some form of runtime type safety when casting or unboxing these parameters. But it is a con on the side of coarser event-handling.
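A short Scala sketch of that trade-off (hypothetical event names; the first version widens the payload to Any and pays with casts, the second keeps the single coarse entry point but stays type-checked):

object TypeSafetySketch {
  // Coarse handler with an Any payload: compile-time checks give way to casts.
  def coarseUnsafe(event: String, payload: Any): Unit = event match {
    case "resize" =>
      val (w, h) = payload.asInstanceOf[(Int, Int)] // runtime cast, can throw
      println(s"resize to ${w}x$h")
    case "key" => println(s"key ${payload.asInstanceOf[Char]}")
    case _     => ()
  }

  // A sealed hierarchy restores safety while keeping one entry point.
  sealed trait Event
  final case class Resize(w: Int, h: Int) extends Event
  final case class Key(c: Char) extends Event

  def coarseSafe(e: Event): Unit = e match {
    case Resize(w, h) => println(s"resize to ${w}x$h")
    case Key(c)       => println(s"key $c")
  }
}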
Intellectual Overhead
This might vary per individual but I tend to look at systems from a very administrative/overview kind of standpoint and specifically with respect to control flow. It's because I tend to work in lower-level portions of the system, including things like proprietary UI toolkits.
In those cases, when a button is pushed, what functions are called? In a large-scale codebase composed of hundreds of thousands of little functions, it turns into a mystery that can only be solved by tracing into the code actually invoked when the button is pushed and seeing each and every function that is called.
That's an inevitability of an event-driven paradigm and something I never became 100% comfortable with, but I find it alleviates some of the explosive complexity I perceive in my personal mental model (something resembling a very complex graph) when there's less code decentralization. With coarser event handlers come fewer, more centralized functions to branch into throughout a system on such a button push, and that increases my familiarity when there are fewer but bigger functions involved in my mental graph.
There is a very simple practical benefit here: if you want to find out when a specific entity responds to a series of events, you can simply put a breakpoint on this one coarse event-handling site (while still being able to drill down to a specific event for that specific entity by putting a breakpoint in a local branch of code).
Of course, I might be an exception there working in these low-level systems that everyone uses. It seems a lot of people are comfortable with the idea of just subscribing to a button push event in their code without worrying about all the other subscribers to the same event.
From my kind of holistic control flow view of the system, it helps me to absorb the complexity more easily when there are fewer but coarser event-handling sites in the codebase even though I normally otherwise find monolithic functions to be a burden. Especially in a debugging context where I face a concern like, "What caused this to happen?", that combined with the event-handling concern of "What functions are actually going to be called when this happens?" can really multiply the complexity. With fewer potential target sites where events are handled, the latter concern is mitigated.
Conclusion
So these are some factors that might sway you to choose one design strategy over another. I find myself somewhere in the middle. I generally don't choose a design as coarse as, say, a wndproc on Windows, which wants a single, ultra-coarse event handler for every single window event imaginable. Yet I might favor designing at a coarser event-handling level than some, just to alleviate this kind of mental complexity, reduce code decentralization, and possibly improve performance (always with a profiler in hand).
And then there are times when I choose to design at a very granular level when the complexity isn't that great (typically when the package triggering events isn't that central), when performance isn't a concern or performance actually favors this route, and for the improved type safety. It's all case-by-case.

How to implement deterministic single-threaded network simulation

I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without any code generation (as they presumably do)?
Also: how can a simulation be repeated if it contains random events? Each run would choose new random values and thus not be the same as the run before. Maybe I am missing something here... hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
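Putting points (1) and (3) together, a minimal single-threaded sketch in Scala (illustrative only, not FoundationDB's actual design): one seeded PRNG, plus an event queue with a total order on events so that same-time events always run in the same order.

import scala.util.Random
import scala.collection.mutable

final case class Event(time: Double, seq: Long, action: () => Unit)

class Sim(seed: Long) {
  val rng = new Random(seed)          // the sole source of randomness
  var now = 0.0
  private var nextSeq = 0L            // tie-breaker: total order on same-time events
  private val queue = mutable.PriorityQueue.empty[Event](
    Ordering.by((e: Event) => (-e.time, -e.seq))) // earliest (time, seq) first

  def schedule(delay: Double)(action: => Unit): Unit = {
    queue.enqueue(Event(now + delay, nextSeq, () => action))
    nextSeq += 1
  }

  def run(): Unit = while (queue.nonEmpty) {
    val e = queue.dequeue(); now = e.time; e.action()
  }
}

object SimDemo extends App {
  val sim = new Sim(seed = 42L)       // same seed => identical printed trace
  def ping(i: Int): Unit = {
    println(f"${sim.now}%.3f ping $i")
    if (i < 3) sim.schedule(sim.rng.nextDouble()) { ping(i + 1) }
  }
  sim.schedule(0.0) { ping(0) }
  sim.run()
}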
In the sims I've built, the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java, you'd use a LinkedHashMap instead of a HashMap.
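The same point as a small Scala/JVM snippet (LinkedHashMap replays insertion order on every run; a plain HashMap's iteration order is unspecified and must not be relied on for replay):

import scala.collection.mutable

object OrderDemo extends App {
  val ordered   = mutable.LinkedHashMap("charlie" -> 3, "alpha" -> 1, "bravo" -> 2)
  val unordered = mutable.HashMap("charlie" -> 3, "alpha" -> 1, "bravo" -> 2)

  println(ordered.keys.mkString(", "))   // always: charlie, alpha, bravo
  println(unordered.keys.mkString(", ")) // whatever order the hashing yields
}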