Simulink Stateflow: Wait for Parallel States

I would like to create a state machine in Stateflow that enters multiple parallel states (A&B&C) and then exits to an end state (D) only when a condition in each parallel state has been achieved. The chart pictured below exits when any of the exit conditions for any state in {A,B,C} is satisfied. (In an Enterprise Architect state chart, I believe this would be a Synchronized state.)
Is this possible to do in Stateflow? If so, how?

Would something like this work?
You'll probably need to use atomic subchart mappings (see https://www.mathworks.com/help/stateflow/ug/mapping-variables-for-atomic-subcharts.html) to map variables in1, in2, in3 to some data in the corresponding atomic subcharts, and assign them in there.
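If it helps to see the intended join logic outside the chart editor, here is a minimal, language-agnostic sketch in Python (the names in1/in2/in3 come from the discussion above; everything else is hypothetical, not Stateflow API) of "stay in A&B&C until every parallel state has raised its flag, then transition to D":

# Minimal sketch of the join you want: remain in the A&B&C super-state
# until every parallel sub-state has set its flag, then move to D.
# in1/in2/in3 play the role of the data mapped into the atomic subcharts.
class Chart:
    def __init__(self):
        self.flags = {"in1": False, "in2": False, "in3": False}
        self.state = "A&B&C"

    def set_flag(self, name):
        self.flags[name] = True
        # Guard on the exit transition: fire only when ALL flags are set.
        if all(self.flags.values()):
            self.state = "D"

chart = Chart()
chart.set_flag("in1")
chart.set_flag("in3")
assert chart.state == "A&B&C"   # still waiting on in2
chart.set_flag("in2")
assert chart.state == "D"       # all three conditions achieved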

Related

For black-box analysis of the outcome of a system call, is a complete comparison of before-and-after forensic system images the right way to measure?

I'm doing x86-64 binary obfuscation research, and fundamentally one of the key challenges in the offense/defense cat-and-mouse game of executing a known-bad program and detecting it (even when obfuscated) is system call sequence analysis.
Put simply, obfuscation is just achieving the same effects on the system through a different sequence of instructions and memory states in order to minimize observable analysis channels. But at the end of the day, you need to execute certain system calls in a certain order to achieve certain input / output behaviors for a program.
Or do you? The question I want to study is this: Could the intended outcome of some or all system calls be achieved through different system calls? Let's say system call D, when executed 3 times consecutively, with certain parameters can be heuristically attributed to malicious behavior. If system calls A, B, and C could be found to achieve the same effect (perhaps in addition to other side-effects) desired from system call D, then it would be possible to evade kernel hooks designed to trace and heuristically analyze system call sequences.
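As a concrete illustration of this kind of overlap (a sketch, not a claim about any particular kernel-hook setup): on Linux, the same file contents can be produced either with write(2) or with an ftruncate(2)/mmap(2)/msync(2) sequence that never issues a write call, as shown here in Python.

import mmap, os

payload = b"same bytes, different syscall sequence"

# Variant 1: the "expected" path -- the bytes arrive via write(2).
fd = os.open("via_write.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.write(fd, payload)
os.close(fd)

# Variant 2: identical file contents, but the trace shows
# ftruncate/mmap/msync instead of write.
fd = os.open("via_mmap.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.ftruncate(fd, len(payload))   # size the file so the mapping is valid
m = mmap.mmap(fd, len(payload))
m[:] = payload                   # plain memory stores, no write(2)
m.flush()                        # msync(2) pushes the page to the file
m.close()
os.close(fd)

assert open("via_write.bin", "rb").read() == open("via_mmap.bin", "rb").read()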
To determine how often this system call outcome overlap exists in a given OS, I don't want to use documentation and manual analysis for a few reasons:
undocumented behavior
lots of work, repeated for every OS and even different versions
So rather, I'm interested in performing black-box analysis to fuzz system calls with various arguments and observing the effects. My problem is I'm not sure how to measure the effects. Once I execute a system call, what mechanism could I use to observe exactly which changes result from it? Is there any reliable way, aside from completely iterating over entire forensic snapshots of the machine before and after?
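Full forensic images are the most complete answer, but a cheaper first pass is to snapshot just the state you care about and diff it. A minimal sketch of the idea in Python, restricted to filesystem metadata (it deliberately ignores kernel state, sockets, shared memory, and so on, so it is an approximation rather than a complete measurement):

import os

def fs_snapshot(root):
    # Map path -> (size, mtime in ns, mode) for every file under root.
    snap = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue   # file vanished mid-walk
            snap[path] = (st.st_size, st.st_mtime_ns, st.st_mode)
    return snap

def diff_snapshots(before, after):
    created = after.keys() - before.keys()
    deleted = before.keys() - after.keys()
    changed = {p for p in before.keys() & after.keys() if before[p] != after[p]}
    return created, deleted, changed

before = fs_snapshot("/tmp")
# ... issue the system call under test here ...
after = fs_snapshot("/tmp")
print(diff_snapshots(before, after))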

Method to create a list of taskings for resource to work on when resource becomes idle

[Image to illustrate the point of freezing]
Context:
I am creating a scalable model for a production line to increase the Man Machine Optimization ratio. I will be scaling the model so that an operator (resource) works on multiple machines (of the same type). During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
Problem:
Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
Thoughts:
Is there a way to create a list where taskings are added whenever the resource is currently seized? The resource would then work through the list of taskings whenever it becomes idle. Any other methods to resolve this issue are also appreciated!
If this is going to become a complex model, you may want to consider using a pure agent-based approach.
Your resource has a LinkedList of JobRequest agents that are created and sent by the machines when necessary. They are sorted by some priority.
The resource then simply does one JobRequest after the next.
No ResourcePools or Seize elements required.
This is often the more powerful and flexible approach as you are not bound to the process blocks anymore. But obviously, it needs good control and testing from you :)
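A minimal sketch of that pattern, in Python rather than AnyLogic's Java, with hypothetical names (JobRequest, request_job, operator_loop): machines enqueue prioritized requests, and the idle operator simply works them off in priority order.

import heapq
import itertools

tie = itertools.count()   # keeps equal-priority jobs in FIFO order

class JobRequest:
    def __init__(self, machine, task, priority):
        self.machine = machine
        self.task = task
        self.priority = priority   # lower number = more urgent

job_queue = []   # the operator's list of pending taskings

def request_job(machine, task, priority):
    # Called by a machine whenever it needs the operator.
    heapq.heappush(job_queue, (priority, next(tie), JobRequest(machine, task, priority)))

def operator_loop():
    # The idle operator simply does one JobRequest after the next.
    while job_queue:
        _, _, job = heapq.heappop(job_queue)
        print("operator handles '%s' at machine %s" % (job.task, job.machine))

request_job("M1", "load part", priority=2)
request_job("M2", "clear jam", priority=1)
request_job("M1", "unload part", priority=2)
operator_loop()   # clear jam, then load part, then unload part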
Problem: Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
You need to explain your problem better: it is not possible to "seize the same operator at multiple seize blocks concurrently" (unless you are using a resource choice condition or similar to try to 'force' seizing of a particular resource --- even then, this is more accurately framed as 'I've set up resource choice conditions which mean I end up having no valid resources available').
What does your model "freezing" represent? For example, it could just be a natural consequence of having no resources available, especially if you have long delay times or are using Delay blocks with "Until stopDelay() is called" set --- i.e., you are relying on events elsewhere in your model to free agents (and seized resources) from blocks, and an incorrect model design might mean those events never happen in some circumstances. (If your model is "freezing" because no resources are available, it should 'unfreeze' when one becomes available.)
During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
You can just do this bit by breaking down the actions at a machine into a number of Seize/Delay/Release actions with different characteristics (or a process flow that loops around a set of these driven by some data if you want it to be more flexible / data-driven).

How to reduce the overhead of emitting an event from DPI?

I’m using e coverage for sampling signals in my DUT.
In order to sample the covergroup, I'm emitting the coverage sample event inside DPI code (defined in the C interface of e, called from my HDL code).
But it seems that emitting this event incurs a lot of overhead which is not related to the coverage collection itself.
What can I do in order to reduce this overhead?
Try to define, emit, and handle all coverage groups and events in e.
That way you won't incur the overhead of transitioning between languages.
Instead of emitting the event, use the API for procedural sampling of the event (covers.sample_cg())
For example, if you have a covergroup named cg1 defined in a type t1, and you'd like to sample it for t1_inst of t1, then instead of calling:
emit t1_inst.cg1;
call:
covers.sample_cg("t1.cg1", t1_inst);
You should define events that are more relevant to the signals you need.
You can also define an event on a change of specific bits of a bus.

UVM blocking assignment race conditions

I have a question about race conditions in SystemVerilog, especially in UVM. In the standard case, multiple drivers drive our DUT on the same clock edge, generating function calls in the scoreboard. These calls are simultaneous, and it is realistic that they check/modify shared variables in the golden reference model. If these operations were done with non-blocking assignments there would be no problem, but with blocking assignments there can be race conditions. What is the best way to overcome this problem? To implement the golden reference model not in a class?
Thanks in advance
An example of scoreboard pseudocode could be:
function void write_A(input TrA A);
  if (GRF.b >= 100 && A.a == 1)
    GRF.c = 1;
endfunction

function void write_B(input TrB B);
  GRF.b += B.b;
endfunction
Of course, the result depends on the order of execution of these two functions, which is unknown. One could solve this with some synchronization mechanism, but things become harder with many parallel write functions. Using non-blocking assignments would make the situation much clearer and simpler... maybe a solution could be to make all members of GRF static?
The problem here is that you're trying to simulate RTL behavior using behavioral code. You are using multiple functions in multiple threads and calling them all on the same clock edge in a random order. There is no solution to this problem other than enforcing an order upon your operations.
The easiest way to do this is to combine all the @(posedge clk) threads in your scoreboard into one thread. This will force you to call the functions in the same order every time.
So instead of
@(posedge clk)
  write_A(A);
@(posedge clk)
  write_B(B);
You have
@(posedge clk) begin
  write_A(A);
  write_B(B);
end
The latter code will run the same way every time.
A race condition is created when two expressions or assignments try to access the same signal at the same time step. If the two accesses happen at different time steps, there is no race.
Verilog and SystemVerilog code does not all execute at a single point in time: each time step is divided into scheduling regions (Active, Reactive, and others), and each construct is executed by the tool in a specific region.
Race conditions can be removed using the following:
(1) Program blocks
(2) Clocking blocks
(3) Non-blocking assignments
Before program blocks and clocking blocks existed, races were removed using non-blocking assignments.
Here I mainly talk about the Active and Reactive regions. The Active region executes continuous assignments and blocking assignments, and evaluates the right-hand sides of non-blocking assignments (their left-hand sides are updated later, in the NBA region); code inside program blocks executes in the Reactive region. The Active region is evaluated before the Reactive region.
So, before program blocks were available, verification engineers had to take care of these regions of execution themselves. SystemVerilog also added several other regions, such as the Preponed, Observed, and Postponed regions.

How to handle the two signals depending on each other?

I read Deprecating the Observer Pattern with Scala.React and found reactive programming very interesting.
But there is a point I can't figure out: the author describes the signals as nodes in a DAG (directed acyclic graph). Then what if you have two signals (or event sources, or models, whatever) depending on each other, i.e. 'two-way binding', like a model and a view in web front-end programming?
Sometimes it's just inevitable, because the user can change the view and the back-end (an asynchronous request, for example) can change the model, and you want each side to reflect the other's changes immediately.
Loop dependencies in a reactive programming language can be handled with a variety of semantics. The one that appears to have been chosen in scala.React is that of the synchronous reactive languages, specifically that of Esterel. You can find a good explanation of this semantics and its alternatives in the paper "The synchronous languages 12 years later" by A. Benveniste, P. Caspi, S. A. Edwards, N. Halbwachs, P. Le Guernic, and R. de Simone, available at http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1173191&tag=1 or http://virtualhost.cs.columbia.edu/~sedwards/papers/benveniste2003synchronous.pdf.
Replying to @Matt Carkci here, because a comment wouldn't suffice.
In the paper, section 7.1 (Change Propagation) says:
Our change propagation implementation uses a push-based approach based on a topologically ordered dependency graph. When a propagation turn starts, the propagator puts all nodes that have been invalidated since the last turn into a priority queue which is sorted according to the topological order, briefly level, of the nodes. The propagator dequeues the node on the lowest level and validates it, potentially changing its state and putting its dependent nodes, which are on greater levels, on the queue. The propagator repeats this step until the queue is empty, always keeping track of the current level, which becomes important for level mismatches below. For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
and later, in section 7.6 (Level Mismatch):
We therefore need to prepare for an opaque node n to access another node that is on a higher topological level. Every node that is read from during n’s evaluation, first checks whether the current propagation level which is maintained by the propagator is greater than the node’s level. If it is, it proceeds as usual, otherwise it throws a level mismatch exception containing a reference to itself, which is caught only in the main propagation loop. The propagator then hoists n by first changing its level to a level above the node which threw the exception, reinserting n into the propagation queue (since its level has changed) for later evaluation in the same turn and then transitively hoisting all of n’s dependents.
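To make the quoted mechanism concrete, here is a minimal Python sketch of one propagation turn over a topologically levelled graph (the Node class and its fields are my own simplification, not scala.React's actual API, and it omits the hoisting/exception machinery of 7.6):

import heapq

class Node:
    def __init__(self, name, level, compute, dependents=()):
        self.name = name
        self.level = level                  # topological level in the graph
        self.compute = compute              # revalidates the node; returns True if its value changed
        self.dependents = list(dependents)  # nodes on strictly greater levels

def propagate(invalidated):
    # One propagation turn: a priority queue keyed by level, lowest first.
    queue = [(n.level, id(n), n) for n in invalidated]
    heapq.heapify(queue)
    done = set()
    while queue:
        _, _, node = heapq.heappop(queue)
        if id(node) in done:
            continue                        # already validated this turn
        done.add(id(node))
        if node.compute():                  # state changed: push dependents
            for dep in node.dependents:
                heapq.heappush(queue, (dep.level, id(dep), dep))

# Tiny chain: source (level 0) feeds sink (level 1).
def sink_compute():
    print("sink revalidated")
    return False   # its own value did not change further

sink = Node("sink", 1, compute=sink_compute)
source = Node("source", 0, compute=lambda: True, dependents=[sink])
propagate([source])   # prints "sink revalidated" exactly once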
While there's no mention of any topological constraint (cyclic vs. acyclic), something is not clear, at least to me.
First, there is the question of how the topological order is defined.
And then the implementation suggests that mutually dependent nodes would loop forever in the evaluation, through the exception mechanism explained above.
What do you think?
After scanning the paper, I can't find where they mention that it must be acyclic. There's nothing stopping you from creating cyclic graphs in dataflow/reactive programming. Acyclic graphs only allow you to create Pipeline Dataflow (e.g. Unix command line pipes).
Feedback and cycles are a very powerful mechanism in dataflow. Without them you are restricted in the types of programs you can create. Take a look at Flow-Based Programming - Loop-Type Networks.
Edit after second post by pagoda_5b
One statement in the paper made me take notice...
For correctly ordered graphs, this process monotonically proceeds to greater levels, thus ensuring data consistency, i.e., the absence of glitches.
To me that says that loops are not allowed within the Scala.React framework. A cycle between two nodes would seem to cause the system to continually try to raise the level of both nodes forever.
But that doesn't mean that you have to encode the loops within their framework. It could be possible to have one path from the item you want to observe and then another, separate, path back to the GUI.
To me, it always seems that too much emphasis is placed on a programming system completing and giving one answer. Loops make it difficult to determine when to terminate. Libraries that use the term "reactive" tend to subscribe to this thought process. But that is just a result of the Von Neumann architecture of computers... a focus on solving an equation and returning the answer. Libraries that shy away from loops seem to be worried about program termination.
Dataflow doesn't require a program to have one right answer or ever terminate. The answer is the answer at this moment of time due to the inputs at this moment. Feedback and loops are expected if not required. A dataflow system is basically just a big loop that constantly passes data between nodes. To terminate it, you just stop it.
Dataflow doesn't have to be so complicated. It is just a very different way to think about programming. I suggest you look at J. Paul Morrison's book "Flow-Based Programming" for a field-tested version of dataflow, or my book (once it's done).
Check your MVC knowledge. The view doesn't update the model, so it won't send signals to it. The controller updates the model. For a C/F converter, you would have two controllers (one for the F control, one for the C control). Both controllers would send signals to a single model (which stores the only real temperature, Kelvin, in a lossless format). The model sends signals to two separate views (one for the C view, one for the F view). No cycles.
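A minimal Python sketch of that wiring (all the class names here are mine, just to illustrate the shape): controllers write to the model, the model notifies the views, and no edge ever points back, so the graph stays acyclic.

class TemperatureModel:
    # Stores the only real temperature, in Kelvin.
    def __init__(self):
        self.kelvin = 273.15
        self.views = []

    def set_kelvin(self, k):
        self.kelvin = k
        for view in self.views:       # model -> views, one direction only
            view.render(k)

class CelsiusController:
    def __init__(self, model):
        self.model = model

    def user_typed(self, c):          # controller -> model, one direction only
        self.model.set_kelvin(c + 273.15)

class FahrenheitController:
    def __init__(self, model):
        self.model = model

    def user_typed(self, f):
        self.model.set_kelvin((f - 32) * 5 / 9 + 273.15)

class CelsiusView:
    def render(self, k):
        print("C view: %.2f" % (k - 273.15))

class FahrenheitView:
    def render(self, k):
        print("F view: %.2f" % ((k - 273.15) * 9 / 5 + 32))

model = TemperatureModel()
model.views += [CelsiusView(), FahrenheitView()]
CelsiusController(model).user_typed(100)   # both views show 100 C / 212 F
FahrenheitController(model).user_typed(32) # both views show 0 C / 32 F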
Based on the answer from @pagoda_5b, I'd say that you are likely allowed to have cycles (7.6 should handle it, at the cost of performance) but you must guarantee that there is no infinite regress. For example, you could have the controllers also receive signals from the model, as long as you guaranteed that receipt of said signal never caused a signal to be sent back to the model.
I think the above is a good description, but it uses the word "signal" in a non-FRP style. "Signals" in the above are really messages. If the description in 7.1 is correct and complete, loops in the signal graph would always cause infinite regress as processing the dependents of a node would cause the node to be processed and vice-versa, ad inf.
As @Matt Carkci said, there are FRP frameworks that allow loops, at least to a limited extent. They will either not be push-based, use non-strictness in interesting ways, enforce monotonicity, or introduce "artificial" delays so that when the signal graph is expanded on the temporal dimension (turning it into a value graph) the cycles disappear.
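As a small illustration of the "artificial delay" idea (a sketch in plain Python, not any particular FRP library): each signal in the cycle reads the other's value from the previous turn, so within any single turn the dependency graph is acyclic and propagation terminates.

# Two mutually dependent "signals", a and b. Each turn, each one reads the
# other's value from the PREVIOUS turn, so no turn ever evaluates a cycle.
a_prev, b_prev = 0, 1

for turn in range(5):
    a_next = b_prev + 1   # a depends on (delayed) b
    b_next = a_prev + 1   # b depends on (delayed) a
    a_prev, b_prev = a_next, b_next
    print("turn %d: a=%d b=%d" % (turn, a_prev, b_prev))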