See the simplified example code here:
process job[num_objs];
// assume also, arr_obj1s (array of type obj1) and
// arr_obj2s (array of type obj2) are arrays of size
// num_objs, and the objects define a run() function
foreach (arr_obj1s[i]) begin
fork
automatic int j = i;
arr_obj1s[j].run(); // these run forever loops
begin
job[j] = process::self();
arr_obj2s[j].run(); // these run finite logic
end
join_none
end
foreach (job[i]) begin
wait (job[i] != null);
job[i].await();
end
// How do we ever reach here?
My confusion is that the calls to arr_obj1s[j].run() will never return (they run forever loops) and I don't quite follow the meaning of that call's placement outside the begin/end block. Which process is that forever run() executed on, and how can it be that each call to await() will return if some process is running a run() which won't return?
EDIT: Here is some more information. Posting the full code would be pages and pages, but I hope this extra bit helps.
obj1's run() function looks like this:
virtual task run;
fork
run_a(); // different logically separated tasks
run_b();
run_c();
join
endtask: run
And as an example, run_a looks basically like this (they are all similar):
virtual task run_a;
// declare some local variables
forever begin
@(posedge clk);
// ...
end
endtask: run_a
But obj2's run() function looks basically like this:
virtual task run;
fork
run_d(); // different logically separated tasks
run_e();
join
endtask: run
And as an example run_d() looks like this:
virtual task run_d;
while ((data_que.size() > 0)) begin
// process a pre-loaded queue,
// data will not be pushed on during the simulation
end
endtask:run_d
This code fragment looks like it is demonstrating process control, so here's my guess at what's going on. There is a group of processes in arr_obj1s and arr_obj2s:
Those in arr_obj1s run forever, so they only need to be spawned once and forgotten about.
Those in arr_obj2s accomplish some task and return, so the parent process needs to know when this happens.
All processes have the same parent.
My confusion is that the calls to arr_obj1s[j].run() will never return (they run forever loops) and I don't quite follow the meaning of that call's placement outside the begin/end block
So all that's needed to spawn all the processes is three lines of code in the fork..join_none block.
foreach (arr_obj1s[i]) begin
fork
automatic int j = i; // Captures the loop index (initialized in the parent when the fork is reached)
arr_obj1s[j].run(); // Spawns process
arr_obj2s[j].run(); // Spawns process
join_none
end
The join_none keyword means the parent process does not wait for the spawned processes to complete, so the entire foreach loop executes and the parent then continues on to the next foreach loop. Further, join_none also means that the child processes will not start executing until the parent process reaches a blocking statement.
However, this won't let us detect when the child processes complete, unless they modify some sort of shared variable that we poll. To avoid coding that, SystemVerilog provides the process class: a handle to a process that lets us wait for (or otherwise control) that process. There is no way to grab the handle of another statement's process from the outside; process::self() must be called inside the procedural context whose handle we want. Thus the following won't work right if process::self() is added directly to the fork..join_none block, because that assignment executes as its own separate child process:
foreach (arr_obj1s[i]) begin
fork
automatic int j = i;
arr_obj1s[j].run();
job[j] = process::self(); // Gets the handle of this lone assignment's process, not the one running arr_obj2s[j].run()
arr_obj2s[j].run();
join_none
end
To fix this we need to create a new sequential procedural context whose process handle we can get, and then call the task from there:
foreach (arr_obj1s[i]) begin
fork
automatic int j = i;
arr_obj1s[j].run(); // Spawns a new process for those that don't complete
begin // Spawns a new process for those that complete
job[j] = process::self(); // Saves handle to this begin..end process
arr_obj2s[j].run(); // Process continues through here
end
join_none
end
The final foreach loop only waits on the processes for which we have a handle. The processes that run forever are ignored.
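To tie it back to the question's waiting loop (repeated here only for context): the wait guard matters because, with join_none, the begin..end child has not run yet, and job[i] is still null, when the parent reaches this second foreach.
foreach (job[i]) begin
    wait (job[i] != null); // guard: the child only sets job[i] once the parent blocks
    job[i].await();        // returns once the finite arr_obj2s[i].run() completes
end
// the forever-running arr_obj1s processes are simply left running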
First off, the way fork/join works in Verilog, each statement in the fork/join block executes as a concurrent process. Absent a begin/end, each statement is a separate process in itself.
So your example is forking off at least two processes for each iteration of the loop.
fork
automatic int j = i; <= Statement 1 ??
arr_obj1s[j].run(); // these run forever loops <= Statement 2
begin \
job[j] = process::self(); | <= Statement 3
arr_obj2s[j].run(); // these run finite logic |
end /
join_none
I say at least, because I don't fully understand how the automatic int j is treated in this case (per the LRM, automatic variables declared in a fork are initialized when the fork is reached, before any child processes are spawned, so it should not become a process of its own).
So here is what happens.
For each iteration of the loop:
arr_obj1s[j].run() is started. (This runs a forever loop and will never end.)
arr_obj2s[j].run() is started. (This will end after running for some finite time.) The handle of the begin..end process that runs this call is stored in job[j].
The code which calls await() is only waiting on the processes which run the calls to arr_obj2s[j].run(). They will complete, since they are running finite tasks.
The forever loops will still be running, even after the await calls have all completed.
I have several agents each with their own monitor and analysis ports connected to a checker. The checker is organized like below where it calls each check() function every cycle in a specific order. This is done this way to handle the case where we get an input and output txn in the same cycle (design has "bypass" logic to immediately output the txn it sees on its input in the same cycle).
If we go with design #2 (below), there is no guarantee that we will process the input_txn first, so if we happen to process the output_txn first, the assertion could fire because it doesn't know that there was an input_txn in the same cycle. I have had success using Design #1 to handle the case where we get an input and output txn in the same cycle; however I now realize this is still not guaranteed to work correctly because it's possible that the simulator could execute the checker's run_phase() after the output_agent's run_phase() but before the input_agent's run_phase(), and I could get the same issue.
What I really want is almost a "check_phase" for each timestep, so I can guarantee all agents' monitors have finished executing in the current timestep before the checker starts executing. Is there any way to guarantee the checker executes after all other processes in the current timestep?
P.S. I'm not looking for advice on how to improve my checker; this is just a very dumbed-down version of my actual testbench, made to easily convey the problem I have.
## Design 1 ##
class my_checker extends uvm_component;
//boiler plate uvm...
task run_phase(uvm_phase phase);
forever begin
check_inputs();
check_outputs();
@(posedge vinft.clk);
end
endtask
function void check_inputs();
input_txn_c txn;
if (input_analysis_fifo.try_get(txn)) begin // non-blocking try_get()
//do check
pending_txn_cnt++;
end
endfunction
function void check_outputs();
output_txn_c txn;
if (output_analysis_fifo.try_get(txn)) begin //non-blocking try_get()
assert(pending_txn_cnt > 0);
pending_txn_cnt--;
end
endfunction
endclass
## Design 2 ##
class my_checker extends uvm_component;
//boiler plate uvm...
task run_phase(uvm_phase phase);
fork
check_inputs();
check_outputs();
join_none
endtask
task check_inputs();
input_txn_c txn;
forever begin
input_analysis_fifo.get(txn); //blocking get()
//do check
pending_txn_cnt++;
end
endtask
task check_outputs();
output_txn_c txn;
forever begin
output_analysis_fifo.get(txn); //blocking get
assert(pending_txn_cnt > 0);
pending_txn_cnt--;
end
endtask
endclass
Since you use a FIFO for both the input and output, you should be able to use this simple design:
class my_checker extends uvm_component;
//boiler plate uvm...
input_txn_c txni;
output_txn_c txno;
task run_phase(uvm_phase phase);
forever begin
// Wait until there is an input transaction: txni
input_analysis_fifo.get(txni);
// Wait until there is an output transaction: txno
output_analysis_fifo.get(txno);
// Now there is a pair of transactions to compare: txni vs. txno
// call compare function...
end
endtask
// compare function...
endclass
Since the get calls are blocking, you just need to wait until you have an input transaction, then wait until you have an output transaction. It does not matter if they arrive in the same timestep. Once you have an in/out pair, you can call your compare function.
I don't think you need to check the transaction count for every pair. If you want, you could check whether the FIFOs still have anything in them at the end of the test.
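As a rough sketch of that end-of-test check (assuming the analysis FIFOs are uvm_tlm_analysis_fifo members named input_analysis_fifo and output_analysis_fifo, as in the question), something like this could be added to the checker:
function void check_phase(uvm_phase phase);
    super.check_phase(phase);
    // used() and is_empty() come from the uvm_tlm_fifo base class
    if (!input_analysis_fifo.is_empty())
        `uvm_error("CHK", $sformatf("%0d input transaction(s) left unchecked", input_analysis_fifo.used()))
    if (!output_analysis_fifo.is_empty())
        `uvm_error("CHK", $sformatf("%0d output transaction(s) left unchecked", output_analysis_fifo.used()))
endfunction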
I'm working on a complicated function that calls several subfunctions (within the same file). To pass data around, the setappdata/getappdata mechanism is used occasionally. Moreover, some subfunctions contain persistent variables (initialized once in order to save computations later).
I've been considering whether this function can be executed on several workers in a parallel pool, but became worried that there might be some unintended data sharing (which would otherwise be unique to each worker).
My question is - how can I tell if the data in global and/or persistent and/or appdata is shared between the workers or unique to each one?
Several possibly-relevant things:
In my case, tasks are completely parallel and their results should not affect each other in any way (parallelization is done simply to save time).
There aren't any temporary files or folders being created, so there is no risk of one worker mistakenly reading the files that were left by another.
All persistent and appdata-stored variables are created/assigned within subfunctions called from the parfor.
I know that each worker corresponds to a new process with its own memory space (and presumably, global/persistent/appdata workspace). Based on that and on this official comment, I'd say it's probable that such sharing does not occur... But how do we ascertain it?
Related material:
This Q&A.
This documentation page.
This is quite straightforward to test, and we shall do it in two stages.
Step 1: Manual Spawning of "Workers"
First, create these 3 functions:
%% Worker 1:
function q52623266_W1
global a; a = 5;
setappdata(0, 'a', a);
someFuncInSameFolder();
end
%% Worker 2:
function q52623266_W2
global a; disp(a);
disp(getappdata(0,'a'));
someFuncInSameFolder();
end
function someFuncInSameFolder()
persistent b;
if isempty(b)
b = 10;
disp('b is now set!');
else
disp(b);
end
end
Next we boot up 2 MATLAB instances (representing two different workers of a parallel pool), then run q52623266_W1 on one of them, wait for it to finish, and run q52623266_W2 on the other. If data were shared, the 2nd instance would print the values set by the 1st (5, 5 and 10). Instead, this results (on R2018b) in:
>> q52623266_W1
b is now set!
>> q52623266_W2
b is now set!
Which means that data is not shared. So far so good, but one might wonder whether this represents an actual parallel pool. So we can adjust our functions a bit and move on to the next step.
Step 2: Automatic Spawning of Workers
function q52623266_Host
spmd(2)
if labindex == 1
setupData();
end
labBarrier; % make sure that the setup stage was executed.
if labindex == 2
readData();
end
end
end
function setupData
global a; a = 5;
setappdata(0, 'a', a);
someFunc();
end
function readData
global a; disp(a);
disp(getappdata(0,'a'));
someFunc();
end
function someFunc()
persistent b;
if isempty(b)
b = 10;
disp('b is now set!');
else
disp(b);
end
end
Running the above we get:
>> q52623266_Host
Starting parallel pool (parpool) using the 'local' profile ...
connected to 2 workers.
Lab 1:
b is now set!
Lab 2:
b is now set!
Which again means that data is not shared. Note that in the second step we used spmd, which should function similarly to parfor for the purposes of this test.
There is another case of data not being shared that bit me.
Persistent variables are not even copied from the current workspace to the workers.
To demonstrate, a simple function with a persistent variable is created (MATLAB 2017a):
function [ output_args ] = testPersist( input_args )
%TESTPERSIST Simple persistent variable test.
persistent var
if (isempty(var))
var = 0;
end
if (nargin == 1)
var = input_args;
end
output_args = var;
end
And a short script is executed:
testPersist(123); % Set persistent variable to 123.
tpData = zeros(100,1);
parfor i = 1 : 100
tpData(i) = testPersist;
testPersist(i);
end
any(tpData == 0) % This implies the workers started from 0 instead of the 123 set in the first line.
Output is 1 - workers disregarded the 123 from the parent workspace and started anew.
Checking the values in tpData additionally shows how each worker did its job; for example, tpData(14) == 15 means that the worker which completed iteration 15 continued with iteration 14 next.
So creating a worker means creating a completely new instance of MATLAB, unrelated to the instance of MATLAB you have open in front of you - and that holds for every worker separately.
The lesson I gained from that: don't use simple persistent variables as the simulation configuration. It worked fine and looked elegant as long as no parfor was used... but broke horribly afterwards. Use objects (or otherwise pass the configuration in explicitly).
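A minimal sketch of that advice (SimConfig and runOneCase are invented names for illustration): because the configuration object is an explicit variable used inside the parfor body, it is serialized and copied to every worker, unlike persistent state, which never leaves the client session.
cfg = SimConfig();      % hypothetical value class holding the settings
cfg.threshold = 123;    % set once on the client
results = zeros(100,1);
parfor i = 1:100
    % cfg is a broadcast variable: each worker receives its own copy of it
    results(i) = runOneCase(i, cfg);   % hypothetical per-iteration function
end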
Can anybody explain why the following script does not work? What is the cause of the 'Label not found for "last SOME_BLOCK"' error?
#!/usr/bin/perl
use v5.14;
SOME_BLOCK: {
alarm 1;
$SIG{ALRM} = sub {
last SOME_BLOCK;
};
my $count = 0;
while (1) {
$count += 1;
say $count;
}
};
Exiting a subroutine via last or next is forbidden according to perldoc (and generally triggers a warning). This is because it's quite messy - Perl would need to search dynamically up scopes to find the block that you're trying to skip, and call return from various functions (but what return value should be used?). return is generally safer.
In the signal handling context, it's extra messy because Perl actually has to pause execution of your script in order to execute the signal handler. So it's now running two separate execution contexts, and the signal handler context cannot affect the control flow of the main context directly, which is why you get that error.
There are two things you can do:
throw an exception (using die) and catch it in the outer block. This is undesirable, as it could interrupt pretty much anything.
set a flag defined outside the signal handler (e.g. $caught_signal = 1) and check for it in the inner code at a convenient point, as in the sketch below.
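A minimal sketch of the flag approach, applied to the script from the question:
#!/usr/bin/perl
use v5.14;

my $caught_signal = 0;
$SIG{ALRM} = sub { $caught_signal = 1 };   # the handler only flips a flag
alarm 1;

my $count = 0;
until ($caught_signal) {   # the main loop checks the flag at a convenient point
    $count += 1;
    say $count;
}
say "stopped after $count iterations";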
This is a question about Scala continuations. Can resets be nested? If they can, what are nested resets useful for? Is there any example of nested resets?
Yes, resets can be nested, and, yes, it can be useful. As an example, I recently prototyped an API for the scalagwt project that would allow GWT developers to write asynchronous RPCs (remote procedure calls) in a direct style (as opposed to the callback-passing style that is used in GWT for Java). For example:
field1 = "starting up..." // 1
field2 = "starting up..." // 2
async { // (reset)
val x = service.someAsyncMethod() // 3 (shift)
field1 = x // 5
async { // (reset)
val y = service.anotherAsyncMethod() // 6 (shift)
field2 = y // 8
}
field2 = "waiting..." // 7
}
field1 = "waiting..." // 4
The comments indicate the order of execution. Here, the async method performs a reset, and each service call performs a shift (you can see the implementation on my github fork, specifically Async.scala).
Note how the nested async changes the control flow. Without it, the line field2 = "waiting..." would not be executed until after successful completion of the second RPC.
When an RPC is made, the implementation captures the continuation up to the inner-most async boundary, and suspends it for execution upon successful completion of the RPC. Thus, the nested async block allows control to flow immediately to the line after it as soon as the second RPC is made. Without that nested block, on the other hand, the continuation would extend all the way to the end of the outer async block, in which case all the code within the outer async would block on each and every RPC.
reset forms an abstraction so that code outside is not affected by the fact that the code inside is implemented with continuation magic. So if you're writing code with reset and shift, it can call other code which may or may not be implemented with reset and shift as well. In this sense they can be nested.
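As a minimal standalone sketch of that nesting (assuming Scala 2.x with the continuations compiler plugin enabled), each reset delimits its own continuation, so a shift never captures code outside its nearest enclosing reset:
import scala.util.continuations._

object NestedResets extends App {
  // The inner reset's shift only captures "1 + _", nothing outside it.
  def inner(): Int = reset {
    1 + shift { (k: Int => Int) => k(10) }   // k is x => 1 + x, so inner() == 11
  }

  // The outer reset's shift only captures "2 * _"; calling inner() here is an
  // ordinary call, because inner() has already delimited its own effects.
  val outer: Int = reset {
    2 * shift { (k: Int => Int) => k(inner()) }   // k is x => 2 * x, so outer == 22
  }

  println(outer)   // prints 22
}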
I'm running a recursive function which searches a room for an object. This code is working in conjunction with another process essentially running the same code. The first thing the code does is check to see if the other one has found the object and, if so, it is supposed to break out of the function.
When I do the check to see if the other process has found the object, if it has, I use "return" to break out of that function, at which time it's supposed to move on to other lines of code... However, for some reason it doesn't fully break out, but instead just runs the function again and again.
Any ideas on how I can get it to break out?
I would and can provide the code, but it's kind of long.
EDIT
Parent script
!matlab -r "zz_Mock_ROB2_Find" &
distT = 0.3;
Rob1_ObjFound = 0;
matrix = search_TEST_cam(rob, vid, 0.3, XYpos, 'north', stack, matrix, 0);
disp('I"M OUT')
Recursive code
function matrix = search_TEST_cam(rob1, vid, distT, startingPos, currentDir, stack, matrix, bT)
Rob1_ObjFound = 0;
Rob2_ObjFound = 0;
try
load('Rob2_ObjFound.mat', 'Rob2_ObjFound');
catch
end
if(Rob2_ObjFound == 1)
setDriveWheelsCreate(rob1, 0, 0);
disp('ROB_2 FOUND THE OBJECT')
pause(0.5)
BeepRoomba(rob1)
pause(0.5)
setDriveWheelsCreate(rob1, 0, 0);
return
end
Use break to break out of a for or a while loop and terminate its execution, i.e., the statements after break in the loop body are skipped. For example,
for i=1:5
if i==3
break
else
fprintf('%u,',i)
end
end
outputs 1,2, and the loop terminates when i = 3. If you have nested loops, break will only break out of its current loop and move on to the parent loop.
To skip only the current iteration and move on to the next, use continue. Using the same example,
for i=1:5
if i==3
continue
else
fprintf('%u,',i)
end
end
outputs 1,2,4,5,.
Using return in a function just returns control to the parent function/script that called it.
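For completeness, a small made-up example of return (the function name is invented for illustration):
function r = firstGreaterThan10(v)
%FIRSTGREATERTHAN10 Return the first element of v greater than 10, or [] if none.
r = [];
for i = 1:numel(v)
    if v(i) > 10
        r = v(i);
        return   % leaves the whole function immediately and hands control back to the caller
    end
end
end
Calling firstGreaterThan10([3 8 12 20]) returns 12; the loop never reaches 20.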
It seems like you're using the wrong one in your code. However, it's hard to tell without knowing how you're using them. Anyway, you can try one of these three and see if it makes a difference.
It's hard to say without seeing your code, but I doubt it's a problem with the RETURN statement. It's more likely a problem with how you've set up your recursion. If your recursive function has called itself a number of times, then when you finally invoke a RETURN statement it will return control from the current function on the stack to the calling function (i.e. a previous call to your recursive function). I'm guessing the calling function doesn't stop the recursion properly and ends up calling itself again, continuing the recursion.
My advice: check the exit conditions for your recursive function to make sure that, when the object is found and the most recent call returns, every previous call is properly informed that it should return as well.
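As a hedged sketch of that idea (search_room and check_other_robot are invented names standing in for the real code), the recursive function can return a found flag that every level checks immediately after its recursive call:
function [matrix, found] = search_room(matrix, depth)
%SEARCH_ROOM Sketch only: propagate a "found" flag up the recursion.
found = check_other_robot();   % e.g. the load('Rob2_ObjFound.mat') check from the question
if found
    return                     % unwind this level
end

% ... do this level's searching work here ...

if depth < 10                  % stand-in for the real recursion condition
    [matrix, found] = search_room(matrix, depth + 1);
    if found
        return                 % crucial: propagate the flag so every caller unwinds as well
    end
end
end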