UVM task based phases - must the components be synchronized? - system-verilog

Task based phases - must the components be synchronized? Meaning, can componentA be in reset_phase while componentB is in main_phase?
When using objections, that cannot happen, right? All objections must be dropped before moving to the next phase.
But when no objection is raised, if I recall correctly, once componentA completes its reset_phase and there is no objection protecting it, the flow will move to the next phase and kill ALL other reset_phase tasks in all components.
If that is so, how does jump() in uvm_domain work? If componentA and componentB were in main_phase and componentA jumped to reset_phase, what would happen to each of the components (with and without objection protection)?
ADDITION:
Here are several cases - what will happen in each of them?
Case 1)
ComponentA has a counter to 100 (with 1cycle delay between each count) in main_phase
ComponentB has a counter to 50 (with 1cycle delay between each count) in main_phase
Neither of the uvm_components above raises an objection.
Once ComponentB reaches 50 -- what will happen?
Case 2)
ComponentA has a counter to 100 (with 1cycle delay between each count) in main_phase
ComponentB has a counter to 50 (with 1cycle delay between each count) in main_phase
ComponentA raises an objection - ComponentB does not
Once ComponentB reaches 50 -- what will happen?

The UVM starts each task-based name_phase in order (run_phase, reset_phase, main_phase, ...) by concurrently calling the virtual method name_phase in every instance of a uvm_component. Unless you are doing user-defined phases (which I strongly recommend against), every component has a predefined set of phases that get called regardless of whether you have provided an override for that phase.
After forking off a particular phase in every component, the UVM waits one delta cycle before waiting for objections to that phase to end. That means if all your reset_phase tasks in every component take 100ns but none of them raises an objection, they will all be terminated immediately, and the process gets repeated for the next phase (post_reset_phase). It is only the lack of objections to a phase that makes the UVM move on to the next phase, not reaching the end of any task.
When you can get phase jumping to work, it changes the next phase to execute, and then drops all the objections to the current phase.
The alternative to phase jumping and user defined phase domains is the proper use of sequences. This is much easier to communicate and makes it easier to integrate IP and not have to deal with different concepts of phasing.
Additional info:
Case 1) is what I mentioned above: both components' main_phase tasks are terminated immediately, without even getting to the next cycle.
Case 2) Reaching the end of a task is not what terminates a phase; the phase ends only when the last objection gets dropped. So both components will finish their main_phase tasks. If ComponentA never drops its objection, the main_phase will get stuck, never advancing to the next phase, and will eventually time out after some predetermined time.
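The mechanics above can be sketched in SystemVerilog (a minimal sketch, assuming uvm_pkg is imported and #1ns stands in for the 1-cycle delay; component names are taken from the question):

```systemverilog
class componentA extends uvm_component;
  `uvm_component_utils(componentA)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task main_phase(uvm_phase phase);
    phase.raise_objection(this);   // keeps main_phase alive
    repeat (100) #1ns;             // count to 100, one cycle per count
    phase.drop_objection(this);    // last objection dropped => phase ends
  endtask
endclass

class componentB extends uvm_component;
  `uvm_component_utils(componentB)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task main_phase(uvm_phase phase);
    // no objection: reaching the end of this task does NOT end the phase
    repeat (50) #1ns;
  endtask
endclass
```

With both classes as shown (Case 2), componentB finishes at count 50 and simply sits there; the phase ends when componentA drops its objection at count 100. Remove componentA's raise/drop calls (Case 1) and both tasks are killed immediately.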

Related

AnyLogic - dropoff block - Can I combine "while condition is true" and "given quantity"?

I am wondering how I can combine "while condition is true" and "given quantity" at the same time, regarding AnyLogic dropoff block.
The following chart works well. While the condition (agent.aclass == true) is true, agents (Cargo) are dropped off. For your reference, Cargo has a bool parameter aclass (true or false).
However, my problem is that all available agents are dropped off. I would like to specify a certain quantity (for example, 1, 2, or whatever) to be dropped off, while keeping the condition "while condition is true".
Would you help me?
All agents are dropped off even though I want to specify the quantity to be dropped off.
Yes, the Dropoff block doesn't support what you want directly, since the dropoff 'protocol' can only be "All available" or "Specified number" or "While condition is true".
Easy solution
The easy way to get round it is to incorporate the 'have I dropped off beyond my threshold?' check into the condition.
So you would
Maintain a count (via an int Variable called, say, numDroppedOffThisBatch) of the number currently dropped off in the current 'batch', reset when the agent enters the Dropoff block ("On enter" action).
Change your Dropoff while condition to be agent.classA && (numDroppedOffThisBatch++ < 2) if, say, your threshold was 2.
That is, you only drop off if it's a class A cargo and the number you have previously dropped off for this carrier agent is less than the threshold. The ++ after the variable name is a postfix operator in Java, which means the variable is incremented only after its value has been used in the expression. (So it does represent the number already dropped off.) You could equally have used ++numDroppedOffThisBatch <= 2, but the variable name would be a bit misleading then.
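The postfix-increment behaviour that condition relies on can be sanity-checked in plain Java (variable names mirror the ones above; note that the real Dropoff block stops evaluating at the first false condition, whereas this loop keeps going):

```java
public class DropoffConditionDemo {
    public static void main(String[] args) {
        int numDroppedOffThisBatch = 0;
        // cargo agents on the carrier: three class A, one not, one more class A
        boolean[] isClassA = {true, true, true, false, true};
        int dropped = 0;
        for (boolean classA : isClassA) {
            // postfix ++: the OLD value is compared, then the variable increments;
            // && short-circuits, so non-class-A agents don't bump the counter
            if (classA && (numDroppedOffThisBatch++ < 2)) {
                dropped++;
            }
        }
        System.out.println(dropped); // 2: only the first two class-A agents pass
    }
}
```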
Overly-complex but interesting alternative
Another option is as below, which effectively 'instantly re picks-up' (i.e., in zero sim time) the agents that shouldn't really have been dropped off. This is much more complex but is an interesting approach to understand (and similar things are useful in other contexts), so I've left it in...
It's tricky because you have to understand some under-the-covers details of the order in which things happen with a Dropoff block. Basically, dropped-off agents have their "On dropoff" actions run and are sent to their next block before the carrier agent leaves the Dropoff block. This actually makes things easier for us since we know we can 'initially process' dropped-off agents before the carrier agent leaves. So, we
Dropoff using the while condition as currently, but then have the dropping-off agent go into Pickup block (set to pickup "All available agents") and the dropped-off agents go into a Select Output block which goes either to a Queue (attached to the Pickup block via its special port) or its normal current destination (which looks like a MoveTo block from your (tiny!) screenshot).
Maintain a count (via an int Variable called, say, numDroppedOffThisBatch) of the number currently dropped off in the current 'batch', reset when the agent enters the Dropoff block ("On enter" action) and incremented as the dropped-off agents are dropped-off ("On dropoff" action) which is before they actually leave the block or test the following SelectOutput condition.
Also maintain another count variable numProcessedThisBatch (also reset to zero when the carrier agent enters the Dropoff block).
Agents dropped off then go into a SelectOutput which, if numProcessedThisBatch is greater than or equal to your 'how many should really have been dropped off' threshold, get routed to the Queue; otherwise they carry on as normal. (Note we are not checking numDroppedOffThisBatch here, which will always be the total number dropped off at this point.)
When a dropped-off agent enters the Queue or MoveTo blocks (i.e., when it finishes 'processing' either way), increment numProcessedThisBatch.
Below is a sample process flow screenshot (note the meaningful naming of blocks). The bit outside the red box is just stuff I added to setup a carrier agent with a mixture of class A and non-class-A Cargo agents via the buttons (and I just represented the 'follow on' flows for the carrier and should-actually-have-been-dropped-off cargo agents as nominal Delay blocks).
Here I had the carrier agent contain 3 class-A cargo agents and 2 non-class-A ones, with a threshold of 2 agents to 'actually' drop off (which I held in a variable for clarity).
To make the sequencing clearer (and to show how everything occurs at the same sim time), below are some traceln console messages produced which help understand what's happening. (The ones after I manually set up the carrier via the buttons are prefixed with the simulation time.)
Creating cargo class A agent
Creating cargo class A agent
Creating cargo class A agent
Creating cargo non-class-A agent
Creating cargo non-class-A agent
Creating carrier agent
6.699999999999991: carrier dropping off cargo agents by condition
6.699999999999991: cargo On dropoff action
6.699999999999991: cargo On dropoff action
6.699999999999991: cargo On dropoff action
6.699999999999991: Cargo agent starting normal process: class A true
6.699999999999991: Cargo agent starting normal process: class A true
6.699999999999991: Cargo agent ready for instant pickup: class A true
6.699999999999991: Carrier agent entering post-dropoff pickup
Notes:
It would also be a lot better to then encapsulate all this dropoff-by-condition-and-number logic into a custom block which you can then reuse wherever you want it (and avoid polluting the main process-containing agent with variables). That's obviously another level of detail and complexity.
This behaviour of the Dropoff block with a following SelectOutput is actually quite subtle. SelectOutput block conditions are evaluated before agents actually leave the previous block. (Because SelectOutputs are just routing decisions, they aren't really 'part' of the flow, in the sense that agents spend no time there; think of them as being used to say 'Where should I go next when I leave the preceding block?'.) That is why many blocks (like Delay) have "On exit" and "On at exit" actions. (The latter runs before AnyLogic even tries to see what block it should go to next, so will happen before a following SelectOutput check.) It just happens that Dropoff blocks run their "On dropoff" actions before checking what the onward block might be; thus they function in the same way as "On at exit" actions in other blocks.

StopDelay for cars based on variable value (AnyLogic)

I am trying to stop the delay at delayNucSafe1 (see screenshot) when the car exits at carMovetoScale1. The way I am currently doing it is in the "On exit" action of carMovetoScale1, typing delayNucSafe1.stopDelay(), but I am getting an error that says:
Description: The method stopDelay(Agent) in the type Delay is not applicable for the arguments (). Location: Scale House/Main/carMoveToScale1 - CarMoveTo
Logic Flowchart
where I am asking to stopDelay
Can someone help with this?
The stopDelay(Agent) method is for use when there are multiple agents waiting within the delay and you need to stop the delay for ONE specific agent. If this is the case for you, you would need to know which agent you want to stop the delay for.
For instance, you would call: delayNucSafe1.stopDelay(delayNucSafe1.get(0)) to stop the delay for the agent at index 0 in delayNucSafe1. (This code would also work if there is only 1 agent in the delay).
On the other hand, if you know for sure that there will only ever be 1 agent in the delay (or if you'd like to stop the delay for every agent simultaneously), you would use the method stopDelayForAll(). This method has the benefit that it doesn't need an argument, but it will obviously cause problems if there are multiple agents waiting in the delay, each of which needs to be released independently.
So in summary:
delayNucSafe1.stopDelay(delayNucSafe1.get(agentIndex))
will stop the delay for the agent at index agentIndex within delayNucSafe1. And:
delayNucSafe1.stopDelayForAll()
requires no arguments, and will stop the delay for all agents within delayNucSafe1.

UVM End of test Objection Mechanism and Phase Ready to End Implementation

I am exploring different ways to end a UVM test. One method that has come often from studying different blogs from Verification Academy and other sites is to use the Phase Ready to End. I have some questions regarding the implementation of this method.
I am using this method in a scoreboard class, where my understanding is that after the usual run_phase is finished, the phase_ready_to_end method will be called. The reason I am using it is that my scoreboard's run_phase finishes early while there is still some data in the queues that needs to be processed. So I am trying to prolong the scoreboard's run_phase using this method. Here is some pseudo-code that I have used.
function void phase_ready_to_end(uvm_phase phase);
  if (phase.get_name() != "run") return;
  if (queue.size() != 0) begin
    phase.raise_objection(.obj(this));
    fork
      begin
        delay_phase(phase);
      end
    join_none
  end
endfunction

task delay_phase(uvm_phase phase);
  wait(queue.size() == 0);
  phase.drop_objection(.obj(this));
endtask
I have taken inspiration for this implementation from this link UVM-End of Test Mechanism for your reference. Here are some of the ungated thoughts in my mind on which I need guidance and help.
To the best of my understanding, phase_ready_to_end is called at the end of run_phase, and when it runs it raises an objection for that scoreboard's run_phase and forks off the delay_phase task.
The delay_phase task is just waiting for the queue to empty, but I am not seeing any method or task that will pop the items from the queue. Do I have to call some method to pop from the queue, or, as per point 1 above, will the raised objection keep the run phase going so there is no need for that and we just have to wait a considerable amount of time?
Let me give you some pre-context to this question. I have a scoreboard where there are two queues whose write methods are implemented and they are being fed correctly by their source.
task run_phase(uvm_phase phase);
  forever begin
    // takes data from the two queues and compares them; both queues are
    // implemented correctly and take data from their respective sources
    compare_queues();
  end
endtask
Let me give you a scenario: suppose a total of 10 transactions are generated, but the scoreboard was able to process only 6 of them, so there are 4 transactions left when all objections are dropped. To tackle that, I implemented this phase_ready_to_end method in my scoreboard.
The problem I am having with this method is that when I raise the objection in phase_ready_to_end and call the delay_phase method, nothing happens. I am curious whether there is more to this implementation or not.
Sorry for the delay. I have shared more context to the existing question. Please see to that, let me know if it is confusing.
We have a pair of monitors that call write methods implemented inside the scoreboard. The monitors capture transactions from the bus and call these write methods to push the transactions. Thus the two monitors write into two queues - source and destination - as and when they find transactions.
We have a checker task that reads and checks, running in a forever loop in the run_phase of the scoreboard. It is in a while loop and watches whether the destination queue has a non-zero number of entries. Once it finds one, it pops the head entry from the destination queue, then pops the head entry from the source queue as well, and compares the two entries to declare whether the check was a PASS or FAIL.
There are more than 2 queues and more than a pair of source/destination of course, but broadly this is the architecture around here.
Now in the current scenario, it seems that the checker tasks stop printing after a certain point in time in some of the test cases. Upon adding debug prints thoroughly, it seems that the checker tasks that do the job in #2/#3 above and get called inside the forever loop of the run_phase exit gracefully one last time, but they are never entered again - which is to say that the forever loop that should be calling them didn't call them. It is as if the forever loop of the run_phase stopped completely.
We also added another forever loop in the run_phase that observes whether the queues are empty. From prints inside that parallel loop and from the monitor prints, we know that the queues aren't empty and that the monitors kept pushing writes into the queues for a long time.
So it seems the forever loop stopped working all of a sudden (going by the prints spewed out), while another set of threads that we added in another forever loop in the run_phase, just to monitor those queues, keeps printing that the queues have contents. So the run_phase shouldn't be over, yet the checker tasks running in the forever loop have stopped.
We are using Vivado 2020.2 for the simulation. This is a baffling/weird problem for us and we did go through prints multiple times to make sure nothing has been missed out. It seems we are missing very very basic or has hit a bug/broken some basics of UVM coding to land into here.
If you have any help, thoughts here, will appreciate that greatly.
The function phase_ready_to_end() gets called at the end of every task-based phase when all objections have been dropped (or never raised at all).
Typically a scoreboard has a queue or some kind of array of transactions waiting to be checked sent from a monitor via an analysis_port write() method. If your scoreboard is an in-order comparison checker, the queue size is zero when there are no more transactions waiting to be received.
If you look at the code in the link you shared, the following line in the write_south method does exactly that:
if (!item.compare(item_stream.pop_front()))
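Putting the pieces together, an in-order scoreboard with the queue-drain check might look roughly like this (a sketch only; my_item and the port name ap are made up):

```systemverilog
class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)

  uvm_analysis_imp #(my_item, my_scoreboard) ap;  // monitor calls write()
  my_item expected_q[$];  // expected transactions, filled elsewhere

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  // in-order compare: pop the oldest expected item as each actual arrives
  function void write(my_item item);
    if (!item.compare(expected_q.pop_front()))
      `uvm_error("SCB", "transaction mismatch")
  endfunction

  // called when all run_phase objections have dropped; a non-empty queue
  // means transactions are still waiting to be checked
  function void phase_ready_to_end(uvm_phase phase);
    if (phase.get_name() == "run" && expected_q.size() != 0) begin
      phase.raise_objection(this);
      fork begin
        wait (expected_q.size() == 0);  // drained by write() calls above
        phase.drop_objection(this);
      end join_none
    end
  endfunction
endclass
```

Note that nothing in phase_ready_to_end pops the queue itself: the objection merely keeps run_phase alive so the incoming write() calls can keep draining it.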

tasklet, taskqueue, work-queue -- which to use?

I have been going through LDD3 for the last few months and have read the first few chapters many times.
These two links use different ways to implement a bottom half - one uses a work queue, the other a task queue:
http://www.tldp.org/LDP/lkmpg/2.4/html/x1210.html
http://www.linuxtopia.org/online_books/linux_kernel/linux_kernel_module_programming_2.6/x1256.html
I have some doubts about tasklets, task queues, and work queues - all of them seem to do some task at a free time:
a) What exactly is the difference between these three?
b) Which should be used for an interrupt handler's bottom half?
Tasklets and work queues are normally used for bottom halves, but they can be used anywhere; there is no limitation on them.
Regarding the difference:
1) Tasklets are used in interrupt context. All tasklet code must be atomic, so all the rules that apply to atomic context apply to it. For example, they cannot sleep (as they cannot be rescheduled) or hold a lock for a long time.
2) Unlike tasklets, work queues execute in process context, which means they can sleep and hold a lock for a long time.
In short, tasklets are used for fast execution, as they cannot sleep, whereas work queues are used for normal execution of a bottom half. Both are executed at a later time by the kernel.
Softirqs and tasklets both run in interrupt context, while work queues run in process context. Process-context code is allowed to sleep during execution, but interrupt-context code is not (only another interrupt can preempt a scheduled interrupt-context bottom half).
Which bottom-half mechanism you use depends entirely on the driver you are writing and its requirements.
For example, if you are writing a network driver that sends packets to and from hardware on an interrupt basis, you would want to complete this activity without any delay, so the only options available are softirqs or tasklets.
Note: it is better to go through Linux Kernel Development by Robert Love, chapter 8. I have also read LDD, but Linux Kernel Development by Robert Love is better for understanding interrupts.
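The two deferral mechanisms look like this in a driver skeleton (illustrative only; it uses the classic pre-5.9 DECLARE_TASKLET signature and only builds inside a kernel source tree):

```c
#include <linux/interrupt.h>
#include <linux/workqueue.h>

/* Tasklet: runs in softirq (atomic) context -- must not sleep */
static void my_tasklet_fn(unsigned long data)
{
        /* fast, non-blocking deferred work here */
}
static DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 0);

/* Work item: runs in process context (kernel worker thread) -- may sleep */
static void my_work_fn(struct work_struct *work)
{
        /* may take mutexes, allocate with GFP_KERNEL, etc. */
}
static DECLARE_WORK(my_work, my_work_fn);

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
        /* top half: acknowledge the hardware, then defer the rest
         * using ONE of the two mechanisms below */
        tasklet_schedule(&my_tasklet);
        schedule_work(&my_work);
        return IRQ_HANDLED;
}
```

The choice between the two calls in the handler is exactly the trade-off described above: tasklet_schedule() for low-latency atomic work, schedule_work() when the bottom half needs to sleep.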

How to yield control to a calling method

Say I have a Task object with an Execute method. This method has one to several steps, each of which requires the user to click a 'Continue' button. E.g. when Execute is invoked, the Task tells its container (a Windows form in this case) to display an introductory message and wait for a button click before continuing with step 2, notifying the user of what is taking place and performing some work.
I don't want the controller to have to be aware of the steps in the task, either implicitly, through e.g. calling Execute(Steps.ShowIntro), Execute(Steps.PerformTask) etc. or explicitly, with more than one Execute method, e.g. ExecuteIntro(), ExecuteTask(), etc.
Currently I'm using a Phase enumeration to determine which action to carry out when the Continue button is clicked:
show phase 1 intro.
set current_phase = PhaseOne.
on continue_button click
  switch current_phase
    case PhaseOne:
      show phase 1 'Now doing:' message.
      execute phase 1 task.
      show phase 2 intro.
      set phase to PhaseTwo.
    case PhaseTwo:
      show phase 2 'Now doing:' message.
      execute phase 2 task.
      show phase 3 intro.
      set phase to PhaseThree.
Why don't you simply implement as many classes with an Execute method as there are steps, and put instances of those classes in a queue?
On pressing "Continue" you take the next instance from the queue and call its Execute method.
class Task
  method execute()
    foreach task in queue: execute task
  method addSubTask( task )
    add task to queue
class ShowIntroSubTask extends Task
class ExecuteIntroSubTask extends Task
Mykola's answer sounds good, but if you'd like an alternative, consider passing in a ConfirmContinuation callback, which the Execute could use as needed (e.g. on step transitions). If you wanted to keep things abstract, just call it something like NextStep and leave the semantics up to the container.
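Mykola's queue idea can be sketched in plain Java (class and method names are invented; the container would call executeNextStep() from its Continue-button handler):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class TaskDemo {
    interface SubTask { void execute(); }

    static class Task {
        private final Queue<SubTask> queue = new ArrayDeque<>();
        void addSubTask(SubTask t) { queue.add(t); }

        // Runs exactly one step per 'Continue' click and reports whether
        // more steps remain, so the container never needs to know what
        // the steps actually are.
        boolean executeNextStep() {
            SubTask next = queue.poll();
            if (next != null) next.execute();
            return !queue.isEmpty();
        }
    }

    public static void main(String[] args) {
        Task task = new Task();
        task.addSubTask(() -> System.out.println("intro"));
        task.addSubTask(() -> System.out.println("work"));
        task.addSubTask(() -> System.out.println("done"));
        while (task.executeNextStep()) { }  // simulate repeated clicks
    }
}
```

The ConfirmContinuation variant inverts this: instead of the container pulling steps, Execute invokes a callback at each transition and waits until the container confirms.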