I'm simulating a security control process, and I can't get each passenger to pick up their own baggage. I have tried the Match, Combine, and Pickup blocks, but I still can't get them to work correctly.
I've created the following flowchart; the problem is in the wReclaimPax, pickup, and wReclaimBags blocks (you can see them in the picture).
https://ibb.co/v3V57Tm
I read through this link, Anylogic - Combined multiple items back to original owner, to try to understand, but I still need help.
I've created 3 functions:
isMatch (takes a Pasajero and an Equipaje):
if (equipaje.pasajeroLink.equals(pasajero.equipajeLink)) {
    return true;
}
return false;
paxBags (takes an Equipaje, returns the matching waiting Pasajero or null):
for (int i = 0; i < wait.size(); i++) {
    Pasajero p = (Pasajero) wait.get(i);
    if (isMatch(p, bag))
        return p;
}
return null;
bagsPax (takes a Pasajero, returns the matching waiting Equipaje or null):
for (int i = 0; i < wait.size(); i++) {
    Equipaje e = (Equipaje) wait.get(i);
    if (isMatch(pasajero, e))
        return e;
}
return null;
Assumed context
You haven't really explained how your code relates to your process, but I'm assuming the following:
Because this is luggage retrieval, you want to ensure that a passenger agent (Pasajero) only enters the Pickup block (representing taking the bag from the carousel) when their bag (an Equipaje agent, by the look of it) has arrived in the wReclaimBag Wait and been released from it into the queue4 Queue.
For this you need triggers (to remove agents from the Wait blocks) when either a passenger (Pasajero) arrives in the wReclaimPax Wait, or a bag (Equipaje) arrives in the wReclaimBag Wait (because you don't know whether the passenger or their bag will get to their respective Wait block first).
So your paxBags function is called in the on-entry action of the wReclaimBag Wait, and your bagsPax function in the on-entry action of the wReclaimPax Wait.
Possible problems with current approach
Without knowing more of your model it's hard to say, but the problems I can think of based on what you've supplied are:
Your functions return the Pasajero or Equipaje if there is one that matches. Your match check seemingly relies on bidirectional connections (links) between Pasajero and Equipaje. Obviously, if they're not set up properly the model won't work and, if you are using bidirectional connections, you shouldn't need to check both ends.
Your functions need calling so that, if they return non-null, they then free the matching agent from the other Wait block and free the entering agent as well (see the sketch further down). Are you doing that? Without checking, there may be issues with calling free for yourself as you enter a Wait block (since whether you count as being 'in' the block at that stage, and can therefore be freed, depends on AnyLogic internals). If this seems to be the problem, you could create a timeout-0 dynamic event instance to do the free so that you're not doing it within the scope of the on-enter action.
Your Pickup block (since it's been set up so that the entering agent will always want to pick up the first agent (Equipaje) in queue4) just needs to be set as waiting for quantity 1 (though see below).
If you've done all this, the most likely problem is that AnyLogic's underlying event ordering is affecting things. When you free agents, I'm fairly sure the freeing actually happens in a timeout-0 event scheduled under the covers. So it may be that the passenger arrives at the Pickup before their Equipaje arrives in queue4; but if you set the Pickup to be "Exact quantity (wait for)", with a quantity of 1, it should handle that.
The animation of the process (numbers in/out/within each block and details when clicking on blocks) should also help you debug what is going wrong; e.g., are bags being left in the Wait when they should have been released, etc.
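As a rough sketch (the block and function names follow your description, so treat them as assumptions and adjust as needed), the two Wait blocks' "On enter" actions could look something like this:
// wReclaimBag "On enter" action: 'agent' is the Equipaje that just arrived
Pasajero p = paxBags(agent);        // is its passenger already waiting?
if (p != null) {
    wReclaimPax.free(p);            // release the matching passenger
    wReclaimBag.free(agent);        // release this bag towards queue4
}

// wReclaimPax "On enter" action: 'agent' is the Pasajero that just arrived
Equipaje e = bagsPax(agent);        // is their bag already waiting?
if (e != null) {
    wReclaimBag.free(e);            // release the matching bag
    wReclaimPax.free(agent);        // release this passenger towards the Pickup
}
If freeing the entering agent from within its own on-enter action proves problematic (as discussed above), move the two free calls into a timeout-0 dynamic event instead.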
P.S. With this kind of thing you should always create a minimal example model to make testing the issue/solution easier (and for sharing in help forums such as this where the rest of the complexity of your model is irrelevant). Often you find the problem 'naturally' in the process of trying to construct such a model that reproduces your problem in a minimal way.
In my model I am transporting (path-guided) several types of products (motors) with an AGV through a manufacturing line with 27 cycles. It's a flowing manufacturing line, meaning the product gets manufactured while the AGV is constantly running.
To model that, I created an agent population called "motors" with a parameter "axelType" (String), which is loaded from column "axel_type" in the database table "manufacturing_sequence" (a local Excel sheet), and placed it on Main.
Each motor is placed on a transporter "agvAssembly" (in the flowchart as: Transporter) and runs from node "locationCycle1" all the way to "locationCycle27".
Now I want to change the transporter's speed at each of the 27 cycle nodes depending on the currently loaded motor. For that I have another database table called "speeds_axel" which includes all the needed speeds for the cycles and the respective parameter name for axelType (column axel_type).
So now, when the transporter enters a node, I first have to check the node's name. Then I want to read the parameter "axelType" of the Motor agent that has just entered that node and look up the respective speed in the database.
In the "TransporterFleet" block's "On enter node:" action I wrote the following as an example for cycle 1:
if (node == locationCycle1) {
    unit.setMaximumSpeed(
        selectFrom(speeds_axel)
            .where(speeds_axel.axel_type.eq(motor.axelType))
            .firstResult(false, speeds_axel.cycle1) / 60.0, MPS);
}
When running the model I get the following error:
"motor cannot be resolved to a variable"
(location: TransporterFleet)
I think that error occurs because my approach doesn't specify which motor I mean. How can I clarify to AnyLogic that I always mean the motor that is currently entering the node?
I need something like this:
if (node == locationCycle1) {
    unit.setMaximumSpeed(
        selectFrom(speeds_axel)
            .where(speeds_axel.axel_type.eq("get motor which is currently in locationCycle1".axelType))
            .firstResult(false, speeds_axel.cycle1) / 60.0, MPS);
}
The biggest problem in your model is that you don't follow conventions: an agent type should be named with an uppercase first letter.
Why is this important? Because you want to distinguish between Motor, motor, and motors:
Motor is the class (or agent type),
motor is an instance of that class (or agent type),
motors is a population.
Since you don't follow this convention, you make mistakes of this kind, because motor is a class in your case when you write motor.cycle1.
If you followed the conventions, you would be writing
Motor.cycle1 (which is obviously wrong).
Note that Motor is the agent type name, and what you really want is to know, for a particular motor, what the value of cycle1 is.
The first thing you need to do with this model is get back to using the conventions; this will probably solve this problem and many problems in the future.
Just because you have a population called motors (which the agents within your flow 'come from' --- though it looks like they don't actually; see later), this does not mean that you can 'magically' refer to the current agent via the singular form motor at any point in the code. (And, similarly, your agent type being Motor does not mean you can use the lower-case form to refer to 'the current one'.)
In Transporter Fleet block "On enter node" actions, you refer to the current transporter via the unit keyword; see the help page. The motor is not the transporter; it's the thing being transported.
So, just use the "On exit" (or, better, "On at exit") actions of your MoveByTransporter blocks (which trigger on arrival at the destination node), where you can refer to the current motor (the agent in the flow) via the agent keyword; again, see the block help.
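For example (just a sketch: how you get from the motor back to its transporter is an assumption here, e.g. via a variable on Motor that you set when the transporter is seized), the "On at exit" action of the MoveByTransporter arriving at locationCycle1 could look like:
// 'agent' is the Motor that has just arrived at its destination node
double speed = selectFrom(speeds_axel)
        .where(speeds_axel.axel_type.eq(agent.axelType))
        .firstResult(false, speeds_axel.cycle1) / 60.0;
agent.currentTransporter.setMaximumSpeed(speed, MPS); // 'currentTransporter' is a hypothetical variable on Motor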
But, from a quick glance, there appear to be a few important changes/simplifications you should also make (though hard to definitively tell without details of all your code):
You are using a Source block to add agents (motors) to the flow when you have already created the motors in your population. You should be using an Enter block instead to add the (already-existing) Motor agents to the flow at the appropriate time, triggered by code: I imagine you want one to start at model start time, requiring code in Main's "On startup" action (see the first sketch after this list), and then others added either at certain simulation times or when earlier motors reach a certain stage in the process. If your population was actually just for 'templates' of data for each motor axle type (and you would then potentially create multiple instances of motors for a given axle type), your current logic might make sense, but your screenshot of your manufacturing_sequence table makes clear that isn't the case. (If you did go that route, you would want to rename your population so that its purpose was clear.)
It looks like you have a 'looping' process (with the same process behaviour for all your in-sequence nodes), so you should look to make your process generic with an explicit loop (so you don't have near-copies of blocks for every node in your sequence). There is a fair amount of detail in terms of how you do this, but roughly (see the second sketch after this list):
Track the node the motor is currently in (or its sequence number) via a variable in the Motor agent.
Use a collection (List) which contains the nodes in order to determine where it has to go next.
Your process would have a MoveByTransporter --> Delay --> SelectOutput sequence (plus TimeMeasureStart/End if you're using them), where SelectOutput loops back to the MoveByTransporter if the motor is not yet at the last node.
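For the Enter block point above, a minimal sketch of Main's "On startup" action might be (the block name enterMotors is an assumption):
// Put the first already-existing Motor from the 'motors' population into the flow at model start
enterMotors.take(motors.get(0));
// Later motors can be added the same way, e.g. from an event or when an earlier motor
// reaches a certain block: enterMotors.take(motors.get(i));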
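And for the generic loop, roughly (again just a sketch; cycleNodes, cycleIndex and the expressions below are assumptions about how you could wire it up):
// Variable on Motor (assumption):   int cycleIndex = 0;
// Collection on Main (assumption):  List<Node> cycleNodes holding locationCycle1 ... locationCycle27 in order
//
// MoveByTransporter destination node:   cycleNodes.get(agent.cycleIndex)
// Delay "On exit" action:               agent.cycleIndex++;
// SelectOutput condition (loop back):   agent.cycleIndex < cycleNodes.size()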
I am exploring different ways to end a UVM test. One method that comes up often when studying different blogs from Verification Academy and other sites is to use phase_ready_to_end. I have some questions regarding the implementation of this method.
I am using this method in my scoreboard class, where my understanding is that after the usual run_phase finishes, the phase_ready_to_end method is called and executed. The reason I am using it is that my scoreboard's run_phase finishes early, and there is still data in the queues that needs to be processed. So I am trying to prolong the scoreboard's run_phase using this method. Here is some pseudo-code that I have used.
function void phase_ready_to_end(uvm_phase phase);
  if (phase.get_name() != "run") return;
  if (queue.size() != 0) begin
    phase.raise_objection(.obj(this));
    fork
      begin
        delay_phase(phase);
      end
    join_none
  end
endfunction

task delay_phase(uvm_phase phase);
  wait(queue.size() == 0);
  phase.drop_objection(.obj(this));
endtask
I took inspiration for this implementation from this link, UVM-End of Test Mechanism, for your reference. Here are some of the open thoughts in my mind on which I need guidance and help.
To the best of my understanding, phase_ready_to_end is called at the end of the run_phase; when it runs, it raises an objection for the scoreboard's run_phase and runs the delay_phase task.
The delay_phase task just waits for the queue to empty, but I am not seeing any method or task which will pop the items from the queue. Do I have to call some method to pop from the queue, or, per the first point above, will the raised objection keep the run phase going so there is no need for that and I just have to wait for a considerable amount of time?
Let me give you some pre-context to this question. I have a scoreboard with two queues whose write methods are implemented, and they are being fed correctly by their sources.
task run_phase(uvm_phase phase);
  forever begin
    compare_queues(); // takes data from the two queues and compares them
  end
endtask
Both queues' implementations are fine and they take data from their respective sources. To give a scenario: suppose a total of 10 transactions are generated, but the scoreboard was only able to process 6 of them, so there are 4 transactions left when all objections are dropped. To tackle that, I implemented this phase_ready_to_end method in my scoreboard.
The problem I am having with this method is that when I raise the objection in phase_ready_to_end and call the delay_phase method, nothing happens. I am curious whether there is more to this implementation or not.
Sorry for the delay. I have added more context to the existing question; please take a look and let me know if it is confusing.
We have a pair of monitors that call write methods implemented inside the scoreboard. The monitors capture transactions from the bus and call these write methods to push the transactions. Thus the two monitors (source and destination) write into two queues (source and destination) as and when they find transactions.
We have a checker task doing read-and-check, running in a forever loop in the scoreboard's run_phase. It watches whether the destination queue has a non-zero number of entries; once it finds one, it pops the head entry from the destination queue, then pops the head entry from the source queue as well, and compares the two entries to declare whether the check is a PASS or FAIL.
There are more than two queues and more than one source/destination pair, of course, but broadly this is the architecture.
Now, in the current scenario, it seems that the checker tasks stop printing after a certain point in time in some of the test cases. After adding debug prints thoroughly, it seems that the checker tasks that do the job #2/#3 above, and get called inside the forever loop of the run_phase, exit gracefully one last time. However, they are never entered again; that is, the forever loop that should be calling them didn't call them again, as if the run_phase's forever loop stopped completely.
We also added another forever loop in the run_phase that observes whether the queues are empty. From prints inside that parallel loop, and from the monitor prints, we know that the queues aren't empty and the monitors did push writes into the queues for a long time.
So it seems the checker forever loop stopped working all of a sudden (going by the prints spewed out), while the other set of threads that we added in the run_phase, in another forever loop just to monitor those queues, keeps printing that the queues have contents. So the run_phase shouldn't be over, but the checker tasks running in the forever loop have stopped.
We are using Vivado 2020.2 for the simulation. This is a baffling/weird problem for us and we went through the prints multiple times to make sure nothing was missed. It seems we are either missing something very basic or have hit a bug/broken some basics of UVM coding to land here.
If you have any help or thoughts here, I will appreciate that greatly.
The function phase_ready_to_end() gets called at the end of every task-based phase when all objections have been dropped (or never raised at all).
Typically a scoreboard has a queue or some kind of array of transactions, sent from a monitor via an analysis_port write() method, waiting to be checked. If your scoreboard is an in-order comparison checker, the queue size is zero when there are no more transactions waiting to be received.
If you look at the code in the link you shared, there is the following in the write_south method doing exactly that:
if (!item.compare(item_stream.pop_front()))
I have an airlock (small room called AL_2216) between 2 areas. The airlock has many different agent types passing through it (cart, product, operator, etc). There are queuing areas on either side of the airlock.
Because the space is small, I built a short flowchart that has a queue and restricted area blocks that all agents must pass through when going through this space. If the restricted area's capacity is full, the agents wait in either the InsideQueueArea or OutsideQueueArea depending on the direction they're going.
I send agents via Exit and Enter blocks to this flowchart and it works great on the top portion of the flowchart.
BUT if I try to use an Enter or Exit block in the prepare flowchart, I get this error:
I tried using a custom block instead of Enter and Exit blocks, but that creates a new instance of the code each time and the restrictions don't work together across the multiple custom blocks.
This airlock is just one of many in my model. Without referring to the same code, I'll have multiple copies that need to refer to each other's restricted areas and the flowcharts become huge and complicated. Is there a way to get around this?
EDIT:
I'm not sure what to do with these ports. They have no properties that do anything:
EDIT2:
Here's a file to see the behavior - Model2.zip
The Prepared flowchart portion is set to "ignore" so the code will run. You can see the operators and the carts passing through AL_2216 with only 2 being allowed at a time. If you uncheck "ignore" for the prepare flowchart, the error will trigger.
AnyLogic sent the right answer!!
So I was asking AnyLogic a different question and they recognized my name from this post! They sent a fix to me and it works exactly the way it should! The exception error message I was getting, "out: 0 isn't supported for...", made me think the Exit/Enter blocks were not supported in preparation flowcharts.
But actually, the seizeCart block didn't know where to start the prep flowchart because it wasn't directly connected to the resource task start block. A quick setting change under the Advanced section of the seizeCart block defining which resource task start block to start at did the trick! Here's the email from AnyLogic:
-The error text and documentation are not sufficient for understanding this (the error text is confusing), I suppose it is obsolete error text. We will rectify the description;
-Under the question there is a more generic discussion which seems to be unrelated to the initial problem. Please let me know if I miss something or if your model does not work as you expect even after adjustment of seizeCart block property.
I think you should replace the Enter and Exit blocks that lead to the bottom input of your seizeCart Seize block with simple Port objects (from the Agent palette).
As per the help for Seize:
So it wants a direct link to a ResourceTaskStart flow and your Enter/Exit combinations might be ... not "direct" enough... Try it.
So here's what I ended up doing. It's the best I could come up with that could be easily replicated for lots of airlocks.
I've added a wait block (dummyThruAL_2216) to my Product flowchart prior to seizing the cart. This wait block injects a new agent into sourceDummy at the cartHome node. The dummy then seizes a cart and moves through the airlock and its restriction. Upon exiting the restriction, I check what type of agent it is and direct it to the correct Exit block. The dummy agent and cart move to the Product, where the dummy agent releases the cart and sinks. The sink frees the wait block, and the Product seizes the cart that is right next to it and continues on its journey.
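(For reference, the injection itself is just one line in that wait block's "On enter" action; the block names here are the ones described above:)
// dummyThruAL_2216 "On enter" action: create one dummy agent at sourceDummy (located at cartHome)
sourceDummy.inject(1);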
It's an easy copy/paste to add more airlocks. Not as nice as my original, but what are you going to do... Thanks for everyone's help and suggestions.
As others have said, there are (not really documented) restrictions on what blocks you can use in preparation and wrap-up flowcharts, which mean what you're attempting won't work.
As you say, it's important to keep a single 'instance' of the airlock flow so that the restrictions (queue and restricted area) are 'global' when this represents the same physical airlock. (Otherwise a repeated custom block is precisely what you should use for each different physical airlock.)
Your best option (and assuming you needed to attach the Cart resource to the Product) is probably to
Add dummy agents (via Source block inject calls) to a separate mini-process that represents your resource preparation requirement (but now not attached to the Seize block).
Replace the Seize in your main process with a Seize-Wait-Release-Seize combination:
The Seize block seizes the cart as normal (without moving or attaching it; no 'Send seized resources' or 'Attach seized resources' options) and then injects an agent into your mini-process (which can use Exit and Enter blocks to use the airlock sub-process). This agent represents the seized resource agent (Cart) and thus should start where the Cart starts and be animated so it looks like it. (You can make the actual Cart temporarily non-visible during this mini-process.)
When the agent reaches the end of the mini-process (at a Sink block), instantly move the related Cart to your node (use jumpTo), make it visible again and free the Product agent from the Wait block (see the sketch after these steps).
Release the seized Cart and then immediately re-Seize it, but now attaching it (so the animation looks correct). If you use the Resource selection 'Nearest to the agent' option you should be guaranteed to seize the correct cart. (You can also use the 'Customise resource choice' option with some code to ensure that you absolutely always choose the same Cart.)
(It is simpler than the above if you don't care about having a correct animation, and you can use custom blocks to make this block combination reusable and thus not too clunky.)
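As a rough sketch of the hand-back step described above (all names here are assumptions; adjust them to your own blocks and variables), the Sink block's "On enter" action at the end of the mini-process could be something like:
// 'agent.cart' is a reference to the real Cart stored on the mini-process agent,
// 'productNode' is the node where the Product is waiting, 'waitForCart' is the Wait block,
// and 'agent.product' is a reference to the waiting Product agent.
agent.cart.jumpTo(productNode);    // instantly move the real Cart next to the Product
// (re-show the Cart here, however you hid it during the mini-process)
waitForCart.free(agent.product);   // free the Product so it can immediately re-Seize the Cart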
Edit: A very similar alternative which also works (and is the basis for your own answer) is to have a dummy agent representing your Product in the sub-flow which seizes (and attaches) the actual Cart agent, leaving it at the Product's location to be immediately seized as above. This is slightly better since you don't have to worry about the visibility and 'jumping' of the real resource agent, plus you can move a Seize and a Release from the main flow (which now just has Wait-Seize) to the sub-flow (thus 'hiding them away').
I have a fairly good idea of what the Subject class does and when to use it, but I've just been looking through the language reference on MSDN and see there are various other ISubject implementations, such as:
AsyncSubject
BehaviorSubject
ReplaySubject
As the documentation is pretty thin on the ground, what's the point of each of these types and in what situations would you use them?
These subjects all share a common property - they take some (or all) of what gets posted to them via OnNext, record it, and play it back to you - i.e. they take a Hot Observable and make it Cold. This means that if you Subscribe to any of these more than once (i.e. Subscribe => Unsubscribe => Subscribe again), you'll see at least one of the same values again.
ReplaySubject: Every time you subscribe to the Subject, you get the entire history of what has been posted replayed back to you, as fast as possible (or a subset, like the last n items)
AsyncSubject: Always plays back the last item posted and completes, but only after the source has completed. This Subject is awesome for async functions, since you can write them without worrying about race conditions: even if someone Subscribes after the async method completes, they get the result.
BehaviorSubject: Kind of like ReplaySubject but with a buffer of one, so you always get the last thing that was posted. You also can provide an initial value. Always provides one item instantly on Subscribe.
In light of the latest version (v1.0.2856.0), and to keep this question up to date, there is a new set of subject classes:
FastSubject, FastBehaviorSubject, FastAsyncSubject and FastReplaySubject
As per the release notes, they are much faster than regular subjects, but:
don’t decouple producer and consumer by an IScheduler (effectively limiting them to ImmediateScheduler);
don’t protect against stack overflow;
don’t synchronize input messages.
Fast subjects are used by the Publish and Prune operators if no scheduler is specified.
Regarding AsyncSubject:
This code:
var s = new AsyncSubject<int>();
s.OnNext(1);
s.Subscribe(Console.WriteLine);
s.OnNext(2);
s.OnNext(3);
s.OnCompleted();
prints a single value, 3. And it prints the same if the subscription is moved to after completion. So it plays back not the first but the last item, plays it only after completion (until completion, it does not produce values), and it does not work like Subject before completion.
See this Prune discussion for more info (AsyncSubject is basically the same as Prune)
Paul's answer pretty much nails it. There are a few things worth adding, though:
AsyncSubject works as Paul says, but only after the source completes. Before that, it works like Subject (where "live" values are received by subscribers)
AsyncSubject has changed since I last ran tests against it. It no longer acts as a live subject before completion, but waits for completion before it emits a value. And, as Sergey mentions, it returns the last value, not the first (though I should have caught that as that's always been the case)
AsyncSubject is used by Prune, FromAsyncPattern, ToAsync and probably a few others
BehaviorSubject is used by overloads of Publish that accept an initial value
ReplaySubject is used by Replay
NOTE: All operator references above refer to the publishing set of operators as they were before they were replaced with generalised publish operators in rev 2838 (Christmas '10); it has been mentioned that the original operators will be re-added.
In my last question, OpenCl cleanup causes segfault, somebody hinted that missing event handling, i.e. not waiting for code to finish, could cause the segfaults. Since then I have looked again at the tutorials I used (Matrix Multiplication 1 (OpenCL) and NVIDIA_OpenCL_GettingStartedLinux.pdf), but they either don't pay attention to events or don't discuss them in detail in a way that is (for me) understandable.
Do you know a tutorial on where and how to wait in OpenCL?
Thanks!
I don't have a tutorial on events in OpenCL, and I'm by no means an expert, but since no one else is responding...
As a rule of thumb, you'll need to wait for any function named clEnqueue*. Those functions return immediately, before the job is done. The easiest way to make sure your queue is finished is to call clFinish(). It won't return until the entire queue has completed.
If you want to get a little fancier, most of the clEnqueue* functions have an optional cl_event parameter that you can pass in. You can check on a particular event with clGetEventInfo(), and you can wait for a particular set of events to finish with clWaitForEvents().