I have a very short question regarding the batching process in AnyLogic.
I would like to print the IDs of agents that have already exited the Batch block where they were batched together. They are now at a different block (a Release, to be precise), and I am struggling to reach their IDs inside the batch. The only idea I have is to unbatch them first and then print the IDs.
Is there a way to do it without unbatching them?
Thank you very much in advance.
Kind regards
All agents that are batched (not permanently) or picked up are stored in a collection named 'contents' inside the batch/container agent.
So you can access the IDs of the agents stored in this collection with code like:
for (int i = 0; i < agent.contents().size(); i++) {
    traceln(((MyAgent) agent.contents().get(i)).id);
}
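Outside AnyLogic, the same cast-and-read pattern can be sketched in plain Java (MyAgent, its id parameter, and the contents list are stand-ins for the AnyLogic objects, not the real API):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java stand-in for an AnyLogic agent type that has an "id" parameter
class MyAgent {
    final int id;
    MyAgent(int id) { this.id = id; }
}

public class BatchContentsDemo {
    // The batch stores its members as generic agents, so you cast each
    // element back to your own type before reading its fields.
    static List<Integer> collectIds(List<?> contents) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < contents.size(); i++) {
            ids.add(((MyAgent) contents.get(i)).id);
        }
        return ids;
    }

    public static void main(String[] args) {
        // stand-in for agent.contents(): the agents batched together
        List<MyAgent> contents = new ArrayList<>();
        contents.add(new MyAgent(1));
        contents.add(new MyAgent(2));
        System.out.println(collectIds(contents));
    }
}
```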
I have an agent (shipment) with an amount parameter. I want to make a decision for every single shipment about where it should go. The problem I've struggled with is how to transform one agent per arrival into as many agents as the amount parameter specifies, after it leaves the queue (queue1). I used an Unbatch block to illustrate what I am trying to do.
You can't unbatch things that you haven't batched before. To achieve the same result, use a Split block instead.
In the Split block you can define the number of copies based on that amount parameter, and you can define the agent type of the copies as well.
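In plain Java terms, the effect of the Split block can be sketched like this (Shipment and amount are the names from the question; the loop stands in for the block's "number of copies" expression, so this is an illustration rather than AnyLogic code):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the shipment agent type with its "amount" parameter
class Shipment {
    final int amount;
    Shipment(int amount) { this.amount = amount; }
}

public class SplitDemo {
    // Stand-in for the Split block: one incoming shipment produces
    // "amount" copies, each representing a single unit.
    static List<Shipment> split(Shipment original) {
        List<Shipment> copies = new ArrayList<>();
        for (int i = 0; i < original.amount; i++) {
            copies.add(new Shipment(1));
        }
        return copies;
    }

    public static void main(String[] args) {
        System.out.println(split(new Shipment(5)).size());
    }
}
```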
I have a queue block where agents enter and exit. Each agent has the following attributes (parameters): ID, processing time, and due date. After one hour, I want to use an event block to collect the info (ID, processing time, and due date) of the agents that are still waiting in the queue at that moment, and write it to Excel. In Excel, I would like 3 columns: one with the IDs, one with the processing times, and one with the due dates of each order in the queue.
I have tried adding and removing info to a LinkedList, but this did not work. Does anyone know how I could get the information I need?
In your event, simply loop over all agents in the queue and write a database query that inserts their data, using the insertInto syntax.
Could look like this:
for (int i = 0; i < queue.size(); i++) {
    MyAgent currentAgent = queue.get(i);
    insertInto(myDbaseTable)
        .columns(myDbaseTable.column1, myDbaseTable.column2)
        .values(currentAgent.someInfo, currentAgent.otherInfo)
        .execute();
}
I created a network consisting of paths and point nodes. Some of the nodes are part of an arraylist collection with the element type PointNode. On model startup I generate a population of agents with the number of agents equal to the size of the collection of point nodes.
I now want to set the agents' locations so that each point node of the collection contains exactly one newly created agent. What would be a good way to do this?
Ok I got it. In case anyone else is wondering:
On model startup the following code is executed in the main agent:
// set locations for agents
for (int j = 0; j < agents.size(); j++) {
    agents(j).location = collectionOfPointNodes.get(j);
    agents(j).jumpTo(agents(j).location);
}
I have a problem with a simulation in AnyLogic.
I have an item (agent) that must be processed by a resource. The result of this Service block is the original object plus two different documents, which are processed in two separate offices and which, at the end of the flow, have to be linked back to the article in question.
I can't find a way to do this division into 3 different agents, or, in general, to model this flow.
Thanks in advance.
You can use two Split blocks to generate the two independent documents and connect them through a variable or a link to agents. For example, each original agent could have an id, and the copies created in the Split block would get something like agent.id = original.id;
Then, after the documents are processed, you can check which ones have the same id and merge them back into an article.
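The id-matching idea can be sketched in plain Java (Doc, the kind field, and the waiting map are illustrative stand-ins, not AnyLogic API):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a document agent carrying the id copied from the original
class Doc {
    final int id;
    final String kind;
    Doc(int id, String kind) { this.id = id; this.kind = kind; }
}

public class MatchById {
    // Holds the first-arriving document until its partner with the same id shows up
    static final Map<Integer, Doc> waiting = new HashMap<>();

    // Returns the waiting partner if both documents have now arrived, else null
    static Doc tryMatch(Doc d) {
        Doc partner = waiting.remove(d.id);
        if (partner != null) {
            return partner; // both documents present: ready to merge into the article
        }
        waiting.put(d.id, d);
        return null; // still waiting for the second document
    }
}
```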
But if you want something more elaborate, there is also the following option:
Create two Enter blocks (enter1 and enter2), one for each document. I will assume your documents correspond to two different agent types called Document1 and Document2.
On each of these agent types, add a link to agents in order to be able to connect the documents to each other. Read more about links to agents in the help documentation if you don't know what that is.
In the Service block's 'On exit' action, you can do the following:
// create one agent of each document type
Document1 doc1 = add_Document1();
Document2 doc2 = add_Document2();
// connect the two documents through the link to agents
doc1.linkToDoc2.connectTo(doc2);
// inject each document into its own flowchart branch
enter1.take(doc1);
enter2.take(doc2);
I don't know if your original agent also has to be connected, but you would follow the same principle to do that.
Later, you can just check whether the connected documents are completed in order to join them into an article again.
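That final completion check can be sketched in plain Java (the completed flag and the partner pointer are assumptions standing in for your own status field and the link-to-agents connection):

```java
// Stand-in for a document agent: a status flag plus a pointer to its partner
class LinkedDoc {
    boolean completed;
    LinkedDoc partner; // stand-in for the link-to-agents connection
}

public class JoinCheck {
    // True once a document and its linked partner are both processed,
    // i.e. the pair is ready to be joined back into the article
    static boolean readyToJoin(LinkedDoc d) {
        return d.completed && d.partner != null && d.partner.completed;
    }

    public static void main(String[] args) {
        LinkedDoc a = new LinkedDoc();
        LinkedDoc b = new LinkedDoc();
        a.partner = b;
        b.partner = a;
        a.completed = true;
        System.out.println(readyToJoin(a)); // partner not done yet
        b.completed = true;
        System.out.println(readyToJoin(a)); // both done: join them
    }
}
```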
Is there any method available in Azure Data Factory to exit a ForEach loop? I am using a ForEach loop to iterate through files and process them. But when the Copy activity placed inside the loop fails, the loop executes multiple times to reprocess the failed file. I think it has something to do with the number of files in the Get Metadata array. Can anyone suggest a method to resolve this issue?
Regards,
Sandeep
As #Jay Gong said, Data Factory doesn't support breaking out of the loop when an inner Copy activity fails.
Others have created a user-voice request for Data Factory; it has been voted up 14 times.
But there has still been no response.
Hope this helps.
You can't break the ForEach loop, but you can cancel the entire pipeline run that contains the ForEach via the REST API in case of an error. I will add example code in a blog post next week.