Batch size from database reference - AnyLogic

I tried to get batch size values from a database table in sequence, but I am getting errors in the process.
I tried to set a condition to choose the size according to an ID parameter in my agent.

This bit does not allow access to agents flowing through.
As suggested previously, it is easier to store the batch size within the agent as a parameter p_BatchSizeToUse (define it in the Source when the agent is created).
Then, simply set the batch size upfront in a block upstream of the Batch block using myBatchBlock.set_batchSize(agent.p_BatchSizeToUse)
HOWEVER: it rarely makes sense to vary the batch size agent by agent. If the first agent sets a batch size of 5 and the second sets 10, the first agent's batch size is no longer considered once the second one takes over. You will not get the desired result with this setup, tbh.
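To make that overwrite problem concrete, here is a plain-Java sketch. This is not the AnyLogic API; the class, the buffer logic, and the agent IDs are all made up for illustration. It only mimics the idea of one Batch block with a single current size that each arriving agent resets:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (NOT the AnyLogic API): a Batch block has ONE current
// size, so each arriving agent's set-size call overwrites the previous one.
public class BatchSizeDemo {
    static int batchSize = 1;
    static final List<Integer> buffer = new ArrayList<>();
    static final List<List<Integer>> completed = new ArrayList<>();

    // analogous to myBatchBlock.set_batchSize(agent.p_BatchSizeToUse)
    static void setBatchSize(int s) { batchSize = s; }

    static void take(int agentId) {
        buffer.add(agentId);
        if (buffer.size() >= batchSize) {   // batch closes at current size
            completed.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        setBatchSize(5);  take(1);  // first agent wants batches of 5
        setBatchSize(10); take(2);  // second agent overwrites 5 with 10
        // with only these two agents queued, neither size is ever reached
        System.out.println(completed.size()); // prints 0
    }
}
```

With two agents and two conflicting sizes, no batch ever completes, which is exactly why the answer warns against this setup.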

Related

Batching multiple agents based on location of agent

In my model I only want to batch agents that are at the same location. My Source block generates agents according to a database at a specific node (which differs between agents). Now I want the agents that occur at the same node to be batched in sizes of 2, and any agent left over should be batched alone.
How can I model this? I know that I can use a SelectOutput (which says, for example, if location == node1 use this output, etc.), but do I then have to manually add, say, 100 outputs if I have 100 different locations where the agents start, or is there a simpler solution for this problem?
Added later:
Or is there another way to model my idea:
So I'm simulating a hospital environment, where logistics employees (in this case the transporter) collect the trash in certain areas at predefined times, for example the database row I show in the picture below:
At 9:50, the trash at the trash collection point at LAB_Office_2_H_T_N can be collected by the logistics employee.
So in my model I create these 2 agents (which are 2 containers, last column) based on this time and seize a transporter to collect this trash. Since a logistics employee is able to collect 2 trash containers at a time, I want to batch them and let the logistics employee collect 2 trash containers at once.
After that, he transports them to the trash dump area and is released.
(Answer updated after the added information.) You can use Pickup and Dropoff blocks instead. You can define your node requirement in the Condition cell, where the local variables container and agent are available to code whatever check you want. Or use the "Quantity (if available)" option, where you can programmatically define how many units will be picked up using your own function.
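As a plain-Java illustration of the batching rule the question describes (pairs per node, leftovers alone), here is a hedged sketch. The class, method, and container names are invented for the example; in AnyLogic the equivalent check would live in the Pickup block's condition or quantity function:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: group containers by node, pick them up two at a
// time, and let any leftover container at a node form a pickup of one.
public class NodeBatcher {
    static List<List<String>> batchByNode(Map<String, List<String>> containersByNode) {
        List<List<String>> pickups = new ArrayList<>();
        for (List<String> atNode : containersByNode.values()) {
            for (int i = 0; i < atNode.size(); i += 2) {
                // take up to two containers that share the same node
                int end = Math.min(i + 2, atNode.size());
                pickups.add(new ArrayList<>(atNode.subList(i, end)));
            }
        }
        return pickups;
    }
}
```

For three containers at one node this yields one pair and one single, matching the "leftovers are batched alone" requirement.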

Multiple agent arrivals based on a variable and a database column

In my Source block I want the number of agents to be based on two different factors, namely the number of beds and the visitors per bed. The visitors per bed is just a variable (e.g. visitors = 3) and the number of beds is loaded from the database table, which is an Excel file (see first image). Now I want to code this in the code field as shown in the example in image 2, but I do not know the correct code, nor whether it is even possible.
The simplest solution is to do the pre-calculations in the input file and have the result in the database.
The more complex solution is to set the Source arrivals programmatically:
Read your database at model startup using SQL (i.e. the query constructor), make the necessary computations, and create a Dynamic Event for each arrival at the time you want it to happen, relative to the model start. Each dynamic event then calls the source.inject(1) method.
Better still, do not use a Source at all but a simple Enter block. The dynamic event creates the agent with all relevant properties from your database and pushes it into the Enter block using enter.take(myNewAgent).
But as I said: this is not trivial.
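The scheduling idea can be sketched in plain Java. This is not the AnyLogic API: the Arrival class and injectionTimes function are invented for the sketch, and in the real model each computed time would correspond to a Dynamic Event that calls source.inject(1) or enter.take(...):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch: turn (time, quantity) database rows into one
// scheduled injection per agent, fired in model-time order.
public class ArrivalScheduler {
    static class Arrival {
        final double time;      // arrival time relative to model start
        final int quantity;     // e.g. beds * visitorsPerBed for that row
        Arrival(double time, int quantity) { this.time = time; this.quantity = quantity; }
    }

    static List<Double> injectionTimes(List<Arrival> rows) {
        List<Double> times = new ArrayList<>();
        for (Arrival a : rows)
            for (int i = 0; i < a.quantity; i++)
                times.add(a.time);      // one injection per unit
        Collections.sort(times);        // events must fire in time order
        return times;
    }
}
```

Each entry in the returned list stands for one dynamic-event creation; the quantity per row is where the beds-times-visitors computation would go.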

In AnyLogic, I want to store n "Product" agents that have been sorted using a switch statement, and then use an AGV to move them

I want to store n "Product" agents that have been sorted using a switch statement, and then use an AGV to move them.
However, when the first "Product" enters the Batch block, the conveyor stops moving.
Is it possible to specify the number of products to be stored in a batch at each node from A1 to C3?
Many thanks.
Independent batching
Do you want to batch independently for each node?
If so, you should "agentify" your node process flow. For each node, create an agent type MyNode, store the actual node in a parameter, and create a batch process flow with Enter and Exit blocks. In your main flow, send products to the respective MyNode agent using an Exit flow block.
If this is too hard, check some example models and the AnyLogic YouTube channel, some good resources there to learn this powerful OOP technique.
Dynamic batch size
If I understood wrong, then maybe this will help (i.e. you do not want to batch independently but simply want to change the batch size dynamically):
Similar to your getTargetNode, you can create a function getBatchSize which returns an integer based on an argument of type Node; name it argNode. In it, you can write code similar to
if (argNode.equals(A1)) {
    return X; // whatever batch size is needed for node A1
} else if (...) {
    // ... same pattern for the other nodes
}
Probably, argNode would be your targetNode, but that is not clear from your screenshots.
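If the node-to-size mapping is static, a lookup table keeps such a function short. This is a hedged sketch: the node names are identified by string here rather than the Node type, and all sizes are placeholder assumptions:

```java
import java.util.Map;

// Illustrative sketch of a dynamic batch-size lookup: a map from node name
// to batch size replaces a long chain of if/else branches.
public class BatchSizeLookup {
    static final Map<String, Integer> SIZE_BY_NODE = Map.of(
            "A1", 3,   // assumed sizes, one entry per storage node
            "B2", 5,
            "C3", 2);

    static int getBatchSize(String nodeName) {
        return SIZE_BY_NODE.getOrDefault(nodeName, 1); // default: no batching
    }
}
```

In the actual model the Batch block's size would then be set from this function before the product enters the block.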

Using ADFv2 Validation activity to check minimum size of Virtual Directory of Azure BLOB dataset

The ADFv2 Validation activity using an Azure Blob dataset has a property called Minimum size. I would like to validate that a certain virtual directory in a given Azure Blob storage account has the total file size specified in the Minimum size field. I tried leaving the 'File' field of the connected dataset blank, but that didn't work: the activity succeeded even though there was an empty file in the virtual directory. Then I set the 'File' field to * and the validation activity just kept running and never succeeded. How do I achieve this?
Actually, I tested this and found that Minimum size only works for a specified file, like below:
A parameter with dynamic content is not supported either:
If we set the Minimum size value bigger than the file size, the validation activity will stay in progress indefinitely.
I think these are limits of the Validation activity, so we can't achieve that. We could contact Azure support to get more help.
Hope this helps.

How to pass output from a Datastage Parallel job to input as another job?

My requirement is:
Parallel Job1: I extract data from a table when the row count is more than 0.
Parallel Job2 should be triggered in the sequence only when the row count from the source query in Job1 is greater than 0.
I want to achieve this without creating any intermediate file in Job1.
So basically what you want to do is use information from a data stream (of your Job1) in the "above" sequence as a parameter.
In your case, you want to decide at sequence level whether to run subsequent jobs (only if more than 0 rows are returned).
Two options for that:
Job1 writes information to a file which is a value file of a parameter set. These files are stored in a fixed directory. The parameter from the value file can then be used in your sequence to decide on further processing. Details on parameter sets can be found in the documentation.
You could use a server job for Job1 and set a user status (BASIC function DSSetUserStatus) in a transformer. This is passed back to the sequence and can be referenced in subsequent stages of the sequence. See the documentation; you will find plenty of other information on this topic online as well.
There are more solutions to this problem, or let us call it a challenge. Another way would be a script called at sequence level which queries the database directly and avoids Job1 altogether...
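A minimal Java sketch of that last idea, a sequence-level gate that decides whether to trigger Job2. The class name, the shouldRunJob2 helper, and the table name are all hypothetical, and the actual JDBC query is left as a commented placeholder:

```java
// Illustrative sketch: query the row count directly at sequence level
// (avoiding Job1 entirely), then decide whether to trigger Job2.
public class RowCountGate {
    static boolean shouldRunJob2(int rowCount) {
        return rowCount > 0;    // Job2 only runs when rows exist
    }

    public static void main(String[] args) {
        // In a real setup, obtain the count via JDBC, e.g.:
        // try (Connection c = DriverManager.getConnection(url, user, pw);
        //      Statement s = c.createStatement();
        //      ResultSet r = s.executeQuery("SELECT COUNT(*) FROM src_table")) {
        //     r.next(); rowCount = r.getInt(1);
        // }
        int rowCount = 3; // placeholder for the query result
        System.out.println(shouldRunJob2(rowCount)
                ? "trigger Job2"   // e.g. via dsjob -run
                : "skip Job2");
    }
}
```

The actual job trigger would use your site's mechanism (for example the dsjob command-line interface); only the decision logic is shown here.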