AnyLogic, how to change the size of production batches dynamically? - simulation

I have a production line where some resources create batches of pieces. The number of pieces created by the "source" block and the number of batches are parameters. For example, if you set 48 pieces created and 4 batches, each batch closes when resources complete 12 pieces. The problem arises when I have for example 51 pieces and 4 batches, in this case I should have batches of different sizes like 12, 12, 12 and the last one with 15 pieces. Is there any way to solve this problem?
Thanks for your advice

Following this sample model: assuming all your parts arrive at the same time, you just need to update batchSize in the "On exit" action of the source block with:
batchSize = numberOfParts/numberOfBatches;
batchparts.set_batchSize(batchSize);
And then, update it again "On exit" on the batch block:
if( queue.size() < 2*batchSize ){
    batchSize = batchSize + (queue.size() % batchSize);
}
batchparts.set_batchSize(batchSize);
Note that (queue.size()%batchSize) is the modulo (MOD) operation; it gives you the number of additional parts that need to go into the last batch.
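To check the arithmetic, here is a small standalone Java sketch (plain Java outside AnyLogic; numberOfParts and numberOfBatches are the parameters from the question, and the class and method names are illustrative):

```java
public class BatchSizes {

    // Returns the per-batch sizes: integer division for all batches, with the
    // remainder folded into the last batch via the MOD formula from the answer.
    // Assumes numberOfParts >= numberOfBatches so the base size is at least 1.
    public static int[] batchSizes(int numberOfParts, int numberOfBatches) {
        int base = numberOfParts / numberOfBatches;   // e.g. 51 / 4 = 12
        int[] sizes = new int[numberOfBatches];
        for (int i = 0; i < numberOfBatches; i++) {
            sizes[i] = base;
        }
        // last batch takes the leftovers: 12 + (51 % 12) = 15
        sizes[numberOfBatches - 1] = base + (numberOfParts % base);
        return sizes;
    }

    public static void main(String[] args) {
        for (int s : batchSizes(51, 4)) {
            System.out.println(s); // prints 12, 12, 12, 15
        }
    }
}
```

With 48 parts and 4 batches the remainder is zero, so every batch stays at 12, matching the behavior described in the question.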
If the parts don't arrive at the same time, you can create a variable batchNumber that tells you which batch number comes next (1 to numberOfBatches, initialized at 1).
Then, you just need to update it in the "On exit" action of the batch block as follows:
//If the next batch is the last one, batch all the
//remaining quantity until completing the total number of parts
if( batchNumber + 1 == numberOfBatches ){
    batchSize = batchSize + (numberOfParts % batchSize);
    batchparts.set_batchSize(batchSize);
    batchNumber = 1;
} else {
    batchNumber = batchNumber + 1;
}
I hope this helps.

Related

select Output 5 block to release agents in order and once batch size condition satisfied

I need to put a condition or action in the select output 5 block.
The model has a Select Output 5 block, 5 Batch blocks, and 5 Delay blocks. Each Delay block has a delay of 105 hours. I need to control the movement of agents to fill each delay in sequence: if one delay becomes available, the Select Output 5 block will release agents to the available delay.
For example, the Select Output 5 block controls the release of agents from each exit based on a condition. Condition one will check whether the batch capacity is filled (see image attached). It will therefore start by releasing agents from exit one to fill up batch 1. Once batch one's capacity is complete, the Select Output 5 block will start to release agents from exit 2 to fill batch 2's capacity, and so on.
Can I do the above using the Select Output 5 block?
If I understand your question, you want to select the output based on which batches have available space. The problem is that batches aren't really ever full, because as soon as they get, say, 5 agents, it immediately makes a batch and passes that new batched agent onto the next process block. So really, you should be polling the queue in the delay block. For example, the exit condition for the first output (into batch) could be Curing_Drying.size() < Curing_Drying.capacity. This means that there is capacity in that delay for more batched agents and you can continue sending stuff down that line.
This also means that the first batch line will be used more than, say, batch4, since that one will only be used when all the other Curing_Drying delays are full. And if that one fills up too and there's no space anywhere else, you'll get an error saying "An agent was not able to leave the port...".
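The per-exit condition logic can be sketched outside AnyLogic as a small plain-Java function that returns the first delay line with free capacity. The arrays here stand in for the five Curing_Drying delay blocks; the names are illustrative, not AnyLogic API calls:

```java
public class OutputSelector {

    // Returns the index (0-4) of the first delay line with spare capacity,
    // mirroring the exit condition delay.size() < delay.capacity.
    // Returns -1 when every line is full (the situation where AnyLogic
    // raises "An agent was not able to leave the port...").
    public static int pickExit(int[] sizes, int[] capacities) {
        for (int i = 0; i < sizes.length; i++) {
            if (sizes[i] < capacities[i]) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sizes      = {3, 3, 1, 0, 0};  // current queue sizes of the 5 delays
        int[] capacities = {3, 3, 3, 3, 3};  // each delay holds at most 3 batches
        System.out.println(pickExit(sizes, capacities)); // 2: first line with space
    }
}
```

Because the scan always starts at index 0, this also illustrates the skew mentioned above: earlier lines fill first, and the last line is only used once the others are full.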

Increasing concurrency in Azure Data Factory

We have a parent pipeline that gets a list of tables and feeds it into a ForEach. Within the ForEach we then call another pipeline passing in some config, this child pipeline moves the data for the table it is passed as config.
When we run this at scale, I often see 20 or so instances of the child pipeline created in the monitor. All but 4 will be "Queued"; the other 4 are "In progress". I can't seem to find any setting for this limit of 4. We have several hundred pipelines to execute, and I really could do with running more than 4 at a time. I have set concurrency to 20 throughout the pipelines and tasks, hence we get 20 instances fired up. But I can't figure out what I need to twiddle to get more than 4 executing at the same time.
The ForEach looks like this
activities in ForEach loop look like this
many thanks
I think I have found it. On the child pipeline (the one that is being executed inside the ForEach loop), on the General tab, there is a concurrency setting. I had this set to 4. When I increased it to 8 I got 8 executing, and when I increased it to 20 I got 20 executing.
It seems a maximum of 20 loop iterations can be executed in parallel at once.
The documentation is, however, a bit unclear.
The batchCount setting that controls this has a maximum value of 50, default 20. But the documentation for isSequential states that the maximum is 20.
Under Limitations and workarounds, the documentation states:
"The ForEach activity has a maximum batchCount of 50 for parallel processing, and a maximum of 100,000 items."
https://learn.microsoft.com/en-us/azure/data-factory/control-flow-for-each-activity

Partial batch sizes

I'm trying to simulate pallet behavior using Batch and MoveTo blocks. This works fine except towards the end, where the number of elements left is smaller than the batch size, and these never get picked up. Any way out of this situation?
I have tried messing with custom queues and pickup/dropoff pairs.
To elaborate, the batch object has a queue size of 15. However, once the entire set has been processed, a number of elements fewer than 15 remain, and these don't get picked up by the subsequent moveTo block. I need to send the agents to the subsequent block once the queue size falls below 15.
You can dynamically change the batch size of your Batch object towards "the end" (whatever you mean by that :-) ). You need to figure out when to change the batch size (as this depends on your model). But once it is time to adjust, you can call myBatchItem.set_batchSize(1) and it will now batch things together individually.
However, a better model design might be to have a cool-down period before the model end, i.e. stop taking model measurements before your batch objects run out of agents to batch.
You need to know somehow which element is the last one, for example by using a boolean variable called isLast in your agent that is true for the last agent.
Then, in the batch block, you have to change the batch size programmatically, maybe like this in the "On enter" action of your Batch block:
if( agent.isLast )
    self.set_batchSize(self.size());
To determine if the "end" or any lack of supply is reached, I suggest a timeout. I would save a timestamp in a variable lastBatchDate in the OnExit code of the batch block:
lastBatchDate = date();
A cyclically activated event checkForLeftovers will check every once in a while whether there are objects waiting to be batched and the timeout (here: 10 minutes) has been reached. In that case, the batch size is reduced to exactly the number of waiting objects, so that they continue in a smaller batch:
if( lastBatchDate != null //prevent a NullPointerException when the date is not yet set
    && ((date().getTime() - lastBatchDate.getTime()) / 1000) > 600 //more than 600 seconds since the last batch
    && batch.size() > 0 //something is waiting
    && batch.size() < BATCH_SIZE //not more than a normal batch is waiting
){
    batch.set_batchSize(batch.size()); //set the batch size to exactly the amount waiting
} else {
    batch.set_batchSize(BATCH_SIZE); //reset the batch size to the default value BATCH_SIZE
}
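The decision made by that event can be isolated into a small plain-Java helper for clarity. This is a sketch, not AnyLogic API code; the parameters correspond to the values the event reads (seconds since the last batch, number of waiting agents, and the default BATCH_SIZE):

```java
public class LeftoverCheck {

    // Returns the batch size the event should set: the number of waiting
    // agents if a partial batch has been waiting longer than the timeout,
    // otherwise the default batch size.
    public static int nextBatchSize(long secondsSinceLastBatch, int waiting,
                                    int defaultBatchSize, long timeoutSeconds) {
        if (secondsSinceLastBatch > timeoutSeconds // timeout reached
                && waiting > 0                     // something is waiting
                && waiting < defaultBatchSize) {   // less than a full batch waits
            return waiting;  // release the leftovers as a smaller batch
        }
        return defaultBatchSize; // normal operation: keep the default size
    }

    public static void main(String[] args) {
        // 7 agents stuck for 700 s with a default batch size of 15 and
        // a 600 s timeout: release them as a batch of 7.
        System.out.println(nextBatchSize(700, 7, 15, 600)); // 7
    }
}
```

Keeping the decision in one function also makes the two caveats below easier to test: a timeout that is too short returns the smaller size during normal operation, and an empty queue always keeps the default.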
The model will look something like this:
However, as Benjamin already noted, you should be careful if this is what you really need to model. Take care for example on these aspects:
Is the timeout long enough to not accidentally push smaller batches during normal operations?
Is the timeout short enough to have any effect?
Is it ok to have a batch of a smaller size downstream in your process?
etc.
You might just want to make sure upstream that the number of objects reaching the batching station always fills complete batches, or simply stop your simulation before the line "runs dry".
You can see the model and download the source code here.

How to write data from the Database Log to an output in AnyLogic?

I'm running a simulation where I would like to know the total amount of time agents spend in a delay block. I can access the data when running single simulations in the Dataset log under flowchart_stats_time_in_state_log.
https://imgur.com/R5DG51a
However, I would like to write the data from block 5 (spraying) to an output in order to store the data when running multiple simulations.
https://imgur.com/MwPBvO8
I'm guessing that the value reference should look something like the expression below. It is not working, however, so I would appreciate it a lot if anybody could help me out or suggest an alternative solution for getting the data.
flowchart_stats_time_in_state_log.total_seconds.spraying;
By the way, time measures do not work for this situation, since I need to know the total amount of time spent in a block after a 12-hour shift. With time measures I do not get the data from the agents that are still in the block when the simulation ends.
Based on the goal of summing all processing times, you could solve it mathematically: set the output equal to block.statsUtilization.mean() * capacity * time(), calculated at simulation end.
For example, if you have a capacity of 1 and a run length of 100 minutes, then a utilization of 50% means you had an agent in the block for 50 minutes. Utilization = time busy / total time. Because of this relationship, we can calculate how long agents were actually in the block.
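The arithmetic from the example can be written out as a one-liner (plain Java, with the hypothetical values from the example above rather than live AnyLogic statistics):

```java
public class TimeInBlock {

    // time busy = utilization * capacity * total run time
    // (rearranged from: utilization = time busy / total time)
    public static double timeInBlock(double utilization, int capacity, double runTime) {
        return utilization * capacity * runTime;
    }

    public static void main(String[] args) {
        // 50% utilization, capacity 1, run length 100 minutes -> 50 minutes busy
        System.out.println(timeInBlock(0.5, 1, 100)); // 50.0
    }
}
```

With a capacity greater than 1 the same formula sums time across all slots of the block, which is exactly the "total time in block" the question asks for.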
Another alternative would be to have a variable that tracks time in the block, incrementing it when agents leave. At the end of the run, you would need to call a function that iterates over the agents still in the block to add their time. AnyLogic lets you easily loop over queues, delays, or anything that holds agents:
for( MyAgent agent : delayBlockName ){
    variable += time() - agent.enterBlockTime;
}
To implement this solution, you would need to create your own agent type (name it something better than MyAgent) with a variable for when the agent enters the block. You would then need to mark the time each agent enters the block.
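The whole bookkeeping can be sketched in plain Java as follows. MyAgent, enterBlockTime, and the finish() helper are illustrative stand-ins for the AnyLogic agent type and end-of-run function described above:

```java
import java.util.ArrayList;
import java.util.List;

public class DwellTimeTracker {

    // Stand-in for the custom agent type with an enterBlockTime field.
    static class MyAgent {
        double enterBlockTime;
    }

    // Sum of dwell times for agents that already left the block,
    // incremented in the block's "On exit" action.
    static double totalTime = 0;

    // At the end of the run, add the dwell time of agents still in the
    // block, mirroring the loop over the delay block in the answer.
    static double finish(List<MyAgent> stillInBlock, double now) {
        for (MyAgent agent : stillInBlock) {
            totalTime += now - agent.enterBlockTime;
        }
        return totalTime;
    }

    public static void main(String[] args) {
        totalTime = 30;                    // accumulated from agents that left
        MyAgent a = new MyAgent();
        a.enterBlockTime = 90;             // entered at minute 90
        List<MyAgent> inBlock = new ArrayList<>();
        inBlock.add(a);
        System.out.println(finish(inBlock, 100)); // 30 + (100 - 90) = 40.0
    }
}
```

This captures exactly the case the question worries about: the agent still in the block at the 12-hour mark contributes its partial dwell time instead of being dropped.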

Spring Batch - gridSize

I want a clearer picture of this.
I have 2000 records but I limit it to 1000 records in the master for partitioning, using rownum, with gridSize=250, partitioned across 5 slaves running on 10 machines.
I assume 1000/250 = 4 steps will be created.
1. Will the data be sent to only 4 slaves, leaving 1 slave idle? If the number of steps is greater than the number of slave Java processes, I assume the data would eventually be distributed across all slaves.
2. Once all steps are completed, is the slave Java process memory freed (all objects freed from memory as the step exits)?
3. If all 1000/250 = 4 steps are completed, how can I start a new job instance to process the remaining 1000 records without the scheduler triggering the job?
Since you have not shown your Partitioner code, I will answer based on assumptions only.
You don't have to assume the number of steps ("I assume 1000/250 = 4 steps will be created"); it is the number of entries you create in the java.util.Map<java.lang.String,ExecutionContext> that you return from the partition method of the Partitioner interface.
The partition method takes gridSize as an argument, and it's up to you whether to use this parameter, so if you decide to partition based on some other criterion (instead of evenly distributing the count), you can do that. Eventually, the number of partitions is the number of entries in the returned map, and the values stored in each ExecutionContext can be used for fetching data in the readers, and so on.
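To make the map-of-contexts idea concrete, here is a plain-Java sketch of the range-splitting a typical Partitioner's partition method performs. To stay self-contained it stores start/end row numbers in a long[] instead of a real Spring Batch ExecutionContext; the class and key names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RangePartitioner {

    // Splits rows [1..totalRecords] into chunks of chunkSize rows each.
    // Each map entry plays the role of one ExecutionContext: the reader
    // for that partition would fetch rows between start and end.
    public static Map<String, long[]> partition(long totalRecords, long chunkSize) {
        Map<String, long[]> contexts = new LinkedHashMap<>();
        int i = 0;
        for (long start = 1; start <= totalRecords; start += chunkSize) {
            long end = Math.min(start + chunkSize - 1, totalRecords);
            contexts.put("partition" + i++, new long[]{start, end});
        }
        return contexts;
    }

    public static void main(String[] args) {
        // 1000 records with chunks of 250 -> 4 partitions, the scenario
        // from the question.
        Map<String, long[]> parts = partition(1000, 250);
        System.out.println(parts.size()); // 4
        for (Map.Entry<String, long[]> e : parts.entrySet()) {
            System.out.println(e.getKey() + ": " + e.getValue()[0] + "-" + e.getValue()[1]);
        }
    }
}
```

The number of steps created equals the number of entries in this map, regardless of how many slave processes exist, which is the point made above.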
Next, you can choose how many steps are started in parallel by setting an appropriate TaskExecutor and concurrencyLimit value, i.e. you might create 100 partitions but want to run only 4 steps in parallel, and that can very well be achieved by configuration settings on top of the partitioner.
Answer#1: As already pointed out, data distribution has to be coded by you in your reader, using the ExecutionContext information you created in the partitioner. It doesn't happen automatically.
Answer#2: Not sure what exactly you mean, but yes, everything gets freed after completion and the run information is saved in the job repository metadata.
Answer#3: As already pointed out, all steps are created in one go for all the data. Which steps run for which data, and how many run in parallel, can be controlled by readers and configuration.
Hope it helps!