Remove stop times from ResourcePool for OEE calculations [duplicate] - anylogic

I am working on a production model where the input of raw material is on an hourly basis, and I am running the model for 8 hours (1 shift), so for the remaining 16 hours the resources are idle. When I was not using the schedule and was running the model for 8 * 7 hours (56 hours), the time measurement for each job was fine, but now that I use the schedule the output includes the idle time as well. How can I calculate only the busy time, to see the average time spent by a job in the workshop (from raw material to finished good)?
This is the time spent by a job in the process; it should be 34 - 16 = approx. 18.

Firstly, one note: although you state that you run 8-hour shifts, the 8 AM to 6 PM time period is actually 10 hours, so I will ignore it in this solution and instead assume that the shifts are actually 8 hours and run from 9:00 to 17:00.
Here is a simple model that was used to test (model time unit is SECONDS):
There are 4 elements to make this work:
The Service block must be configured to allow pre-emption of tasks with recovery; this is done using the 'Priorities / preemption' options as below:
The ResourcePool must be configured for 'End of Shift' preemption as shown below:
The calculation of the true time (excluding dead time between shifts) is done in the f_calcTATsec function:
// get 'Service' enter time for that agent
double startTime = col_startTimesSec.get(_agent);
// calculate time spent
double timeSpent = time() - startTime;
traceln(String.format("%.2f: agent spent %.2f in service", time(), timeSpent));
traceln(String.format("%.2f: 8hrs is %.2f, 16hrs is %.2f",
        time(), (8 * hour()), (16 * hour())));
// below is a ternary statement which says:
// if 'timeSpent' is 8 hrs or less then use it as-is,
// otherwise exclude whole 16 hr off-shift periods (there can be more than 1)
// and use the remainder
double trueTimeSpent = timeSpent <= (8 * hour()) ?
        timeSpent :
        timeSpent % (16 * hour());
// return time spent
traceln(String.format("%.2f: returning %.2f", time(), trueTimeSpent));
return trueTimeSpent;
The Service object needs to be configured to record the entry time for each Agent in the col_startTimesSec collection and then call the f_calcTATsec() function on exit, i.e.:
On enter: col_startTimesSec.put(agent, time());
On exit: double trueTimeSpentSec = f_calcTATsec(agent);

When I was not using the schedule and was running the model for 8 * 7 hours (56 hours), the time measurement for each job was fine, but now that I use the schedule the output includes the idle time as well
So I assume you were using TimeMeasureStart/End blocks to do the time-in-system measurement. They just calculate the elapsed time from the start block to the end block, and so can never account for 'time that shouldn't count'. You don't have to use these blocks to calculate timings; typically you store relevant start times in the (custom) agent type flowing through the process and then calculate the relevant elapsed times as needed (e.g., at on-exit of a Service block, pseudo-code is "current time - time-on-entry = elapsed time in block").
Firstly though, you need to be clearer about what metric you're calculating and why. You want to exclude time where an in-progress (presumably pre-empted) job waits for the resource to return on-shift. But what about jobs which are queueing for a resource which then goes off-shift? What about if there are multiple possible resources that can be used with different shift patterns? What about the more general waiting of jobs for resources (when resources are on-shift)?
It sounds like what you may really want is both
The elapsed time spent working on a job (cf. waiting for anything).
The elapsed time jobs spend waiting for something (typically resources in a Seize/Service block — not just when tasks are pre-empted by shift-end — but could be other wait mechanisms, such as using Wait blocks).
The latter is just the total elapsed time minus the former.
So there are multiple ways to tackle this. Probably the easiest is to retain your overall elapsed time (via TimeMeasureStart/End blocks) and then calculate the working time separately: store it as a variable in the job agent and add to it in each block where it has 'work done to it' (e.g., for Service blocks without pre-emption use duration from on-seize to on-exit, for Delay blocks use duration from on-enter to on-exit).
To handle where a shift-end-pre-empted task waits for a resource to return on-shift you can use the Service block's "On task suspended" and "On task resumed" actions which trigger when a task is suspended (due to pre-emption) or resumed (when the original resource becomes available if that's the preemption option you chose).
This requires an extra variable to store the "current duration start time".
To be explicit:
A double Variable cumulativeWorkingTimeMins in your Job agent type
A double Variable currentWorkStartTimeMins in your Job agent type
...and for a Service block (handling the pre-emption case):
'On seize unit' (or 'On enter delay') action: agent.currentWorkStartTimeMins = time(MINUTE);
'On task suspended' action: agent.cumulativeWorkingTimeMins += (time(MINUTE) - agent.currentWorkStartTimeMins);
'On task resumed' action: agent.currentWorkStartTimeMins = time(MINUTE);
'On exit' action: agent.cumulativeWorkingTimeMins += (time(MINUTE) - agent.currentWorkStartTimeMins);
[Note units specified in variable names to be clear, and explicit specification of units when getting the current time; this ensures the code is robust to changing your model time unit.]
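As mentioned above, for a plain Delay block (no resources, so no pre-emption to worry about), a minimal equivalent sketch using the same two Job variables would be:
'On enter' action: agent.currentWorkStartTimeMins = time(MINUTE);
'On exit' action: agent.cumulativeWorkingTimeMins += (time(MINUTE) - agent.currentWorkStartTimeMins);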
NB: If you really wanted to just subtract the time pre-empted jobs are waiting for off-shift resources to return (and no other waiting time) — which doesn't seem to make sense as a metric — you can still do that using a variant of the above which just captures that waiting time.
(You'll also need to store the relevant final numbers in some HistogramData element or similar when the job finishes to be able to then show this data in charts: the TimeMeasureEnd blocks automatically capture this histogram data in their distribution variable but, when calculating timings yourself, you need to store data yourself for charts.)
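For example, assuming a HistogramData element on Main called distWorkingTimeMins (an illustrative name), the final block's 'On exit' (or the Sink's 'On enter') action could do:
distWorkingTimeMins.add(agent.cumulativeWorkingTimeMins);
and a histogram chart can then be pointed at distWorkingTimeMins, much as it would be at a TimeMeasureEnd block's distribution.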

Related

Frame collector & Out of memory errors (large memory allocations)

Task: Tasks spawn with fixed time intervals (source); each has a remaining processing time given by a uniform random [0 .. x]. Each task is processed by a module (delay). Each module has a fixed processing time. A module subtracts its processing time from the task's remaining processing time. If a task's remaining processing time is depleted (less than 0), that task is completed and reaches the sink. Otherwise it goes to the next module, and the same process repeats. There are N modules, linked one after the other. If the task's remaining processing time has not been depleted after processing at the N'th module, it goes back to the 1st module with the highest priority and is processed there until its remaining processing time depletes.
Model Image
The issue: I've created the model, but the maximum number of spawned/sunk agents I could get is 17 with -Xmx8G and 15 with -Xmx4G. Then CPU/RAM usage rises to the maximum and nothing happens.
Task Manager + Simulation Image
Task Manager Image
I've also checked the troubleshooting page "I got an “Out Of Memory” error message. How can I fix it?".
Case: Large number of agents, or agents with considerable memory footprints
Result: My agents have 2 parameters that are unique to each agent. One is a double (remaining_processing_time), the other is an integer (queue_priority). Also, all 17 spawned agents reached the sink.
Case: System Dynamics delay structures under small numeric time step settings
Result: Not using that function anywhere, besides the delay block.
Case: Datasets auto-created for dynamic variables
Result: This option is turned off.
Maybe I'm missing something, but I can't really analyze anything with such a small number of agents. I'll leave a model here.
This model really had me stumped. I could not figure out where the memory was going and why, as you run the model, it visually stops yet the memory keeps increasing exponentially... until I did some Java profiling and found that you have hundreds of Main instances in memory...
You create a new Main() for every agent that comes from the source, so every agent is a new Main, and every Main is a completely new "simulation" if you will.
Simply change it back to the default or, in your case, create your own agent type, since you want to save the remaining time and queue priority.
You will also need to change the agent type in all your other blocks.
Now if you run your app it uses a fraction of the memory.
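As a rough sketch of the intended structure (the field names below match the question's parameters, but moduleProcessingTime and the block actions are illustrative assumptions; in AnyLogic you would create the custom agent type and its fields via the agent wizard rather than in code):
// 'On exit' action of a module's Delay block: subtract this module's fixed processing time
agent.remaining_processing_time -= moduleProcessingTime;
// 'Condition' of the SelectOutput after the module:
// true -> the task is completed and goes to the sink, false -> it continues to the next module
agent.remaining_processing_time <= 0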

Anylogic: Measuring process time without considering waiting time during evening

I have created a discrete simulation model for our production processes in which the capacity, output, etc. should be simulated for the coming year. The model works, but I have a problem with measuring the process time. Our production only works from 7 a.m. to 3 p.m. Is there a way to set the TimeMeasureStart and TimeMeasureEnd block so that the time is only measured during the shift?
As a simplified example with a TimeMeasureStart, a service and a TimeMeasureEnd block:
The agent passes TimeMeasureStart at 2:30 p.m. and immediately enters the service block. The service time is 2 hours. The worker starts the service and goes home at 3:00 p.m. The agent waits in the service block from 3:00 p.m. to 7:00 a.m. At 7 a.m. the worker continues the service (until 8:30 a.m.). As soon as it is finished, the agent passes the TimeMeasureEnd block. The result is currently a process time of 18 hours. However, I only want to measure the time that is worked, so that I get 2 hours as the process time.
Is there a possibility to set / program the TimeMeasureStart / TimeMeasureEnd blocks accordingly so that the waiting time is not included?
My first suggestion would be to ensure that you really need calendar time: why not just run the model in hours, where every hour is a working hour... then you don't need a shift schedule.
But often, reporting or having different shift patterns within your model requires you to use calendar time as the basis.
Here is a simple solution: Simply record the time a resource was seized through your own local variables.
You need to add two double variables to your agent, one for the last start and one for the cumulative time:
previousServiceStart and cummServiceTime
and then save the times in the ResourcePool using the 'On seize' and 'On release' actions.
I cast the agent to my custom agent type using (MyAgent)agent so that I can access the variables.
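A minimal sketch of those two actions (assuming, as above, that the custom agent type is called MyAgent; times here are in model time units):
// ResourcePool 'On seize' action: remember when the work started
((MyAgent) agent).previousServiceStart = time();
// ResourcePool 'On release' action: add the completed working period to the cumulative total
((MyAgent) agent).cummServiceTime += time() - ((MyAgent) agent).previousServiceStart;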

Time Distribution and time spent in process in anylogic


how to write data from Database Log to an output in anylogic?

I'm running a simulation where I would like to know the total amount of time agents spend in a delay block. I can access the data when running single simulations in the Dataset log under flowchart_stats_time_in_state_log
https://imgur.com/R5DG51a
However, I would like to write the data from block 5 (spraying) to an output in order to store the data when running multiple simulations.
https://imgur.com/MwPBvO8
I'm guessing that the value reference should look something like the expression below. It is not working, however, so I would appreciate it a lot if anybody could help me out or suggest an alternative solution for getting the data.
flowchart_stats_time_in_state_log.total_seconds.spraying;
Btw. Time measures do not work for this situation since I need to know the total amount of time spent in a block after a 12 hour shift. With time measures I do not get the data from the agents that are still in the block when the simulation ends.
Based on the goal of summing all processing times, you could solve it mathematically. Set the output equal to block.statsUtilization.mean() * capacity * time() calculated on simulation end.
For example, if you have a capacity of 1 and a run length of 100 minutes, then a utilization of 50% means you had an agent in the block for 50 minutes. Utilization = time busy / total time. Because of this relationship, we can calculate how long agents were actually in the block.
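As a sketch, if the Delay block is called spraying (as in the question) and its capacity is 1, the Value expression of an Output element (evaluated at the end of the run) could be:
// utilization * capacity * elapsed model time = total agent-time spent in the block
spraying.statsUtilization.mean() * 1 * time()
(multiply by the block's actual capacity if it is greater than one).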
Another alternative would be to have a variable to track time in block, incrementing when the agents leave. At end of the run, you would need to call a function to iterate over the agents still in the block to add their time. AnyLogic allows you to pretty easily loop over queues, delays, or anything that holds agents:
for (MyAgent agent : delayBlockName) {
    variable += time() - agent.enterBlockTime;
}
To implement this solution, you would need to create your own agent type (name it something better than MyAgent) with a variable for when the agent enters the block. You would then need to mark the time each agent enters the block.
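A minimal sketch, using the same names as the loop above (enterBlockTime as a double variable on the custom agent type, variable as the tracking variable on Main):
// Delay block 'On enter' action: stamp the entry time on the agent
agent.enterBlockTime = time();
// Delay block 'On exit' action: add the completed stay to the running total
variable += time() - agent.enterBlockTime;
// At the end of the run, loop over the agents still inside the block (as shown above)
// to add their unfinished stays as well.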

Gatling during explanation

I'm using Gatling, and I want to repeat a command for an hour, so I see there's a during operator.
The documentation isn't clear enough:
during
.during(duration, counterName, exitASAP) {
  myChain
}
duration can be an Int for a duration in seconds, or a duration expressed like 500 milliseconds.
My question is: will during execute the task within a 1 hour duration, or will it repeat the task for an hour?
I know we have the repeat operator as well, but that would require me to know how long my task takes to finish and then calculate the number of repeats.
The code in the during block will run, repeatedly, for the duration (in seconds) you give.
When the scenario is executed, the iterations of the during block are run until the duration has elapsed.
Using .during() with an amount of seconds, you can constantly increase the users, ramp, or split the users, but you cannot repeat the task.
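To make the repetition behaviour concrete, here is a rough sketch using Gatling's Java DSL (Gatling 3.7+); the request and scenario names are made up, and the exact DSL details may need adjusting for your Gatling version:
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;
import io.gatling.javaapi.core.ScenarioBuilder;
import java.time.Duration;

// during(...) repeats the wrapped chain over and over until one hour has elapsed;
// it does not stretch a single execution of the chain to one hour.
ScenarioBuilder scn = scenario("repeat for an hour")
    .during(Duration.ofHours(1)).on(
        exec(http("my request").get("/"))
    );
This chain then goes into a normal setUp(...) injection as usual.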