I'm quite a beginner in AnyLogic, so maybe my question is moronic.
What I'm trying to do is to create a model of M/M/1 with reneging, i.e. an agent waits in queue for a (random) amount of time and then exits the queue via timeOut.
Also, I've inserted timeMeasureStart and timeMeasureEnd in order to find the mean time spent in queue for the agents which left the queue via timeOut.
I've tried constant, uniform, triangular and normal random waiting times - the mean time (and the deviation) was as the theory predicts.
But when I tried exponential (and Weibull) waiting times, the mean time was significantly less than the mean value of the distribution.
Could someone explain to me why this happens?
I have to calculate the probability of a customer waiting more than 5 min in the queue in AnyLogic. I've already implemented the timeMeasureStart and timeMeasureEnd blocks, but I have seriously no clue how to compute the probability of a customer waiting longer than 5 min. What do I need to write, and where? Help is highly appreciated.
Thanks!
There are two objects in the Process Modeling Library: TimeMeasureStart and TimeMeasureEnd. You can put those around a queue and record the time for each entity after it exits the queue. Save that time to a Statistics object, and from there your probability of waiting more than 5 min is (number of samples over 5 min) / (total number of entities). Also, make sure that your model time unit is set to minutes to make it easier.
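As a rough sketch of that calculation in Java (AnyLogic action-field snippets; totalCount and over5Count are hypothetical int variables on Main, enterTime is a hypothetical double parameter of the agent type, and the model time unit is assumed to be minutes):

```java
// "On enter" action of the Queue (stamp the arrival time):
agent.enterTime = time();

// "On exit" action of the TimeMeasureEnd block (or the block right after it):
double waited = time() - agent.enterTime;   // waiting time in minutes
totalCount++;
if (waited > 5) over5Count++;

// Wherever you report results (e.g. "On destroy" of Main):
double pOver5 = totalCount > 0 ? (double) over5Count / totalCount : 0.0;
traceln("P(wait > 5 min) ≈ " + pOver5);
```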
I am currently working on a simple simulation that consists of 4 manufacturing workstations with different processing times and I would like to measure the WIP inside the system. The model is PennyFab2 in case anybody knows it.
So far, I have measured throughput and cycle time and I am calculating WIP using Little's Law, but the results don't match the expectations. The cycle time is measured using the timeMeasureStart and timeMeasureEnd blocks, and the throughput by simply counting how many pieces have flowed through by the end of the simulation.
Any ideas on how to directly measure WIP without using Little's law?
Thank you!
For Little's Law you count the arrivals, not the exits... but maybe it doesn't make a difference...
Otherwise... there are so many ways:
You can count the number of agents inside your system using a RestrictedAreaStart block and the entitiesInside() function
You can just have a variable that adds +1 when something enters and -1 when something exits (see the sketch after this list)
No matter what, you need to add the information to a dataset or a Statistics object, and from that you get the mean number of agents in your system
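A minimal sketch of the second option, written as Java snippets for AnyLogic action fields (the names wip and wipStats are hypothetical: an int variable and a Statistics object on Main):

```java
// "On enter" action of the first block in the process flow:
wip++;

// "On exit" action of the last block (or "On enter" of the Sink):
wip--;

// Action of a cyclic Event that fires once per sampling interval (e.g. hourly):
wipStats.add(wip);   // the mean WIP over the run is then wipStats.mean()
```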
Little's Law defines the relationship between:
Work in Process (WIP)
Throughput (or Flow rate)
Lead Time (or Flow Time)
This means that if you have 2 of the three you can calculate the third.
Since you have a simulation model you can record all three items explicitly and this would be my advice.
Little's Law should then be used to validate if you are recording the 3 values correctly.
You can record them as follows.
WIP = Record the average number of items in your system
The simplest way would be to count the number of items that entered the system and subtract the number of items that left it. You simply do this calculation every time unit that makes sense for the resolution of your model (hourly, daily, weekly, etc.) and save the values to a DataSet or Statistics object
Lead Time = The time a unit takes from entering the system to leaving the system
If you are using the Process Modeling Library (PML), simply use the timeMeasureStart and timeMeasureEnd blocks; see the example model in the help file.
Throughput = the number of units out of the system per time unit
If you run the model and your average WIP is 10 units and on average a unit takes 5 days to exit the system, your throughput will be 10 units/5 days = 2 units/day
You can validate this by taking the total number of units that exited your system at the end of the simulation and dividing it by the number of time units your model ran:
if you run a model with the above characteristics for 10 days, you would expect 20 units to have exited the system.
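As a back-of-the-envelope check of that arithmetic, a small standalone Java sketch (the values are the example numbers above; the variable names are made up):

```java
public class LittlesLawCheck {
    public static void main(String[] args) {
        double avgWip      = 10.0;   // mean of the recorded WIP samples
        double avgLeadTime = 5.0;    // mean lead time from timeMeasureEnd, in days
        double runLength   = 10.0;   // days the model ran
        double totalExited = 20.0;   // units counted at the end of the line

        double throughputFromLittle = avgWip / avgLeadTime;     // 10 / 5  = 2 units/day
        double throughputObserved   = totalExited / runLength;  // 20 / 10 = 2 units/day
        System.out.println("Little's Law says " + throughputFromLittle
                + " units/day; the model produced " + throughputObserved + " units/day");
    }
}
```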
In the blocking world, it is highly recommended to set aggressive timeouts in order to fail fast and release the underlying resources (Section 5.1 of https://pragprog.com/book/mnee/release-it).
In the async/non-blocking world, requests do not block the main thread and the resources are available immediately for further processing. Timeouts are still necessary; however, does it still make sense to set aggressive values?
In real-time software, network requests or control operations on machinery take a large amount of time in comparison to day-to-day software operations. For instance, telling a step motor to advance to a particular position may take seconds, while normal operations might take milliseconds. Let's say that a typical step motor advance takes n milliseconds, and one that goes the maximum distance takes m milliseconds.
An aggressive timeout would compute n and add a small fudge factor, perhaps 10%, and fail quickly if the goal wasn't reached in that time. As you stated, the aggressive timeout will allow you to release resources. A non-aggressive timeout of m plus epsilon would fail much more slowly, and tie up resources unnecessarily.
In the asynchronous software world, there are a number of other choices between success and failure. An asynchronous operation might also calculate n plus 10%, put up a progress bar (if user feedback is desired) and then show progress towards the estimated end of the operation. When the timeout is reached, the progress bar would be full, but you might cause it to pulse or change color to indicate it was taking longer than expected. If the step motor still had not reached its goal after m milliseconds, then you could announce a failure.
In other cases, when the feedback is not important, then you could certainly use m plus epsilon as your timeout.
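To make the two-stage idea concrete (warn at n plus 10%, fail only at the worst case m), here is a minimal, self-contained Java sketch; the durations and the simulated motor call are invented for the example:

```java
import java.util.concurrent.*;

public class MotorTimeoutSketch {
    public static void main(String[] args) throws Exception {
        long n = 200;    // typical move duration in ms (hypothetical)
        long m = 2000;   // worst-case move duration in ms (hypothetical)

        // Simulated asynchronous motor command that happens to take 500 ms.
        CompletableFuture<Void> move = CompletableFuture.runAsync(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
        });

        long aggressive = (long) (n * 1.10);   // n plus a 10% fudge factor

        try {
            move.get(aggressive, TimeUnit.MILLISECONDS);
            System.out.println("Completed within the aggressive budget");
        } catch (TimeoutException slowerThanExpected) {
            // Slower than expected: warn the user (e.g. pulse the progress bar),
            // then allow up to the worst-case bound m before declaring failure.
            try {
                move.get(m - aggressive, TimeUnit.MILLISECONDS);
                System.out.println("Completed late, but within the worst case");
            } catch (TimeoutException failed) {
                move.cancel(true);   // give up and release the underlying resources
                System.out.println("Failed: exceeded the worst-case bound");
            }
        }
    }
}
```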
I am trying to build a network simulation (aloha like) where n nodes decide at any instant whether they have to send or not according to an exponential distribution (exponentially distributed arrival times).
What I have done so far: I set up a master clock in a for loop which ticks, and a node starts sending at a given instant (tick) only if a sample drawn from a uniform [0,1] for that instant is greater than 0.99999; i.e. at any time instant a node has a 0.00001 probability of sending (very close to zero, as the exponential distribution requires).
Can these arrival times be considered exponentially distributed at each node and if yes with what parameter?
What you're doing is called a time-step simulation, and can be terribly inefficient. Each tick in your master clock for loop represents a delta-t increment in time, and in each tick you have a laundry list of "did this happen?" possible updates. The larger the time ticks are, the lower the resolution of your model will be. Small time ticks will give better resolution, but really bog down the execution.
To answer your direct questions, you're actually generating a geometric distribution. That will provide a discrete time approximation to the exponential distribution. The expected value of a geometric (in terms of number of ticks) is 1/p, while the expected value of an exponential with rate lambda is 1/lambda, so effectively p corresponds to the exponential's rate per whatever unit of time a tick corresponds to. For instance, with your stated value p = 0.00001, if a tick is a millisecond then you're approximating an exponential with a rate of 1 occurrence per 100 seconds, or a mean of 100 seconds between occurrences.
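A quick way to convince yourself of that correspondence is to simulate the per-tick coin flip and compare the observed mean number of ticks between sends with 1/p. A standalone Java sketch (p is larger than in the question just to keep the run short; the argument is the same for p = 0.00001):

```java
import java.util.Random;

public class GeometricVsExponential {
    public static void main(String[] args) {
        double p = 0.01;          // per-tick probability of sending (demo value)
        int samples = 100_000;    // number of inter-send gaps to generate
        Random rng = new Random(42);

        long totalTicks = 0;
        for (int i = 0; i < samples; i++) {
            long ticks = 1;                       // count ticks until the first "send"
            while (rng.nextDouble() <= 1.0 - p) { // this tick: no send
                ticks++;
            }
            totalTicks += ticks;
        }

        double meanTicks = (double) totalTicks / samples;
        System.out.println("observed mean gap = " + meanTicks + " ticks, 1/p = " + (1.0 / p));
        // If one tick represents 1 ms, this approximates an exponential
        // inter-send time with mean 1/p milliseconds.
    }
}
```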
You'd probably do much better to adopt a discrete-event modeling viewpoint. If the time between network sends follows the exponential distribution, once a send event occurs you can schedule when the next one will occur. You maintain a priority queue of pending events, and after handling the logic of the current event you poll the priority queue to see what happens next. Pull the event notice off the queue, update the simulation clock to the time of that event, and dispatch control to a method/function corresponding to the state update logic of that event. Since nothing happens between events, you can skip over large swaths of time. That makes the discrete-event paradigm much more efficient than the time-step approach unless the model state needs updating in pretty much every time step. If you want more information about how to implement such models, check out this tutorial paper.
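Here is a bare-bones version of that event-loop idea in Java, using a PriorityQueue as the future event list and exponentially distributed inter-send times (the node count, rate and horizon are arbitrary):

```java
import java.util.PriorityQueue;
import java.util.Random;

public class DiscreteEventSketch {
    record Event(double time, int node) {}

    // Inverse-transform sample of an exponential with the given rate.
    static double expo(Random rng, double rate) {
        return -Math.log(1.0 - rng.nextDouble()) / rate;
    }

    public static void main(String[] args) {
        int nodes = 5;
        double rate = 0.01;          // sends per millisecond per node (arbitrary)
        double horizon = 100_000;    // simulated milliseconds
        Random rng = new Random(1);

        PriorityQueue<Event> futureEvents =
                new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));

        // Schedule the first send of each node.
        for (int n = 0; n < nodes; n++) {
            futureEvents.add(new Event(expo(rng, rate), n));
        }

        double clock = 0;
        long sends = 0;
        while (!futureEvents.isEmpty() && clock < horizon) {
            Event e = futureEvents.poll();   // next event in time order
            clock = e.time();                // jump the clock forward; nothing happens in between
            sends++;                         // state-update logic for a send would go here
            futureEvents.add(new Event(clock + expo(rng, rate), e.node()));  // schedule the next send
        }
        System.out.println(sends + " sends handled by time " + clock);
    }
}
```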
I was revisiting Operating Systems CPU job scheduling and suddenly a question popped into my mind: how does the OS know the execution time of a process before executing it? I mean, in scheduling algorithms like SJF (shortest job first), how is the execution time of a process calculated a priori?
From Wikipedia:
Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is not possible to perfectly predict execution time, several methods can be used to estimate the execution time for a job, such as a weighted average of previous execution times.[1]
More on http://en.wikipedia.org/wiki/Shortest_job_next
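The "weighted average of previous execution times" is usually an exponential average of measured CPU bursts, tau_next = alpha * t_measured + (1 - alpha) * tau_previous. A small Java illustration (alpha, the initial guess and the burst history are made-up example values):

```java
public class BurstEstimator {
    public static void main(String[] args) {
        double alpha = 0.5;   // weight given to the most recent measured burst
        double tau = 10.0;    // initial guess for the next CPU burst length (ms)

        double[] measuredBursts = {6, 4, 6, 4, 13, 13, 13};   // hypothetical history
        for (double t : measuredBursts) {
            tau = alpha * t + (1 - alpha) * tau;   // exponential averaging
            System.out.printf("measured %.0f ms -> next estimate %.2f ms%n", t, tau);
        }
        // SJF/SJN would pick the ready job with the smallest current estimate tau.
    }
}
```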
Also, the OS can estimate the total time needed for each task by first calculating its CPI (cycles per instruction).
There is a weighted average CPI for each job.
For example, floating-point instructions weigh much more than fixed-point instructions, meaning they take more time to perform. So a job dealing with fixed-point operations like add or increment is perceived to be shorter; hence, under shortest-job-first, it will be executed before the aforementioned floating-point job.
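As a rough illustration of that kind of estimate (every number here is invented):

```java
public class CpiEstimate {
    public static void main(String[] args) {
        double clockHz = 2.0e9;   // 2 GHz clock (hypothetical)

        // Instruction mix of a job: counts and cycles per instruction (CPI)
        long fixedPointCount = 800_000;  double fixedPointCpi = 1.0;
        long floatCount      = 200_000;  double floatCpi      = 4.0;  // heavier instructions

        double totalCycles = fixedPointCount * fixedPointCpi + floatCount * floatCpi;
        long totalInstr    = fixedPointCount + floatCount;

        double weightedCpi = totalCycles / totalInstr;   // 0.8*1.0 + 0.2*4.0 = 1.6
        double seconds     = totalCycles / clockHz;      // estimated execution time

        System.out.printf("weighted CPI = %.2f, estimated time = %.6f s%n", weightedCpi, seconds);
    }
}
```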