New to Stack Overflow, but I have a quick question about queues. I'm not entirely sure how to phrase it, but here goes:
Say you have a drive-in: there is an order queue and a pickup queue. Normally you order and pay at the order queue and then, once that's complete, switch to the pickup queue. The average time between customer arrivals is 3 minutes, the average time in the order queue is 1 minute, and the average time in the pickup queue is 1 minute.
In a similar situation, you have an order queue, a pay queue, and a pickup queue. Now the average wait time for the order queue is half a minute, the average wait time for the pay queue is half a minute, and the average wait time for pickup is still 1 minute, with an average time between customer arrivals of 3 minutes.
My questions are:
1) Would you expect the total average wait time for a customer to be higher in case 1 or 2?
2) How would you expect the queues to differ if the average time between customer arrivals is increased or decreased?
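A rough back-of-the-envelope comparison (assuming Poisson arrivals, exponential service times, each stage behaving as an independent M/M/1 queue, and that the per-queue times above are mean service times, none of which may match the intended reading) might look like this:

// For a single M/M/1 queue the mean time in system is W = 1 / (mu - lambda);
// in a tandem of such queues the per-stage times simply add up.
double lambda = 1.0 / 3.0;  // one arrival every 3 minutes

// Case 1: order+pay (1 min mean service) then pickup (1 min mean service)
double case1 = 1.0 / (1.0 - lambda) + 1.0 / (1.0 - lambda);                          // = 3.0 min

// Case 2: order (0.5 min), pay (0.5 min), then pickup (1 min)
double case2 = 1.0 / (2.0 - lambda) + 1.0 / (2.0 - lambda) + 1.0 / (1.0 - lambda);   // = 2.7 min

Under those assumptions case 1 comes out slower, and both totals grow sharply as the arrival rate approaches the service rate (and shrink as arrivals become rarer), which is what question 2 is getting at. If the quoted times are instead the observed total times per queue, the comparison is simply 1 + 1 versus 0.5 + 0.5 + 1.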
Related
I have one server with a FIFO queue, the server capacity is 1, and the simulation stops at 5 minutes. I have 2 events: arrive and leave the server.
I'm calculating the average delay in queue, which is the sum of the delays in queue of each customer divided by the number of customers delayed.
If during the simulation all my events are of type "Arrive", won't my average delay be 0?
Because no one that got into the queue leaves the server before the end of the simulation.
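For reference, the estimator I mean is roughly this (variable names are just illustrative):

double totalDelay = 0.0;   // sum of delays of customers whose service has started
int    numDelayed = 0;     // how many such customers there are

// in the "leave queue / start service" part of the simulation:
//     totalDelay += currentTime - customerArrivalTime;
//     numDelayed++;

// at the end of the 5-minute run:
double avgDelay = (numDelayed == 0) ? 0.0 : totalDelay / numDelayed;

So if every event in the run is an "Arrive", hardly any delays get recorded and the average comes out as 0 (or is strictly speaking undefined), which is what I suspect.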
I need to simulate a simple M/M/1 problem in AnyLogic. So far, I have created the model and calculated all performance measures, like the average time in queue and in the system and the average number in queue and in the system. Now I need to calculate the total costs. The painting time for a car is 6 hours and costs $70 per hour. The cost of idle time per car is $100 per hour. The cars arrive according to a Poisson process with a mean rate of 1 every 5 hours. Can someone help me calculate the total costs in this model in AnyLogic?
Refer to this question about measuring time:
Method to measure the time an agent is not in use during a simulation
You need to create an agent type that has variables for time and cost. Then, in the On Enter and On Exit fields, record the time and cost for individual agents. Once you have measured the time, the cost is simply the time multiplied by the hourly cost.
If you want to measure total cost, you can create variables in main such as totalCost and the code of the sink's On Enter would be:
totalCost += agent.totalCost;
Where the second totalCost variable would be the variable inside the agent type.
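As a rough sketch, assuming an agent type Car with double variables queueEnterTime, serviceStartTime and totalCost (illustrative names, not AnyLogic built-ins) and a totalCost variable in Main, the block fields could look something like:

// queue block, "On enter":
agent.queueEnterTime = time(HOUR);

// service/delay block, "On enter":
agent.serviceStartTime = time(HOUR);

// service/delay block, "On exit":
double idleHours  = agent.serviceStartTime - agent.queueEnterTime;   // waiting time
double paintHours = time(HOUR) - agent.serviceStartTime;             // painting time
agent.totalCost   = idleHours * 100 + paintHours * 70;               // $100/h idle, $70/h painting

// sink, "On enter":
totalCost += agent.totalCost;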
Anyway, the above should give you a good idea on how to proceed...
I would like to get the mean waiting time each unit spends in my queue for every hour (so between 7-8 am, for example, 4 minutes; 8-9 am, 10 minutes; and so on). That's my current queue with my TimeMeasure. Is there a way to do so?
Create a normal dataset and call it datasetHourly. Deactivate the option Use time as horizontal value. This is where we will store your hourly data.
Create a cyclic event and set the trigger to cyclic, once every hour.
This cyclic event will get the current mean of your time measurement (waiting time + service time in your example) and save this single value in the extra dataset.
We also have to clear the dataset that is integrated into the timeMeasureEnd element, in order to get clean statistics again for the next hour interval.
// store the current mean of the time measurement against the current hour
datasetHourly.add(time(HOUR), timeMeasureEnd.dataset.getYMean());
// clear the built-in dataset so the next hour starts with fresh statistics
timeMeasureEnd.dataset.reset();
You can now visualise the hourly development by adding datasetHourly to a normal plot.
I found "rate limit" and "burst limit" at Design section of API Designer,
What is the difference of them?
Rate limit can be set at second, minute, hour, day an week time interval.
On the other hand, burst limit can be set only second and minute time interval.
Does it mean same to set 1/1sec rate limit and to set 1/1sec burst limit?
Different Plans can have differing rate limits, both between operations and for the overall limit. This is useful for providing differing levels of service to customers. For example, a "Demo Plan" might enforce a rate limit of ten calls per minute, while a "Full Plan" might permit up to 1000 calls per second.
You can apply burst limits to your Plans, to prevent usage spikes that might damage infrastructure. Multiple burst limits can be set per Plan, at second and minute time intervals.
That said, these two parameters have a different meaning and could be used together. E.g.: I want to permit a total of 1000 calls per hour (rate limit) and a maximum spike of 50 calls per second (burst limit).
A rate limit enforces how many calls (in total) are possible in a given time frame. After that, no further calls are possible. This is used to create staged plans with different limits and charges (e.g. entry or free, medium, enterprise).
Burst limits are used to manage things like system load by capping the maximum number of calls over a short moment (hence seconds or minutes), to prevent usage spikes. They can be used to make sure the allowed number of API calls (the rate limit) is spread evenly across the set time frame (day, week, month), and also to protect the backend system from overloading.
So you could set a rate limit of 1000 API calls per week and the burst limit to 100 calls per minute. If there were 10 "heavy" minutes, the entire rate would be consumed. A user could also use 100+ calls per day to reach the 1000 calls per week.
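To make the interplay concrete, here is a purely illustrative sliding-window sketch in plain Java (this is not how API Connect implements it; the numbers match the example above):

import java.util.ArrayDeque;
import java.util.Deque;

class CombinedLimiter {
    static final int  RATE_LIMIT  = 1000;                      // calls per week (rate limit)
    static final int  BURST_LIMIT = 100;                       // calls per minute (burst limit)
    static final long WEEK_MS     = 7L * 24 * 60 * 60 * 1000;
    static final long MINUTE_MS   = 60_000;

    private final Deque<Long> weekWindow   = new ArrayDeque<>();
    private final Deque<Long> minuteWindow = new ArrayDeque<>();

    synchronized boolean allow(long nowMillis) {
        // drop timestamps that have fallen out of each sliding window
        while (!weekWindow.isEmpty() && nowMillis - weekWindow.peekFirst() >= WEEK_MS)
            weekWindow.pollFirst();
        while (!minuteWindow.isEmpty() && nowMillis - minuteWindow.peekFirst() >= MINUTE_MS)
            minuteWindow.pollFirst();

        // both limits must have headroom, otherwise the call is rejected
        if (weekWindow.size() >= RATE_LIMIT || minuteWindow.size() >= BURST_LIMIT)
            return false;

        weekWindow.addLast(nowMillis);
        minuteWindow.addLast(nowMillis);
        return true;
    }
}

A caller that stays under 100 calls per minute can still be cut off once it has burned through its 1000 calls for the week, and a caller well under its weekly allowance is still rejected the moment it spikes past 100 calls in a single minute.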
I want to be able to rank users based on how quickly they have completed each level. I want this to be an overall leaderboard, i.e. the shortest overall time across all levels.
The problem here is that, for each level completed, the total completion time goes up. But I want the leaderboard to take that into account, so that a user who has completed 10 levels ranks more highly than someone with only 1 completed level.
How can I create some kind of score based on this?
Before submitting the time to the leaderboard, you could perform a modulation on the total time by the number of levels completed (for example, averaging it), then for each level completed reduce it by a set amount, so people who complete all levels with a given average time will score better than people with the same average time but fewer completed levels.
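One possible reading of that idea, with purely illustrative numbers and names:

double totalTimeSeconds = 1230;   // sum of the player's completion times so far
int    levelsCompleted  = 10;
double bonusPerLevel    = 5;      // seconds knocked off per completed level

// average the time over completed levels, then reward progress
double adjustedTime = totalTimeSeconds / levelsCompleted - levelsCompleted * bonusPerLevel;
// lower adjustedTime = better rank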
My Preferred Method:
Or you could express it with a score value.
Level complete = 1,000 points.
Each level has a set time-limit bonus; the longer you take, the less bonus you get.
E.g.:
I complete the level in 102 secs; the goal time is 120 secs.
I get 1,000 points for completion and 1,500 points for each second
that I beat the goal time by.
This way I get 1,000 + (18 * 1,500) = 28,000 points.
The next guy completes it in 100 secs.
He gets 1,000 + (20 * 1,500) = 31,000 points.
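A small sketch of that scoring (the bonus values are the ones from the example above):

class LevelScore {
    static final int COMPLETION_BONUS = 1_000;   // points for finishing the level
    static final int PER_SECOND_BONUS = 1_500;   // points per second under the goal time

    static int score(int completionSecs, int goalTimeSecs) {
        int secondsUnderGoal = Math.max(0, goalTimeSecs - completionSecs);   // no extra penalty past the goal
        return COMPLETION_BONUS + secondsUnderGoal * PER_SECOND_BONUS;
    }
}

// score(102, 120) -> 1,000 + 18 * 1,500 = 28,000
// score(100, 120) -> 1,000 + 20 * 1,500 = 31,000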
I suggest adding a default amount of time to the total for each incomplete level. So, say, if a player beats a new level in 3 minutes, that replaces a 10-minute placeholder time, and they 'save' 7 minutes from the total.
Without that kind of trick, the iPhone has no provision for multi-factor rankings.
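A minimal sketch of that placeholder idea (the 10-minute default is the figure from above; names are illustrative):

static long leaderboardTimeSecs(long[] completedLevelSecs, int totalLevels) {
    final long PLACEHOLDER_SECS = 600;                 // 10-minute default per unfinished level
    long total = 0;
    for (long t : completedLevelSecs) total += t;      // real times for finished levels
    total += (long) (totalLevels - completedLevelSecs.length) * PLACEHOLDER_SECS;
    return total;                                      // lower is better on the leaderboard
}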
Leaderboard scores in GameKit have to be expressed as a single number (see this section of the GameKit Programming Guide), so that won't be possible.
Your best bet would be to just have a completion time leaderboard for people who have completed all the levels, and maybe another leaderboard (or a few) for people who have completed a smaller number of levels.