How to create an hourly arrival rate schedule based on route costs and agent choices? - AnyLogic

I am building a model in AnyLogic where customers order containers at terminals with an hourly order rate that differs per hour. I load these rates from a database into a schedule and let the customers order at every terminal at the schedule's rate. Once an order is placed, a truck brings the container from the terminal to the customer.
However, I want to give each possible truck route (directly, or via a hub that is open at night) a cost that depends on the time of day and the travel time. Based on these costs, the agents choose which route to take and, since the hub route is partly driven at night, at what time to travel. The choices are:
Travel directly (arrive at the terminal at day time)
Travel via hub (arrive at the terminal at night time)
Thus, I want the hourly arrival rate schedule to change based on the choices agents make after these cost calculations. Does anybody know how to make an arrival rate schedule (with a different rate per hour) change depending on agent choices driven by route costs?

Based on your answers, you don't really need to change the hourly order rate schedule; rather, you need to choose when to deliver the containers to the customers. For this, place the arrived orders in a queue and process them on a FIFO basis (or LIFO, whichever priority you assign them).
But if you insist on having different hourly order rates, you can use the following approach: if you want about 10 arrivals during an hour, use exponential(10) as the interarrival time distribution. You can then update a variable (trucks1 in my model) dynamically during the simulation to get a different number of arrivals in different hours.
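A minimal plain-Java sketch of this idea, outside AnyLogic: interarrival times are drawn from an exponential distribution whose rate parameter is looked up per hour of the day. The hourlyRate table and its day/night values are hypothetical stand-ins for the trucks1 variable mentioned above.

```java
import java.util.Random;

// Sketch: interarrival times come from an exponential distribution whose
// rate (arrivals per hour) changes by hour of day. The hourlyRate values
// are made-up examples, not from the original model.
public class DynamicRateDemo {
    static final double[] hourlyRate = new double[24];
    static {
        for (int h = 0; h < 24; h++) {
            // assumption: 10 arrivals/hour by day, 4 at night
            hourlyRate[h] = (h >= 22 || h < 6) ? 4.0 : 10.0;
        }
    }

    // exponential(lambda): mean 1/lambda, like AnyLogic's exponential()
    static double exponential(Random rng, double lambda) {
        return -Math.log(1.0 - rng.nextDouble()) / lambda;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double t = 0.0;           // simulation clock, in hours
        int arrivals = 0;
        while (t < 24.0) {
            int hour = (int) t % 24;        // rate of the current hour
            t += exponential(rng, hourlyRate[hour]);
            if (t < 24.0) arrivals++;
        }
        System.out.println("Arrivals over 24h: " + arrivals);
    }
}
```

Because the rate is re-read at every draw, changing the table entry for an hour immediately changes the expected number of arrivals in that hour, which is exactly the effect of updating the variable during the simulation.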

Related

AnyLogic Measure time in the system for each arrival stream

I have a flow chart like below.
My current tmE_SystemA measures the time in system for a customer, regardless of their arrival stream. However, I would also like to know the time in system for a customer of each arrival stream.
I tried to add 3 more Time Measure End blocks before the current tmE_SystemA, one for each tmS_A. But this gives an error when a customer from another stream reaches the new Time Measure End (e.g., when a customer from customerArrival_A2 reaches tmE_A1, an error says that this agent did not pass through the corresponding Time Measure Start).
So how can I properly measure the time for each arrival stream?
Instead of timeMeasureStart and timeMeasureEnd blocks, you can just add variables inside your Customer agent called startTime, endTime and cycleTime of type double.
Then, at endService, stamp agent.endTime = time(); and type agent.cycleTime = agent.endTime - agent.startTime;. You will have the time in system for each agent.
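A minimal plain-Java sketch of this approach. In AnyLogic the stamping would happen in the source's and service block's "On exit" actions using time(); here it is wrapped in ordinary methods, and the stream label is a hypothetical addition for per-stream grouping.

```java
// Sketch: instead of TimeMeasureStart/End blocks, each Customer carries
// its own timestamps. The 'stream' field is an assumed extra variable
// that records which source created the agent.
public class Customer {
    public double startTime;   // when the customer entered the system
    public double endTime;     // when service finished
    public double cycleTime;   // time in system
    public String stream;      // e.g. "A1", "A2", "A3"

    public Customer(String stream, double now) {
        this.stream = stream;
        this.startTime = now;  // stamp on creation (source's "On exit")
    }

    public void finishService(double now) {
        this.endTime = now;    // stamp at endService
        this.cycleTime = endTime - startTime;
    }
}
```

Grouping the collected cycleTime values by stream then gives per-stream time-in-system statistics, without any agent ever passing a Time Measure End that doesn't match its start.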

Work shift scheduling with break times for specific agents

I am building a simulation model for a production line. There are two shifts (morning and night shift, 12 hours each) daily. Within each shift, the workers are split into 4 groups and each group goes for meal breaks at a staggered timing (eg. 4 workers in morning shift, first worker goes for break at 9am, second goes at 10am, etc.). These workers will also take ad-hoc breaks at random occurrences during their shift.
Not sure which method would work:
Creating an individual schedule within the agent and let it change states according to the schedule?
Use a common schedule for the entire resource pool, but will it be possible to pick which agent goes for a break at the break time? Or will the agent be picked at random? Because my concern is that I'll need the agents to take breaks, but at staggered intervals.
Or should I generate this in a different approach?
Good question!
On option 2)
If you use the resource pool you will not be able to choose a specific agent as shifts and breaks are created for the entire pool.
What you can do is define the capacity of the resource pool using multiple schedules.
This can help you artificially define the staggered nature of break-taking for resources.
Refer to the help for more details - https://anylogic.help/library-reference-guides/process-modeling-library/resourcepool.html
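To illustrate the multiple-schedules idea, here is a plain-Java sketch: each worker group gets its own on/off schedule, and the pool's effective capacity is their sum. The break hours (9, 10, 11, 12) follow the staggered timings in the question; the 7:00-19:00 shift window is an assumption.

```java
// Sketch: pool capacity as the sum of per-group schedules, each with a
// staggered one-hour break. Shift window (7-19) is assumed.
public class StaggeredCapacity {
    static final int SHIFT_START = 7, SHIFT_END = 19;
    static final int[] BREAK_HOUR = {9, 10, 11, 12}; // one worker per group

    // Capacity contributed by one worker's schedule at a given hour
    static int workerCapacity(int worker, int hour) {
        if (hour < SHIFT_START || hour >= SHIFT_END) return 0; // off shift
        return (hour == BREAK_HOUR[worker]) ? 0 : 1;           // on break?
    }

    // Pool capacity = sum of the individual schedules, which is what a
    // ResourcePool does when its capacity is defined by multiple schedules
    static int poolCapacity(int hour) {
        int total = 0;
        for (int w = 0; w < BREAK_HOUR.length; w++)
            total += workerCapacity(w, hour);
        return total;
    }
}
```

During each break hour the pool capacity drops by exactly one, so from the process-model's point of view one worker at a time is unavailable, even though the pool itself never names a specific agent.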
I believe this answers your question already but here are my notes on the other option.
Option 1)
If you require more advanced flexibility and control over the breaks, and you have the required Java skills (and time!), you can create custom code that controls when to send agents on a break and when they return. You can use statecharts inside your agents to build this logic. But then this will not be compatible with the resource pool, since the resource pool will be oblivious to the state of the agents inside it and will seize units that are taking a break...
So in this case your seize, delay and release will also be custom.
This is a lot of work and should only be attempted if you have the time, skills and require a level of flexibility and customization not offered by the resource pool.

Is it possible to accelerate time in grafana?

Here is what I want to do: I created dashboards in Grafana to monitor alert status.
I created fake data in my system to simulate my alert situations on these boards. The time of this data covers the range now to now + 12h. In reality, it takes a long time to analyze alert status with real data, so I cannot be very flexible with my alert rules: I have to wait until the end of this period to see the alert status in the system. (I have many states like this.) Grafana creates pending, alerting, and ok states according to the records in my database. Is there a method to quickly verify my tests without waiting for this time?
The main problem is that it is fairly expensive to do in a data-source-agnostic way. The way it worked in Bosun is you would select a time range, and then an interval or a number of queries to run.
Setting both From and To enables testing multiple iterations of the selected alert over time. The number of iterations depends on the two linked fields, Intervals and Step Duration; changing one changes the other. Intervals is the number of runs, evenly spaced over the duration from From to To, and Step Duration is how many minutes there should be between intervals. Doing a test over time populates the Timeline tab, which draws a clickable graphic of severity states for each item in the set.
It would then run all those queries with a pool limiting simultaneous queries. For an interval of say 5 minutes, it would run adjacent 5 minute queries.
So this would speed up the alert authoring and testing workflow significantly. But it would best be implemented as a job system. This is because with more expensive queries, or range/interval combination that is a fair amount of runs, it may take a minute or so - so having to wait on an open network connection is less ideal.
So I found I generally used it in two modes:
To tweak a specific alert that had fired at some time
To get a general overview of how much the alert rule would trigger for the historical data
For the general overview, a larger time range is usually wanted, which means more queries if the interval is kept the same. And with a feature like FOR (pending), you would have to use the same interval the alert would actually run at.
So possible, has some limitations, and some care needs to be taken to do it right. But extremely useful in my experience.
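The linked Intervals / Step Duration relationship described above can be sketched in a few lines; all names here are illustrative, not a real Bosun or Grafana API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the number of backtest runs is the From-To range divided by the
// step, and each run gets its own adjacent query window.
public class AlertBacktest {
    // Number of evenly spaced iterations over [fromMin, toMin], step in minutes
    static int intervals(long fromMin, long toMin, long stepMin) {
        return (int) ((toMin - fromMin) / stepMin);
    }

    // Start times (in minutes) of each adjacent query window
    static List<Long> windowStarts(long fromMin, long toMin, long stepMin) {
        List<Long> starts = new ArrayList<>();
        for (long t = fromMin; t + stepMin <= toMin; t += stepMin)
            starts.add(t);
        return starts;
    }
}
```

For a 60-minute range with a 5-minute step this yields 12 adjacent windows, which matches the "adjacent 5 minute queries" behavior described above; the pool then limits how many of those queries run simultaneously.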

How to keep track of entities when a queue gets closed when modeling supermarket checkout counter in Rockwell Arena?

I'm working on a simulation project with Arena Rockwell Simulation that aims to analyze the waiting queue in a supermarket and reduce the waiting time.
I have 5 checkout counters and the cashiers(=resources) are assigned in a schedule. Before the entities enter the process module(=checkout counters, seize-delay-release) they run through a decision module that
checks which checkout counter is open
if there are resources available in case the service level (4 people waiting in line) of the open counters are reached
names a tie breaker (the smallest check out counter number).
So far so good.
My problem now is that the cashiers also have breaks. Let's say there are 4 people waiting in line at checkout counter #2 and, according to his/her schedule, the cashier has a break now. Then, no matter which schedule rule I choose (wait, preempt, ignore), the cashier at most finishes checking out the current customer. The other 3 people are simply left in the model until the break is over and the cashier returns.
Is there any possible adjustment I can make in the model that allows the cashier to cash up the whole waiting line?
I would be very grateful for any advice!

How to design a clock-driven multi-agent simulation

I want to create a multi-agent simulation model of a real-world manufacturing process to evaluate some dispatching rules. The simulation needs to produce event logs to evaluate the time effect of the dispatching rules compared to the real manufacturing event logs.
How can I incorporate the 'current simulation time' into this kind of multi-agent, message passing intensive simulation?
Background:
The classical discrete event simulation (which handles the time-advancement nicely) cannot be applied here, as the agents in the system represent relatively complex behavior and routing requirements plus the dispatching rules require them to communicate frequently. This and other process complexities rule out a centralized scheduling approach as well.
In the manufacturing science, there are thousands of papers using a multi-agent simulation for their solution of some manufacturing related problem. However, I haven't found a paper yet which describes the inner workings or implementation details of these simulations in the required detail.
Unfortunately, using the shortest process time for discrete time-stepping might be infeasible, as process times range between 0.1 s and 24 hours. There is a possibility my simulation will later be used for what-if evaluations in a project, so it needs to run as fast as possible; overnight simulation runs are not an option.
The problem size is about 500 resources and 1,000-10,000 product agents, most of which are finished and not participating in any further communication or resource occupation.
Consequently, as a result of the communication, new events can trigger an agent to do something before its original 'next time' event arrives. For example, an agent is currently blocking a resource for an hour. However, another, higher-priority agent needs that resource right away and asks the first agent to release it.
In some sense, I need a way to create a hybrid of classical message passing agent-simulation and the discrete event simulation.
I considered a mediator agent that is involved in every message: a message router and time enforcer which sends around the messages and the timer tick events. The mediator agent also keeps a list of next event times for the various agents. However, I feel there should be a better way to solve my problem, as this concept puts enormous pressure on the mediator agent.
Update
It took a while, but it seems I managed to create a mini-framework that combines the DES and agent concepts into one. I'm sure it's nothing new, but at least it's unique: http://code.google.com/p/tidra-framework/ if you are interested.
This problem sounds as if it should be tackled with parallel discrete-event simulation; the mediator agent you are planning to implement ('is involved in every message', 'sends around messages and timer tick events') seems to be doing the job of a discrete-event simulator already. You can make this scale to the desired problem size by using several such simulators in parallel and then using a synchronization algorithm to maintain causality etc. (see, e.g., this book for details). Of course, this requires considerable effort, and you might be better off really trying out the sequential algorithms first.
A nice way of augmenting the classical DES-view of logical processes (= agents) that communicate with each other via events could be to blend in some ideas from other formalisms used to describe discrete-event systems, such as DEVS. In DEVS, each entity can specify the duration it will be in a certain state (e.g., the agent blocking a resource), and will only be interrupted by incoming messages (and then change its state accordingly, e.g. the agent freeing the resource).
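The DEVS-style blend described above can be sketched as a small event list in which an agent schedules its own future internal event (e.g. releasing a resource after an hour), but an incoming message can cancel that event and trigger the action immediately. This is a minimal illustration, not the DEVS formalism in full; all class names are made up.

```java
import java.util.PriorityQueue;

// Sketch: a central event list with lazy cancellation, so an external
// message can preempt an agent's scheduled internal event.
public class HybridSim {
    static class Event implements Comparable<Event> {
        final double time;
        final Runnable action;
        boolean cancelled = false;
        Event(double time, Runnable action) { this.time = time; this.action = action; }
        public int compareTo(Event o) { return Double.compare(time, o.time); }
    }

    final PriorityQueue<Event> eventList = new PriorityQueue<>();
    double now = 0.0;

    // Agent declares how long it will stay in its current state
    Event schedule(double delay, Runnable action) {
        Event e = new Event(now + delay, action);
        eventList.add(e);
        return e;
    }

    // An incoming message: cancel the pending internal event and act now
    void interrupt(Event pending, Runnable immediateAction) {
        pending.cancelled = true;        // lazy cancellation
        schedule(0.0, immediateAction);
    }

    void run() {
        while (!eventList.isEmpty()) {
            Event e = eventList.poll();
            if (e.cancelled) continue;   // skip cancelled events
            now = e.time;
            e.action.run();
        }
    }
}
```

In the resource-blocking example from the question, the blocking agent would call schedule(1.0, release); when a higher-priority agent asks for the resource, interrupt(pendingRelease, release) makes the release happen at the current simulation time instead of an hour later, while the clock still only advances from event to event.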
BTW, in which sense do you think the agents are too complex to be handled with discrete-event simulation? If you regard each agent as a logical process, it doesn't really matter how complex it is from a simulation point of view, or am I getting something wrong here?