In AnyLogic, how can I have an event triggered after the simulation finishes, so that I don't need to copy the table from the Log and paste it into Excel each time? I tried using the database to store the variables, but it seemed too complicated and I couldn't get it to work!
When I ran the model in AnyLogic, the event was not triggered; it said the event was not scheduled. I have tried many things, but the result is always the same.
To answer the question about calling an event after the simulation:
In Main, you can call a function in the "On destroy" code field. At the experiment level, you can also call a function after each run, after each iteration, or after the whole experiment.
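For example, here is a minimal sketch of the "On destroy" approach, using plain Java file I/O (the file name "results.csv" and the variable productDemand are placeholders for whatever you currently copy from the Log):

// Main's "On destroy" code field (a sketch)
try (java.io.PrintWriter out = new java.io.PrintWriter(
        new java.io.FileWriter("results.csv", true))) {     // append mode keeps results from earlier runs
    out.println(time() + "," + productDemand);               // write whatever you now copy by hand
} catch (java.io.IOException e) {
    traceln("Could not write results: " + e.getMessage());
}

Opening results.csv in Excel then gives you one row per run without any copy-pasting.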
I am currently building a model of a manufacturing process line, and the simulation was running fine without errors. Suddenly, when I switched to virtual mode to run the simulation quickly, the model started to slow down even though the step rate is high. I am trying to identify where the issue is, but nothing is working. At a certain point, the simulation just stops while the step counter keeps running.
This is a picture of the palette; maybe the experiment is causing this.
You created an infinite loop; this can be triggered by various things in your model.
Most likely you have a "while" loop that never finishes, but it could also be a condition-based transition.
You need to find this yourself, though. You have 3 options:
(easy): Check the model logic yourself and find the problem.
(easy): Nudge yourself toward where it stops with traceln commands (see where they stop showing, getting you closer to the culprit).
(harder): Use a profiler (google "AnyLogic profiling" or similar if you are not familiar with it).
Benjamin is correct: you have created an infinite loop. Click on the "Events" tab in the developer panel and see which events are scheduled to occur at about the time your model slows down to 0 days/sec. You can also watch the "Step:" counter at the bottom of the developer panel and see where the step count spikes - e.g., if your model has roughly 10k steps per day and suddenly starts climbing to 400k steps around 25.99 days, you can look at what is happening in your logic at that time and narrow down where the infinite loop is created. traceln will also help immensely.
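As a small illustration of the traceln approach, you can guard a suspect while loop with a counter (the loop condition and body below are hypothetical stand-ins for whatever your model actually does):

int iterations = 0;
while (!ordersFinished()) {                 // hypothetical condition from your own model
    processNextOrder();                     // hypothetical loop body
    iterations++;
    if (iterations % 10000 == 0)
        traceln("still looping at model time " + time() + ", iteration " + iterations);
}

If the console keeps printing these lines while the model clock stands still, you have found your culprit.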
I have a model, shown below, that covers two situations, and I am running it once for each situation.
1) In the first run (situation 1), I call traceln as traceln(productDemand) in the event "generateDemand" placed in "Main". At the end of the simulation, I get the values in the first column below.
2) In the second run (situation 2), I call traceln as traceln(main.productDemand) in the "event" placed in the "Producer" agent. At the end of the second simulation, I get the values in the second column below.
Normally these two values should always be the same: at every simulation time they are expected to match, yet they differ, as shown in Fig. 1. What is the problem? Why is the "productDemand" variable different when we access it from another agent at the same time?
I hope I was able to explain my problem.
Fig. 1 - The obtained results in table format
Fig. 2 - Screenshot of the Event placed in Main
Fig. 3 - Screenshot of the Event placed in the Producer agent
Fig. 4 - The obtained results for both traceln calls during the run
Fig. 5 - Screenshot of the simulation experiment
There is no bug in the model; it is just a simple case of timing. Not all events occur at exactly the same "time", even though they all occur at the same timestep: one will always execute before the other.
See the simple example below:
I have eventA, which increases the variable value and then traces it (similar to your event on Main).
Then I have another event, eventB, that traces the variable as well, similar to the one in your agent.
Yet when I run the model, at the same model time the variable appears to differ depending on where it is traced from.
If you click on the Events tab in the console, you will see that eventB is scheduled to run before eventA.
Even though both run at the same timestep in the model, they don't run "at the same time".
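For reference, the two event actions in that example look roughly like this (value is just an int variable on Main in this sketch):

// eventA's action (on Main):
value++;
traceln("eventA sees value = " + value);

// eventB's action (in the agent):
traceln("eventB sees main.value = " + main.value);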
If you want to be in total control of what happens on a specific timestep, it is advised to have one event that runs at the time interval you want, e.g. daily, sitting on Main, and then have it call all the functions in the order you want them executed.
If you don't do this, AnyLogic will schedule the events in the order they get created, which most of the time is the order in which you placed them on the canvas.
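Here is a sketch of that single-controller pattern, with illustrative function and population names that are not taken from your model:

// Action of one daily timeout event on Main (all names are placeholders):
generateDemand();                        // 1. update productDemand first
for (Producer p : producers)             // 2. then let each Producer react, in a known order
    p.reactToDemand();
traceln("demand at time " + time() + ": " + productDemand);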
I am using AnyLogic for a simulation-modeling class, and I am not AnyLogic- or coding-savvy. My last and only coding class was MATLAB-based, about 16 years ago. I have a few questions about how to implement modeling concepts in a discrete-event model with AnyLogic.
How can I add/inject agents directly into a queue downstream from a source? I have tried adding an additional Source block set to "Calls of inject() function", but I am not sure how to implement it after selecting that option (for example: what do I do after selecting "Calls of inject() function"?). I have the new Source feeding directly into the queue where I want the injection.
How can I set the release of an agent to a defined schedule instead of a rate? Currently, I have my working model set to interarrival time. But I would like to set the agent release to a defined schedule. (example: agent-1 released at 120 seconds, agent-2 released at 150 seconds, agent-3 released at 270 seconds)
Any help would be greatly appreciated, especially if it can be written in an "explain it to me like I am 5 years old" format.
Question 1:
If you have a Source connected directly to a queue, then when you call source.inject() an agent will be created at the Source block and go to the queue. If you have one Source with multiple possible destinations, then you will have to use SelectOutput blocks and some criteria to route agents from the Source to the desired queue.
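For example, assuming the extra Source block is named sourceInject and is connected to the queue, the call can go in a button's or event's action:

sourceInject.inject(1);   // creates one agent at that Source, which then flows into the connected queue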
Since you mentioned not being a strong programmer, this probably wouldn't be for you, but I often find myself creating agents via add_population and then just adding them to an ArrayList until I am ready to pull them into the DES flow. Really, there are near infinite ways to control agent flow within AnyLogic.
Question 2:
Option a: Arrivals by "Arrival Table in Database". You can link an AnyLogic database table to Excel, and then the Source block will simply have agents arrive based on that table.
Option b: Arrival Schedule - you could set this up manually within the development environment or load your schedule from a database. I prefer option a over option b given your brief description.
Option c: Read the data into a variable and then write code to release agents based on the next arrival time. There are thousands of ways to do this, but one example: keep a list of doubles (your arrival times), set an event to delay until the next arrival, call the inject() function, and remove that arrival from the list. I think option a would be best for you, but given that AnyLogic lets you add Java code, there are no limits to how sophisticated you could make your arrival logic.
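Here is a sketch of option c, assuming an Event named nextArrival (timeout trigger, "User controls" mode), a LinkedList<Double> named arrivalTimes holding arrival times in model time units, and a Source named source:

// nextArrival's action:
source.inject(1);                               // release one agent now
if (!arrivalTimes.isEmpty()) {
    double next = arrivalTimes.removeFirst();   // take the next scheduled arrival time
    nextArrival.restart(next - time());         // wait until that arrival, then fire again
}

You would kick it off once, e.g. in Main's "On startup", with nextArrival.restart(arrivalTimes.removeFirst());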
For question 2, you could also use an event or a dynamic event. The action could be source.inject(1); and you can schedule them to your preferences with variables. Just be vigilant that you restart the events if necessary.
There is a demo model from AnyLogic for dynamic events.
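A sketch of the dynamic-event variant, assuming a dynamic event named InjectArrival whose action is just source.inject(1); (AnyLogic generates a create_InjectArrival(...) function for it) and a collection arrivalTimes of time offsets such as 120, 150, 270:

// e.g. in Main's "On startup", schedule every arrival up front:
for (double t : arrivalTimes)
    create_InjectArrival(t);   // schedules one InjectArrival instance t model time units from now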
Good day
I'm a new user trying to find my way with AnyLogic.
Any help with the following question will be appreciated.
Is it possible to start a model with initial values/quantities given to certain blocks/sections in the model? In other words, not have the model start from 0 but from the given values.
You can run a "warm-up" period manually and save it as a model snapshot. In future runs, you can start from that snapshot by loading it. See the help on model snapshots.
This is the general problem of model initialisation (e.g., if you're modelling a manufacturing facility, you may want the run to start with the facility at the state it would be at on 9am next Monday morning). There is no generic answer: what initialisation you need is 100% model-dependent (as is how easy/hard this is).
In particular, process models make this difficult because entities (agents) are expected to have flowed through the process up to the point they 'start' in. You can use things like extra initialisation-only Source/Enter blocks to 'inject' agents into the appropriate process points, but in most models it's not this easy: you will have all kinds of model state that needs to be made consistent with this (e.g., the agents flowing through the process might have attributes which have changed based on what's happened to them so far, so this would have to be made consistent).
That's why warm-up periods (letting the model run 'from empty' for a period until its state is qualitatively what you want as your starting point) are a common approach. Model snapshots can help you here (see Ben's answer), but they're not the only way of doing it. (You can also just 'reset' all your metrics/output gathering at the point when you determine the warm-up period has ended, i.e., you are effectively establishing a new 'time zero', but, again, exactly what you need to do is 100% model-dependent.)
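As a small sketch of that "reset at the end of warm-up" idea, an Event (call it endWarmup) with a timeout equal to the warm-up length could run something like the following; the names outputStats, throughputData and warmupFinished are assumptions standing in for your own outputs:

// endWarmup's action:
outputStats.reset();        // a Statistics object collecting a metric; reset() discards warm-up samples
throughputData.reset();     // a DataSet; reset() clears the points accumulated so far
warmupFinished = true;      // a flag your output-recording logic can check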
I'm working with code that uses a tumbling window of one day, and would like to send early results to a different DataStream on an hourly basis.
I understand that triggers are a way to go here, but don't really see how it would work.
The current code is as follows:
myStream
.keyBy(..)
.window(TumblingEventTimeWindows.of(Time.days(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
In my understanding, I should register a trigger and then, in its onEventTime method, get hold of a TriggerContext from which I can send data to the labeled output. But how do I get the current state of MyAggregateFunction from there? Or would I need to do my own computation inside onEventTime()?
Also, the documentation states that "By specifying a trigger using trigger() you are overwriting the default trigger of a WindowAssigner.". Would my one day window then still fire correctly, or do I need to trigger it somehow differently?
Another way of doing this is creating two different operators - one that windows by 1 hour, and another that windows by 1 day. Would triggers be a preferred approach to that?
Rather than using a custom Trigger, it would be simpler to have two layers of windowing, where the hourly results are further aggregated into daily results. Something like this:
hourlyResults = myStream
.keyBy(...)
.window(TumblingEventTimeWindows.of(Time.hours(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
dailyResults = hourlyResults
.keyBy(...)
.window(TumblingEventTimeWindows.of(Time.days(1)))
.aggregate(new MyAggregateFunction(), new MyProcessWindowFunction())
hourlyResults.addSink(...)
dailyResults.addSink(...)
Note that the result of a window is not a KeyedStream, so you will need to use keyBy again, unless you can arrange to leverage reinterpretAsKeyedStream (docs).
Normally when I get to more complex behavior like this, I use a KeyedProcessFunction. You can aggregate (and save in state) hourly and daily results, set timers as needed, and use a side output for the hourly results versus the regular output for the daily results.
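Here is a condensed sketch of that KeyedProcessFunction idea (not the original job's code: the event type is generic and a simple per-key count stands in for the real aggregation). It keeps the running daily value in keyed state, emits it to a side output at each hour boundary, and emits it to the main output at the day boundary:

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class HourlyAndDailyCount<T> extends KeyedProcessFunction<String, T, Long> {

    // hourly partials are read via getSideOutput(HOURLY_OUTPUT) on the result stream
    public static final OutputTag<Long> HOURLY_OUTPUT = new OutputTag<Long>("hourly") {};

    private static final long HOUR = 60 * 60 * 1000L;
    private static final long DAY = 24 * HOUR;

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(new ValueStateDescriptor<Long>("count", Long.class));
    }

    @Override
    public void processElement(T event, Context ctx, Collector<Long> out) throws Exception {
        Long current = count.value();
        count.update(current == null ? 1L : current + 1);
        // assumes event-time timestamps are assigned; event-time timers are deduplicated
        // per timestamp, so re-registering the same hour/day boundary is harmless
        long ts = ctx.timestamp();
        ctx.timerService().registerEventTimeTimer(ts - (ts % HOUR) + HOUR);
        ctx.timerService().registerEventTimeTimer(ts - (ts % DAY) + DAY);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
        Long current = count.value();
        if (current == null) return;
        if (timestamp % DAY == 0) {
            out.collect(current);                 // daily total to the main output
            count.clear();                        // start the next day from zero
        } else {
            ctx.output(HOURLY_OUTPUT, current);   // running total so far to the hourly side output
        }
    }
}

You would apply it with myStream.keyBy(...).process(new HourlyAndDailyCount<>()) and obtain the hourly stream via getSideOutput(HourlyAndDailyCount.HOURLY_OUTPUT).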
There are quite a few questions here; I will try to answer all of them. First of all, if you specify your own trigger using trigger(), you are effectively overriding the default trigger, and thus the window may not work the way it would by default. So, for example, if you create a 1-day event-time tumbling window but override the trigger so that it fires for every 20th element, it will never fire based on event time.
Now, after your custom trigger fires, the output of MyAggregateFunction will be passed to MyProcessWindowFunction just as it is for the default trigger, so you don't need to access MyAggregateFunction from inside the trigger.
Finally, while it may be technically possible to implement a trigger that emits partial results every hour, my personal opinion is that you should go with the two separate windows. While that solution may create slightly more overhead and result in a larger state, it should be much clearer, easier to implement, and much more error-resistant.