Data logging of events along with time using SimEvents 2016a - MATLAB

I am working with SimEvents 2016a to simulate a manufacturing assembly line that can handle multiple variants (e.g., part 1 is for a BMW 5-series, part 2 for a BMW 3-series, and so on).
I would like to record data coming in and going out of the model, such as:
Part ID
Time spent at each block (blocks such as an Entity Server or a Queue). As far as I understand, this is similar to timestamping the entity at every block it passes through.
I would like to transfer these data to an Excel file (or at least to the MATLAB workspace initially).
I would be grateful if anyone could share resources, methods, or tutorials so that I can implement the above. :)

Here are a few clues to help you complete your task:
Part ID: you can trigger a Simulink Function in the Event Actions of a standard block, and then write the output using the "To Workspace" block.
Time spent at each block: you can get the current simulation time using the get_param function and use it in Event Action scripts. If you can stamp the time, you can evaluate the difference between an entity's entry and exit. Make sure to write the values into attributes (fields) of the entity. The current time is read with:
get_param(bdroot,'SimulationTime')
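For illustration, here is a minimal, untested sketch of entry and exit event actions on an Entity Server that stamp and log the time. The attribute names TimeIn and PartID, and the Simulink Function name logData, are assumptions; the attributes would have to be defined in your Entity Generator block.

% Entry action of the Entity Server (event actions are MATLAB code):
entity.TimeIn = get_param(bdroot, 'SimulationTime');    % stamp the entry time on the entity

% Exit action of the same block:
timeSpent = get_param(bdroot, 'SimulationTime') - entity.TimeIn;   % time spent in this block
logData(entity.PartID, timeSpent);   % hypothetical Simulink Function feeding a "To Workspace" block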

Related

Multiple agent arrivals based on a variable and a database column

In my Source block I want the number of agents to be based on two different factors, namely the number of beds and the number of visitors per bed. The visitors per bed is just a variable (e.g., visitors = 3), and the number of beds is loaded from a database table that comes from an Excel file (see the first image). I want to code this in the code block as shown in the example in the second image, but I do not know the correct code, or whether this is even possible.
The simplest solution is just to do the pre-calculations in the input file and store the result in the database.
The more complex solution is to have the Source block's arrivals defined by calls of the inject() function:
Read your database at the start of the model using SQL (i.e., the query constructor), make the necessary computations, and create a Dynamic Event for each arrival at the time you want it to happen, relative to the model start. Each dynamic event then calls the source.inject(1) method.
Better still, do not use a Source block at all but a simple Enter block. The dynamic event creates the agent with all relevant properties from your database and pushes it into the Enter block using enter.take(myNewAgent).
But, as I said: this is not trivial.
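To make the dynamic-event route a bit more concrete, here is a rough, untested sketch. All names are assumptions: a Source block source, a variable visitors, a database table beds with a column n_beds, and a dynamic event type Arrival whose Action field contains only the inject call.

// Main > "On startup": read the table and schedule one dynamic event per arrival
List<Tuple> rows = selectFrom(beds).list();             // AnyLogic query constructor
for (Tuple row : rows) {
    int nAgents = row.get(beds.n_beds) * visitors;      // beds in this row times visitors per bed
    for (int i = 0; i < nAgents; i++)
        create_Arrival(uniform(0, 60));                  // arrival time relative to model start (placeholder distribution)
}

// Dynamic event "Arrival" > Action:
source.inject(1);

For the Enter-block variant, the dynamic event would instead construct the agent itself, set its properties from the database row, and call enter.take(myNewAgent), as described above.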

Best Practice to Store Simulation Results

Dear AnyLogic community,
I am struggling to find the right approach for storing my simulation results. I have created datasets that keep track of every value I am interested in; they live in Main (see below).
My aim is to do a parameter variation experiment. In every run, I change the value of p_nDrones (see below).
After the experiment, I would like to store all the datasets in one Excel sheet.
However, when I do the parameter variation experiment and afterwards check the log of the datasets (datasets_log), the changed values do not even show up (2 is the value I set in the normal simulation run).
Now my question: do I need to create another type of dataset if I want to track the values produced in the experiments? Why are they not stored after the experiment is executed?
I would really appreciate it if someone could share the best way to set up this export of experiment results. I would like to store the whole time series for every dataset.
Thank you!
The best option would be to write the outputs to some external file at the end of each model run.
If you want to use Excel, you can (it even has a nice excelFile.writeDataSet() function), but I personally would not advise it.
I would rather write the data to a text file: you have much more control over the writing and over the file itself, it is thread-safe, and it is usable on many more platforms than Microsoft Excel.
See my example below:
Set up parameters of type TextFile in your model that you will write the data to at the end of the run. Here I used the model's "On destroy" code to write out the data from the datasets.
Here you can immediately see the benefit of using the text file: you can add the number of drones you are simulating (or a scenario name or any other parameter) as a column, whereas with Excel this would be a pain...
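Such "On destroy" code could look roughly like the following sketch (untested). The names outputFile (a parameter of type TextFile), dsUtilization (one of the datasets living in Main), and p_nDrones are assumptions based on the question:

// Main > "On destroy": one row per dataset sample, with the scenario parameter as the first column
for (int i = 0; i < dsUtilization.size(); i++) {
    outputFile.println(p_nDrones + "\t"
        + dsUtilization.getX(i) + "\t"      // sample time
        + dsUtilization.getY(i));           // sample value
}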
Now you can pass your specific text file for the model to use by adding it to the parameter variation page and providing it to the model through its parameters.
You will see that I also set up some headers for the text file in the "Initial experiment setup" section, and at the very end, in the "After experiment" section, I close the text files so that they can be used.
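For completeness, the experiment-side code could look roughly like this (again only a sketch; outputFile is assumed to be a TextFile element of the experiment that is passed to the model's parameter):

// Parameter variation experiment > "Initial experiment setup": write the header row once
outputFile.println("nDrones\ttime\tvalue");

// Parameter variation experiment > "After experiment": close the file so it can be opened elsewhere
outputFile.close();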
Here is the result if you simply right-click on the text files and open them in Excel. (Excel will always have a purpose, even if it is just to open text files ;-) )

AnyLogic ‘how to’ questions

I am using AnyLogic for a simulation-modeling class, and I am not AnyLogic- or coding-savvy. My last and only coding class was MATLAB-based, about 16 years ago. I have a few questions about how to implement modeling concepts in a discrete model with AnyLogic.
How can I add/inject agents directly into a queue downstream from a source? I have tried adding an additional source and selecting "Calls of inject() function", but I am not sure how to implement it after selecting that option (for example: what do I do after selecting "Calls of inject() function"?). I have the new source feeding directly into the queue where I want the injection.
How can I set the release of agents to a defined schedule instead of a rate? Currently my working model is set to an interarrival time, but I would like to release agents on a defined schedule (for example: agent 1 released at 120 seconds, agent 2 at 150 seconds, agent 3 at 270 seconds).
Any help would be greatly appreciated, especially if it can be written in an "explain it to me like I am 5 years old" format.
Question 1:
If you have a source connected directly to a queue, then when you call source.inject() an agent will be created at the Source block and will go to the queue. If one source has several possible destinations, you will have to use SelectOutput blocks and some criteria to route agents from the source to the desired queue.
Since you mentioned not being a strong programmer this probably isn't for you, but I often find myself creating agents via add_population and then adding them to an ArrayList until I am ready to pull them into the DES flow. Really, there are nearly infinite ways to control agent flow within AnyLogic.
Question 2:
Option a: arrivals by "Arrival Table in Database". You can link an AnyLogic database table to Excel, and the Source block will then have an agent arrive based on that table.
Option b: arrival schedule. You could set this up manually within the development environment or load your schedule from a database. I prefer option a over option b given your brief description.
Option c: read the data into a variable and then write code that releases agents based on the next arrival time. There are thousands of ways to do this, but one example is a list of doubles (your arrival times): set an event to delay until the next arrival, call the inject function, and remove that arrival from the list (see the sketch after these answers). I think option a would be best for you, but since AnyLogic allows you to add Java code, there is no limit to how sophisticated you can make your arrival logic.
For question 2, you could also use an event or a dynamic event. The action could be source.inject(1); and you can schedule them to your preferences with variables. Just be vigilant that you restart the events if necessary.
There is a demo-model from AnyLogic for dynamic events.
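As a concrete illustration of option c (and of the event-based idea just mentioned), here is a rough, untested sketch. All names are assumptions: a Source block source whose arrivals are defined by "Calls of inject() function", a collection arrivalTimes of type LinkedList<Double>, and an event releaseNext with trigger type Timeout and mode "User controls".

// Main > "On startup": fill the arrival times (in model seconds) and schedule the first release
arrivalTimes.addAll(Arrays.asList(120.0, 150.0, 270.0));
releaseNext.restart(arrivalTimes.getFirst());

// Event "releaseNext" > Action: release one agent, then schedule the next release
arrivalTimes.removeFirst();
source.inject(1);
if (!arrivalTimes.isEmpty())
    releaseNext.restart(arrivalTimes.getFirst() - time());   // delay until the next scheduled time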

Data processing of Dymola results during simulation

I am working on a complex Modelica model that contains a large amount of data, and I need the simulation to keep going until I terminate the simulation process, possibly even for days, so the .mat result file could get very large and data processing becomes a problem. So I'd like to ask if there are any methods that allow me to:
output the data I need at a fixed time step during the simulation, rather than using the .mat file after the simulation. I am considering using the Modelica.Utilities.Streams.print function to write the data I need into a CSV file, but I would have to write a huge amount of code to print every variable I need, so I think there should be a better solution (a sketch of this idea follows below).
delete the .mat file at a fixed time step, so the .mat file stored on my PC would not get too large and would not affect the normal simulation in Dymola.
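For reference, the print-to-CSV idea mentioned in the first point could look roughly like the following minimal sketch; the model, the logged variables, the sampling period, and the file name are all placeholders:

model LogToCsv
  Real x = sin(time);   // placeholder variables to be logged
  Real y = cos(time);
equation
  when initial() then
    Modelica.Utilities.Streams.print("time,x,y", "log.csv");   // write the header once
  end when;
  when sample(0, 0.1) then
    // append one CSV row every 0.1 s of simulation time
    Modelica.Utilities.Streams.print(String(time) + "," + String(x) + "," + String(y), "log.csv");
  end when;
end LogToCsv;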
A long time ago I wrote a small C program that runs the Dymola executable with two threads. One of them is responsible for terminating the whole simulation after an input time limit is exceeded. I used the executable of this C program within the standard m-files provided by Dymola. I think with some hacking one would be able to meet the requirements you mention.
Have a look at https://github.com/Mathemodica/dymmat; however, I need to warn you that the associated m-files were written for a particular type of model and the software has not been maintained for a long time. Still, the idea of the C program would be reproducible.
I didn't fully test this, so please think of this more like "source of inspiration" than a full answer:
In Section "4.3.6 Saving periodic snapshots during simulation" of the Dymola 2021 Release Notes you'll find a description to do the following:
The simulator can be instructed to print the simulation result file “dsfinal.txt” snapshots during simulation.
This can be done periodically using the Simulation Setup option "Complete result snapshots", but I think for your case it could be more useful to trigger it from the model using the function Dymola.Simulation.TriggerResultSnapshot(). A simple example is given as well:
when x > 0 then
Dymola.Simulation.TriggerResultSnapshot();
end when;
Also, one property of this function could help: by default it creates multiple files without overwriting them:
By default, a time stamp is added to the snapshot file name, e.g.: “dsfinal_0.1.txt”.
The format of the created dsfinal_[TIMESTAMP].txt is a bit overwhelming at first, as it contains all the information needed to initialize the model, but everything you need should be in there.
So some effort is shifted to post-processing, as you will likely need to read multiple files, but I think this is an acceptable trade-off.

Is there a way to not start an Anylogic simulation from scratch every time?

Good day
I'm a new user trying to find my way with AnyLogic.
Any help with the following question will be appreciated.
Is it possible to start a model with initial values/quantities assigned to certain blocks/sections of the model? In other words, to have the model start not from 0 but from the given values.
You can run a "warmup" period manually and save the result as a model snapshot. In future runs, you can start from that snapshot by loading it. See the help on model snapshots.
This is the general problem of model initialisation (e.g., if you're modelling a manufacturing facility, you may want the run to start with the facility at the state it would be at on 9am next Monday morning). There is no generic answer: what initialisation you need is 100% model-dependent (as is how easy/hard this is).
In particular, process models make this difficult because entities (agents) are expected to have flowed through the process up to the point they 'start' in. You can use things like extra initialisation-only Source/Enter blocks to 'inject' agents into the appropriate process points, but in most models it's not this easy: you will have all kinds of model state that needs to be made consistent with this (e.g., the agents flowing through the process might have attributes which have changed based on what's happened to them so far, so this would have to be made consistent).
That's why warm-up periods (letting the model run 'from empty' for a period until its state is qualitatively what you want as your starting point) are a common approach. Model snapshots can help you here (see Ben's answer), but they're not the only way of doing it. You can also just 'reset' all your metrics/output gathering at the point when you determine the warm-up period has ended, i.e., you effectively establish a new 'time zero', but, again, exactly what you need to do is 100% model-dependent.
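In AnyLogic, that reset-at-the-end-of-the-warm-up idea could be sketched as follows; the element names endWarmup, dsThroughput, and statWaitingTime are assumptions:

// Event "endWarmup" (trigger type Timeout, occurs once at the chosen warm-up length) > Action:
dsThroughput.reset();        // discard dataset samples collected during the warm-up
statWaitingTime.reset();     // discard a Statistics object collected during the warm-up
// everything collected from here on reflects the new post-warm-up "time zero"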