Merging times recorded by different timeMeasureEnd blocks into a single histogram - simulation

I have multiple timeMeasureEnd blocks, as shown in the image below. I want to build a single histogram from the information recorded by all of the blocks. Can it be done? Thanks in advance!

Here are two options for you: either keep the data in the individual datasets and save each block's most recent entry to the histogram data, or clear each dataset after reading it and always take entry 0.
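A minimal sketch of the first option in AnyLogic Java, using hypothetical names: timeMeasureEnd1 is one of the TimeMeasureEnd blocks and mergedTimes is a Histogram Data object defined on Main. The line goes in the block's action code (e.g. On exit), so every newly recorded time is copied into the shared histogram:

DataSet ds = timeMeasureEnd1.dataset; // each block keeps its own dataset
mergedTimes.add(ds.getYValue(ds.size() - 1)); // copy only the newest entry

For the second option, take entry 0 and then reset the block's dataset so the next measurement lands at index 0 again:

mergedTimes.add(timeMeasureEnd1.dataset.getYValue(0));
timeMeasureEnd1.dataset.reset(); // clear the per-block data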

Related

Reducing sampling in simulation

Is there any way in Modelica to reduce the sampling during simulation? I have a DC-DC converter switching at high frequency, which consequently generates a huge dataset. I am wondering if there is any way to reduce the size of the dataset during simulation/export?
Thanks
In short: trying to create smaller datasets from models that generate huge ones (models with high-frequency dynamics).
Basically, when you simulate, don't just press the simulate arrow; press the S (Simulation Setup) button on the toolbar instead. You get a dialog with several tabs, and the ones you care about are General and Output.
In General you can specify the number of intervals to reduce the amount of data stored: results are only stored at each interval point.
In Output you can, for example, choose not to store events. You can also filter out variables you are not interested in to reduce the result file size. Note that "Equidistant Time Grid" is activated by default; without it the simulation could generate quite a lot more output, possibly several values per interval.
See more here about the things you have in General/Output:
https://openmodelica.org/doc/OpenModelicaUsersGuide/1.20/omedit.html#simulating-a-model
By listing only the desired variables in the Variable Filter on the Output tab, one can reduce the size of the output file without giving up interval points. The filter is a POSIX EXTENDED regular expression; for variables x, y, z use ^(x|y|z)$ (the parentheses matter: ^x|y|z$ would also match any variable merely containing y). Another good practice is unchecking the "Store Variables at Events" flag. The core of this question is already covered by Adrian Pop's answer above.

Find Average time in different flow paths

I'm currently building an AnyLogic model and want to calculate the average time spent by customers in different flow paths (I have added the process flow below). In the picture I have named the paths whose average times I want to calculate as path A and path B.
AnyLogic has dedicated blocks for this (although it can also be done simply in code).
See in detail here.
The TimeMeasureEnd block contains a dataset.
The following code returns the average of the Y-axis values:
timeMeasureEnd.dataset.getYMean();
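For the two paths in the question, a minimal sketch assuming hypothetical block names timeMeasureEndA and timeMeasureEndB, each paired with its own TimeMeasureStart at the beginning of the respective path:

double avgPathA = timeMeasureEndA.dataset.getYMean(); // mean time through path A
double avgPathB = timeMeasureEndB.dataset.getYMean(); // mean time through path B
traceln("path A avg: " + avgPathA + ", path B avg: " + avgPathB);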
Good luck!

How to obtain certain data from Main on a graph using parameter variation in AnyLogic?

I am having trouble with the code/steps needed to extract data from Main into a parameter variation experiment in AnyLogic. I am currently working on total evacuation time under random fire obstruction.
So far I have successfully obtained the total maximum evacuation time over 100 runs in my study, but I also need another set of data: the number of exits obstructed during each run. My Main has the collection availableExits (of 3 exits), and I can see which exits are obstructed during a simulation.
Furthermore, I would like data on the number of people evacuating by a particular time (for example, the number of pedestrians that have used an exit by 120 seconds). I can see this in Main from timeMeasureEnd by creating a histogram distribution graph, which shows the number of pedestrians escaping at each time. I've managed to create one in the parameter variation experiment, but when I run the experiment I am unable to store or view the data, as it keeps changing after every run.
Here is the code for the analysis Histogram Data, which is run in After simulation run:
data = root.timeMeasureEnd.distribution;
I would recommend adding a dataset to your Main to store all the values you want to keep across the parameter variation. A dataset differs from histogram data in that it doesn't aggregate; it is just a raw array of values, so you won't later run into the problem of "aggregating aggregated data".
So, after each simulation run you can access the dataset in Main via the "root" reference (as you are already doing) and loop through it to store all the values one by one.
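A minimal sketch of that loop, with hypothetical names: evacTimes is a Data Set element on Main that you populate during the run, and allRunsHist is a Histogram Data object defined on the experiment. The code goes in the experiment's After simulation run action:

DataSet ds = root.evacTimes; // raw, un-aggregated values from this run
for (int i = 0; i < ds.size(); i++) {
    allRunsHist.add(ds.getYValue(i)); // aggregate across runs exactly once, here
}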

Bootstrapping in Matlab - how many original data points are used?

I have datasets for two groups, one much smaller than the other. For that reason, I am using MATLAB's bootstrapping function to estimate the performance of the smaller group. I have code that draws on my original data and generates 1000 'new' means. However, it is not clear how many of the original data points are used each time. Obviously, if all the original data were used every time, the same mean would keep being generated.
Can anyone help me out with this?
Bootstrapping means sampling with replacement. Each resample uses the same number of points as the original data, but some of the points are repeated (and, correspondingly, others are left out), which is why the resampled means differ from draw to draw. There are some variants of bootstrapping that work slightly differently, however. See https://en.wikipedia.org/wiki/Bootstrapping_(statistics).
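To make the mechanics concrete, here is a minimal sketch of resampling with replacement, written in plain Java purely for illustration (MATLAB's bootstrp performs the equivalent resampling internally):

import java.util.Random;

public class BootstrapDemo {
    public static void main(String[] args) {
        double[] data = {2.1, 3.4, 1.9, 5.0, 4.2}; // original sample, n = 5
        Random rng = new Random();
        double[] bootMeans = new double[1000]; // 1000 'new' means
        for (int r = 0; r < bootMeans.length; r++) {
            double sum = 0;
            // draw exactly n points, WITH replacement: indices may repeat,
            // so each resample mean differs even though n stays the same
            for (int i = 0; i < data.length; i++) {
                sum += data[rng.nextInt(data.length)];
            }
            bootMeans[r] = sum / data.length;
        }
        // bootMeans now approximates the sampling distribution of the mean
    }
}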

Collect more than 10,000 data points from a Tektronix oscilloscope?

I am building a MATLAB GUI to do data collection from a Tektronix DPO4104 oscilloscope (MATLAB driver here).
I am playing around with tmtool and with my GUI code and have found that the driver only collects 10,000 data points, regardless of whether the oscilloscope is set to show more than 10k points. I found this post in CCSM, but it hasn't been terribly helpful (I'm the last post on there, if you care to read it). I am using the DPO4104 driver, whereas that post discusses the DPO4100 driver, I believe.
As far as I can tell, the steps are:
Edit the driver's readwaveform function to account for the current recordLength - in my case, say, 100,000 points.
Manually edit the driver's MaxNumberPoint from 10,000 to 100,000. (In my case the default number was 0; I changed this to 100,000.)
Manually edit EndingPoint. I set this to 100,000 also.
Before creating a device object, set(interfaceObj, 'InputBufferSize', 2.5*recordLength), that is, make sure the input buffer can fit more than 100,000 points. It's recommended to use at least double the expected data size; I used 2.5 just because.
Build the device object and waveform object, connect() to the device, and readwaveform. Profit.
I am still unable to collect more than 10,000 points, either through tmtool or through my GUI. Any help would be appreciated.
I got hold of a Tektronix engineer; he basically told me to just use the SCPI commands directly and skip the driver. While annoying, this might be the simplest solution.
Is it possible for you to collect the data points 10,000 at a time, save them somewhere, collect the next 10,000, append them to the saved points, and repeat?
It's a workaround, sure.
I figured it out! I think. Taking a couple of weeks to step back and refresh really helped. Here's what I did:
1) Edit the driver's init function to configure a larger buffer size. Complete init code:
function init(obj)
% This method is called after the object is created.
% OBJ is the device object.
% End of function definition - DO NOT EDIT

% Extract the interface object.
interface = get(obj, 'Interface');
% The interface must be closed before its buffer sizes can be changed.
fclose(interface);
% Configure the buffer sizes to allow for waveform transfer.
set(interface, 'InputBufferSize', 12e6);
set(interface, 'OutputBufferSize', 12e6); % originally set to 50,000
I originally tried to set the buffer sizes to 22e6 (I wanted to get 10 million points) but got out-of-memory errors. Supposedly the buffer should be a little more than double what you expect to get out, plus space for headers. I probably don't need 2 million points' worth of "header", but eh.
2) Edit the driver's readwaveform() to first query the record length the user has set, then write SCPI commands to the scope to ensure that the number of data points to be transferred equals that number. The following snippet does the trick in readwaveform:
try
% Specify the waveform source.
fprintf(interface, ['DATA:SOURCE ' trueSource]);
%----------BEGIN CODE TO HANDLE MORE THAN 10k POINTS----------
% Query the scope's current record length (returned as a string).
recordLength = query(interface, 'HORizontal:RECordlength?');
% Transfer the full record: points 1 through recordLength.
fprintf(interface, 'DATA:START 1');
fprintf(interface, sprintf('DATA:STOP %d', str2double(recordLength)));
%----------END CODE TO HANDLE MORE THAN 10k POINTS----------
% Issue the curve transfer command.
fprintf(interface, 'CURVE?');
raw = binblockread(interface, 'int16');
% Tektronix scopes send an extra terminator after the binblock.
fread(interface, 1);
% ... (remainder of readwaveform unchanged)
3) In the user code, send a SCPI command through the underlying interface object to change the record length:
% interfaceObj is a VISA object.
fprintf(interfaceObj, 'HORizontal:RECordlength 5000000');
There you have it. Hopefully this helps out anyone else that's trying to figure out this issue.
Here's a bad idea. Start collecting 10,000 points. When you get to 5,000 points, start collecting data again (you might need to run that in a new thread). Keep going back and forth until all the data you need are stored in 20-some data structures. Then combine the structures into one by lining up the data points. This is probably more work than calling the SCPI commands directly and might have some nasty caveats I haven't thought of. But as I said, it's a bad idea...