AnyLogic sensitivity analysis visualization

I am new to this and lost about how to create a visualization for an AnyLogic sensitivity analysis experiment. Here is a summary:
I have a dataset that captures the dependent and independent variables at the end of each simulation run (it captures just one pair per run). I am trying to vary the independent variable to see its impact on the dependent variable. The resulting dataset is correct (when I copy and paste it into Excel), but the chart looks blank.
Also, the output states that it completed 5 iterations, but I specified 10, and the data shows that there were in fact 10 iterations.
There are no parameters for the chart data (see the screenshot), but I am guessing it is populated automatically at the end of each simulation based on the code (also copied below)? Otherwise, I cannot figure out what goes into the chart data (I tried to manually enter variables/datasets to no avail).
This is the code that runs after each simulation run:
Color color = lerpColor( (getCurrentIteration() - 1) / (double) (getMaximumIterations() - 1), blue, red );
chart0.addDataSet( root.died_friend, format( root.SocFriendBrave ), color, true, Chart.INTERPOLATION_LINEAR, 1, Chart.POINT_NONE );
I realize this is a very basic question, but I am lost and cannot get on the right path based on the help I have found. Thank you.

Let's say you have 2 experiments: simulation (the normal one) and sensitivity analysis (the new one).
The only way for the chart to look blank is if your dataset died_friend is empty. This can happen for many reasons, but the reason I suspect here is that you fill this dataset with information at the end of the simulation run, which means you are probably using the Java actions of the simulation experiment.
The sensitivity experiment DOES NOT run what you write in the Java actions of the simulation experiment, so that might be the problem.
If this is not the case, you need to check other possible reasons why your dataset is empty when you run the sensitivity analysis.
Remember: the fact that your dataset has data in your simulation experiment does not necessarily mean the dataset will not be empty in the sensitivity experiment.
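A minimal sketch of the fix under that assumption (died_friend lives on Main/root, and computeDiedFriend() is a placeholder for however you currently compute the value): either move the code that fills the dataset into Main itself (e.g. its "On destroy" action) so every experiment gets it, or put it at the top of the sensitivity experiment's own "After simulation run" action, before the wizard-generated chart code:
// Sensitivity Analysis experiment -> Java actions -> After simulation run
// Fill the dataset here so it is no longer empty when the chart code below runs.
root.died_friend.add( root.time(), root.computeDiedFriend() ); // computeDiedFriend() is hypothetical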

Since the Sensitivity Analysis experiment is set up via the wizard, it would be better if you showed us how you set that up, rather than the resultant AnyLogic-generated code.
From what you've said here, it looks like you may be incorrectly trying to use datasets instead of scalar values. If there is one 'dependent variable' (the model output of interest), then the 'independent variable' needs to be a model parameter (a parameter of your top-level agent, normally Main, which the experiment will vary).
So you should specify:
Varying the relevant parameter (it looks like you wanted the values 0, 0.25, 0.5, 0.75, 1, so min 0, max 1 with a step size of 0.25)
Producing a chart for the scalar output you want. (If this was, say, a variable called outputValue, you'd use expression root.outputValue when setting up the chart in the wizard.)
Also, you should only be getting more than 5 iterations if you set up your parameter variation criteria incorrectly: min 0, max 1 with step 0.25 gives (1 - 0) / 0.25 + 1 = 5 runs.
(Dataset outputs in the experiment's charts are typically for where your model produces a time series as an output --- i.e., a dataset of sim time vs. value --- and you want the Sensitivity Analysis experiment to show charts with each run's time series as separate lines (i.e., time on the X-axis). Scalar charts are for where the X-axis is the varying parameter and the Y-axis the output of interest.)
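If you ever need to build that scalar chart by hand rather than through the wizard, a rough sketch of the idea (scalarData is a hypothetical DataSet element placed on the experiment, and outputValue and SocFriendBrave reuse the names from above):
// Experiment -> Java actions -> After simulation run
// One point per run: x = the varied parameter, y = the scalar output of interest.
scalarData.add( root.SocFriendBrave, root.outputValue );
// A chart on the experiment configured to display scalarData then plots the
// output against the parameter, one point per run.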

Related

Parameter Variation in AnyLogic: Data for a specific variation

I am using parameter variation in AnyLogic (in a system dynamics model). I am interested in how one variable changes across the various iterations. The variable is binary: 0 when the supply of water is greater than demand and 1 when supply is lower than demand. The parameters being varied are a given percentage decrease in outdoor irrigation, a given percentage decrease in indoor water use, and a given percentage of households that have rainwater harvesting systems. Visually, I need a time plot with time on the x-axis (10,950 days, i.e. 30 years) and the binary value on the y-axis. This should essentially show which iteration pushes a 1 further into the future.
I have watched videos and seen how histograms and 2D data are used to visualize the results of the iterations, but these do not show which iteration produced which output. Is there a way, first, to visually show the output as I have described above and, second, to return the data for a specific iteration?
Many thanks!
Parameter variation experiments have After Iteration and After Simulation Run actions that are executed after each iteration and each simulation run, respectively. There it is possible to access the values inside the simulation object after the run has finished but before the object is destroyed. There is also a getCurrentIteration() method which can be used to control the parameter variation experiment and retrieve the data.
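For example, reusing the chart call from the first question above, the experiment's After Simulation Run action could add each run's time series to a chart as its own line. A rough sketch, where supplyShortfall is an assumed name for a dataset on Main that records the 0/1 value against time, and chart0 is an assumed chart placed on the experiment:
// Parameter variation experiment -> Java actions -> After simulation run
// Each iteration gets its own label and color, so you can see which
// iteration pushes the switch to 1 further into the future.
chart0.addDataSet( root.supplyShortfall,
                   "iteration " + getCurrentIteration(),
                   lerpColor( getCurrentIteration() / (double) getMaximumIterations(), blue, red ),
                   true, Chart.INTERPOLATION_LINEAR, 1, Chart.POINT_NONE );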
For more detail, please consult here and see the "SIR Agent Based Calibration" example model in the AnyLogic example models library (Help -> Example Models).

Boxplot is broken, only showing one line

So my data centres around different treatments and how they impact the day of germination. [Image of dodgy boxplot data]
A while ago, whilst making violin plots in R to show the distribution of when germination occurs according to treatment, I attempted to add a boxplot as a descriptive statistic and was met with only one line.
I contacted many people, who simply had no idea what the issue was. I used this same data in another violin plot as part of a bigger data collection with more treatments, including this one.
I moved on from this and found it odd. Now that I have come to perform statistical tests in SPSS, I have the same problem, as imaged below. When I try a Mann-Whitney U test I am told "cannot compute" due to not having solely two variables; when I try a Kruskal-Wallis test I am met with the dodgy boxplot below and am told pairwise comparisons cannot be done due to there being fewer than 3 test fields (i.e. 2).
I am at an absolute loss. I have tried rewriting the data, copying data labels with 'stratified', 'strat', 's', etc., and I have no idea where the problem could lie. If anyone could give me any guidance it would be really appreciated!
Thank you
The dependent variable in question appears to have only values 1, 2, and 3 in the Stratified group. If there is at least one case with a value of 1, at least one case with a value of 3, but most values at 2, then a box plot like you're seeing would be expected. In SPSS, run the EXAMINE procedure (Analyze>Descriptive Statistics>Explore in the menus), specifying the same dependent variable and grouping variable, and asking for percentiles. The box plots should match what you're getting, and in the percentiles table you should see that Tukey's hinges show the same value of 2 for the 25th, 50th, and 75th percentiles.
Tukey's hinges are the basis for the box and the line in box plots. The line is at the median or 50th percentile, and the lower and upper box edges are at the 25th and 75th percentiles, respectively. When all three coincide, you get just a line instead of a box.
There are two types of outlying values identified in box plots in SPSS. Points more than 1.5 box lengths below or above the box edges are outliers, marked with circles, and points more than 3 box lengths below or above the box edges are extremes, marked with asterisks. Since the box length here is 0, both fences sit right at the box edges (2 + 1.5 × 0 = 2 and 2 + 3 × 0 = 2), so any case at a value other than 2, i.e. your 1s and 3s, is automatically an extreme.
Pairwise comparisons following a Kruskal-Wallis test are available only when there are at least three groups, since with only two groups the overall or omnibus test has already compared the two groups. I'm not sure what the issue was when trying to run a Mann-Whitney test.

NetLogo Experiment Setup

I'm working on a model in Netlogo and I'm having a problem understanding how to set up an "experiment". In my model, I have a matrix that has all of the values that I'm interested in (6 in total) and the matrix is updated whenever a condition is met (every time X turtles are killed off) basically capturing a snapshot of the model at that point. The previous values in the matrix are cleared, so the matrix is a 1x6, not a 10000x6 matrix with only one line being updated for each snapshot.
What I would like to do is to set up an experiment to run my model several hundred times, collecting this matrix each time for the first X number of snapshots or until Y ticks have occurred. But I can't see a way to do that in the experiment setup?
Is this possible to do, or would I have to create the 100x6 (100 snapshots) and then just export that matrix to a CSV somehow?
I've never set up an experiment in Netlogo, so this might be super easy to do or just be completely impossible.
If I understand your question correctly, then you want 6 values reported at specific ticks during the run. Those ticks are chosen by meeting a condition rather than a certain number of ticks. NetLogo has an experiment management tool called BehaviorSpace. It is straightforward to set up your several hundred runs (potentially with different values for any inputs on sliders etc). It's not so straightforward to only output on certain ticks.
The BehaviorSpace dialogue box has a checkbox for measuring runs at every tick or only at the end. If you have it set to every tick, then you can export your six numbers every tick automatically. In your case, that is likely to be easier than trying to output only occasionally. You could add a seventh reporter that is true/false for whether the matrix is being reset this tick. Then all you have to do in post-processing is select the lines where that seventh reporter is true.
If you want to run the model for exactly N snapshots, then you would also need to set up a global variable that is incremented each snapshot point. Your BehaviorSpace settings would then use that counter for the stop condition.
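For that post-processing step, any spreadsheet or throwaway script will do. A minimal Java sketch, assuming the experiment was exported in BehaviorSpace's table format to a file named experiment-table.csv and that the true/false flag reporter is the last column of each data row (both of these are assumptions about your setup):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class FilterSnapshots {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("experiment-table.csv"));
        // BehaviorSpace's table output quotes every value, so a data row whose
        // last column is the snapshot flag ends in "true" (quotes included)
        // exactly when the matrix was reset on that tick.
        List<String> snapshotRows = lines.stream()
                .filter(line -> line.endsWith("\"true\""))
                .collect(Collectors.toList());
        Files.write(Paths.get("snapshots.csv"), snapshotRows);
    }
}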
I'm not sure I understand your question, but usually you will have a setup procedure and a go procedure, correct? So I'm guessing the code structure below should be roughly what you are looking for (note that run is a built-in NetLogo primitive, so the procedure is named go here). I haven't used NetLogo in a while, so you'll have to figure out the exact matrix code yourself.
globals [ your-1by6-matrix your-100by6-matrix ]

to setup
  ; reset your experiment
end

to go
  ; run your experiment
end

to run100times
  repeat 100 [
    setup
    go
    ; save your-1by6-matrix into your-100by6-matrix
  ]
  ; use your-100by6-matrix to plot or export it
end

I cannot reproduce the results with kmeans in Orange

I've tried to repeat the same results with the same flow, and I don't understand why the results are different each time.
Let me describe the situation: I have a file with 192 instances and 37 features, and in all cases I select the same columns and preprocess by Median and StdDev. I then compute the PCA with 7 principal components. The following step is to run the k-means algorithm (k between 2 and 8) on this 'new' dataset. The scatter plot shows the results for k=5.
I attached different images with my flows.
Image1: original flow
The first one is the original flow (painted in yellow), which I would like to repeat without the rest of the options (the second image).
Image2: flows repeated
However, when I tried to do it, I saw that the results are different (the third image). Of course the colors don't determine the differences, but the clusters are different. In addition, the Silhouette Scores are different for the different flows.
Image3: results of the different flows
K-means initializes with k-means++, and my question is whether I can "control" this, or whether the initialization is always random. I saw in other programs that there is an option called seed, which is used so that an experiment can be repeated, but I didn't see this option here or anything similar.
I wonder if it is possible to always obtain the same results with the same flow (using k-means).
It seems that the issue happens because no random seed is set in the k-means widget, so the initialization is different each time you repeat the experiment, and because of the nature of your data the method converges differently. Can you please report your issue to the Orange3 issue tracker?
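Orange itself is configured through its widgets, so the real fix is a seed option in the k-means widget (or a fix via the issue tracker). Purely as an illustration of the principle, here is a sketch in Java using Apache Commons Math, where fixing the seed of the random generator makes the k-means++ initialization, and therefore the clustering, repeatable; all class names belong to that library, not to Orange, and the data is a stand-in for your PCA-transformed instances:
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.DoublePoint;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;
import org.apache.commons.math3.ml.distance.EuclideanDistance;
import org.apache.commons.math3.random.JDKRandomGenerator;

public class SeededKMeans {
    public static void main(String[] args) {
        // Toy data standing in for the 192 PCA-transformed instances.
        List<DoublePoint> points = new ArrayList<>();
        for (int i = 0; i < 192; i++) {
            points.add(new DoublePoint(new double[] { Math.sin(i), Math.cos(2 * i) }));
        }
        // Fixing the seed makes the k-means++ initialization, and hence the
        // final clustering, identical on every run.
        JDKRandomGenerator rng = new JDKRandomGenerator();
        rng.setSeed(42);
        KMeansPlusPlusClusterer<DoublePoint> kmeans =
                new KMeansPlusPlusClusterer<>(5, 100, new EuclideanDistance(), rng);
        List<CentroidCluster<DoublePoint>> clusters = kmeans.cluster(points);
        clusters.forEach(c -> System.out.println("cluster size: " + c.getPoints().size()));
    }
}
Run it twice with the fixed seed and the cluster sizes come out identical; without the fixed seed they can differ from run to run, which is exactly the behaviour you are seeing in Orange.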

Simulink From Workspace: can't use timestamps from matrix

I'm using the simulink block From Workspace to read in some audio data provided by a script. I have formatted the data in a matrix with 2 columns, the first is the timestamp and the second is the data.
In the configuration parameters, I have specified a Fixed-step, Discrete solver. The Start time and Stop time also need to be configured manually and don't seem to come from the data.
Also, in the From Workspace block configuration, I need to specify the sample time (1/44100); if I specify -1 to inherit it from the data, I get a warning and then strange sample times.
So, how can I get simulink to use only the sample times in the matrix and use the first and last timestamps as the start and stop time of the simulation?
You should be able to do what you want by doing the following:
Firstly note that your problem is by definition not fixed step, hence you cannot use a fixed-step solver, which by definition is ... fixed-step.
You must use a variable step solver.
Assuming your (2 column) input data is called simin then set the start and stop times to be simin(1,1) and simin(end,1) respectively.
In your From Workspace block set the sample time to be 0 (which should have been the default).
Also de-select the Interpolate data option, and set "Form the output after final data value by:" to zero (you won't be using anything past the end of your data set, so this should be OK).
Then you need to tell the solver to take additional steps beyond those that it would naturally want to take.
Do this on the Data Import/Export pane of the Model Configuration Parameters.
Near the bottom of the pane there is a selection box and an edit box for doing this.
Note however that this does not prevent the solver from taking steps at other time points, it just forces it to take additional steps at the times you specify.
But because you have set your From Workspace block not to interpolate, this shouldn't be a problem either. You should put simin(:,1) in here so that the solver is guaranteed to take steps at the time points in your input data.
Note that if you want an input block that only samples at the time points in the simin time vector then the only way to do this is to write an S-function that uses the mdlGetTimeOfNextVarHit method to tell the solver what the next sample time (for this block) should be.