Using "Save start values in the model" option to help the convergence in Dymola - modelica

I built a model in Dymola. Even though there are some errors during the initialization process, the calculation succeeds in the end.
After the model converged successfully, I tried to use the "Save start values in the model" option to store the right start values of the iteration variables into the model, so that the model would NOT produce errors in the next calculation. But after I did this and ran the calculation once more, I still got the same errors.
So, my question is:
Can I use the "Save start values in the model" option to help convergence?
If so, how should I do it?

Are you certain that there are error messages?
The simulation log indicates that you have enabled
Simulation Setup>Debug>Nonlinear iterations
That gives debug messages in the simulation log for every iteration of the non-linear solver, regardless of whether there is a problem or not. (This can be useful for analyzing errors, but should not be on by default, as it generates a large log file.)
If disabling that flag doesn't remove all messages, it would be necessary to see the remaining messages and the model to understand the problem, as the previously indicated procedures should work.

The reason is that I set the fixed attribute of some parameters to false and the fixed attribute of some variables to true, so that the variables' values are used to initialize the system and the corresponding parameters are calculated. When using the "Save start values in the model" option, Dymola stores the result into the parameter's start attribute, but its value attribute stays unchanged. When I run the simulation again, Dymola does NOT use the parameter's start attribute; it still uses the parameter's value attribute. After I changed the value attribute manually, there were no more errors.
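A minimal, hypothetical Modelica sketch of this kind of setup (the model and names are illustrative, not the asker's actual model):

```modelica
model InitFromVariable "Hypothetical: parameter computed at initialization"
  // fixed=false: k is NOT taken from its value/binding at initialization,
  // it is solved from the initial equations; start=1 is only the guess.
  parameter Real k(fixed=false, start=1) "Computed during initialization";
  // fixed=true: T must start exactly at its start value.
  Real T(start=300, fixed=true) "Known initial value";
equation
  der(T) = k - 0.01*T;
initial equation
  der(T) = 0 "Steady-state condition that determines k";
end InitFromVariable;
```

In a case like this, "Save start values in the model" would write the solved result into k's start attribute, while the value shown in the parameter dialog (and used on the next run) stays untouched, matching the behavior described above.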

Related

AnyLogic: Empty agent parameter in the function

I was faced with a problem in AnyLogic.
I created a function with an argument. The argument is agent Message. However, the function doesn't get the current agent. It seems that the argument is empty. Why?
This is one of the most confusing things in AnyLogic: things get calculated in the reverse order of what you would expect.
First the condition is calculated so that the agent can decide where to go, and only after that does the agent exit the source...
Based on what we see here, the volume is probably calculated in the "On exit" action of your source; you should calculate it in the "On at exit" action instead, or you can put a delay of 1 millisecond after the source and everything will be OK.

Best Practice to Store Simulation Results

Dear Anylogic Community,
I am struggling with finding the right approach for storing my simulation results. I have datasets created that keep track of every value I am interested in. They live in Main (see below)
My aim is to do a parameter variation experiment. In every run, I change the value for p_nDrones (see below)
After the experiment, I would like to store all the datasets in one excel sheet.
However, when I do the parameter variation experiment and afterwards check the log of the dataset (datasets_log), the changed values do not even show up (2 is the value I set in the normal simulation).
Now my question: do I need to create another type of dataset if I want to track the values that are produced in the experiments? Why are they not stored after executing the experiment?
I would really appreciate it if someone could share the best way to set up this export of experiment results. I would like to store the whole time series for every dataset.
Thank you!
The best option would be to write the outputs to some external file at the end of each model run.
If you want to use Excel, which I personally would not advise even though it has a nice excelFile.writeDataSet() function, you can.
I would rather write the data to a text file, as you will have much more control over the writing and the file itself; it is thread-safe and usable on many more platforms than Microsoft Excel.
See my example below:
Set up parameters of type TextFile in your model that you will write the data to at the end of the run. Here I used the model's "On destroy" code to write out the data from the datasets.
Here you can immediately see the benefit of using the text file! You can add the number of drones we are simulating (or scenario name or any other parameter) in a column, whereas with Excel this would be a pain...
Now you can pass your specific text file to the model by adding it on the parameter variation page and providing it to the model through the parameters.
You will see that I also set up some headers for the text file in the Initial experiment setup part, and then, at the very end of the experiment, I close the text files in the After experiment section so that they can be used.
Here is the result if you simply right-click on the text files and open them in Excel. (Excel will always have a purpose, even if it is just to open text files ;-) )
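Outside AnyLogic, the same idea can be sketched in plain Java (file name and column names are made up for illustration; AnyLogic's own TextFile object wraps something similar):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class ResultsWriter {
    // Append one dataset to a tab-separated text file, tagging every
    // row with the scenario parameter (here: the number of drones).
    static void writeDataset(Path file, int nDrones,
                             List<Double> times, List<Double> values) throws IOException {
        boolean newFile = !Files.exists(file);
        try (PrintWriter out = new PrintWriter(
                Files.newBufferedWriter(file,
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND))) {
            if (newFile) {
                out.println("nDrones\ttime\tvalue"); // header once, as in the experiment setup
            }
            for (int i = 0; i < times.size(); i++) {
                out.println(nDrones + "\t" + times.get(i) + "\t" + values.get(i));
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Path.of("results.txt");
        writeDataset(file, 2, List.of(0.0, 1.0), List.of(10.0, 12.5));
        System.out.println(Files.readAllLines(file).size()); // header + 2 rows
    }
}
```

Because each run appends its own rows with the scenario value in the first column, the combined file from a parameter variation experiment stays a single flat table that Excel can open directly.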

Why Modification Indices CheckBox unable to check?

I want to do a second-order CFA with AMOS. My data set does not contain any missing values. When I try to run the model, it says "The model is probably unidentified....".
So I tried to increase the modification level by checking the modification indices checkbox.
After I checked it and ran the model again, it gave me the following error.
Error ScreenShot:
Can anyone help me with this?

How can I know what component failed?

When using the On SubJob Error trigger, I would like to know which component failed inside the subjob. I have read that you can check the error message of each component and select the one that is not null, but this feels like bad practice. Is there any variable that stores the identity of the component that failed?
I may be wrong, but I'm afraid there isn't. This is because globalVar elements are component-scoped (i.e. they are set and read by the components themselves), not subjob-scoped (which would mean being set by Talend itself, or something similar). When the subjobError signal is triggered, you lose any component-based data coming from tFileInputDelimited. For this design reason, I don't think you will be able to solve your problem without iterating over the globalMap, searching for the error strings here and there.
Alternatively, you can use tLogCatcher, which has an 'origin' column, to spot the offending component and eventually route to different recovery subjobs depending on which component threw the exception. This is not a design I trust too much, actually, because tLogCatcher is job-scoped, while OnSubjobError is directly linked to a specific subjob only. But it could work in simple cases.
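The globalMap scan described above could look roughly like this (a plain-Java sketch; the key names follow Talend's usual COMPONENT_ERROR_MESSAGE convention, but verify them against your own job):

```java
import java.util.HashMap;
import java.util.Map;

public class FindFailedComponent {
    // Scan a Talend-style globalMap for the component whose
    // *_ERROR_MESSAGE entry is non-null, and return its name.
    static String findFailed(Map<String, Object> globalMap) {
        for (Map.Entry<String, Object> e : globalMap.entrySet()) {
            if (e.getKey().endsWith("_ERROR_MESSAGE") && e.getValue() != null) {
                // strip the suffix to recover the component name
                return e.getKey().substring(
                        0, e.getKey().length() - "_ERROR_MESSAGE".length());
            }
        }
        return null; // no component reported an error
    }

    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("tFileInputDelimited_1_ERROR_MESSAGE", "File not found");
        globalMap.put("tMap_1_ERROR_MESSAGE", null);
        System.out.println(findFailed(globalMap));
    }
}
```

In a real job this loop would run in a tJava placed after the On SubJob Error trigger, reading the job's actual globalMap instead of the hand-built one shown here.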

Is it possible to change the correlation in the middle of the workflow?

I have put an InitializeCorrelation activity at the beginning of the workflow, and then I want to correlate on different keys, so I've put another InitializeCorrelation activity with different keys, but I am getting this error:
The execution of an InstancePersistenceCommand was interrupted because the instance key 'a765c209-5adc-4f03-9dd2-1af5e33aab3b' was not associated to an instance. This can occur because the instance or key has been cleaned up, or because the key is invalid. The key may be invalid if the message it was generated from was sent at the wrong time or contained incorrect correlation data.
So, is it possible to change the correlation after the workflow started or not?
To answer the question explicitly: yes, you can change the data the correlation is based on. You can do it not only within a sequence; you can also use different correlation data within each branch of a Parallel activity. Correlation can be initialized using an InitializeCorrelation or a SendReply activity, as described here: http://msdn.microsoft.com/en-us/library/ee358755(v=vs.100).aspx.
As the Workflow Designer is not the strongest part of Visual Studio (XPath queries are never checked, sometimes even build errors are not reflected on activities, etc.), it is not always obvious what the problem is. So I suggest the following:
initialize a CorrelationHandle only once, with the correlation type Query correlation, for a specific piece of correlation data
initialize a new CorrelationHandle instance for different correlation data
once a CorrelationHandle is initialized, it can be reused multiple times by different Receive activities (Receive.CorrelatesOn, Receive.CorrelatesWith)
if correlation does not work, it can be because of wrong XPath queries; these are not refreshed automatically if the OperationName or the parameters' names are changed, so it is advisable to regenerate them after renaming
it can be a good idea to turn off workflow persistence and NLB while you are testing, to let yourself concentrate on correlation-related problems
Have a look into the Instances table in the database where you store persisted instances. One of the entries will probably be in the suspended state; there is also a column with an error description. What caused this error? Have you made changes to the workflow and redeployed it?