Unable to find the variables and equations of the BLT matrix in OpenModelica

I tried dumping the BLT matrix of the Modelica > Thermal > Fluidflow > OneMass example using the -d=bltmatrixdump flag.
I see the BLT matrix with two axes, equations and variables, both numbered 1 to 35.
But I am not able to map those numbers back to the actual variable names.
I tried looking at the OneMass_info.json file, but the number of variables and equations there is much higher than in the BLT matrix, and the numbers do not match the ones shown on the BLT matrix.
Is there any way to extract only those equations and variables shown in the BLT matrix?

I would suggest using the Transformational Debugger from OMEdit to analyze the matching instead. In the Variables Browser you can select model variables and see in which equation each variable is defined. The information is the same as in the BLT dump, but actually readable. I'm not sure whether the indices correspond to the HTML dump produced by -d=bltmatrixdump.
If you need something machine readable, it is also possible to parse the MODELNAME_info.json yourself (as sketched below); that is basically what the debugger is doing.
If you really need the BLT matrix itself, you can use the debug flag -d=dumpeqninorder to get the equations in order, together with the variable that is solved in each equation.
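
A minimal MATLAB sketch of the parse-it-yourself route; the field names below are assumptions, so inspect the decoded structure of your own OneMass_info.json to see what your OpenModelica version actually emits:

% Sketch: read the _info.json dump and pull out the variable and equation
% records. Field names are assumptions; check the decoded struct.
info = jsondecode(fileread('OneMass_info.json'));
vars = info.variables;    % assumed field: all variables of the translated model
eqs  = info.equations;    % assumed field: all equations of the translated model

The file describes the whole translated model, so it lists more entries than the BLT matrix shows; you would then filter down to the ones referenced by the BLT indices.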

Related

How to set up a large set of first order non-linear differential equations in Matlab?

I am relatively new to writing my own code for Matlab, though I have used the program a decent amount. Right now I am attempting to code a series of first-order nonlinear differential equations. They are all in one of two forms, like the equations here:
Eventually I will need a set of 30 differential equations.
What I was hoping to do was create a function that builds the differential equation for each component of a certain form, combine them into a single system (essentially a column vector with one row for each component), and then solve the system with a Matlab solver such as ode45 or the dsolve function.
I have not yet found a way to make a function set up a system this large that works with either dsolve or ode45. The results always gave me either an empty sym, an error that the initial conditions were not compatible with the system, or some other error. So what I am wondering is whether there is another way to set up a system this large with nonlinear differential equations.
I do not want someone else's code; I just want an idea of how to go about setting this up in Matlab, because nothing I have tried has worked so far.
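
For what it's worth, here is a minimal sketch of one common way to set this up: build the whole right-hand side as a single vectorized function handle that returns an N-by-1 column vector and pass it to ode45. The coefficients and the particular nonlinear coupling below are made-up placeholders, not the asker's actual equations:

% Sketch: N coupled first-order nonlinear ODEs solved with ode45.
N = 30;
a = rand(N, 1);                                 % placeholder coefficients
b = rand(N, 1);

% The right-hand side must return an N-by-1 column vector dx/dt.
rhs = @(t, x) -a.*x + b.*x.*[x(2:end); x(1)];   % placeholder nonlinearity

x0    = ones(N, 1);                             % one initial condition per component
tspan = [0 10];
[t, x] = ode45(rhs, tspan, x0);

plot(t, x);                                     % each column of x is one component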

Using output of Matlab protected file as input in Matlab

I am aware that .p files are Matlab protected files, so I am not trying to access their contents. However, I was wondering if I could use the output they print to the Matlab command window as input to a Matlab program.
What I mean is the following: I have to simulate a dynamic system in Matlab using a controller. Afterwards, I need to assess its performance, which is done by the .p file. Now, the controller behaviour is defined by six distinct variables, and I pretty much know their ranges. So what I did was set up an optimization to find the optimal coefficients. However, when I run the .p file I see that the coefficients I obtained as optimal are in fact not optimal, i.e. my cost function is biased in some way.
So what I would like to do is use the output of the .p file (there are always six strings, each with only two numerical values, so they would be easy to extract if it were a text file) to run a new optimization, so that I can understand what I did wrong in my original cost function.
The alternative is finding the parameters by trial and error starting from my values, but considering there are six variables I would prefer a more mathematically sound approach.
Basically, the question is how I can capture the output that a Matlab .p function prints to the command window and use it as input to a Matlab function.
Thanks for the help!
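
One way to do this, sketched below, is to capture the printed text with evalc and extract the numbers with a regular expression. This assumes the .p file prints its results to the command window rather than returning them; the function and variable names here are hypothetical:

% Sketch: capture what a .p function prints and parse the numbers out of it.
% 'assessPerformance' and 'controllerParams' are made-up names.
txt = evalc('assessPerformance(controllerParams)');

% Grab every numeric token from the captured text; tighten the pattern to
% match the exact format the .p file prints.
nums = str2double(regexp(txt, '[-+]?\d+\.?\d*([eE][-+]?\d+)?', 'match'));

% nums can now feed a corrected cost function in a new optimization run.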

vgxset command: Q parameter for resulting model object?

Matlab's command for defining a vector time series model is vgxset, whose documentation can be accessed via "doc vgxset". It says that the model parameter Q is "[a]n n-by-n symmetric innovations covariance matrix", with no description of what it is for. I assumed that it was the covariance of the noise sources that show up in the equations for each time series in the archetypal representation of a vector time series, e.g., http://faculty.chicagobooth.edu/john.cochrane/research/papers/time_series_book.pdf.
I could be off about something (I often am), but this doesn't seem to match the results of actually issuing the command to estimate a model's parameters. You can access the code that illustrates such estimation via the command "doc vgxvarx":
load Data_VARMA22
[EstSpec, EstStdErrors] = vgxvarx(vgxar(Spec), Y, [], Y0);
The object EstSpec contains the model, and the Q matrix is:
0.0518 0.0071
0.0071 0.0286
I would have expected a covariance matrix to have ones on the diagonal. Obviously, I misunderstand and/or mis-guessed the purpose of Q. However, if you actually pull up the code for vgxset ("edit vgxset"), the comments explicitly describe Q as an "[i]nnovations covariance matrix".
I have 3 questions:
(1) What exactly is Q?
(2) Is there a Matlab reference document that I've failed to locate for fundamental parameters like this?
(3) If it isn't the covariance matrix for the noise sources, how does one actually supply noise source covariances to the model?
Please note that this question is specifically about Matlab's command for setting up the model, and as such, does not belong in the more concept-oriented Cross Validated Stack Exchange forum. I have posted this to:
(1) vgxset command: Q parameter for resulting model object?
(2) http://groups.google.com/forum/#!topic/comp.soft-sys.matlab/tg59h1wkRCw
I will try to iterate towards an answer, but since there are so many branches of discussion, I prefer to address them directly in this format. In any case, this is a constructive process, which is what this forum is for...
Some preliminary clarifications:
The output covariance in EstSpec.Q before and after running the command vgxvarx is quite similar, so the command is essentially outputting what it expected all along.
An output covariance (or whatever else the Q parameter may mean) is almost never a "mask" of the parameters to use, i.e. an identity or a sparse zero-one input matrix. Whether you can assign it as a diagonal matrix multiplied by a single scalar is a different story. This is plainly a covariance, just as in other MATLAB commands.
Hence:
(2) Is there a Matlab reference document that I've failed to locate for fundamental parameters like this?
No. Matlab usually doesn't give further explanations for "non-popular" commands, and this one is, by some measure, not popular, so I would not be surprised if the answer to this question is no.
Of course, the rigorous method is to check the provided references, in this case those listed under doc vartovec, which I have no idea how to find without ordering the books, hunting through the proper library, or combing the internet for more than five minutes...
So the obscure method is better in practice: check the code of the function by doing edit vgxvarx, and look at the commented section % Step 7 - Solve for parameters (line 515, Matlab R2014b). There a Q matrix is calculated through a function called mvregress. At this point we both know this is the core function.
This mvregress function (line 62, Matlab R2014b) receives an input parameter called Covar0, which is described as a D-by-D matrix to be used as the initial estimate for SIGMA.
This leads to the answer for (1).
(1) What exactly is Q?
The MATLAB code has dozens of switches, both user options and auto-triggered ones, so I am actually not sure which algorithm you are interested in, or which ones are actually triggered by your data :). Please read the previous answer, and place a breakpoint inside the mvregress function at:
Covar=Covar+CovAdj; %Line 433, Matlab R2014b
and/or at:
Covar = (Covar + Resid'*Resid) / Count; % Line 439, Matlab R2014b
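As a small aside, here is one way to set those breakpoints programmatically (a sketch; the line numbers are the R2014b ones quoted above and will differ in other releases):

% Set breakpoints at the two covariance updates quoted above, run the
% example data through vgxvarx, then clean up.
dbstop in mvregress at 433
dbstop in mvregress at 439

load Data_VARMA22
vgxvarx(vgxar(Spec), Y, [], Y0);   % execution pauses inside mvregress

dbclear all                        % remove all breakpoints when done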
Having that, and as indicated by the mvregress help, the exact meaning of Q would be an "initial matrix for the estimate of the output covariance matrix". The average is then simply taken over Count...
But, for the provided data, setting:
Spec.Q = [1 0.1; 0.1 1];
and then running vgxvarx, the parameter Covar never gets initialized from it!
Which, for this unfortunate case, makes Q simply an unused parameter.
(3) If it isn't the covariance matrix for the noise sources, how does one actually supply noise source covariances to the model?
I've lost tons of man-hours trying to gather the correct information from pre-built Matlab commands. So my suggestion here is to stick to the concepts of system identification, and I would put my faith in one of the following alternatives:
Keep believing, dig a bit further, debug inside the mvregress function, and check whether some of the EstMethods (i.e. cwls, ecm, mvn, under line 195) lead to a proper filling of the Covar0 parameter;
Stick with the vgxvarx command, but let the Q parameter go, and diagonalize/normalize the data properly, so that the algorithm can treat it as identically distributed Gaussian noise;
Drop vgxvarx altogether and use arx. I am not sure about the current stability of vgxvarx, but I am quite sure arx is more "stable" in this regard...
Good Luck,
hypfco.
EDIT
A thorough and comprehensive comment; I have not much to add.
Indeed, it is quite probable that vgxvarx was run on the Matlab sample data, which explains the results.
I tried to use the Q parameter in vgxvarx, with no success so far. If any working code is found, it would be interesting to include it.
The implementation of the noise transformation over the data should be really simple, of the form
Y1 = (Y - Y0) * L
with L the lower-triangular Cholesky factor of the inverse of the estimated covariance of Y, and Y0 the mean; a sketch is given right below.
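
A minimal sketch of that whitening step, assuming Y is a T-by-n matrix with one observation per row (bsxfun is used so it also runs on older releases):

% Whiten the data: after the transform, the sample covariance of Y1 is the
% identity matrix, since L'*cov(Y)*L = I when L*L' = inv(cov(Y)).
Y0 = mean(Y, 1);                    % row vector of per-series means
C  = cov(Y);                        % sample covariance of Y
L  = chol(inv(C), 'lower');         % lower-triangular factor: L*L' = inv(C)
Y1 = bsxfun(@minus, Y, Y0) * L;     % Y1 = (Y - Y0) * L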
I think the MA part is as critical as the AR part. Unless you have very good reasons, you usually cannot claim to have explained your data in a Gaussian way.
Given your very last comment, I really suggest you move to a better-established command for AR, MA, ARMA and similar flavours. I am pretty sure they handle the multivariate case...
Again, Matlab doesn't impress me with this behaviour...
Cheers...
hypfco

Looking up the algorithm for commands: MATLAB

Is it possible to see how MATLAB implements some of its commands? For example, the "hess" command in MATLAB reduces a matrix to its upper Hessenberg form. I want to see how it is written in MATLAB, because every time I do it by hand the result is not the same as the one MATLAB spits out, yet according to WolframAlpha I am doing it correctly.
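
Two things are worth noting here (a sketch, not an answer from the original thread): many toolbox functions ship as readable .m files you can open with type or edit, but hess itself is built in (backed by LAPACK), so there is no MATLAB source to read; and the Hessenberg form is not unique, so a hand computation can differ from MATLAB's output while both are still valid. You can at least check that both satisfy the defining similarity relation:

% hess is a built-in, so there is no .m source to open:
which hess                 % reports that hess is a built-in function

% The Hessenberg form is not unique, but any valid reduction satisfies
% A = P*H*P' with P unitary and H upper Hessenberg.
A = magic(4);
[P, H] = hess(A);
norm(A - P*H*P')           % should be on the order of machine precision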

User defined Jacobian pattern in MATLAB's lsqnonlin being ignored

I am using MATLAB's lsqnonlin function, and I am attempting to set a user-defined Jacobian pattern via the option JacobPattern. I set a preference for the trust-region-reflective algorithm to be used, and the output from lsqnonlin indicates that this was indeed the algorithm used by the solver (it is required for the JacobPattern option).
The problem I am finding is that if my JacobPattern is too sparse (e.g. just a few rows of ones in a 500x500 Jacobian), it is ignored by the solver and the full Jacobian is computed instead.
This behaviour is not documented; can anyone shed any further light on it? I would like to be able to force the solver to use my JacobPattern no matter how absurdly sparse it is, or how shallow a gradient is found with it.
Update:
I have done some more experiments, and it appears the full Jacobian is only recomputed if there are any all-zero rows in the Jacobian pattern. Any number of all-zero columns is fine, as long as there is at least one '1' in each row. Although this helps to avoid the problem, the question still remains: why does the solver require each dependent variable to have an associated gradient? In any case, I would expect the ignoring of a user-defined option to be at least worthy of a warning...
My guess is the following:
If you take a look at what the Jacobian actually means, you'll see that an all-zero row means that the corresponding function (one component of the vector function you defined) is independent of every variable. It is thus completely pointless to include it in the optimization.
As for purposefully handing a wrong Jacobian pattern to the algorithm: why would you want to do that?
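
For reference, here is a minimal sketch of the setup being discussed; the problem size, the pattern, and the residual function are made up, and adding a nonzero to every row (here via the diagonal) is the workaround described in the question's update:

% Sketch: pass a user-defined sparsity pattern for the finite-difference
% Jacobian to lsqnonlin via JacobPattern (trust-region-reflective only).
n    = 500;
Jpat = sparse(n, n);
Jpat(1:3, :) = 1;                  % a few dense rows, as in the question
Jpat = spones(Jpat + speye(n));    % ensure every row has at least one nonzero

fun = @(x) x.^2 - 1;               % placeholder residual function
x0  = 2*ones(n, 1);

opts = optimoptions('lsqnonlin', ...
    'Algorithm', 'trust-region-reflective', ...
    'JacobPattern', Jpat);

x = lsqnonlin(fun, x0, [], [], opts);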