normxcorr2 vs. normxcorr2_general in Matlab - matlab

There is a file called normxcorr2_general on MathWorks here whose author claims it always gives correct answers, while Matlab's built-in normxcorr2 gives incorrect answers when the two input matrices are close in size. After some testing, it is clear that the two functions do give significantly different outputs when the inputs are the same size.
Is normxcorr2_general actually more accurate? I don't have much experience in Matlab and I'm having trouble figuring that out from reading through the function script.
Edit: To clarify, if I understand correctly, these functions both implement equation (2) in this paper on computing normalized cross-correlations.
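A minimal way to see the discrepancy yourself (assuming the File Exchange function normxcorr2_general is on your path) is to correlate two same-size matrices, the case where the results are claimed to differ, and compare the outputs:

```matlab
% Compare built-in normxcorr2 with the File Exchange normxcorr2_general
% on same-size inputs. A large maximum difference would support the
% claim that the two disagree in this regime.
rng(0);
A = rand(32);                  % "image"
T = rand(32);                  % template, same size as A
c1 = normxcorr2(T, A);         % built-in
c2 = normxcorr2_general(T, A); % File Exchange version (must be on path)
fprintf('max abs difference: %g\n', max(abs(c1(:) - c2(:))));
```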

Related

Matlab: Fit a custom function to xy-data with given x-y errors

I have been looking for a Matlab function that can do a nonlinear total least squares fit, i.e. fit a custom function to data that has errors in all dimensions. The simplest case is x and y data points with different given standard deviations in x and y for every single point. This is a very common scenario in all natural sciences, and just because most people only know how to do a least squares fit with errors in y does not mean it wouldn't be extremely useful. I know the problem is far more complicated than a simple y-error; this is probably why most people (even physicists like myself) never learned how to do this properly with multidimensional errors.
I would expect software like Matlab to be able to do it, but unless I'm bad at reading the otherwise mostly useful help pages, I think even a 'full' Matlab license doesn't provide such fitting functionality. Other tools like Origin, Igor, and Scipy use the freely available Fortran package ODRPACK95, for instance. There are a few contributions about total least squares or Deming fits on the File Exchange, but they're for linear fits only, which is of little use to me.
I'd be happy about any hint that can help me out.
Kind regards
First I should point out that I haven't practiced MATLAB much since I graduated last year (also as a physicist). That being said, I remember using
lsqcurvefit()
in MATLAB to perform non-linear curve fits. Now, this may or may not work depending on what you mean by a custom function. I'm assuming you want to fit some known expression similar to one of these:
y = A*sin(x)+B
y = A*e^(B*x) + C
It is extremely difficult to perform a fit without knowing the form, e.g. as above. Ultimately, all mathematical functions can be approximated by polynomials on small enough intervals. This is something you might want to consider, as MATLAB does have lots of tools for polynomial regression.
In the end, I would actually recommend you write your own fit function. There are tons of examples for this online. The idea is to know the true solution's form as above and guess the parameters A, B, C, .... Create an error (or cost) function that produces a quantitative error (deviation) between your data and the guessed solution. The problem is then reduced to minimizing the error, for which MATLAB has lots of built-in functionality.
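As a sketch of that idea, here is a hypothetical cost function for y = A*sin(x) + B that folds the per-point x-errors into effective y-errors (the "effective variance" trick) and minimizes it with fminsearch. This is an approximation, not a full orthogonal distance regression, and all variable names are my own:

```matlab
% Toy total-least-squares-style fit of y = A*sin(x) + B.
% sx, sy are per-point standard deviations in x and y.
x  = linspace(0, 2*pi, 50)';
y  = 2*sin(x) + 1 + 0.1*randn(size(x));
sx = 0.05*ones(size(x));
sy = 0.10*ones(size(x));

model  = @(p, x) p(1)*sin(x) + p(2);
dmodel = @(p, x) p(1)*cos(x);   % d(model)/dx, used to propagate sx into y

% Effective-variance cost: weight each residual by sy^2 + (dy/dx)^2 * sx^2
cost = @(p) sum((y - model(p, x)).^2 ./ ...
                (sy.^2 + (dmodel(p, x).^2) .* sx.^2));

p0   = [1, 0];                  % initial guess for [A, B]
pfit = fminsearch(cost, p0);
fprintf('A = %.3f, B = %.3f\n', pfit(1), pfit(2));
```

The weighting scheme is one common choice for handling errors in both coordinates; a genuine ODR solver would instead optimize over the unknown "true" x-positions as well.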

Are there any software packages that implement multiple-output Gaussian processes?

I am trying to implement Bayesian optimization using Gaussian process regression, and I want to try a multiple-output GP first.
There are many software packages that implement GPs, like the fitrgp function in MATLAB and the ooDACE toolbox.
But I didn't find any available software that implements the so-called multiple-output GP, that is, a Gaussian process model that predicts vector-valued functions.
So, are there any software packages implementing the multiple-output Gaussian process that I can use directly?
I am not sure my answer will help you, as you seem to be searching for Matlab libraries.
However, you can do co-kriging in R with gstat. See http://www.css.cornell.edu/faculty/dgr2/teach/R/R_ck.pdf or https://github.com/cran/gstat/blob/master/demo/cokriging.R for more details about usage.
The lack of tools for cokriging is partly due to the relative difficulty of using it. You need more assumptions than for simple kriging: in particular, modelling the dependence between the cokriged outputs via a cross-covariance function (https://stsda.kaust.edu.sa/Documents/2012.AGS.JASA.pdf). The covariance matrix is much bigger and you still need to make sure that it is positive definite, which can become quite hard depending on your covariance functions...
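If you want to stay in MATLAB, one crude workaround (my suggestion, not something fitrgp or ooDACE provide natively) is to fit one independent fitrgp model per output dimension. This ignores cross-output correlation, which is exactly what a true multi-output GP or cokriging model would capture, but it does give vector-valued predictions:

```matlab
% Workaround: independent per-output GPs with fitrgp
% (Statistics and Machine Learning Toolbox). Correlations between the
% outputs are ignored, unlike in a true multi-output GP.
X = linspace(0, 1, 40)';
Y = [sin(2*pi*X), cos(2*pi*X)] + 0.05*randn(40, 2);  % two outputs

models = cell(1, size(Y, 2));
for k = 1:size(Y, 2)
    models{k} = fitrgp(X, Y(:, k));   % one GP per output column
end

Xtest = linspace(0, 1, 100)';
Ypred = cell2mat(cellfun(@(m) predict(m, Xtest), models, ...
                         'UniformOutput', false));   % 100-by-2 predictions
```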

vgxset command: Q parameter for resulting model object?

Matlab's command for defining a vector time series model is vgxset, the formalism for which can be accessed by the command "doc vgxset". It says that the model parameter Q is "[a]n n-by-n symmetric innovations covariance matrix", with no description of what it is for. I assumed that it was the covariance of the noise sources that show up in the equations for each time series in the archetypal representation of a vector time series, e.g., http://faculty.chicagobooth.edu/john.cochrane/research/papers/time_series_book.pdf.
I could be off about something (I often am), but this doesn't seem to match the results of actually issuing the command to estimate a model's parameters. You can access code that illustrates such estimation via the command "doc vgxvarx":
load Data_VARMA22
[EstSpec, EstStdErrors] = vgxvarx(vgxar(Spec), Y, [], Y0);
The object EstSpec contains the model, and the Q matrix is:
0.0518 0.0071
0.0071 0.0286
I would have expected such a covariance matrix to have ones on the diagonal. Obviously, I misunderstand and/or mis-guessed the purpose of Q. However, if you actually pull up the code for vgxset ("edit vgxset"), the comments explicitly describe Q as an "[i]nnovations covariance matrix".
I have 3 questions:
(1) What exactly is Q?
(2) Is there a Matlab reference document that I've failed to locate for fundamental parameters like this?
(3) If it isn't the covariance matrix for the noise sources, how does one actually supply actual noise source covariances to the model?
Please note that this question is specifically about Matlab's command for setting up the model and, as such, does not belong in the more concept-oriented Cross Validated Stack Exchange forum. I have cross-posted it to:
(1) vgxset command: Q parameter for resulting model object?
(2) http://groups.google.com/forum/#!topic/comp.soft-sys.matlab/tg59h1wkRCw
I will try to iterate toward an answer, but since there are so many branches of discussion, I prefer to address them directly in this format. In any case, this is a constructive process, which is the purpose of this forum...
Some preliminary clarifications:
The output covariance in EstSpec.Q is quite similar before and after running vgxvarx. Thus the command is essentially outputting what it was given.
An output covariance (or whatever other meaning the Q parameter has) is almost never a "mask" of the parameters to use, i.e. an identity or a sparse zero-one input matrix. Whether you can assign it as a diagonal matrix times some scalar is a different story. This is a covariance, plainly, just as in other MATLAB commands.
Hence:
(2) Is there a Matlab reference document that I've failed to locate for fundamental parameters like this?
No. Matlab usually doesn't give further explanations for "non-popular" commands, and this one is, by some measure, not popular, so I would not be surprised if the answer to this question is no.
Of course, the doctoral method is to check the provided references, in this case those listed under doc vartovec, which I have no idea how to find without ordering the books from the proper library or scouring the entire internet in five minutes...
Thus the obscure method is better: check the code for the function by doing edit vgxvarx. Look at the commented section % Step 7 - Solve for parameters (line 515, Matlab R2014b). There, a Q matrix is calculated through a function called mvregress. At this point we both know this is the core function.
This mvregress function (line 62, Matlab R2014b) receives an input parameter called Covar0, which is described as a "D-by-D matrix to be used as the initial estimate for SIGMA".
This antecedent leads to the answer for (1).
(1) What exactly is Q?
The MATLAB code has dozens of switches, both options and auto-triggered, so I am actually not sure which algorithm you are interested in, or which ones your data actually triggers :). Please read the previous answer, and place a debug point inside the mvregress function at:
Covar=Covar+CovAdj; %Line 433, Matlab R2014b
and/or at:
Covar = (Covar + Resid'*Resid) / Count; % Line 439, Matlab R2014b
Having that, the exact meaning of Q, as indicated by the mvregress help, would be an "initial matrix for the estimate of the output covariance matrix". The average is simply given by dividing by Count...
But for the provided data, setting:
Spec.Q=[1 0.1;0.1 1];
and then running vgxvarx, the parameter Covar never gets initialized!
For this unfortunate case, that makes Q simply an unused parameter.
(3) If it isn't the covariance matrix for the noise sources, how does one actually supply actual noise source covariances to the model?
I've lost tons of man-hours trying to gather correct information from pre-built Matlab commands. My suggestion here is to stick to the concepts of system identification, and I would put my faith in one of the following alternatives:
Keep believing: dig a bit and debug inside the mvregress function, and check whether some of the estimation methods (i.e. cwls, ecm, mvn, under line 195) lead to a proper use of the Covar0 parameter;
Stick to the vgxvarx command, but let the Q parameter go, and diagonalize/normalize the data properly, so that the algorithm can treat the noise as identically distributed Gaussian noise;
Abandon vgxvarx and use arx. I am not sure about the current stability of vgxvarx, but I am quite sure arx should be more stable in this regard...
Good Luck,
hypfco.
EDIT
A huge and comprehensive comment; I don't have much to add.
Indeed, it is quite probable that vgxvarx was run on the Matlab data sample beforehand, which would explain the results.
I tried to use the Q parameter with vgxvarx, with no success so far. If any working code is found, it would be interesting to include it.
The implementation of the noise transformation over the data should be really simple, of the form:
Y1=(Y-Y0)*L
with L the lower-triangular Cholesky factor of the inverse of the calculated covariance of Y, and Y0 the mean.
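That whitening transformation can be sketched in a few lines; this is my own rendering of the formula above, assuming Y is an observations-by-series matrix:

```matlab
% Whiten the multivariate series Y so its sample covariance is
% approximately the identity: Y1 = (Y - Y0) * L, where L is the
% lower-triangular Cholesky factor of inv(cov(Y)).
Y  = mvnrnd([0 0], [1 0.6; 0.6 2], 500);  % example correlated data
Y0 = mean(Y, 1);
L  = chol(inv(cov(Y)), 'lower');
Y1 = (Y - Y0) * L;
disp(cov(Y1));   % should be close to eye(2)
```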
I think the MA part is as critical as the AR part. Unless you have very good reasons, you usually cannot say you have explained your data in a Gaussian way.
From your very last comment, I really suggest you move to a better, more established command for AR, MA, ARMA and similar flavours. I am pretty sure they handle the multivariate case...
Again, Matlab doesn't impress me with this behaviour...
Cheers...
hypfco

How to Supply the Jacobian to Fsolve?

pow = fsolve(@eqns, pop);
This is the code I am using to solve a 2x2 non-linear system of equations, defined in the function eqns.m.
pop is a 2x1 initialisation vector pretty close to the solution. When I run it, the output says
No solution found. fsolve stopped because the relative size of the current step is less than the default value of the step size tolerance squared, but the vector of function values is not near zero as measured by the default value of the function tolerance. <stopping criteria details>
Any way out? I tried moving the initial point further away from the solution intentionally, but it still doesn't work. How do I set the tolerance or some other parameter? Some posts gave me the impression that supplying the Jacobian to Matlab can help, but how do I do that? Please note that I need the solution in the form of code that I can put in a function file to be called repeatedly; I believe the interactive optimtool toolbox would not help here. Any help, please?
Also, from the documentation, fsolve can employ three different algorithms. Is any of them more helpful than the others for certain problem structures? Where can I find a comparative study of them, suitable for a non-expert in optimisation?
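For reference, the documented way to supply a Jacobian is to have the objective return it as a second output and enable it in the options. The 2x2 system below is a made-up example, not the asker's eqns.m:

```matlab
function demo_fsolve_jacobian
    % Solve a 2x2 nonlinear system with an analytic Jacobian supplied.
    pop  = [1; 1];                                % initial guess
    opts = optimoptions('fsolve', ...
        'SpecifyObjectiveGradient', true, ...     % use the Jacobian below
        'Display', 'iter');
    pow = fsolve(@eqns, pop, opts);
    disp(pow);
end

function [F, J] = eqns(x)
    % Example system: x1^2 + x2^2 - 4 = 0,  x1 - x2 = 0
    F = [x(1)^2 + x(2)^2 - 4;
         x(1) - x(2)];
    if nargout > 1                                % Jacobian requested
        J = [2*x(1), 2*x(2);
             1,      -1];
    end
end
```

On older releases the equivalent option is optimset(..., 'Jacobian', 'on'). Tolerances can be tightened the same way, e.g. optimoptions('fsolve', 'FunctionTolerance', 1e-10).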

Matlab activation function for values 0 and 1

I am working on an artificial neural network that I want to implement in Matlab, but I am unable to find a proper activation function. I need a step function because my output is either 0 or 1. Is there any function in Matlab that can be used for this kind of output? Also, I want the inverse of the same activation function. logsig and tansig are not working for me.
Both tansig and logsig are part of the Neural Network Toolbox as the online documentation makes clear. So, if which tansig returns nothing, then you don't have that toolbox (or at least don't have a version current enough to contain that function). However, both of these functions are extremely simple, and the documentation even gives you the formulae under the "Algorithms" section: tansig, logsig. Both can be implemented as a one line anonymous function if you wanted.
If your question is actually about how to produce a Heaviside step function, Matlab has heaviside (it's part of the Symbolic Math toolbox but a pure numeric version is included – type edit heaviside to see the simple code). However, note that using such a non-differentiable function is problematic for some types of neural networks as this StackOverflow question and answer addresses.
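For illustration, the documented formulae can be written as one-line anonymous functions, together with a hard 0/1 step and the inverses (the function names here are my own):

```matlab
% Anonymous-function versions of the documented formulae.
logsig_fn  = @(x) 1 ./ (1 + exp(-x));        % logsig: output in (0, 1)
tansig_fn  = @(x) 2 ./ (1 + exp(-2*x)) - 1;  % tansig: output in (-1, 1)
step_fn    = @(x) double(x >= 0);            % hard 0/1 step (non-differentiable)

% Inverses, valid on the open intervals (0,1) and (-1,1) respectively:
logsig_inv = @(y) log(y ./ (1 - y));
tansig_inv = @(y) atanh(y);
```

Note that the hard step has no inverse (it is many-to-one), which is one more reason the smooth sigmoids are preferred during training.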
Heaviside did not work for me. I finally normalized my data between -1 and 1 and then applied tansig.
Thanks