extrapolating a 2D matrix to predict a future output - matlab

I have a 2D 2401*266 matrix K whose columns correspond to the x values (t: stored in a 1*266 array) and whose rows correspond to the y values (z: stored in a 1*2401 array).
I want to extrapolate the matrix K to predict some future values (corresponding to t(1,267:279)). So far I have extended t so that it is now a 1*279 array, using a for loop:
for tq = 267:279
    t(1,tq) = t(1,tq-1) + 0.0333333333;
end
However, I am stumped on how to extrapolate K itself. Is there a way to do it without fitting a polynomial to each individual row?
I feel like there must be a more efficient way than that.

There are countless extrapolation methods in the literature; "fitting a polynomial to each row" is just one of them, and not necessarily invalid, so I am not sure why you want to rule it out. For 2D data, fitting a surface would perhaps lead to better results, though.
However, if you want an easy, simple way (which may or may not work well for your problem), you can always use the function interp2, which is meant for interpolation. If you choose spline or makima as the interpolation method, it will also extrapolate for any query point outside the domain of K.
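For the sizes in the question (K is 2401*266, t is 1*266, z is 1*2401), a minimal sketch of that interp2 route might look like the following; Kq and Kext are just names I made up for the extrapolated block and the extended matrix:
tq = t(1,267:279);                        % the future times built by the loop above
[T,  Z ] = meshgrid(t(1,1:266), z);       % grid of the original sample points
[Tq, Zq] = meshgrid(tq, z);               % grid of the query (future) points
Kq = interp2(T, Z, K, Tq, Zq, 'spline');  % 'spline' (or 'makima') also extrapolates
Kext = [K, Kq];                           % 2401*279 matrix including the forecast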

Related

Nonlinear curve fitting of a matrix function in python

I have the following problem. I have an N x N real matrix called Z(x; t), where x and t might be vectors in general. I have N_s observations (x_k, Z_k), k = 1, ..., N_s, and I'd like to find the vector of parameters t that best approximates the data in the least-squares sense, which means I want the t that minimizes
S(t) = \sum_{k=1}^{N_s} \sum_{i=1}^{N} \sum_{j=1}^{N} (Z_{k,ij} - Z_{ij}(x_k; t))^2
This is in general a non-linear fit of a matrix function. I'm only finding examples in which one has to fit scalar functions, which are not immediately generalizable to a matrix function (nor a vector function). I tried using the scipy.optimize.leastsq function and the packages symfit and lmfit, but I still haven't managed to find a solution. Eventually I'm ending up writing my own code... any help is appreciated!
You can do curve-fitting with multi-dimensional data. As far as I am aware, none of the low-level algorithms explicitly support multidimensional data, but they do minimize a one-dimensional array in the least-squares sense. And the fitting methods do not really care about the "independent variable(s)" x except in that they help you calculate the array to be minimized - perhaps to calculate a model function to match to y data.
That is to say: if you can write a function that takes the parameter values and calculates the matrix of residuals to be minimized, just flatten that 2-d (or n-d) array to one dimension. The fit will not mind.
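A rough sketch of that idea with scipy (Z_model, x_obs, Z_obs and n_params are placeholders for your own model function, observations and parameter count):
import numpy as np
from scipy.optimize import least_squares

def residuals(t_params, x_obs, Z_obs):
    # Build the (N x N) residual matrix for every observation and flatten
    # everything into the 1-D array the optimizer expects.
    res = [Z_model(x_k, t_params) - Z_k for x_k, Z_k in zip(x_obs, Z_obs)]
    return np.concatenate([r.ravel() for r in res])

t0 = np.zeros(n_params)                           # initial guess for t
fit = least_squares(residuals, t0, args=(x_obs, Z_obs))
print(fit.x)                                      # estimated parameter vector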

Using matlab to obtain the vector fields and the angles made by the vector field on a closed curve?

Here is the given system whose vector field I want to plot, together with the angles the vectors make with the x axis. I want to find the index of a closed curve.
I know how to do this theoretically by choosing convenient points and seeing what the vector looks like at each of them. I can also use the expression for phi to compute the angles.
However, I am having trouble trying to code it. Please don't mark me down if the question is unclear; I am asking it the way I understand it. I am new to matlab. Can someone point me in the right direction, please?
This is a pretty hard challenge for someone new to matlab; I would recommend taking on some smaller challenges first to get used to matlab's conventions.
That said, Matlab is all about numerical solutions so, unless you want to go down the symbolic maths route (and in that case I would probably opt for Mathematica instead), your first task is to decide on the limits and granularity of your simulated space, then define them so you can apply your system of equations to it.
There are lots of ways of doing this - some more efficient - but for ease of understanding I propose this:
Define the axes individually first
xpts = -10:0.1:10;
ypts = -10:0.1:10;
tpts = 0:0.01:10;
The a:b:c syntax gives you the lower limit (a), the upper limit (c) and the spacing (b), so you'll get 201 points for x. You could use the linspace notation if that suits you better; look it up by typing doc linspace into the matlab console.
Now you can create a grid of your coordinate points. You actually end up with three 3d matrices, one holding the x-coords of your space and the others holding the y and t. They look redundant, but it's worth it because you can use matrix operations on them.
[XX, YY, TT] = meshgrid(xpts, ypts, tpts);
From here on you can perform whatever operations you like on those matrices. So to compute x^2 * y (elementwise) you could do
x2y = XX.^2 .* YY;
remembering that you'll get a 3d matrix out of it and all the slices in the third dimension (corresponding to t) will be the same.
Some notes
Matlab has a good builtin help system. You can type 'help functionname' to get a quick reminder in the console or 'doc functionname' to open the help browser for details and examples. They really are very good, they'll help enormously.
I used XX and YY because that's just my preference, but I avoid single-letter variable names as a general rule. You don't have to.
Matrix multiplication is the default so if you try to do XX*YY you won't get the answer you expect! To do element-wise multiplication use the .* operator instead. This will do a11 = b11*c11, a12 = b12*c12, ...
To raise each element of the matrix to a given power, use .^ rather than ^ for similar reasons. Likewise for division, use ./ rather than /.
You have to make sure your matrices are the correct size for your operations. To do elementwise operations on matrices they have to be the same size. To do matrix operations they have to follow the matrix rules on sizing, as will the output. You will find the size() function handy for debugging.
Plotting vector fields can be done with quiver. To plot the components separately you have more options: surf, contour and others. Look up the help docs and they will link to similar types. The plot family are mainly about lines so they aren't much help for fields without creative use of the markers, colours and alpha.
To plot the curve, or any other contour, you don't have to test the values of a matrix - it won't work well anyway because of the granularity - you can use the contour plot with specific contour values.
Solving systems of dynamic equations is completely possible, but you will be doing a numeric simulation and your results will again be subject to the granularity of your grid. If you have closed form solutions, like your phi expression, they may be easier to work with conceptually but harder to get working in matlab.
This kind of problem is tractable in matlab but it involves some non-basic uses which are pretty hard to follow until you've got your head round Matlab's syntax. I would advise starting with a 2d grid instead:
[XX, YY] = meshgrid(xpts, ypts);
and compute some functions of it, like x^2 * y or x^2 - y^2. Get used to plotting them using quiver, or plotting the coordinates separately in intensity maps or surfaces.
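For example, here is a small self-contained sketch along those lines; the field U = -y, V = x is made up here just to stand in for your own system:
xpts = -10:0.5:10;
ypts = -10:0.5:10;
[XX, YY] = meshgrid(xpts, ypts);
U = -YY;                                     % example field: replace with your dx/dt
V = XX;                                      % example field: replace with your dy/dt
figure; quiver(XX, YY, U, V); axis equal;    % arrows of the vector field
phi = atan2(V, U);                           % angle of each vector with the x axis, in radians
figure; contourf(XX, YY, phi); colorbar;     % intensity map of the angles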

How to extrapolate matrix valued functions on Matlab?

I have a matrix-valued function whose limit as x goes to 1 I'm trying to find.
So, in this example, I have three matrices v1, v2 and v3, representing respectively the values sampled at [0.85, 0.9, 0.99]. What I do now, which is quite inefficient, is the following:
for i = 1:101
    for j = 1:160
        v_splined(i,j) = spline([0.85, 0.9, 0.99], [v1(i,j), v2(i,j), v3(i,j)], 1);
    end
end
There must be a better, more efficient way to do this, especially since soon enough I'll face the situation where the v's will be 4-5 dimensional vectors.
Thanks!
Disclaimer: naively extrapolating is risky business; do so at your own risk.
Here's what I would say:
Using a spline to extrapolate is risky business and not generally recommended. Do you know anything about the behavior of your function near x = 1?
In the case where you only have 3 points, you're probably better off using a 2nd order polynomial (a parabola) rather than fitting a spline through the three points (unless you have a good reason not to do this).
If you want to use a parabola (or higher order interpolating polynomial when you have more points), you can vectorize your code and use Lagrange or Newton polynomials to perform the extrapolation which will probably give you a nice speed up.
Using interpolating polynomials will also generalize easily to higher order polynomials with more points given. However, this will make extrapolation even more risky since high-order interpolating polynomials tend to oscillate severely near the ends of the domain.
If you want to use Lagrange polynomials to form a parabola, your result is given by:
v_splined = v1*(1-.9)*(1-.99)/( (.85-.9)*(.85-.99) ) ...
+v2*(1-.85)*(1-.99)/( (.9-.85)*(.9-.99) ) ...
+v3*(1-.85)*(1-.9)/( (.99-.85)*(.99-.9) );
I left this un-simplified so you can see how it comes from the Lagrange polynomials, but obviously simplifying is easy. Also note that this eliminates the need for loops.
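If you have more than three sample points, or just want to avoid hard-coding the abscissae, a sketch of the same idea in general form might look like this (xs, Vs and xq are placeholder names):
xs = [0.85, 0.9, 0.99];         % sample locations
Vs = cat(3, v1, v2, v3);        % stack the sampled matrices along a third dimension
xq = 1;                         % query point to extrapolate to
Vq = zeros(size(Vs,1), size(Vs,2));
for k = 1:numel(xs)
    others = xs([1:k-1, k+1:end]);
    w = prod(xq - others) / prod(xs(k) - others);   % Lagrange basis weight L_k(xq)
    Vq = Vq + w * Vs(:,:,k);                        % accumulate the weighted matrices
end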

How to vectorize the intersection kernel function in MATLAB?

I need to pre-compute the histogram intersection kernel matrices for using LIBSVM in MATLAB.
Assume x, y are two vectors. The kernel function is K(x, y) = sum(min(x, y)). In order to be efficient, the best practice in most cases is to vectorize the operations.
What I want to do is calculate the kernel matrix the same way one calculates the euclidean distance between two matrices, as in pdist2(A, B, 'euclidean'). After defining a function 'intKernel', I could calculate the intersection kernel by calling pdist2(A, B, intKernel).
I know the function 'pdist2' may be an option, but I have no idea how to write the self-defined distance function. In particular, I do not know how to code the intersection kernel between a vector (1-by-M) and a matrix (M-by-N) in one concise expression.
'repmat' may not be feasible, because the matrix is really large, let us say, 20000-by-360000.
Any help would be appreciated.
Regards,
Peiyun
I think pdist2 is a good option, so I'll help you define your distance function.
According to the doc, the self-defined distance function must have 2 inputs: first one is a 1-by-N vector; second one is a M-by-N matrix (be careful of the order!).
To avoid the use of repmat, which is indeed memory-consuming, you can use bsxfun to apply basic operations to your data with expansion over singleton dimensions. In your case, you can do the following:
distance_kernel = @(x,Y) sum(bsxfun(@min,x,Y),2);
Summation is done over the columns to get a column vector as output.
Then just call pdist2 and you are done.
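Putting it together, the call might look like this, where A (M1-by-N) and B (M2-by-N) are placeholder names for your data matrices with one observation per row:
distance_kernel = @(x,Y) sum(bsxfun(@min,x,Y),2);
K = pdist2(A, B, distance_kernel);   % M1-by-M2 matrix, K(i,j) = sum(min(A(i,:), B(j,:)))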

Polynomial data fitting in a special way in MATLAB

I have some data let's say the following vector:
[1.2 2.13 3.45 4.59 4.79]
And I want to get a polynomial function, say f, to fit this data. Thus, I want to go with something like polyfit. However, what polyfit does is minimize the sum of squared errors. But what I want is to have
f(1)=1.2
f(2)=2.13
f(3)=3.45
f(4)=4.59
f(5)=4.79
That is to say, I want to manipulate the fitting algorithm so that it will give me the exact points that I already gave as well as some fitted values where exact values are not given.
How can I do that?
I think everyone is missing the point. You said: "That is to say, I want to manipulate the fitting algorithm so that it will give me the exact points as well as some fitted values where exact fits are not present. How can I do that?"
To me, this means you wish an exact (interpolatory) fit for a listed set, and for some other points, you want to do a least squares fit.
You COULD do that using LSQLIN, by setting a set of equality constraints on the points to be fit exactly, and then allowing the rest of the points to be fit in a least squares sense.
The problem is, this will require a high order polynomial. To fit 5 points exactly, plus some others in a least squares sense, the order of the polynomial will need to be quite a bit higher than 4. And high order polynomials, especially those with constrained points, will do nasty things. But feel free to do what you will, just as long as you also expect a poor result.
Edit: I should add that a better choice is to use a least squares spline, which is something you CAN constrain to pass through a given set of points, while fitting other points in a least squares sense, and still not do something wild and crazy as a result.
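For what it's worth, a rough sketch of the LSQLIN idea applied to a polynomial (requires the Optimization Toolbox; the extra least-squares points xLS/yLS are made up here purely for illustration):
xExact = 1:5;  yExact = [1.2 2.13 3.45 4.59 4.79];   % points to hit exactly
xLS = 6:9;     yLS = [5.0 5.2 5.3 5.4];              % extra points, least-squares only (made up)
d = 6;                                               % polynomial degree; must exceed numel(xExact)-1
C   = bsxfun(@power, xLS(:),    d:-1:0);             % least-squares design matrix
Aeq = bsxfun(@power, xExact(:), d:-1:0);             % equality constraints: hit these exactly
p = lsqlin(C, yLS(:), [], [], Aeq, yExact(:));       % coefficients, highest power first
yFit = polyval(p, 0:0.1:9);                          % evaluate the constrained fit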
Polyfit does what you want. A degree N-1 polynomial can fit N points exactly; thus, when polyfit minimizes the sum of squared errors, it reaches 0, which is what you want.
y=[1.2 2.13 3.45 4.59 4.79];
x=[1:5];
coeffs = polyfit(x,y,4);
Will get you a polynomial that goes through all of your points.
What you ask for is known as Lagrange interpolation. There is a MATLAB File Exchange submission for it: http://www.mathworks.com/matlabcentral/fileexchange/899-lagrange-polynomial-interpolation
However, you should note that least squares polynomial fitting is generally preferred to Lagrange interpolation, since the data you have will in principle contain noise, and Lagrange interpolation fits the noise as well as the data. So if you know that your data actually comes from a degree-M polynomial and you have N data points, where N >> M, then Lagrange interpolation will give you a degree N-1 polynomial that follows that noise.
You have options.
Use polyfit, just give it enough leeway to perform an exact fit. That is:
values = [1.2 2.13 3.45 4.59 4.79];
p = polyfit(1:length(values), values, length(values)-1);
Now
polyval(p,2) %returns 2.13
Use interpolation / extrapolation
values = [1.2 2.13 3.45 4.59 4.79];
xInterp = 0:0.1:6;
valueInterp = interp1(1:length(values), values, xInterp, 'linear', 'extrap');
Interpolation provides a lot of options for smoothing, extrapolation, etc. For example, try:
valueInterp = interp1(1:length(values), values, xInterp, 'spline', 'extrap');
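Continuing the snippet above, a quick way to see how the two extrapolation choices differ is to plot them against the data:
vLin = interp1(1:length(values), values, xInterp, 'linear', 'extrap');
vSpl = interp1(1:length(values), values, xInterp, 'spline', 'extrap');
plot(1:length(values), values, 'ko', xInterp, vLin, 'b-', xInterp, vSpl, 'r--');
legend('data', 'linear extrapolation', 'spline extrapolation', 'Location', 'northwest');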