How can we get a 2D multinomial by interpolation in MATLAB?

We are given n x n data points. Approximate the function.
For example, if we are given f(1,1) = 2, f(1,2) = 3, f(2,1) = 4, f(2,2) = 5,
then we have to interpolate the 2D multinomial as $0*x*y + y + 2*x - 1$.

This problem can be formulated as a set of linear equations, which are trivial to solve using the mldivide function.
Let me illustrate this using the example you included.
% from the given example
in = [...
1,1;
1,2;
2,1;
2,2];
out = [...
2;
3;
4;
5];
% compute the variable terms in the polynomial
x = in(:,1);
y = in(:,2);
xy = x .* y;
c = ones(size(out)); % constant
% compute the coefficients of the polynomial
p = [xy,y,x,c] \ out;
% result: [0; 1; 2; -1]
The statement p = [xy,y,x,c] \ out computes the optimal coefficients (in the least-squares sense) when the problem is overconstrained (i.e. no solution exists that exactly satisfies all equations). But if there are exactly as many equations as there are unknowns (as in this example: 4 equations from 4 input-output pairs and 4 coefficients to estimate), then the coefficients could also be computed by p = inv([xy,y,x,c]) * out, although the backslash operator works, and is generally preferred, in both cases.
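For completeness, a quick sanity check (a minimal sketch reusing the variables defined above) is to evaluate the fitted polynomial back at the sample points:
% evaluate p(1)*x*y + p(2)*y + p(3)*x + p(4) at the original points
fitted = [xy, y, x, c] * p;
max(abs(fitted - out)) % should be zero up to round-off for this example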


Solve matrix DAE system in MATLAB

The equations can be found here. As you can see, it is a set of 8 scalar equations condensed into 3 matrix ones. In order to let MATLAB know that the equations are matrix-wise, I declare the time-dependent vector functions as:
syms t p1(t) p2(t) p3(t)
p(t) = symfun([p1(t);p2(t);p3(t)], t);
p = formula(p(t)); % allows indexing for vector p
% same goes for w(t) and m(t)...
Known matrices are declared as follows:
A = sym('A%d%d',[3 3]);
Fq = sym('Fq%d%d',[2 3]);
Im = diag(sym('Im%d%d',[1 3]));
The system is now ready to be modeled according to the guide:
eqs = [diff(p) == A*w + Fq'*m,...
diff(w) == -Im*p,...
Fq*w == 0];
vars = [p; w; m];
At this point, when I try to reduce the index (since it equals 2), I receive the following error:
[DAEs,DAEvars] = reduceDAEIndex(eqs,vars);
Error using sym/reduceDAEIndex (line 95)
Expecting as many equations as variables.
The error would not arise if we had declared all variables as scalars:
syms A Im Fq real p(t) w(t) m(t)
Quoting the symfun documentation (Tips section):
Symbolic functions are always scalars, therefore, you cannot index into a function.
However, it is hard for me to believe that it's not possible to solve these equations matrix-wise. Obviously, one can expand them into 8 scalar equations, but the multibody system concerned here is very simple and the aim is to be able to solve complex ones; hence the question: is it possible to solve a matrix DAE in MATLAB, and if so, what has to be fixed in order for this to work?
PS. I have another issue with the MATLAB DAE solver: the input variables (known coefficient functions) of my model are time-variant. As far as the example is concerned, they are constant over the whole domain; however, for my problem they change in time. This problem has been brought up here. I would be grateful if you referred to it, should you have any solution.
Finally, I managed to find the correct syntax for this problem. I had made the mistake of treating matrix variables (such as A, Fq) as single entities. Below I present code that utilizes the matrix approach and solves this particular DAE:
% Define symbolic variables.
q = sym('q%d',[3 1]); % state variables
a = sym('a'); k = sym('k'); % constant parameters
t = sym('t','real'); % independent variable
% Define system variables and group them in vectors:
p1(t) = sym('p1(t)'); p2(t) = sym('p2(t)'); p3(t) = sym('p3(t)');
w1(t) = sym('w1(t)'); w2(t) = sym('w2(t)'); w3(t) = sym('w3(t)');
m1(t) = sym('m1(t)'); m2(t) = sym('m2(t)');
pvect = [p1(t); p2(t); p3(t)];
wvect = [w1(t); w2(t); w3(t)];
mvect = [m1(t); m2(t)];
% Define matrices:
mass = diag(sym('ms%d',[1 3]));
Fq = [0 -1 a;
0 0 1];
A = [1 0 0;
0 1 a;
0 a -q(1)*a] * k;
% Define sets of equations and aggregate them into one set:
set1 = diff(pvect,t) == A*wvect + Fq'*mvect;
set2 = mass*diff(wvect,t) == -pvect;
set3 = Fq*wvect == 0;
eqs = [set1; set2; set3];
% Close all system variables in one vector:
vars = [pvect; wvect; mvect];
% Reduce index of the system and remove redundant equations:
[DAEs,DAEvars] = reduceDAEIndex(eqs,vars);
[DAEs,DAEvars] = reduceRedundancies(DAEs,DAEvars);
[M,F] = massMatrixForm(DAEs,DAEvars);
We receive a very simple 2x2 ODE for the two variables p1(t) and w1(t). Keep in mind that after reducing redundancies we got rid of all elements of the state vector q. This means that all remaining variables (k and mass(1,1)) are not time-dependent. If some variables within the system had been time-dependent, the case would have been much harder to solve.
% Replace symbolic variables with numeric ones:
M = odeFunction(M, DAEvars,mass(1,1));
F = odeFunction(F, DAEvars, k);
k = 2000; numericMass = 4;
F = @(t, Y) F(t, Y, k);
M = @(t, Y) M(t, Y, numericMass);
% set the solver:
opt = odeset('Mass', M); % Mass matrix of the system
TIME = [1; 0]; % Time boundaries of the simulation (backwards in time)
y0 = [1 0]'; % Initial conditions for left variables p1(t) and w1(t)
% Call the solver
[T, solution] = ode15s(F, TIME, y0, opt);
% Plot results
plot(T,solution(:,1),T,solution(:,2))

How to generate frequency response given b,a coefficients of the system?

I have the following system, specified by the set of coefficients:
b = [1 2 3];
a = [1 .5 .25];
In the Z-domain, such a system has the transfer function:
H(z) = Y(z)/X(z) = (1 + 2*z^-1 + 3*z^-2) / (1 + 0.5*z^-1 + 0.25*z^-2)
So the frequency response is obtained by evaluating H(z) on the unit circle:
H(e^jw) = Y(e^jw)/X(e^jw)
Do I just substitute e^jw for z in my transfer function to obtain the frequency response of the system mathematically, on paper? It seems a bit ridiculous from my (a student's) point of view.
Have you tried freqz()? It returns the frequency response vector, h, and the corresponding angular frequency vector, w, for the digital filter with numerator and denominator polynomial coefficients stored in b and a, respectively.
In your case, simply follow the help:
[h,w]=freqz(b,a);
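For instance, to visualize the result you can plot the magnitude of h against the normalized frequency (a minimal sketch; by default freqz returns 512 points between 0 and pi):
b = [1 2 3];
a = [1 .5 .25];
[h,w] = freqz(b,a);
plot(w/pi, 20*log10(abs(h)))
xlabel('\omega / \pi'); ylabel('Magnitude (dB)')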
You do substitute e^jw for z; this isn't ridiculous. Then you just sweep w from -pi to pi. The magnitude response is the absolute value of the result.
As Alessiox mentioned, freqz is the command you want to use in MATLAB.
It would indeed be as simple as substituting exp(j*w) into your transfer function. There are of course different ways to implement this in MATLAB. For the purpose of illustration, I will assume the b's are the coefficients of the x sequence and the a's are the coefficients of the y sequence, such that the b's are in the numerator and the a's are in the denominator:
A direct evaluation with Matlab could be done with:
b = [1 2 3];
a = [1 .5 .25];
N = 513; % number of points at which to evaluate the transfer function
w = linspace(0,2*pi,N);
num = 0;
for i = 1:length(b)
    num = num + b(i) * exp(-j*i*w);
end
den = 0;
for i = 1:length(a)
    den = den + a(i) * exp(-j*i*w);
end
H = num ./ den;
This would be equivalent to the following which makes use of the builtin polyval:
N = 513; % number of points at which to evaluate the transfer function
w = linspace(0,2*pi,N);
H = polyval(fliplr(b),exp(-j*w))./polyval(fliplr(a),exp(-j*w));
Also, this is really evaluating the transfer function at discrete equally spaced angular frequencies w = 2*pi*k/N which corresponds to the Discrete Fourier Transform (DFT). As such it could also be done with:
N = 512;
H = fft(b,N) ./ fft(a,N);
Incidentally this is what freqz does, so you could also get the same result with:
N = 512;
H = freqz(b,a,N,'whole');
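As a quick consistency check (a minimal sketch, assuming the b and a from above), the DFT-based and freqz-based evaluations use the same equally spaced frequency grid and should agree to machine precision:
N = 512;
H_fft = fft(b,N) ./ fft(a,N);
H_freqz = freqz(b,a,N,'whole');
max(abs(H_fft(:) - H_freqz(:))) % should be on the order of eps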

Use a differential matrix operator to solve an ODE

We were asked to define our own differential operators in MATLAB, which I did following a series of steps, and then we should use those differential operators to solve a boundary value problem:
-y'' + 2y' - y = x, y(0) = y(1) = 0
My code was as follows; it was used to compute the first and second derivatives:
h = 2;
x = 2:h:50;
y = x.^2 ;
n=length(x);
uppershift = 1;
U = diag(ones(n-abs(uppershift),1),uppershift);
lowershift = -1;
L= diag(ones(n-abs(lowershift),1),lowershift);
% the code above creates the upper and lower shift matrix
D = ((U-L))/(2*h); %first differential operator
D2 = (full (gallery('tridiag',n)))/ -(h^2); %second differential operator
d1= D*y.'
d2= ((D2)*y.')
Then I changed it to this, after posting it here and getting one response that encouraged the usage of an identity matrix; however, I still seem to be getting nowhere.
h = 2;
n=10;
uppershift = 1;
U = diag(ones(n-abs(uppershift),1),uppershift);
lowershift = -1;
L= diag(ones(n-abs(lowershift),1),lowershift);
D = ((U-L))/(2*h); %first differential operator
D2 = (full (gallery('tridiag',n)))/ -(h^2); %second differential operator
I= eye(n);
eqn=(-D2 + 2*D - I)*y == x
solve(eqn,y)
I am not sure how to proceed with this. Should I define y and x, or what exactly? I am clueless!
Because this is a numerical approximation to the solution of the ODE, you are seeking to find a numerical vector that is representative of the solution to this ODE from time x=0 to x=1. This means that your boundary conditions make it so that the solution is only valid between 0 and 1.
Also, this is now the reverse problem. In the previous post we did together, you knew what the input vector was, and a matrix-vector multiplication produced the output: the derivative operation applied to that input vector. Now you are given the output of the derivative and you are seeking the original input. This involves solving a linear system of equations.
Essentially, your problem is now this:
Y*X = F
Y contains the coefficients from the matrix derivative operators that you derived, and is an n x n matrix; X is the solution to the ODE, an n x 1 vector; and F is the function you are associating the ODE with, also an n x 1 vector. In our case, that is x. To find Y, you've pretty much done that already in your code: you simply take each matrix operator (first and second derivative) and add them together with the proper signs and scales to respect the left-hand side of the ODE. BTW, your first derivative and second derivative matrices are correct. What's left is adding the -y term to the mix, and that is accomplished by -eye(n), as you have found out in your code.
Once you formulate your Y and F, you can use the mldivide or \ operator and solve for X and get the solution to this linear system via:
X = Y \ F;
The above essentially solves the linear system of equations formed by Y and F and will be stored in X.
The first thing you need to do is define a vector of points going from x=0 to x=1. linspace is probably the most suitable where you can specify how many points we want. Let's assume 100 points for now:
x = linspace(0,1,100);
Therefore, h in our case is roughly 1/100 (strictly, linspace(0,1,100) produces a spacing of 1/99, but the difference is negligible here). In general, if you want to solve from the starting point x = a up to the end point x = b, the step size h is defined as h = (b - a)/n, where n is the total number of points you want to solve for in the ODE.
Now, we have to include the boundary conditions. This simply means that we know the values of the solution at the beginning and end of the interval, i.e. y(0) = y(1) = 0. As such, we make sure that the first row of Y has only the first column set to 1 and the last row of Y has only the last column set to 1, and we set the corresponding entries of F to 0. This encodes the fact that we already know the solution at these points.
Therefore, your final code to solve is just:
%// Setup
a = 0; b = 1; n = 100;
x = linspace(a,b,n);
h = (b-a)/n;
%// Your code
uppershift = 1;
U = diag(ones(n-abs(uppershift),1),uppershift);
lowershift = -1;
L= diag(ones(n-abs(lowershift),1),lowershift);
D = ((U-L))/(2*h); %first differential operator
D2 = (full (gallery('tridiag',n)))/ -(h^2);
%// New code - Create differential equation matrix
Y = (-D2 + 2*D - eye(n));
%// Set boundary conditions on system
Y(1,:) = 0; Y(1,1) = 1;
Y(end,:) = 0; Y(end,end) = 1;
%// New code - Create F vector and set boundary conditions
F = x.';
F(1) = 0; F(end) = 0;
%// Solve system
X = Y \ F;
X should now contain your numerical approximation to the ODE in steps of h = 1/100 starting from x=0 up to x=1.
Now let's see what this looks like:
figure;
plot(x, X);
title('Solution to ODE');
xlabel('x'); ylabel('y');
You can see that y(0) = y(1) = 0 as per the boundary conditions.
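As an additional numerical check (a minimal sketch, assuming the variables from the script above are still in the workspace), you can confirm that the computed vector satisfies the discretized system and the boundary conditions:
norm(Y*X - F) % residual of the linear system; ~0 up to round-off
[X(1), X(end)] % boundary values; both should be 0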
Hope this helps, and good luck!

Computing an ODE in Matlab

Given a system of the form y' = A*y(t) with solution y(t) = e^(tA)*y(0), where e^A is the matrix exponential (i.e. the sum from n=0 to infinity of A^n/n!), how would I use MATLAB to compute the solution given the values of matrix A and the initial values for y?
That is, given A = [-2.1, 1.6; -3.1, 2.6] and y(0) = [1;2], how would I solve for y(t) = [y1; y2] on t = [0:5] in MATLAB?
I try to use something like
t = 0:5
[y1; y2] = expm(A.*t).*[1;2]
and I'm finding errors in computing the multiplication due to dimensions not agreeing.
Please note that the matrix exponential is defined only for square matrices. Your attempt to multiply the attenuation coefficients by the time vector doesn't give you what you want (which would have to be a 3D array exponentiated slice by slice).
One of the simple ways would be this:
A = [-2.1, 1.6; -3.1, 2.6];
t = 0:5;
n = numel(t); %'number of samples'
y = NaN(2, n);
y(:,1) = [1;2];
for k = 2:n
    y(:,k) = expm(t(k)*A) * y(:,1);
end
figure();
plot(t, y(1,:), t, y(2,:));
Please note that in MATLAB arrays are indexed from 1.
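If you want an independent cross-check (an illustrative sketch with default solver tolerances), you can integrate the same system with a generic ODE solver and compare:
A = [-2.1, 1.6; -3.1, 2.6];
tspan = 0:5;
[tOde, yOde] = ode45(@(t,y) A*y, tspan, [1; 2]);
% yOde(k,:) should closely match y(:,k).' from the expm-based loop above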

Using the size of scatter points to weight the line of best fit in MATLAB

Is it possible to use the size (s) of the points to 'weight' the line of best fit?
x = [1 2 3 4 5];
y = [2 4 5 3 4];
s = [10 15 20 2 5];
scatter(x,y,s)
hold on
weight = s;
p = polyfit(x,y,1); %how do I take into account the size of the points?
f = polyval(p,x);
plot(x,f,'-r')
Using Marcin's suggestion, you can incorporate lscov into polyfit. As the documentation explains, polynomial fitting is done by computing the Vandermonde matrix V of x, and then executing p = V\y. This is the standard formalism of any least-squares solution, and lends itself to weighted-least-squares in MATLAB through lscov.
Taking your x, y and weight vectors, instead of calling polyfit(x,y,n) you can do the following:
% Construct the Vandermonde matrix. This code is adapted from polyfit.m
n = 1; % polynomial degree (a straight-line fit, as in the question)
V(:,n+1) = ones(length(x),1,class(x));
for j = n:-1:1
    V(:,j) = x(:).*V(:,j+1);
end
% Solve using weighted least squares (lscov expects column vectors here)
p = lscov(V,y(:),weight(:));
You can even go one step further and modify polyfit.m itself to include this functionality, or add another function polyfitw.m if you are not inclined to modify original MATLAB functions. Note, however, that polyfit has some more optional outputs (the structure used for error estimates), computed using QR decomposition as detailed in the documentation. Generalizing these outputs to the weighted case will require some more work.
For reference, here is a minimal standalone example of a weighted straight-line fit with lscov:
x = (1:10)';
y = (3 * x + 5) + rand(length(x),1)*5;
w = ones(length(y),1);
A = [x ones(length(x),1)];
p = lscov(A,y,w);
plot(x,y,'.');
hold on
plot(x,p(1)*x + p(2),'-r');
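Tying this back to the original question, here is a minimal sketch of a weighted straight-line fit through the scatter data, using the marker sizes s directly as weights (whether that is the weighting you want is up to you):
x = [1 2 3 4 5];
y = [2 4 5 3 4];
s = [10 15 20 2 5];
V = [x(:), ones(numel(x),1)]; % Vandermonde matrix for a degree-1 fit
p = lscov(V, y(:), s(:)); % weighted least squares: p(1) is the slope, p(2) the intercept
scatter(x, y, s)
hold on
plot(x, p(1)*x + p(2), '-r')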