Continuous-time system discretization and matrix exponential, "Output truncated" - MATLAB

There is a continuous-time system shown below:
The MATLAB code is:
t = 0.01;
syms s;
a2 = [0 0 -285.7143; 0 -0.4533 9.0662; 5.2650 -5.2131 -42.5958];
b2 = [571.4286; 0; 82.5714];
c2 = [1 0 0];
A2 = expm(a2*t);
B2 = int(expm(a2*s), s, 0, t)*b2;
However, when I calculate B2, MATLAB displays 'output truncated'.
Please help me. Thanks a lot.

I don't think it's necessary to use symbolic math for this integral of a matrix-valued function. Instead, you can use integral with the 'ArrayValued' option:
t = 0.01;
a2 = [0 0 -285.7143;
0 -0.4533 9.0662;
5.2650 -5.2131 -42.5958];
integral(@(s) expm(a2*s), 0, t, 'ArrayValued', true)
This is much faster and returns a result very similar to syms s; double(int(expm(a2*s), s, 0, t)) (ignore the minuscule imaginary parts due to numerical error).
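For completeness, here is a minimal sketch of the full discretization with the numeric approach (same matrices as in the question; the Symbolic Math Toolbox is not needed):
t = 0.01;
a2 = [0 0 -285.7143; 0 -0.4533 9.0662; 5.2650 -5.2131 -42.5958];
b2 = [571.4286; 0; 82.5714];
A2 = expm(a2*t);                                               % discrete-time state matrix
B2 = integral(@(s) expm(a2*s), 0, t, 'ArrayValued', true)*b2;  % discrete-time input matrix
If you have the Control System Toolbox, c2d(ss(a2, b2, c2, 0), t, 'zoh') should produce the same A2 and B2, which is a convenient cross-check.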

Related

Question about solving a system of equations using symbolic math

I want to use symbolic variables to solve a system of linear equations, so I prepared the following code.
A=[1,2;3,4];
% syms x
x=sym('x_%d',[2 1]);
eqn=A*x==[1;2];
result=solve(eqn,x)
Interestingly, it works, but when I inspect the variable result, it is a 1x1 struct whose fields x_1 and x_2 are 1x1 sym. What I expected to get is 2 real values. Why? Could someone explain it? Remark: I do not want to use A^-1*[1;2] to obtain the answer.
If you assign the output to a single variable, solve returns a structure
that contains all the solutions; to get each solution, use
dot indexing, like result.x_1 or result.x_2.
The code is as follows
A=[1,2;3,4];
% syms x
x=sym('x_%d',[2 1]);
eqn=A*x==[1;2];
result = solve(eqn,x);
result.x_1
% 0
result.x_2
% 1/2
If you want result as an array, use the multiple-output form, with
result(1) for the first variable and result(2) for the second variable.
The code is as follows
A=[1,2;3,4];
% syms x
x=sym('x_%d',[2 1]);
eqn=A*x==[1;2];
[result(1), result(2)] = solve(eqn,x);
result
% result = [0 , 1/2]
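If you actually want plain numeric values rather than sym objects, you can convert the solution afterwards. A minimal sketch, reusing A and eqn from above:
A = [1,2;3,4];
x = sym('x_%d',[2 1]);
eqn = A*x == [1;2];
[x1, x2] = solve(eqn, x);        % two symbolic outputs
x_num = double([x1; x2])         % numeric column vector: [0; 0.5]
The result agrees with the purely numeric solution A\[1;2], which is a useful cross-check even if you prefer not to use it directly.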

How do I compute the determinant of a transfer function matrix without having to use "syms"?

I intend to compute the determinant of a transfer function matrix and then subject it to a Nyquist analysis by making the Nyquist plot, but the problem is that the det command doesn't recognize the transfer function matrix. The code is shown below:
clc
clear all;
close all;
g11 = tf(12.8,[16.7 1],'InputDelay',1)
g12 = tf(-18.9,[21 1],'InputDelay',3)
g21 = tf(6.6,[10.9 1],'InputDelay',7)
g22 = tf(-19.4,[14.4 1],'InputDelay',3)
G=[g11 g12 ; g21 g22]
[re,im,w] = nyquist(G)
F=2.55;
s=tf('s');
%syms s;
ggc11 = g11*(0.96*(1+3.25*F*s)/(3.25*F^2*s))
ggc12 = g12*(0.534*(1+3.31*F*s)/(3.31*F^2*s))
ggc21 = g21*(0.96*(1+3.25*F*s)/(3.25*F^2*s))
ggc22 = g22*(0.534*(1+3.31*F*s)/(3.31*F^2*s))
GGc=[ggc11 ggc12 ; ggc21 ggc22];
L=eye(2)+ GGc;
W= -1 + det(L)
nyquist(W)
The error that appears is as follows
Undefined function 'det' for input arguments of type 'ss'.
Error in BLT_code (line 30)
W= -1 + det(L)
I would like to avoid the 'syms' command, as I would not be able to do the Nyquist plot then. Is there any alternative way of computing the Nyquist plot?
I am in the same boat, trying to calculate the determinant of transfer function matrices to check the MIMO Nyquist stability criterion, see MIMO Stability ETH Zurich Lecture slides (pg 10). Unfortunately there does not seem to be a simple MATLAB command for this, but it can be evaluated manually.
If you have a TF matrix G(s) of the following form:
G = [g_11 g_12; g_21 g_22];
you can obtain the determinant by evaluating it as per its original definition as
det_G = g_11*g_22 - g_12*g_21;
This will result in a 1x1 TF variable. Of course, this method gets much too complicated for anything above a 2x2 system.
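Applied to the code in the question (reusing ggc11, ggc12, ggc21 and ggc22 defined there), a minimal sketch of this manual expansion would be:
l11 = 1 + ggc11;   l12 = ggc12;
l21 = ggc21;       l22 = 1 + ggc22;
det_L = l11*l22 - l12*l21;   % det(eye(2) + GGc), expanded by hand
W = -1 + det_L;
nyquist(W)
Because of the input delays, the intermediate results may be returned as state-space models with internal delays rather than tf objects, but nyquist accepts those as well.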

Getting rank deficient warning when using regress function in MATLAB

I have a dataset comprising 30 independent variables, and I tried performing linear regression in MATLAB R2010b using the regress function.
I get a warning stating that my matrix X is rank deficient to within machine precision.
Now, the coefficients I get after executing this function don't match the experimental ones.
Can anyone please suggest how to perform the regression analysis for this equation with 30 variables?
Following on from our discussion, the reason you are getting that warning is that you have what is known as an underdetermined system: there are more variables to solve for than there are constraints from the available data. One example of an underdetermined system is:
x + y + z = 1
x + y + 2z = 3
There are an infinite number of combinations of (x, y, z) that solve the above system; for example, (x, y, z) = (1, −2, 2), (2, −3, 2), and (3, −4, 2). What rank deficient means in your case is that there is more than one set of regression coefficients that satisfies the governing equation describing the relationship between your input variables and output observations. This is probably why the output of regress isn't matching up with your ground-truth regression coefficients. Though it isn't the same answer, do know that the output is one possible answer. Running regress with your data, this is what I get if I define your observation matrix to be X and your output vector to be Y:
>> format long g;
>> B = regress(Y, X);
>> B
B =
0
0
28321.7264417536
0
35241.9719076362
899.386999172398
-95491.6154990829
-2879.96318251964
-31375.7038251919
5993.52959752106
0
18312.6649115112
0
0
8031.4391233753
27923.2569044728
7716.51932560781
-13621.1638587172
36721.8387047613
80622.0849069525
-114048.707780113
-70838.6034825939
-22843.7931997405
5345.06937207617
0
106542.307940305
-14178.0346010715
-20506.8096166108
-2498.51437396558
6783.3107243113
You can see that there are seven regression coefficients equal to 0, which corresponds to 30 - 23 = 7: we have 30 variables and only 23 constraints (observations) to work with. Be advised that this is not the only possible solution. regress essentially computes the least-squares solution, i.e. the B that minimizes the norm of the residual Y - X*B. This simplifies to:
B = X^+ * Y
where X^+ is the (Moore-Penrose) pseudo-inverse of X. MATLAB has this available as pinv. Therefore, if we did:
B = pinv(X)*Y
We get:
B =
44741.6923363563
32972.479220139
-31055.2846404536
-22897.9685877566
28888.7558524005
1146.70695371731
-4002.86163441217
9161.6908044046
-22704.9986509788
5526.10730457192
9161.69080479427
2607.08283489226
2591.21062004404
-31631.9969765197
-5357.85253691504
6025.47661106009
5519.89341411127
-7356.00479046122
-15411.5144034056
49827.6984426955
-26352.0537850382
-11144.2988973666
-14835.9087945295
-121.889618144655
-32355.2405829636
53712.1245333841
-1941.40823106236
-10929.3953469692
-3817.40117809984
2732.64066796307
You see that there are no zero coefficients, because pinv returns the minimum L2-norm solution, which tends to "spread" the weight across all of the coefficients rather than zeroing some of them out. You can verify that these are valid regression coefficients by doing:
>> Y2 = X*B
Y2 =
16.1491563400241
16.1264219600856
16.525331600049
17.3170318001845
16.7481541301999
17.3266932502295
16.5465094100486
16.5184456100487
16.8428701100165
17.0749421099829
16.7393450000517
17.2993993099419
17.3925811702017
17.3347117202356
17.3362798302375
17.3184486799219
17.1169638102517
17.2813552099096
16.8792925100727
17.2557945601102
17.501873690151
17.6490477001922
17.7733493802508
Similarly, if we use the regression coefficients from regress, i.e. B = regress(Y, X); and then do Y2 = X*B, we get:
Y2 =
16.1491563399927
16.1264219599996
16.5253315999987
17.3170317999969
16.7481541299967
17.3266932499992
16.5465094099978
16.5184456099983
16.8428701099975
17.0749421099985
16.7393449999981
17.2993993099983
17.3925811699993
17.3347117199991
17.3362798299967
17.3184486799987
17.1169638100025
17.281355209999
16.8792925099983
17.2557945599979
17.5018736899983
17.6490476999977
17.7733493799981
There are some slight computational differences, which is to be expected. Similarly, we can also find the answer by using mldivide:
B = X \ Y
B =
0
0
28321.726441712
0
35241.9719075889
899.386999170666
-95491.6154989513
-2879.96318251572
-31375.7038251485
5993.52959751295
0
18312.6649114859
0
0
8031.43912336425
27923.2569044349
7716.51932559712
-13621.1638586983
36721.8387047123
80622.0849068411
-114048.707779954
-70838.6034824987
-22843.7931997086
5345.06937206919
0
106542.307940158
-14178.0346010521
-20506.8096165825
-2498.51437396236
6783.31072430201
You can see that this curiously matches up with what regress gives you. That's because \ is a smarter operator: depending on how your matrix is structured, it finds the solution to the system by a different method. I'd like to refer you to the post by Amro that discusses which algorithms mldivide uses depending on the properties of the input matrix:
How to implement Matlab's mldivide (a.k.a. the backslash operator "\")
What you should take away from this answer is that you can certainly go ahead and use those regression coefficients: they will more or less reproduce the expected output Y for each set of inputs in X. However, be warned that those coefficients are not unique. This is apparent from the fact that your ground-truth coefficients don't match the output of regress: it isn't matching up because regress generated another answer that satisfies the constraints you provided.
As the experiments above show, when you have an underdetermined system there is more than one set of coefficients that can describe that relationship.
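To see the non-uniqueness on a small, self-contained example (your X and Y are not reproduced here, so this uses made-up data), compare the basic solution from mldivide with the minimum-norm solution from pinv:
rng(0);                 % reproducible made-up data
X = randn(5, 8);        % 5 observations, 8 variables -> underdetermined
Y = randn(5, 1);
B1 = X \ Y;             % basic solution: at most 5 non-zero coefficients
B2 = pinv(X)*Y;         % minimum-norm solution: generally all non-zero
norm(X*B1 - Y)          % ~0
norm(X*B2 - Y)          % ~0 as well
norm(B1 - B2)           % clearly non-zero: two different, equally valid answers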

Matlab gamma function: I get Inf for large values

I am writing my own code for the pdf of the multivariate t-distribution in Matlab.
There is a piece of code that includes the gamma function.
gamma((nu+D)/2) / gamma(nu/2)
The problem is that nu=1000, and so I get Inf from the gamma function.
It seems I will have to use some mathematical property of the gamma
function to rewrite it in a different way.
Thanks for any suggestions
You can use the function gammaln(x), which is the equivalent of log(gamma(x)) but avoids the overflow issue. The function you wrote is equivalent to:
exp(gammaln((nu+D)/2) - gammaln(nu/2))
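As a quick sanity check with made-up small values (nu = 10, D = 3), where the direct ratio does not overflow, both forms agree:
nu = 10; D = 3;
direct = gamma((nu+D)/2) / gamma(nu/2);           % fine for small nu
stable = exp(gammaln((nu+D)/2) - gammaln(nu/2));  % also fine for nu = 1000
fprintf('%.12g  %.12g\n', direct, stable);        % prints the same value twice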
The number gamma(1000/2) is larger than the largest number MATLAB can represent, so it returns Inf. To see the maximum representable number in MATLAB, check realmax. For your case, if D is not very large, you can rewrite the formula as a finite product. Let us assume that in your case D is an even number. Then, applying gamma(x+1) = x*gamma(x) repeatedly, the ratio becomes: (nu/2) * (nu/2 + 1) * ... * (nu/2 + D/2 - 1).
p = 1;
for i = 0:(D/2 - 1)
    p = p*(nu/2 + i);
end
Then p will be the result you want.
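For example, with nu = 1000 and D = 4 (made-up values), the loop above agrees with the gammaln approach, while the naive ratio overflows:
nu = 1000; D = 4;
p = 1;
for i = 0:(D/2 - 1)
    p = p*(nu/2 + i);
end
stable = exp(gammaln((nu+D)/2) - gammaln(nu/2));
fprintf('%.12g  %.12g\n', p, stable)    % both print 250500
gamma((nu+D)/2) / gamma(nu/2)           % NaN, because both gamma calls overflow to Inf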

Discrete Fourier Transform in MATLAB

I am asked to write a mixed-radix FFT in MATLAB, but before that I want to do a discrete Fourier transform in a straightforward way, so I decided to write the code according to the DFT formula, X(k) = sum over n from 0 to N-1 of x(n)*exp(-1i*2*pi*k*n/N), as defined on Wikipedia:
http://en.wikipedia.org/wiki/Discrete_Fourier_transform
So I wrote my code as follows:
%Brute-force discrete Fourier transform
function Y = dft(X)
%Get the length of the input sequence
N = numel(X);
%====================
%Preallocate the output array
Y = zeros(1, N);
%=========================================
for k = 0 : (N-1)
    st = 0; %running sum for the k-th output bin
    for n = 0 : (N-1)
        t = X(n+1)*exp(-1i*2*pi*k*n/N);
        st = st + t;
    end
    Y(k+1) = st;
end
%=============================================
However, my code seems to output a result different from the one given by this website:
http://www.random-science-tools.com/maths/FFT.htm
Can you please help me find where exactly the problem is?
Thank you!
Edit: Never mind, it seems that my code is correct.
By default the calculator in the web link applies a window function to the data before doing the FFT. Could that be the reason for the difference? You can turn windowing off from the drop down menu.
BTW, there is a built-in fft function in MATLAB.
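As a quick check (with a made-up test vector), the hand-written dft above should agree with the built-in fft to within floating-point round-off:
x = [1 2 3 4 5 6 7 8];    % made-up test sequence
Y1 = dft(x);              % brute-force DFT from above
Y2 = fft(x);              % built-in FFT
max(abs(Y1 - Y2))         % should be on the order of 1e-14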