Matlab vector return to multiple vectors

num = zeros(1,freq);
den = zeros(1,freq);
for R = 1:freq
    [num(R), den(R)] = butter(4, [0.1 0.9]);
end
I thought it was quite trivial but once I run it, I get:
In an assignment A(I) = B, the number of elements in B and I must be the same.
What am I doing wrong?

What you are doing wrong is that both num and den will each contain multiple coefficients:
[b,a] = butter(n,Wn) returns the transfer function coefficients of an nth-order lowpass digital Butterworth filter with normalized cutoff frequency Wn.
b,a — Transfer function coefficients
row vectors
As copied from the documentation
The way to get your code working would be to store the results in either a matrix or a cell array. Note that with a two-element Wn, butter designs a bandpass filter of order 2n, so for butter(4, ...) both num and den have 2*4+1 = 9 elements:
num = zeros(freq,9);
den = zeros(freq,9);
for R = 1:freq
    [num(R,:), den(R,:)] = butter(4, [A(R) B(R)]); % matrix: one design per row
end
num = cell(freq,1); % for the cell-array variant, preallocate cells instead
den = cell(freq,1);
for R = 1:freq
    [num{R}, den{R}] = butter(4, [A(R) B(R)]); % cell
end
Probably the matrix is better suited for your purposes.
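To apply the R-th design later, index one row of each matrix; a minimal usage sketch (x is a hypothetical input signal, not in the original):
y = filter(num(R,:), den(R,:), x); % filter x with the R-th bandpass design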

Related

My example shows SVD is less numerically stable than QR decomposition

I asked this question on Math StackExchange, but it didn't get enough attention there, so I am asking it here: https://math.stackexchange.com/questions/1729946/why-do-we-say-svd-can-handle-singular-matrx-when-doing-least-square-comparison?noredirect=1#comment3530971_1729946
I learned from some tutorials that SVD should be more stable than QR decomposition when solving least-squares problems, and that it is able to handle singular matrices. But the following example I wrote in MATLAB seems to support the opposite conclusion. I don't have a deep understanding of SVD, so if you could look at the questions in my old post on Math StackExchange and explain them to me, I would appreciate it a lot.
I use a matrix that has a large condition number (~1e13). The result shows SVD getting a much larger error (0.8) than QR (~1e-27).
% we do a linear regression between Y and X
data= [
47.667483331 -122.1070832;
47.667483331001 -122.1070832
];
X = data(:,1);
Y = data(:,2);
X_1 = [ones(length(X),1),X];
%%
%SVD method
[U,D,V] = svd(X_1,'econ');
beta_svd = V*diag(1./diag(D))*U'*Y;
%% QR method (one could also use the "\" operator here, which gives the same result, as I tested; I just wrote out the backward substitution to educate myself)
[Q,R] = qr(X_1)
%now do backward substitution
[nr nc] = size(R)
beta_qr=[]
Y_1 = Q'*Y
for i = nc:-1:1
    s = Y_1(i)
    for j = nc:-1:i+1
        s = s - R(i,j)*beta_qr(j)
    end
    beta_qr(i) = s/R(i,i)
end
svd_error = 0;
qr_error = 0;
for i=1:length(X)
svd_error = svd_error + (Y(i) - beta_svd(1) - beta_svd(2) * X(i))^2;
qr_error = qr_error + (Y(i) - beta_qr(1) - beta_qr(2) * X(i))^2;
end
Your SVD-based approach is basically the same as the pinv function in MATLAB (see Pseudo-inverse and SVD). What you are missing, though (for numerical reasons), is a tolerance value such that any singular values less than this tolerance are treated as zero.
If you refer to edit pinv.m, you can see something like the following (I won't post the exact code here because the file is copyrighted to MathWorks):
[U,S,V] = svd(A,'econ');
s = diag(S);
tol = max(size(A)) * eps(norm(s,inf));
r = sum(s > tol); % truncate: keep only singular values above the tolerance
invS = diag(1./s(1:r));
out = V(:,1:r)*invS*U(:,1:r)';
In fact, pinv has a second syntax, pinv(A,tol), where you can explicitly specify the tolerance value if the default one is not suitable...
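For example, a hypothetical call with an explicit tolerance:
x = pinv(A, 1e-10) * b; % singular values below 1e-10 are treated as zero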
So when solving a least-squares problem of the form minimize norm(A*x-b), you should understand that the pinv and mldivide solutions have different properties:
x = pinv(A)*b is characterized by the fact that norm(x) is smaller than the norm of any other solution.
x = A\b has the fewest possible nonzero components (i.e. it is sparse).
Using your example (note that rcond(A) is very small, near machine epsilon):
data = [
47.667483331 -122.1070832;
47.667483331001 -122.1070832
];
A = [ones(size(data,1),1), data(:,1)];
b = data(:,2);
Let's compare the two solutions:
x1 = A\b;
x2 = pinv(A)*b;
First you can see how mldivide returns a solution x1 with one zero component (this is a valid solution because with the second component set to zero, both equations reduce to b + a*0 = b, and both right-hand sides are equal here):
>> sol = [x1 x2]
sol =
-122.1071 -0.0537
0 -2.5605
Next you see how pinv returns a solution x2 with a smaller norm:
>> nrm = [norm(x1) norm(x2)]
nrm =
122.1071 2.5611
Here is the error of both solutions which is acceptably very small:
>> err = [norm(A*x1-b) norm(A*x2-b)]
err =
1.0e-11 *
0 0.1819
Note that using mldivide, linsolve, or qr will give pretty much the same results:
>> x3 = linsolve(A,b)
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 2.159326e-16.
x3 =
-122.1071
0
>> [Q,R] = qr(A); x4 = R\(Q'*b)
x4 =
-122.1071
0
SVD can handle rank deficiency. The diagonal matrix D has a near-zero element in your code, so you need to use a pseudoinverse with the SVD, i.e. set the second element of 1./diag(D) to 0 instead of the huge value (~1e14). You should find that SVD and QR have equally good accuracyy in your example. For more information, see this document: http://www.cs.princeton.edu/courses/archive/fall11/cos323/notes/cos323_f11_lecture09_svd.pdf
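A minimal sketch of that fix, reusing X_1 and Y from the question and the same default tolerance that pinv uses:
[U,D,V] = svd(X_1,'econ');
d = diag(D);
tol = max(size(X_1)) * eps(max(d)); % pinv's default tolerance
dinv = zeros(size(d));
dinv(d > tol) = 1./d(d > tol); % zero out the near-singular direction
beta_svd = V*diag(dinv)*(U'*Y); % accuracy now comparable to QR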
Try this SVD version, called block SVD. You just set the number of iterations according to the accuracy you want; usually 1 is enough. If you want all the factors (a default number is selected here for factor reduction), then edit the k = ... line to size(A,2), if I recall my MATLAB correctly.
A= randn(100,5000);
A=corr(A);
% A is your correlation matrix
tic
k = 1000; % number of factors to extract
bsize = k +50;
block = randn(size(A,2),bsize);
iter = 2; % could set via tolerance
[block,R] = qr(A*block,0);
for i=1:iter
    [block,R] = qr(A*(A'*block),0);
end
M = block'*A;
% Economy size dense SVD.
[U,S] = svd(M,0);
U = block*U(:,1:k);
S = S(1:k,1:k);
% Note SVD of a symmetric matrix is:
% A = U*S*U' since V=U in this case, S=eigenvalues, U=eigenvectors
V=real(U*sqrt(S)); %scaling matrix for simulation
toc
% reduced randomized matrix for simulation
sims = 2000;
randnums = randn(k,sims);
corrrandnums = V*randnums;
est_corr_matrix = corr(corrrandnums');
total_corrmatrix_difference = sum(sum(est_corr_matrix - A))
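As a sanity check, MATLAB's built-in svds computes the k largest singular values and vectors directly (it can be slow for many factors on large dense matrices, which is the motivation for the randomized version above):
[U2,S2,V2] = svds(A, k); % k largest singular triplets of A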

Matlab - How to improve efficiency of two port matrix calculations?

I'm looking for a way to speed up some simple two-port matrix calculations. See the code example below for what I'm doing currently. In essence, I first create an [Nx1] frequency vector. I then loop through the frequency vector and create the [2x2] matrices H1 and H2 (both functions of f). A bit of simple matrix math, including a matrix left division '\', later, and I get my result pb as an [Nx1] vector. The problem is the loop: it takes a long time to calculate, and I'm looking for a way to improve the efficiency of the calculations. I tried assembling the problem using [2x2xN] transfer matrices, but the mtimes operation cannot handle 3-D multiplications.
Can anybody please give me an idea how I can approach such a calculation without the need for looping through f?
Many thanks: svenr
% calculate frequency and wave number vector
f = linspace(20,200,400);
w = 2.*pi.*f;
% calculation for each frequency w
% (k, rho, c, crad, and mp are defined elsewhere in the original script)
for i=1:length(w)
    H1(i,1) = {[1, rho*c*k(i)^2 / (crad*pi); 0, 1]};
    H2(i,1) = {[1, 1i.*w(i).*mp; 0, 1]};
    HZin(i,1) = {H1{i,1}*H2{i,1}};
    temp_mat = HZin{i,1}*[1; 0];
    Zin(i,1) = temp_mat(1,1)/temp_mat(2,1);
    temp_mat = H1{i,1}\[1; 1/Zin(i,1)];
    pb(i,1) = temp_mat(1,1); Ub(i,1) = temp_mat(2,1);
end
Assuming that length(w) == length(k) returns true, that rho, c, crad, and mp are all scalars, and that the last line should read Ub(i,1) = temp_mat(2,1) instead of Ub(i,:) = temp_mat(2,1):
temp1 = repmat(eye(2),[1 1 length(w)]); % stack of 2x2 identity matrices
temp2 = temp1;
temp1(1,2,:) = rho*c*(k.^2)/(crad*pi); % H1(1,2) for every frequency
temp2(1,2,:) = 1i.*w.*mp; % H2(1,2) for every frequency
H1 = permute(num2cell(temp1,[1 2]),[3 2 1]);
H2 = permute(num2cell(temp2,[1 2]),[3 2 1]);
HZin = cellfun(@(a,b)(a*b),H1,H2,'UniformOutput',0);
temp_cell = cellfun(@(a,b)(a*b),HZin,repmat({[1;0]},length(w),1),'UniformOutput',0);
Zin_cell = cellfun(@(a)(a(1,1)/a(2,1)),temp_cell,'UniformOutput',0);
Zin = cell2mat(Zin_cell);
temp2_cell = cellfun(@(a)([1;1/a]),Zin_cell,'UniformOutput',0);
temp3_cell = cellfun(@(a,b)(pinv(a)*b),H1,temp2_cell,'UniformOutput',0);
temp4 = cell2mat(temp3_cell);
pb(:,1) = temp4(1:2:end-1);
Ub(:,1) = temp4(2:2:end);
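On newer MATLAB releases (R2020b and later, well after this question), the batched 2x2 multiplies can be done directly with pagemtimes and no cell arrays at all; a sketch under the same assumptions about k, rho, c, crad, and mp:
N = numel(w);
H1 = repmat(eye(2),[1 1 N]);
H1(1,2,:) = rho*c*(k.^2)/(crad*pi);
H2 = repmat(eye(2),[1 1 N]);
H2(1,2,:) = 1i.*w.*mp;
HZin = pagemtimes(H1,H2); % all 2x2 products at once
t = squeeze(pagemtimes(HZin,[1;0])); % 2-by-N first columns
Zin = (t(1,:)./t(2,:)).';
x12 = squeeze(H1(1,2,:)); % H1 is unit upper-triangular, so H1\[1;1/Zin] has a closed form
pb = 1 - x12./Zin;
Ub = 1./Zin;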

construct two dimensional array in matlab

Let us consider the following equation:
x(t) = sum( a(i)*sin(2*pi*f(i)*t) + b(i)*cos(2*pi*f(i)*t) )
where i = 1,2,...,m, the frequencies are f = [f1, f2, ..., fm], and t = [t1, t2, ..., tn].
I want to create a matrix from the sin(2*pi*f(i)*t) and cos(2*pi*f(i)*t) terms; clearly it should be a matrix of dimension n x 2m. I have tried the following code:
function [amplitudes] = determine_amplitudes(y,f,t,n,m)
X = zeros(n,2*m);
for i = 1:n
    for k = 1:m
        if mod(k,2)==1
            X(i,k) = sin(2*pi*f(k)*t(i));
        else
            X(i,k) = cos(2*pi*f(k)*t(i));
        end
    end
end
end
I used the mod operator so that sin values go in odd-indexed columns and cos values in even-indexed columns, but the problem is that I am not sure the resulting matrix has dimension n x 2m. How do I create such a matrix without exceeding the bounds of the frequency array? Recall that the frequency array is f = [f1, f2, ..., fm], so my problem is simply how to apply m frequencies across 2m columns. Thanks for the help.
UPDATE:
Let's say m = 3 and the frequencies are f = [12.5 13.6 21.7]; also assume n = 4 and t = [0.01 0.02 0.03 0.04]. Then the first row of the matrix is:
sin(2*pi*f(1)*t(1)) cos(2*pi*f(1)*t(1)) sin(2*pi*f(2)*t(1)) cos(2*pi*f(2)*t(1)) sin(2*pi*f(3)*t(1)) cos(2*pi*f(3)*t(1))
No-loop version
[f1,t1] = meshgrid(f',t');
p1 = sin(bsxfun(@times,f1,2*pi*t')); % sin terms: p1(i,j) = sin(2*pi*f(j)*t(i))
p2 = cos(bsxfun(@times,f1,2*pi*t')); % cos terms
d1 = [p1;p2];
final_matrix = reshape(d1,size(p1,1),[]); % interleaves the sin/cos columns
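Note that bsxfun is not strictly needed here: a plain outer product builds the same n-by-m grid of 2*pi*f(j)*t(i) values (assuming f and t are vectors as above):
p1 = sin(2*pi*t(:)*f(:)'); % p1(i,j) = sin(2*pi*f(j)*t(i))
p2 = cos(2*pi*t(:)*f(:)');
final_matrix = reshape([p1;p2],numel(t),[]); % same interleaved sin/cos layout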
Naive loop version
final_matrix2 = zeros(n,2*m);
for k1 = 1:n
    for k2 = 1:2*m
        if rem(k2,2)==1
            final_matrix2(k1,k2) = sin(2*pi*f(ceil(k2/2))*t(k1));
        else
            final_matrix2(k1,k2) = cos(2*pi*f(ceil(k2/2))*t(k1));
        end
    end
end
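Once the matrix is built (final_matrix above), the amplitudes a(i) and b(i) from the original equation drop out of a least-squares solve; a sketch assuming y holds the n samples of x(t):
amplitudes = final_matrix \ y(:); % [a(1); b(1); a(2); b(2); ...], matching the column order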

tensile tests in matlab

The problem says:
Three tensile tests were carried out on an aluminum bar. In each test the strain was measured at the same values of stress. The results were:
Stress (MPa):          34.5   69     103.5  138
Strain, test 1 (mm/m): 0.46   0.95   1.48   1.93
Strain, test 2 (mm/m): 0.34   1.02   1.51   2.09
Strain, test 3 (mm/m): 0.73   1.10   1.62   2.12
Use linear regression to estimate the modulus of elasticity of the bar (modulus of elasticity = stress/strain).
I used this program for this problem:
function coeff = polynFit(xData,yData,m)
% Returns the coefficients of the polynomial
% a(1)*x^(m-1) + a(2)*x^(m-2) + ... + a(m)
% that fits the data points in the least squares sense.
% USAGE: coeff = polynFit(xData,yData,m)
% xData = x-coordinates of data points.
% yData = y-coordinates of data points.
A = zeros(m); b = zeros(m,1); s = zeros(2*m-1,1);
for i = 1:length(xData)
temp = yData(i);
for j = 1:m
b(j) = b(j) + temp;
temp = temp*xData(i);
end
temp = 1;
for j = 1:2*m-1
s(j) = s(j) + temp;
temp = temp*xData(i);
end
end
for i = 1:m
for j = 1:m
A(i,j) = s(i+j-1);
end
end
% Rearrange coefficients so that coefficient
% of x^(m-1) is first
coeff = flipdim(gaussPiv(A,b),1);
The problem is solved without a program as follows
MY ATTEMPT
T=[34.5,69,103.5,138];
D1=[.46,.95,1.48,1.93];
D2=[.34,1.02,1.51,2.09];
D3=[.73,1.1,1.62,2.12];
Mod1=T./D1;
Mod2=T./D2;
Mod3=T./D3;
xData=T;
yData1=Mod1;
yData2=Mod2;
yData3=Mod3;
coeff1 = polynFit(xData,yData1,2);
coeff2 = polynFit(xData,yData2,2);
coeff3 = polynFit(xData,yData3,2);
x1=(0:.5:190);
y1=coeff1(2)+coeff1(1)*x1;
subplot(1,3,1);
plot(x1,y1,xData,yData1,'o');
y2=coeff2(2)+coeff2(1)*x1;
subplot(1,3,2);
plot(x1,y2,xData,yData2,'o');
y3=coeff3(2)+coeff3(1)*x1;
subplot(1,3,3);
plot(x1,y3,xData,yData3,'o');
What do I have to do to get this result?
As general advice:
avoid for loops wherever possible.
avoid using i and j as variable names, as they are Matlab built-in names for the imaginary unit (I really hope that disappears in a future release...)
Due to Matlab being an interpreted language, for-loops can be very slow compared to their compiled alternatives. Matlab is named MATrix LABoratory, meaning it is highly optimized for matrix/array operations. Usually, when there is an operation that cannot be done without a loop, Matlab has a built-in function for it that runs way, way faster than a for-loop in Matlab ever will. For example: computing the mean of elements in an array: mean(x). The sum of all elements in an array: sum(x). The standard deviation of elements in an array: std(x). Etc. Matlab's power comes from these built-in functions.
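For instance, a quick (and release-dependent) comparison of the two styles:
x = randn(1e6,1);
tic; s1 = 0; for k = 1:numel(x), s1 = s1 + x(k); end; toc % explicit loop
tic; s2 = sum(x); toc % built-in, typically much faster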
So, your problem. You have a linear regression problem. The easiest way in Matlab to solve this problem is this:
%# your data
stress = [ %# in Pa
34.5 69 103.5 138] * 1e6;
strain = [ %# in m/m
0.46 0.95 1.48 1.93
0.34 1.02 1.51 2.09
0.73 1.10 1.62 2.12]' * 1e-3;
%# make linear array for the data
yy = strain(:);
xx = repmat(stress(:), size(strain,2),1);
%# re-formulate the problem into linear system Ax = b
A = [xx ones(size(xx))];
b = yy;
%# solve the linear system
x = A\b;
%# modulus of elasticity is coefficient
%# NOTE: the y-offset is relatively small and can be ignored
E = 1/x(1)
What you did in the function polynFit is done by A\b, but the \-operator does it faster, more robustly, and more flexibly than what you tried to do yourself. I'm not saying you shouldn't try to make these things yourself (please keep doing that, you learn a lot from it!), I'm saying that for the "real" results, always use the \-operator (and check your own results against it as well).
The backslash operator (type help \ on the command prompt) is extremely useful in many situations, and I advise you learn it and learn it well.
I leave you with this: here's how I would write your polynFit function:
function coeff = polynFit(X,Y,m)
if numel(X) ~= numel(Y)
    error('polynFit:size_mismatch',...
        'number of elements in matrices X and Y must be equal.');
end
%# bad condition number, rank errors, etc. taken care of by \
%# (m coefficients, highest power first, matching the original convention)
coeff = bsxfun(@power, X(:), m-1:-1:0) \ Y(:);
end
I leave it up to you to figure out how this works.
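As a quick sanity check (hypothetical usage, reusing xx and yy from the stress/strain script above), it reproduces the A\b fit:
coeff = polynFit(xx, yy, 2); % [slope; intercept] of the least-squares line
E = 1/coeff(1) % same modulus of elasticity as before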

How to use aryule() in Matlab to extend a number series?

I have a series of numbers, and I calculated the "auto-regression" between them using the Yule-Walker method.
But now how do I extend the series?
The whole working is as follows:
a) the series I use:
143.85 141.95 141.45 142.30 140.60 140.00 138.40 137.10 138.90 139.85 138.75 139.85 141.30 139.45 140.15 140.80 142.50 143.00 142.35 143.00 142.55 140.50 141.25 140.55 141.45 142.05
b) this data is loaded in to data using:
data = load('c:\\input.txt', '-ascii');
c) the calculation of the coefficients:
ar_coeffs = aryule(data,9);
this gives:
ar_coeffs =
1.0000 -0.9687 -0.0033 -0.0103 0.0137 -0.0129 0.0086 0.0029 -0.0149 0.0310
d) Now using this, how do I calculate the next number in the series?
[any other method of doing this (except using aryule()) is also fine... this is what I did, if you have a better idea, please let me know!]
For a real valued sequence x of length N, and a positive order p:
coeff = aryule(x, p)
returns the AR coefficients of order p of the data x (note that coeff(1) is a normalizing factor equal to 1). In other words, it models each value as a linear combination of the past p values. So to predict the next value, we use the last p values as:
x(N+1) = -sum_{k=1}^{p} ( coeff(k+1)*x(N+1-k) )
or in actual MATLAB code:
p = 9;
data = [...]; % the seq you gave
coeffs = aryule(data, p);
nextValue = -coeffs(2:end) * data(end:-1:end-p+1)';
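To extend the series by more than one value, feed each prediction back into the data; a sketch reusing the variables above:
extended = data(:)'; % row copy of the original series
for n = 1:10 % extend by, say, 10 samples
    extended(end+1) = -coeffs(2:end) * extended(end:-1:end-p+1)';
end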
EDIT: If you have access to System Identification Toolbox, then you can use any of a number of functions to estimate AR/ARMAX models (ar/arx/armax) (or even find the order of AR model using selstruc):
m = ar(data, p, 'yw'); % yw for Yule-Walker method
pred = predict(m, data, 1); % one-step-ahead prediction
if iscell(pred), pred = cell2mat(pred); end % predict returns a cell array for double data in some releases
coeffs = m.a;
nextValue = pred(end);
subplot(121), plot(data)
subplot(122), plot(pred)
Your data has a non-zero mean. Doesn't the Yule-Walker model assume the data is the output of a linear filter excited by a zero-mean white noise process?
If you remove the mean, this example using ARYULE and LPC might be what you're looking for. The procedure boils down to:
mu = mean(data);
x = data - mu;                % remove the mean first
a = lpc(x,9);                 % uses Yule-Walker modeling
pred = filter(-a(2:end),1,x); % one-step-ahead predictions
disp(pred(end) + mu);         % the predicted value at time N+1