Trying to compute a specific sum without using a for loop in MATLAB

I have a vector x = [x_1 x_2 ... x_n], a vector y = [y_1 y_2 y_3] and a matrix X = [x_11 x_12 ... x_1n; x_21 x_22 ... x_2n; x_31 x_32 ... x_3n].
I want to compute the following sum over i = 1, 2, ..., n in MATLAB:
sum over i of (x(i) - y*X(:,i))^2
What I have tried to write is the following MATLAB code:
vv = (x(1) - y*X(:,1))^2; % as an initialization for i=1
for i = 2 : n
vv = vv + (x(i) - y * X(:,i))^2
end
But I am wondering if I can compute this without a for loop, in order to reduce the computational time when n is very large. Are there more efficient ways to do this in MATLAB?
Any help will be much appreciated!

You do not need the loop at all:
for i = 2:n
    y*X(:,i)
end
is the same as just y*X, so x(i) - y*X(:,i) is simply x - y*X. So basically, it's:
vv = sum((x - y * X).^2);
Thanks to @beaker for pointing out the mistake.
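A quick way to convince yourself (a small sketch with random data, assuming the shapes from the question):
n = 5;
x = rand(1, n);                       % 1-by-n
y = rand(1, 3);                       % 1-by-3
X = rand(3, n);                       % 3-by-n
vv_loop = (x(1) - y*X(:,1))^2;        % original loop version
for i = 2:n
    vv_loop = vv_loop + (x(i) - y*X(:,i))^2;
end
vv_vec = sum((x - y*X).^2);           % vectorized version
abs(vv_loop - vv_vec)                 % should be zero up to round-off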

Related

Fixed point iterative method error MATLAB

I am trying to use the fixed point iteration method with initial approximation x(1) = 0 to obtain an approximation to the root of the equation f(x) = 3x + sin(x) - e^x = 0.
The stopping criterion is
|x(k+1)-x(k)|<0.0001
x(1) = 0;
n = 100;
for k = 1:n
    f(k) = 3*x(k) + sin(x(k)) - exp(x(k));
    if (abs(f(k)) < 0.0001)
        break;
    end
    syms x
    diff(f(k));
    x(k+1) = x(1) - (f(k))/(diff(f(k)));
end
[x' f']
This is the error I am getting:
Error using /
Matrix dimensions must agree.
Error in prac2Q2 (line 15)
x(k+1) = x(1)- (f(k))/(diff(f(k)));
I would suggest calculating the derivative by hand and using that term as the denominator, or saving the derivative in another variable and using that as the denominator.
Derivative as Variable
f(k) = ...;
df(k) = diff(f(k));
x(k+1) = x(k) - f(k) / df(k);
PS: I cannot test this, because I do not have access to the Symbolic Toolbox right now.
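A non-symbolic sketch of the first suggestion, with the derivative worked out by hand (f'(x) = 3 + cos(x) - exp(x), so this is effectively a Newton iteration; untested, but it needs no toolbox):
x = zeros(1, 100);
x(1) = 0;                                 % initial approximation
for k = 1:99
    fk  = 3*x(k) + sin(x(k)) - exp(x(k)); % f(x_k)
    dfk = 3 + cos(x(k)) - exp(x(k));      % hand-computed f'(x_k)
    x(k+1) = x(k) - fk/dfk;
    if abs(x(k+1) - x(k)) < 0.0001        % stopping criterion from the question
        break
    end
end
x(k+1)                                    % converges to roughly 0.3604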
If you're looking for the root of 3*x + sin(x) - exp(x), you want to solve this equation:
3*x + sin(x) - exp(x) = 0
The easiest way is to isolate x on one side of the equation:
x = (exp(x) - sin(x))/3 % now iterate until x = (exp(x) - sin(x))/3
Now I would recommend using an easier fixed point method: x(k+1) = (x(k) + f(x(k)))/2
x = 1 % x0
while 1
    y = (exp(x)-sin(x))/3; % we are looking for the root, not for a fixed point! y = f(x)
    x = (x+y)/2            % after a few iterations x == y, so x = (x+y)/2 becomes x = 2x/2
    if abs(x-y) < 1e-10
        break
    end
end
And you obtain the correct result:
x = 0.36042
No need for symbolic math.
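For reference, you can sanity-check the root with the built-in fzero (base MATLAB, no toolbox needed):
f = @(x) 3*x + sin(x) - exp(x);
fzero(f, 0)    % approximately 0.36042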

Taylor Series of ln(x) in Matlab

I am trying to compute the Taylor series of ln(x) for any value of x.
What I have so far is:
clear
clc
n = input('Enter number of iterations (n): ');
x = input('enter value of x (x): ');
y = zeros(1,n);
for i = 0:n
y(i+1)=sum + (-1)^(n+1)*(x-1)^n/n;
end
But this code seems to be broken and I can't figure out why. Any suggestions on how to improve?
Here is a one-liner, in addition to the for-loop answer provided by @farbiondriven.
For 0 < x < 1:
sumLn = @(x, n)(sum(((-1).^(0:n-1)).*((x-1).^(1:n))./(1:n)));
sumLn(0.5,10)
ans =
-0.6931
>> log(0.5)
ans =
-0.6931
For x > 0.5:
sumLn = @(x, n)(sum( ((x-1)/x).^(1:n) ./ (1:n) ));
sumLn(2,10)
ans =
0.6931
>> log(2)
ans =
0.6931
Note: the variable x in each of these formulas is restricted to the stated range; the series only converge there.
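To see why the range matters, here is a quick check (sumLn1 and sumLn2 are just renamed copies of the two handles above):
sumLn1 = @(x, n) sum(((-1).^(0:n-1)) .* ((x-1).^(1:n)) ./ (1:n));  % only converges for x near 1 (0 < x <= 2)
sumLn2 = @(x, n) sum(((x-1)/x).^(1:n) ./ (1:n));                   % needs x > 1/2
[sumLn1(3, 50), sumLn2(3, 50), log(3)]
% the first entry blows up, the second is close to log(3) = 1.0986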
Try this:
clear
clc
n = input('Enter number of iterations (n): ' );
x = input('enter value of x with abs value < 1 (x): ');
y = zeros(1,n+1);
y(1)=0;
for i = 1:n
y(i+1)= y(i) + ((-1)^(i+1)*(x-1)^i/i);
end
txt = sprintf('The output is: %f', y(n+1))
I suggest using a built-in function, and fortunately there is one: taylor(f,var) approximates f with the Taylor series expansion of f up to the fifth order at the point var = 0.
Specify Expansion Point:
Find the Taylor series expansion at x = 1 for this function. The default expansion point is 0. To specify a different expansion point, use 'ExpansionPoint':
syms x
taylor(log(x), x, 'ExpansionPoint', 1)
ans =
x - (x - 1)^2/2 + (x - 1)^3/3 - (x - 1)^4/4 + (x - 1)^5/5 - 1
Specify Truncation Order:
The default truncation order is 6.
syms x
f = log(x);
t6 = taylor(f, x);
Use 'Order' to control the truncation order. For example, approximate the same expression up to order 8.
syms x
taylor(log(x), x, 'ExpansionPoint', 1, 'Order', 8);
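To compare the truncated series with log numerically (still assuming the Symbolic Math Toolbox), substitute a value and convert to double:
syms x
t8 = taylor(log(x), x, 'ExpansionPoint', 1, 'Order', 8);
double(subs(t8, x, 0.5))   % about -0.6923, close to log(0.5) = -0.6931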

Computing Mahalanobis Distance Between Set of Points and Set of Reference Points

I have an n x p matrix - mX which is composed of n points in R^p.
I have another m x p matrix - mY which is composed of m reference points in R^p.
I would like to create an n x m matrix - mD which is the Mahalanobis Distance matrix.
D(i, j) means the Mahalanobis Distance between point i in mX, mX(i, :) and point j in mY, mY(j, :).
Namely, it computes the following:
mD(i, j) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
Where mC is the given Mahalanobis Distance PSD Matrix.
It is easy to do in a loop; is there a way to vectorize it?
Namely, is there a function whose inputs are mX, mY and mC and whose output is mD, fully vectorized and without using any MATLAB toolbox?
Thank you.
Approach #1
Assuming infinite resources, here's one vectorized solution using bsxfun and matrix-multiplication -
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),n,m);
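If forming the full (n*m)-by-(n*m) product behind that diag call is too much memory, a lighter sketch of the same idea keeps only the row-wise quadratic forms (same A as above):
% (A/mC) equals A*inv(mC); the row-wise dot product with A gives
% diag(A*inv(mC)*A.') without ever forming the full product
out = reshape(sum((A/mC).*A, 2), n, m);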
Approach #2
Here's a compromise solution that tries to reduce the loop complexity -
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
imC = inv(mC);
out = zeros(n*m,1);
for ii = 1:n*m
out(ii) = A(ii,:)*imC*A(ii,:).';
end
out = reshape(out,n,m);
Sample run -
>> n = 3; m = 4; p = 5;
mX = rand(n,p);
mY = rand(m,p);
mC = rand(p,p);
imC = inv(mC);
>> %// Original solution
for i = 1:n
    for j = 1:m
        mD(i, j) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
    end
end
>> mD
mD =
-8.4256 10.032 2.8929 7.1762
-44.748 -4.3851 -13.645 -9.6702
-4.5297 3.2928 0.11132 2.5998
>> %// Approach #1
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),n,m);
>> out
out =
-8.4256 10.032 2.8929 7.1762
-44.748 -4.3851 -13.645 -9.6702
-4.5297 3.2928 0.11132 2.5998
>> %// Approach #2
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
imC = inv(mC);
out1 = zeros(n*m,1);
for ii = 1:n*m
    out1(ii) = A(ii,:)*imC*A(ii,:).';
end
out1 = reshape(out1,n,m);
>> out1
out1 =
-8.4256 10.032 2.8929 7.1762
-44.748 -4.3851 -13.645 -9.6702
-4.5297 3.2928 0.11132 2.5998
Instead, if you had:
mD(j, i) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
The solutions would translate to the versions listed next.
Approach #1
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),m,n);
Approach #2
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
imC = inv(mC);
out1 = zeros(m*n,1);
for i = 1:n*m
    out1(i) = A(i,:)*imC*A(i,:).';
end
out1 = reshape(out1,m,n);
Sample run -
>> n = 3; m = 4; p = 5;
mX = rand(n,p); mY = rand(m,p); mC = rand(p,p); imC = inv(mC);
>> %// Original solution
for i = 1:n
    for j = 1:m
        mD(j, i) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
    end
end
>> mD
mD =
0.81755 0.33205 0.82254
1.7086 1.3363 2.4209
0.36495 0.78394 -0.33097
0.17359 0.3889 -1.0624
>> %// Approach #1
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),m,n);
>> out
out =
0.81755 0.33205 0.82254
1.7086 1.3363 2.4209
0.36495 0.78394 -0.33097
0.17359 0.3889 -1.0624
>> %// Approach #2
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
imC = inv(mC);
out1 = zeros(m*n,1);
for i = 1:n*m
    out1(i) = A(i,:)*imC*A(i,:).';
end
out1 = reshape(out1,m,n);
>> out1
out1 =
0.81755 0.33205 0.82254
1.7086 1.3363 2.4209
0.36495 0.78394 -0.33097
0.17359 0.3889 -1.0624
Here is one solution that eliminates one loop (note that it estimates the covariance from the pooled data rather than using a given mC):
function d = mahalanobis(mX, mY)
    n = size(mX, 2);
    m = size(mY, 2);
    data = [mX, mY];
    mc = cov(transpose(data));
    dist = zeros(n, m);
    for i = 1:n
        diff = repmat(mX(:,i), 1, m) - mY;
        dist(i,:) = sum((mc\diff).*diff, 1);
    end
    d = sqrt(dist);
end
You would invoke it as follows (note the transposes: the function expects the points as columns):
d = mahalanobis(transpose(X),transpose(Y))
Reduce to L2
It seems that Mahalanobis Distance can be reduced to ordinary L2 distance if you are allowed to preprocess matrix mC and you are not afraid of numerical differences.
First of all, compute Cholesky decomposition of mC:
mR = chol(mC) % C = R^t * R, where R is upper-triangular
Now we can use these factors to reformulate Mahalanobis Distance:
(Xi-Yj) * inv(C) * (Xi-Yj)^t = || (Xi-Yj) * inv(R) ||^2 = || TXi - TYj ||^2
where: TXi = Xi * inv(R)
       TYj = Yj * inv(R)
So the idea is to transform points Xi, Yj to TXi, TYj first, and then compute euclidean distances between them. Here is the algorithm outline:
1. Compute mR, the Cholesky factor of the covariance matrix mC (takes O(p^3) time).
2. Invert the triangular matrix mR (takes O(p^3) time).
3. Multiply both mX and mY by inv(mR) on the right (takes O(p^2 (m+n)) time).
4. Compute the squared L2 distances between all pairs of points (takes O(m n p) time).
The total time is O(m n p + (m + n) p^2 + p^3), versus the original O(m n p^2). It should work faster when 1 << p << n,m. In that case step 4 takes most of the time and should be vectorized.
Vectorization
I have little experience with MATLAB, but quite a lot with SIMD vectorization on x86 CPUs. In raw computations, it would be enough to vectorize along one sufficiently large array dimension and use trivial loops for the other dimensions.
If you expect p to be large enough, it is probably OK to vectorize along the coordinates of the points and use two nested loops for i <= n and j <= m. That's similar to what @Daniel posted.
If p is not sufficiently large, you can vectorize along one of the point sequences instead. This would be similar to the solution posted by @dpmcmlxxvi: you have to subtract a single row of one matrix from all the rows of the other matrix, then compute the squared norms of the resulting rows. Repeat n times (or m times).
As for me, full vectorization (which means rewriting with matrix operations instead of loops in MATLAB) does not sound like a sensible performance goal. Most likely, partially vectorized solutions will be close to optimal.
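A minimal MATLAB sketch of the reduction above (it assumes mC is symmetric positive definite, which chol requires, and that mX is n-by-p and mY is m-by-p as in the question):
mR = chol(mC);                 % mC = mR.'*mR, with mR upper-triangular
TX = mX / mR;                  % TX = mX*inv(mR), done by a triangular solve
TY = mY / mR;                  % TY = mY*inv(mR)
% squared L2 distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2*a*b.'
mD = bsxfun(@plus, sum(TX.^2, 2), sum(TY.^2, 2).') - 2*TX*TY.';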
I came to the conclusion that vectorizing this problem is not efficient. My best idea for vectorizing it would require m x n x p x p working memory, at least if everything is processed at once. This means that with n = m = p = 152 the code would already require 4 GB of RAM. At these dimensions, my system can run the loop in less than a second:
mD = zeros(size(mX,1), size(mY,1));
ImC = inv(mC);
for i = 1:size(mX,1)
    for j = 1:size(mY,1)
        d = mX(i, :) - mY(j, :);
        mD(i, j) = d * ImC * d.';
    end
end

Shorten this in Matlab

Let x = [1,...,t] be a vector with t components and A and P arrays. I asked myself whether there is any chance to shorten this, as it looks very cumbersome:
for n = 1:t
    for m = 1:n
        H(n,m) = A(n,m) + x(n) * P(n,m)
    end
end
My suggestion: bsxfun(@times,x,P) + A;
e.g.
A = rand(3);
P = rand(3);
x = rand(3,1);
for n = 1:3
    for m = 1:3
        H(n,m) = A(n,m) + x(n) * P(n,m);
    end
end
H2 = bsxfun(@times,x,P) + A;
%//Check that they're the same
all(H(:) == H2(:))
returns
ans = 1
EDIT:
Amro is right! Since the second loop depends on the first, use tril:
H2 = tril(bsxfun(@times,x,P) + A);
Are the matrices square, by the way? Because that also creates other problems.
tril(A + P.*repmat(x',1,t))
EDIT: this is for when x is a row vector.
If x is a column vector, then use tril(A + P.*repmat(x,1,t))
If your example code is correct, then H(i,j) = 0 for any j > i, e.g. H(1,2).
For t = 3, for example, you would have:
H =
'A(1,1) + x(1) * P(1,1)' [] []
'A(2,1) + x(2) * P(2,1)' 'A(2,2) + x(2) * P(2,2)' []
'A(3,1) + x(3) * P(3,1)' 'A(3,2) + x(3) * P(3,2)' 'A(3,3) + x(3) * P(3,3)'
Like I pointed out in the comments, unless it was a typo, the second for-loop counter depends on that of the first for-loop...
In case it was intentional, I came up with the following solution:
% some random data
t = 10;
x = (1:t)';
A = rand(t,t);
P = rand(t,t);
% double for-loop
H = zeros(t,t);
for n = 1:t
    for m = 1:n
        H(n,m) = A(n,m) + x(n) * P(n,m);
    end
end
% vectorized using linear-indexing
[a,b] = ndgrid(1:t,1:t);
idx = sub2ind([t t], nonzeros(tril(a)), nonzeros(tril(b)));
xidx = nonzeros(tril(a));
HH = zeros(t);
HH(tril(true(t))) = A(idx) + x(xidx).*P(idx);
% check the results are the same
assert(isequal(H,HH))
I like @Dan's solution better. The only advantage here is that I do not compute unnecessary values (since the upper half of the matrix is zeros), while the other solution computes the full matrix and then cuts away the extra part.
A good start would be
H = A + x*P
This may not be a working solution; you'll have to check the conformability of the arrays and vectors and make sure that you're using the correct multiplication, but it should be enough to point you in the right direction. If you're new to MATLAB, be aware that vectors can be either 1xn or nx1, i.e. row and column vectors are different species, unlike in many programming languages. If x isn't what you want on the rhs, you may want its transpose, x' in MATLAB.
MATLAB is, from one point of view, an array language; explicit loops are often unnecessary and frequently not even a good way to go.
Since the range of the second loop is 1:n, you can take the lower-triangular parts of the matrices A and P for the calculation:
H = bsxfun(@times,x(:),tril(P)) + tril(A);
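On MATLAB R2016b or newer, implicit expansion makes bsxfun unnecessary; assuming x is a column vector (or is forced to one with x(:)), the whole thing reduces to:
H = tril(A + x(:).*P);   % x(:) expands across the columns of P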

Lagrange interpolation method

I use convolution and for loops (too many for loops) for calculating the interpolation using Lagrange's method; here's the main code:
function [p] = lagrange_interpolation(X,Y)
L = zeros(n);
p = zeros(1,n);
% computing the L matrix, so that each row i holds the polynomial L_i
% Now we compute l_i(x) for i = 0...n, and we build the polynomial
for k = 1:n
    multiplier = 1;
    outputConv = ones(1,1);
    for index = 1:n
        if(index ~= k && X(index) ~= X(k))
            outputConv = conv(outputConv,[1,-X(index)]);
            multiplier = multiplier * ((X(k) - X(index))^-1);
        end
    end
    polynimialSize = length(outputConv);
    for index = 1:polynimialSize
        L(k,n - index + 1) = outputConv(polynimialSize - index + 1);
    end
    L(k,:) = multiplier .* L(k,:);
end
% continues
end
Those are too many for loops just for computing the l_i(x) (and this is before the final calculation of P_n(x) = sum over i of y_i * l_i(x)).
Any suggestions for making it more idiomatic MATLAB?
Thanks
Yeah, several suggestions (implemented in version 1 below): the if can be folded into the for loop above it (just make index skip k, via something like jr(jr~=j) below); polynomialSize is always equal to length(outputConv), which is always n (you have n data points, so the (n-1)th-order polynomial has n coefficients), so the last for loop and the next line can be replaced with simply L(k,:) = multiplier * outputConv;
So I replicated the example at http://en.wikipedia.org/wiki/Lagrange_polynomial (and adopted their j-m notation, except that for me j runs over 1:n, m runs over 1:n, and m~=j), hence my initialization looks like
clear; clc;
X=[-9 -4 -1 7]; %example taken from http://en.wikipedia.org/wiki/Lagrange_polynomial
Y=[ 5 2 -2 9];
n=length(X); %Lagrange basis polynomials are (n-1)th order, have n coefficients
lj = zeros(1,n); %storage for numerator of Lagrange basis polyns - each w/ n coeff
Lj = zeros(n); %matrix of Lagrange basis polyns coeffs (lj(x))
L = zeros(1,n); %the Lagrange polynomial coefficients (L(x))
then v 1.0 looks like
jr = 1:n; %j-range: 1<=j<=n
for j = jr %my j is your k
    multiplier = 1;
    outputConv = 1; %numerator of lj(x)
    mr = jr(jr~=j); %m-range: 1<=m<=n, m~=j
    for m = mr %my m is your index
        outputConv = conv(outputConv,[1 -X(m)]);
        multiplier = multiplier * ((X(j) - X(m))^-1);
    end
    Lj(j,:) = multiplier * outputConv; %jth Lagrange basis polynomial lj(x)
end
L = Y*Lj; %coefficients of the Lagrange polynomial L(x)
which can be further simplified if you realize that the numerator of l_j(x) is just a polynomial with specified roots; for that there is a nice command in MATLAB, poly. Similarly, the denominator is just that polynomial evaluated at X(j); for that there is polyval. Hence, v 1.9:
jr = 1:n; %j-range: 1<=j<=n
for j = jr
    mr = jr(jr~=j);            %m-range: 1<=m<=n, m~=j
    lj = poly(X(mr));          %numerator of lj(x)
    mult = 1/polyval(lj,X(j)); %denominator of lj(x)
    Lj(j,:) = mult * lj;       %jth Lagrange basis polynomial lj(x)
end
L = Y*Lj; %coefficients of the Lagrange polynomial L(x)
Why version 1.9 and not 2.0? Well, there is probably a way to get rid of this last for loop and write it all in one line, but I can't think of a clean one right now; a rough candidate is sketched below, and the rest is a todo for v 2.0 :)
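The rough candidate (a sketch only; arrayfun merely hides the loop rather than removing it, so treat it as cosmetic):
% each row j of Lj is poly of the other nodes, scaled by 1/polyval(...,X(j))
ljFun = @(j) poly(X(jr~=j)) / polyval(poly(X(jr~=j)), X(j));
Lj = cell2mat(arrayfun(ljFun, jr.', 'UniformOutput', false));
L  = Y*Lj; %coefficients of the Lagrange polynomial L(x)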
And, for dessert, if you want to get the same picture as wikipedia:
figure(1);clf
x=-10:.1:10;
hold on
plot(x,polyval(Y(1)*Lj(1,:),x),'r','linewidth',2)
plot(x,polyval(Y(2)*Lj(2,:),x),'b','linewidth',2)
plot(x,polyval(Y(3)*Lj(3,:),x),'g','linewidth',2)
plot(x,polyval(Y(4)*Lj(4,:),x),'y','linewidth',2)
plot(x,polyval(L,x),'k','linewidth',2)
plot(X,Y,'ro','linewidth',2,'markersize',10)
hold off
xlim([-10 10])
ylim([-10 10])
set(gca,'XTick',-10:10)
set(gca,'YTick',-10:10)
grid on
produces a figure like the one on the Wikipedia page: the four weighted basis polynomials, the interpolating polynomial L(x) in black, and the data points.
Enjoy, and feel free to reuse/improve.
Try:
X = 0:1/20:1; Y = cos(X)
then build L as above and evaluate polyval(L,1):
polyval(L,1) = 0.917483227909543
cos(1) = 0.540302305868140
Why is there such a huge difference?