MATLAB: calculate the product of an expression

I'm basically trying to find the product of an expression that goes like this:
(x-(N-1)/2) * ... * (x+(N-1)/2) for even values of N
x is a value that I set at the beginning; it changes too, but that is a different problem...
Let's say, for the sake of argument, that for now x is a constant (e.g. x = 1).
Example for N = 6:
(x-5/2)(x-3/2)(x-1/2)(x+1/2)(x+3/2)(x+5/2)
The idea was to create a row vector, every element of which is one individual term (P(1) = x-5/2, P(2) = x-3/2, etc.), and then calculate its product:
N=6;
x=1;
P=ones(1,N);
for k=(-N-1)/2:(N-1)/2
for n=1:N
P(n)=(x-k);
end
end
y=prod(P);
Instead, this creates a vector that takes only the first value of the expression and then repeats the same value in each cell.
there is obviously a fundamental problem with my loop but I just can't see it.
So if anyone can help with that OR suggest a better way to calculate the product I would be grateful.

Use vectorized commands
Why use a loop when you can use vectorized commands like prod?
y = prod((2 * x + (-N + 1 : 2 : N - 1)) / 2);
For convenience, you may want to define an anonymous function for it:
f = @(N,x) reshape(prod(bsxfun(@plus, 2 * x(:), -N + 1 : 2 : N - 1) / 2, 2), size(x));
Note that the function is compatible with a (row or column) vector input x.
Tests in MATLAB's Command Window
>> f(6, (-2:2)')
ans =
-14.7656
4.9219
-3.5156
4.9219
-14.7656
>> f(6, -2:2)
ans =
-14.7656 4.9219 -3.5156 4.9219 -14.7656
Benchmark
Here is a comparison of rayreng's approach versus mine. The former emerges as the clear winner... :'( ...at least as N increases.
(Plots not reproduced here.) Three scenarios were benchmarked: varying N with fixed x; fixed N (= 10) with a vector x of varying length; and fixed N (= 100) with a vector x of varying length.
Benchmark code
function benchmark
% varying N, fixed x
clear all
n = logspace(2,4,20)';
x = rand(1000,1);
tr = zeros(size(n));
tj = tr;
for k = 1 : numel(n)
    % rayreng's approach (poly/polyval)
    fr = @() rayreng(n(k), x);
    tr(k) = timeit(fr);
    % Jubobs's approach (prod/reshape/bsxfun)
    fj = @() jubobs(n(k), x);
    tj(k) = timeit(fj);
end
figure
hold on
plot(n, tr, 'bo')
plot(n, tj, 'ro')
hold off
xlabel('N')
ylabel('time (s)')
legend('rayreng', 'jubobs')
end

function y = jubobs(N,x)
y = reshape(prod(bsxfun(@plus,...
                        2 * x(:),...
                        -N + 1 : 2 : N - 1) / 2,...
                 2),...
            size(x));
end

function y = rayreng(N, x)
p = poly(linspace(-(N-1)/2, (N-1)/2, N));
y = polyval(p, x);
end
function benchmark2
% fixed N, varying x
clear all
n = 100;
nx = round(logspace(2,4,20));
tr = zeros(size(nx));
tj = tr;
for k = 1 : numel(nx)
    disp(k)
    x = rand(nx(k), 1);
    % rayreng's approach (poly/polyval)
    fr = @() rayreng(n, x);
    tr(k) = timeit(fr);
    % Jubobs's approach (prod/reshape/bsxfun)
    fj = @() jubobs(n, x);
    tj(k) = timeit(fj);
end
figure
hold on
plot(nx, tr, 'bo')
plot(nx, tj, 'ro')
hold off
xlabel('number of elements in vector x')
ylabel('time (s)')
legend('rayreng', 'jubobs')
title(['n = ' num2str(n)])
end

function y = jubobs(N,x)
y = reshape(prod(bsxfun(@plus,...
                        2 * x(:),...
                        -N + 1 : 2 : N - 1) / 2,...
                 2),...
            size(x));
end

function y = rayreng(N, x)
p = poly(linspace(-(N-1)/2, (N-1)/2, N));
y = polyval(p, x);
end
An alternative
Alternatively, because the terms in your product form an arithmetic progression (each term is greater than the previous one by 1), you can use the closed-form expression for the product of an arithmetic progression.
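For instance, a minimal sketch of that closed form via the rising-factorial (Pochhammer) identity, written with MATLAB's gamma function; this assumes x - (N-1)/2 is not a non-positive integer, where gamma has poles:
% The factors x-(N-1)/2, x-(N-1)/2+1, ..., x+(N-1)/2 form a rising factorial,
% so their product equals gamma(x + (N+1)/2) / gamma(x - (N-1)/2).
N = 6;
x = 1;
y = gamma(x + (N+1)/2) ./ gamma(x - (N-1)/2)   % returns 4.9219 for N = 6, x = 1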

I agree with @Jubobs in that you should avoid using for loops for this kind of computation. There are cases where for loops perform fast, but for something as simple as this, avoid using loops if possible.
An alternative approach to what Jubobs has suggested is to consider the polynomial to be in factored form, where each factor denotes a root at that particular location. You can use poly to convert these factors into a polynomial equation, then use polyval to evaluate the expression at the points you want. First, generate your roots with linspace, where the points vary from -(N-1)/2 to (N-1)/2 and there are N of them, then plug this into poly. Finally, for any values of x, put this into polyval together with the output of poly. The advantage of this approach is that you can evaluate multiple points of x in a single sweep.
Going with what you have, you would simply do this:
p = poly(linspace(-(N-1)/2, (N-1)/2, N));
out = polyval(p, x);
With your example, supposing that N = 6, this would be the output of the first line:
p =
1.0000 0 -8.7500 0 16.1875 0 -3.5156
As such, this is saying that when we expand out (x-5/2)(x-3/2)(x-1/2)(x+1/2)(x+3/2)(x+5/2), we get:
x^6 - 8.75x^4 + 16.1875x^2 - 3.5156
If we take a look at the roots of this equation, this is what we get:
r = roots(p)
r =
-2.5000
2.5000
-1.5000
1.5000
-0.5000
0.5000
As you can see, each term corresponds to one factor of your polynomial, so we do have the right mindset here. Now, all you have to do is pass p together with your values of x to polyval to obtain your results. For example, if I wanted to evaluate that polynomial for -2 <= x <= 2 where x is an integer, this is the result I get:
polyval(p, -2:2)
ans =
-14.7656 4.9219 -3.5156 4.9219 -14.7656
Therefore, when x = -2, the result is -14.7656 and so on.

Though I would recommend the solution by @Jubobs, it is also good to check what the issue is with your loop.
The first indication that something is wrong is that you have a nested loop over two variables, yet index with only one of them to store the result. Probably you just need a single loop.
Here is a loop that you may be interested in that should do roughly what you need:
N = 6;
x = 1;
k = -(N-1)/2 : (N-1)/2;
P = ones(size(k));
for n = 1:numel(k)
    P(n) = x - k(n);
end
y = prod(P);
I tried to keep the code close to the original, so hopefully it is easy to understand.


vectorize lookup values in table of interval limits

Here is a question about whether we can use a vectorized operation in MATLAB to avoid writing a for loop.
I have a vector
Q = [0.1,0.3,0.6,1.0]
I generate a uniformly distributed random vector over [0,1)
X = [0.11,0.72,0.32,0.94]
I want to know whether each entry of X is in [0,0.1), [0.1,0.3), [0.3,0.6), or [0.6,1.0), and I want to return a vector that contains the index of the first element of Q that each entry of X is less than.
I could write a for loop
Y = zeros(length(X),1)
for i = 1:1:length(X)
Y(i) = find(X(i)<Q, 1);
end
Expected result for this example:
Y = [2,4,3,4]
But I wonder if there is a way to avoid writing a for loop? (I see many very good answers to my question. Thank you so much! Now, if we go one step further, what if my Q is a matrix, such that I want to check whether ...)
Y = zeros(length(X),1)
for i = 1:1:length(X)
Y(i) = find(X(i)<Q(i), 1);
end
Use the second output of max, which acts as a sort of "vectorized find":
[~, Y] = max(bsxfun(@lt, X(:).', Q(:)), [], 1);
How this works:
For each element of X, test if it is less than each element of Q. This is done with bsxfun(@lt, X(:).', Q(:)). Note that each column in the result corresponds to an element of X, and each row to an element of Q.
Then, for each element of X, get the index of the first element of Q for which that comparison is true. This is done with [~, Y] = max(..., [], 1). Note that the second output of max returns the index of the first maximizer (along the specified dimension), so in this case it gives the index of the first true in each column.
For your example values,
Q = [0.1, 0.3, 0.6, 1.0];
X = [0.11, 0.72, 0.32, 0.94];
[~, Y] = max(bsxfun(@lt, X(:).', Q(:)), [], 1);
gives
Y =
2 4 3 4
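As a side note, on MATLAB R2016b or newer you can rely on implicit expansion instead of bsxfun; a minimal sketch of the same idea:
% X(:).' is 1-by-numel(X) and Q(:) is numel(Q)-by-1, so the comparison
% expands to a numel(Q)-by-numel(X) logical matrix, just like bsxfun above.
[~, Y] = max(X(:).' < Q(:), [], 1);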
Using bsxfun will help accomplish this. You'll need to read about it. I also prepended a 0 to Q to handle the case of small values of X.
X = [0.11,0.72,0.32,0.94 0.01];
Q = [0.1,0.3,0.6,1.0];
Q_extra = [0 Q];
Diff = bsxfun(@minus, X(:)', Q_extra(:)); %vectorized subtraction
logical_matrix = diff(Diff < 0); %find the transition from neg to positive
[X_categories,~] = find(logical_matrix == true); % get indices
% output is 2 4 3 4 1
EDIT: How long does each method take?
I got curious about the difference between each solution:
Test Code Below:
Q = [0,0.1,0.3,0.6,1.0];
X = rand(1,1e3);
tic
Y = zeros(length(X),1);
for i = 1:1:length(X)
    Y(i) = find(X(i)<Q, 1);
end
toc
tic
result = arrayfun(@(x)find(x < Q, 1), X);
toc
tic
Q = [0 Q];
Diff = bsxfun(@minus,X(:)',Q(:)); %vectorized subtraction
logical_matrix = diff(Diff < 0); %find the transition from neg to positive
[X_categories,~] = find(logical_matrix == true); % get indices
toc
Run it for yourself; I found that when the size of X was 1e6, bsxfun was much faster, while for smaller arrays the differences were varying and negligible.
Sample: when the size of X was 1e3
Elapsed time is 0.001582 seconds. % for loop
Elapsed time is 0.007324 seconds. % anonymous function
Elapsed time is 0.000785 seconds. % bsxfun
Octave has a function lookup to do exactly that. It takes a lookup table of sorted values and an array, and returns an array with indices for values in the lookup table.
octave> Q = [0.1 0.3 0.6 1.0];
octave> X = [0.11 0.72 0.32 0.94];
octave> lookup (Q, X)
ans =
1 3 2 3
The only issue is that your lookup table has an implicit zero, which can be fixed easily with:
octave> lookup ([0 Q], X) # alternatively, just add 1 to the results
ans =
2 4 3 4
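In MATLAB itself (R2015a or newer), discretize does much the same job; a minimal sketch, assuming all values of X fall within [0, 1) as in the question:
edges = [0 Q];              % [0 0.1 0.3 0.6 1.0]
Y = discretize(X, edges)    % returns 2 4 3 4: the bin index of each element of X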
You can create an anonymous function to perform the comparison, then apply it to each member of X using arrayfun:
compareFunc = @(x)find(x < Q, 1);
result = arrayfun(compareFunc, X, 'UniformOutput', 1);
The Q array will be stored in the anonymous function (compareFunc) when the anonymous function is created.
Or, as one line (uniform output is the default behavior of arrayfun):
result = arrayfun(@(x)find(x < Q, 1), X);
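To illustrate that capture, a small sketch reusing the example values from the question: the handle keeps its own copy of Q, so changing Q afterwards does not affect it:
Q = [0.1, 0.3, 0.6, 1.0];
compareFunc = @(x)find(x < Q, 1);
Q = [];              % clearing Q afterwards has no effect on the handle
compareFunc(0.11)    % still returns 2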
Octave does a neat auto-vectorization trick for you if the vectors you have are along different dimensions. If you make Q a column vector, you can do this:
X = [0.11, 0.72, 0.32, 0.94];
Q = [0.1; 0.3; 0.6; 1.0; 2.0; 3.0];
X <= Q
The result is a 6x4 matrix indicating which elements of Q each element of X is less than. I made Q a different length than X just to illustrate this:
0 0 0 0
1 0 0 0
1 0 1 0
1 1 1 1
1 1 1 1
1 1 1 1
Going back to the original example you have, you can do
length(Q) - sum(X <= Q) + 1
to get
2 4 3 4
Notice that I have semicolons instead of commas in the definition of Q. If you want to make it a column vector after defining it, do something like this instead:
length(Q) - sum(X <= Q') + 1
The reason that this works is that Octave implicitly applies bsxfun to an operation on a row and column vector. MATLAB will not do this until R2016b according to @excaza's comment, so in MATLAB you can do this:
length(Q) - sum(bsxfun(@le, X, Q)) + 1
Inspired by the solution posted by @Mad Physicist, here is my solution.
Q = [0.1,0.3,0.6,1.0]
X = [0.11,0.72,0.32,0.94]
Temp = repmat(X',1,4)<repmat(Q,4,1)
[~, ind]= max( Temp~=0, [], 2 );
The idea is to make X and Q the same shape, then use element-wise comparison; this yields a logical matrix whose rows tell whether a given element of X is less than each element of Q, and then we return the index of the first non-zero entry of each row. I haven't tested how fast this method is compared to the other methods.

Create a variable number of terms in an anonymous function that outputs a vector

I'd like to create an anonymous function that does something like this:
n = 5;
x = linspace(-4,4,1000);
f = @(x,a,b,n) a(1)*exp(b(1)^2*x.^2) + a(2)*exp(b(2)^2*x.^2) + ... + a(n)*exp(b(n)^2*x.^2);
I can do this as follows, without explicitly passing the parameter n:
f1 = @(x,a,b) a(1)*exp(b(1)^2*x.^2);
for j = 2:n
    f1 = @(x,a,b) f1(x,a,b) + a(j)*exp(b(j)^2*x.^2);
end
but it seems, well, kind of hacky. Does someone have a better solution for this? I'd like to know how someone else would treat this.
Your hacky solution is definitely not the best, as recursive function calls in MATLAB are not very efficient, and you can quickly run into the maximum recursion depth (500 by default).
You can introduce a new dimension along which you can sum up your arrays a and b. Assuming that x, a and b are row vectors:
f = #(x,a,b,n) a(1:n)*exp((b(1:n).^2).'*x.^2)
This uses the first dimension as the summing dimension: (b(1:n).^2).' is a column vector, which produces a matrix when multiplied by x (this is a dyadic product, to be precise). The resulting n-by-length(x) matrix can be multiplied by a(1:n), since the latter is a matrix of size [1,n]. This vector-matrix product will also perform the summation for us.
Mini-proof:
n = 5;
x = linspace(-4,4,1000);
a = rand(1,10);
b = rand(1,10);
y = 0;
for k = 1:n
    y = y + a(k)*exp(b(k)^2*x.^2);
end
y2 = a(1:n)*exp((b(1:n).^2).'*x.^2);
all(abs(y - y2) < 1e-10)
The last command returns 1, so the two are essentially identical.

Efficient way to apply arrayfun to a matrix (i.e. R^N to R^M)

I have a function that transforms R^N to R^M. For simplicity, let's just let it be the identity function @(z) z, where z may be a vector. I want to apply the function to a list of parameters of size K x N and have it map to a K x M output.
Here is my attempt:
function out_matrix = array_fun_matrix(f, vals)
for i = 1:size(vals,1)
    f_val = f(vals(i,:));
    if (size(f_val,1) > 1) % need to stack up rows, so transpose as required
        f_val = f_val';
    end
    out_matrix(i,:) = f_val;
end
end
You can try it with
array_fun_matrix(#(z) z(1)^2 + z(2)^2 + z(3), [0 1 0; 1 1 1; 1 2 1; 2 2 2])
The question: Is there a better and more efficient way to do this with vectorization, etc.? Did I miss a built-in function?
Examples of non-vectorizable functions: there are many, usually involving elaborate sub-steps and numerical solutions. A trivial example is something like looking for the numerical solution to an equation, which in turn uses numerical quadrature, i.e. let params = [b c] and solve for the a such that int_0^a ((z + b)^2) dz = c
(I know here you could do some calculus, but the integral here is stripped down). Implementing this example,
find_equilibrium = @(param) fzero(@(a) integral(@(x) (x + param(1)).^2 - param(2), 0, a), 1)
array_fun_matrix(find_equilibrium, [0 1; 0 .8])
You can use the cellfun function, but you'll need to manipulate your data a bit:
function out_matrix = array_fun_matrix(f, vals)
% Convert your data to a cell array:
cellVals = mat2cell(vals, ones(1,size(vals,1)));
% apply the function:
out_cellArray = cellfun(f, cellVals, 'UniformOutput', false);
% Convert back to matrix:
out_matrix = cell2mat(out_cellArray);
end
If you don't like this implementation, you can improve the performance of yours by preallocating the out_matrix:
function out_matrix = array_fun_matrix(f, vals)
firstOutput = f(vals(1,:));
out_matrix = zeros(size(vals,1), length(firstOutput)); % preallocate for speed
for i = 1:size(vals,1)
    f_val = f(vals(i,:));
    if (size(f_val,1) > 1) % need to stack up rows, so transpose as required
        f_val = f_val';
    end
    out_matrix(i,:) = f_val;
end
end

Can I do the following fast in Matlab?

I have three matrices in MATLAB: A, which is n x m; B, which is p x n; and C, which is r x m.
Say we initialize some matrices using:
A = rand(3,4);
B = rand(2,3);
C = rand(5,4);
The following two are equivalent:
>> s=0;
>> for i=1:3
for j=1:4
s = s + A(i,j)*B(:,i)*C(:,j)';
end;
end
>> s
s =
2.6823 2.2440 3.5056 2.0856 2.1551
2.0656 1.7310 2.6550 1.5767 1.6457
>> B*A*C'
ans =
2.6823 2.2440 3.5056 2.0856 2.1551
2.0656 1.7310 2.6550 1.5767 1.6457
The latter is much more efficient.
I can't find any efficient version for the following variant of the loop:
s = 0;
for i = 1:3
    for j = 1:4
        x = A(i,j)*B(:,i)*C(:,j)';
        s = s + x/sum(sum(x));
    end
end
Here, the matrices being added are normalized by the sum of their values after each step.
Any ideas how to make this efficient like the matrix multiplication above? I thought maybe accumarray could help, but not sure how.
You can do it efficiently with bsxfun:
aux1 = bsxfun(@times, permute(B,[1 3 2]), permute(C,[3 1 4 2]));
aux2 = sum(sum(aux1,1),2);
s = sum(sum(bsxfun(@rdivide, aux1, aux2),3),4);
Note that, because of the normalization, the result is independent of A, assuming it doesn't contain any zero entries (if it does the result is undefined).
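If you are on R2016b or newer, the same computation can be written with implicit expansion instead of bsxfun; a minimal sketch:
aux1 = permute(B, [1 3 2]) .* permute(C, [3 1 4 2]);   % p-by-r-by-n-by-m stack of outer products
aux2 = sum(sum(aux1, 1), 2);                           % per-slice sums used for normalization
s = sum(sum(aux1 ./ aux2, 3), 4);                      % normalized sum over i and j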

Lagrange interpolation method

I use convolution and for loops (too many for loops) for calculating the interpolation using Lagrange's method. Here's the main code:
function [p] = lagrange_interpolation(X,Y)
n = length(X);   % number of data points
L = zeros(n);
p = zeros(1,n);
% compute the L matrix, so that each row i holds the polynomial L_i
% Now we compute l_i(x) for i = 1..n, and we build the polynomial
for k = 1:n
    multiplier = 1;
    outputConv = ones(1,1);
    for index = 1:n
        if (index ~= k && X(index) ~= X(k))
            outputConv = conv(outputConv, [1, -X(index)]);
            multiplier = multiplier * ((X(k) - X(index))^-1);
        end
    end
    polynomialSize = length(outputConv);
    for index = 1:polynomialSize
        L(k, n - index + 1) = outputConv(polynomialSize - index + 1);
    end
    L(k,:) = multiplier .* L(k,:);
end
% continues
end
Those are too many for loops just for computing the l_i(x) (this is done before the final computation of P_n(x) = sum over i of y_i * l_i(x)).
Any suggestions for making it more idiomatic MATLAB?
Thanks
Yes, several suggestions (implemented in version 1 below): the if statement can be combined with the for loop above it (just make index skip k, via something like jr(jr~=j) below); and polynomialSize always equals length(outputConv), which always equals n (because you have n data points, so the (n-1)th-order polynomial has n coefficients), so the last for loop and the next line can be replaced with simply L(k,:) = multiplier * outputConv;
So I replicated the example on http://en.wikipedia.org/wiki/Lagrange_polynomial (and adopted their j-m notation, except that for me j goes 1:n, m goes 1:n, and m~=j), hence my initialization looks like
clear; clc;
X=[-9 -4 -1 7]; %example taken from http://en.wikipedia.org/wiki/Lagrange_polynomial
Y=[ 5 2 -2 9];
n = length(X);   %Lagrange basis polynomials are (n-1)th order, have n coefficients
lj = zeros(1,n); %storage for numerator of Lagrange basis polynomials - each with n coefficients
Lj = zeros(n);   %matrix of Lagrange basis polynomial coefficients (lj(x))
L = zeros(1,n);  %the Lagrange polynomial coefficients (L(x))
then v 1.0 looks like
jr = 1:n; %j-range: 1<=j<=n
for j = jr %my j is your k
    multiplier = 1;
    outputConv = 1; %numerator of lj(x)
    mr = jr(jr~=j); %m-range: 1<=m<=n, m~=j
    for m = mr %my m is your index
        outputConv = conv(outputConv, [1 -X(m)]);
        multiplier = multiplier * ((X(j) - X(m))^-1);
    end
    Lj(j,:) = multiplier * outputConv; %jth Lagrange basis polynomial lj(x)
end
L = Y*Lj; %coefficients of the Lagrange polynomial L(x)
which can be further simplified if you realize that the numerator of l_j(x) is just a polynomial with specific roots - for that there is a nice command in MATLAB, poly. Similarly, the denominator is just that polynomial evaluated at X(j) - for that there is polyval. Hence, v 1.9:
jr = 1:n; %j-range: 1<=j<=n
for j = jr
    mr = jr(jr~=j);            %m-range: 1<=m<=n, m~=j
    lj = poly(X(mr));          %numerator of lj(x)
    mult = 1/polyval(lj,X(j)); %denominator of lj(x)
    Lj(j,:) = mult * lj;       %jth Lagrange basis polynomial lj(x)
end
L = Y*Lj; %coefficients of the Lagrange polynomial L(x)
Why version 1.9 and not 2.0? Well, there is probably a way to get rid of this last for loop and write it all in one line, but I can't think of it right now - it's a todo for v 2.0 :)
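For what it's worth, here is a minimal sketch of one way to fold that remaining loop into a single statement using arrayfun (reusing jr from above); it still loops internally and calls poly twice per basis polynomial, so it is a readability tweak rather than a speedup:
Lj = cell2mat(arrayfun(@(j) poly(X(jr~=j)) / polyval(poly(X(jr~=j)), X(j)), ...
                       jr.', 'UniformOutput', false));
L = Y*Lj; %coefficients of the Lagrange polynomial L(x)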
And, for dessert, if you want to get the same picture as Wikipedia:
figure(1);clf
x=-10:.1:10;
hold on
plot(x,polyval(Y(1)*Lj(1,:),x),'r','linewidth',2)
plot(x,polyval(Y(2)*Lj(2,:),x),'b','linewidth',2)
plot(x,polyval(Y(3)*Lj(3,:),x),'g','linewidth',2)
plot(x,polyval(Y(4)*Lj(4,:),x),'y','linewidth',2)
plot(x,polyval(L,x),'k','linewidth',2)
plot(X,Y,'ro','linewidth',2,'markersize',10)
hold off
xlim([-10 10])
ylim([-10 10])
set(gca,'XTick',-10:10)
set(gca,'YTick',-10:10)
grid on
this produces a plot of the four weighted basis polynomials, the interpolating polynomial L(x), and the data points (figure not shown here).
Enjoy, and feel free to reuse/improve.
Try: X = 0:1/20:1; Y = cos(X); then create L and apply polyval(L,1).
polyval(L,1) = 0.917483227909543
cos(1) = 0.540302305868140
Why is there such a huge difference?