Optimizing code, removing "for loop" - matlab

I'm trying to remove outliers from a tick data series, following Brownlees & Gallo (2006), in case you are interested.
The code works fine, but given that I'm working on really long vectors (the biggest has 20 million observations, and after 20 hours it still had not finished), I was wondering how to speed it up.
What I have done so far:
I changed the time and date format to numeric doubles and saw that it saves quite some processing time and A LOT OF MEMORY.
I allocated memory for the vectors:
n = size(price,1);
x = price;
score = nan(n,1,'double'); % using tic and toc I saw that nan requires less time than zeros
trimmed_mean = nan(n,1,'double');
sd = nan(n,1,'double');
out_mat = nan(n,1,'double');
Here is the loop I'd love to remove. I read that vectorizing speeds things up a lot, especially with long vectors.
for i = k+1:n-k % stay k observations away from both ends so the neighbour indices remain in range
trimmed_mean(i) = trimmean(x([i-k:i-1, i+1:i+k]),10,'round'); % trimmed mean computed on the 2*k observations closest to 'i' (i itself is excluded)
score(i) = x(i) - trimmed_mean(i);
sd(i) = std(x([i-k:i-1, i+1:i+k])); % same neighbours as for the mean
tmp = abs(score(i)) > (alpha .* sd(i) + gamma);
out_mat(i) = tmp*1;
end
Here is what I was trying to do
trimmed_mean=trimmean(regroup_matrix,10,'round',2);
score=bsxfun(@minus,x,trimmed_mean);
sd=std(regroup_matrix,0,2); % 0 keeps the default normalization, 2 is the dimension
temp = abs(score) > (alpha .* sd + gamma);
out_mat = temp*1;
But given that I'm totally new to MATLAB, I don't know how to properly construct the matrix of neighbouring observations. I just think it should be shaped like regroup_matrix = nan(n, 2*k).
EDIT: To be specific, what I am trying to do (and am not able to) is the following:
Given a column vector "x" of size (n,1), for each observation "i" in "x" I want to take the k neighbouring observations on each side of "i" (from i-k to i-1 and from i+1 to i+k) and store them as row i of an (n, 2*k) matrix.
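To illustrate the layout I am after with a toy example (my own sketch, not part of my actual script): with x = (1:6)' and k = 2, row 3 of the matrix should be [1 2 4 5] and row 4 should be [2 3 5 6]. A plain loop that builds it for the interior rows would be:
x = (1:6)';
k = 2;
regroup_toy = nan(numel(x), 2*k);
for i = k+1:numel(x)-k
regroup_toy(i,:) = x([i-k:i-1, i+1:i+k])'; % left neighbours first, then right neighbours
end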
EDIT 2: I made a few changes to the code and I think I am getting closer to the solution. I posted another question specific to what I think is the problem now:
Matlab: Filling up matrix rows using moving intervals from a column vector without a for loop
What I am trying to do now is:
[n] = size(price,1);
x = price;
[j1]=find(x);
matrix_left=zeros(n, k,'double');
matrix_right=zeros(n, k,'double');
toc
matrix_left(j1(k+1:end),:)=x(j1-k:j1-1);
matrix_right(j1(1:end-k),:)=x(j1+1:j1+k);
matrix_group=[matrix_left matrix_right];
trimmed_mean=trimmean(matrix_group,10,'round',2);
score=bsxfun(@minus,x,trimmed_mean);
sd=std(matrix_group,0,2);
temp = abs(score) > (alpha .* sd + gamma);
outmat = temp*1;
I have problems with the matrix_left and matrix_right creation.
j1, which I am using for indexing, is a column vector with the indices of price's observations. Its content is simply
j1 = [1:1:n]
price is a column vector of doubles with size (n,1).

For your reshape, you can do the following:
idxArray = bsxfun(@plus,(k:n)',[-k:-1,1:k]);
reshapedArray = x(idxArray);
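A quick usage sketch of the same idea (my addition, not part of the answer above), restricted to the interior observations so that every index stays between 1 and n:
x = randn(100,1); % hypothetical data
k = 5;
n = numel(x);
idxArray = bsxfun(@plus, (k+1:n-k)', [-k:-1, 1:k]); % (n-2*k) x 2*k matrix of neighbour indices
neigh = x(idxArray); % each row holds the 2*k neighbouring observations
For the first and last k observations you still have to decide how to handle the missing neighbours, which is what the solution below does.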

Thanks to Jonas, who showed me the way to go, I came up with this:
idxArray_left=bsxfun(@plus,(k+1:n)',[-k:-1]); % matrix with the indices of the left neighbouring observations
idxArray_fill_left=bsxfun(@plus,(1:k)',[1:k]); % for observations 1:k I take the right neighbouring observations instead, so the mean and standard deviation can still be computed
matrix_left=[idxArray_fill_left; idxArray_left]; % join the two matrices to get the complete matrix of left neighbours
idxArray_right=bsxfun(@plus,(1:n-k)',[1:k]); % same as the left side, but mirrored
idxArray_fill_right=bsxfun(@plus,(n-k+1:n)',[-k:-1]);
matrix_right=[idxArray_right; idxArray_fill_right];
idx_matrix=[matrix_left matrix_right]; %complete index matrix, joining left and right indices
neigh_matrix=x(idx_matrix); %exactly as proposed by Jonas, I fill up a matrix of observations from 'x', following idx_matrix indexing
trimmed_mean=trimmean(neigh_matrix,10,'round',2);
score=bsxfun(@minus,x,trimmed_mean);
sd=std(neigh_matrix,0,2);
temp = abs(score) > (alpha .* sd + gamma);
outmat = temp*1;
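As a quick sanity check (my own addition, not part of the script above), I would compare the vectorized trimmed mean against a plain loop on a short vector, for the interior observations where both versions use exactly the same neighbours:
x_test = randn(50,1);
k = 3;
n = numel(x_test);
tm_loop = nan(n,1);
for i = k+1:n-k
tm_loop(i) = trimmean(x_test([i-k:i-1, i+1:i+k]), 10, 'round');
end
% run the vectorized code above on x_test to get trimmed_mean, then
% max(abs(tm_loop(k+1:n-k) - trimmed_mean(k+1:n-k))) should be ~0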
Again, thanks a lot to Jonas. You really made my day!
Thanks also to everyone that had a look to the question and tried to help!

Why is this the correct way to do a cost function for a neural network?

So after beating my head against the wall for a few hours, I looked online for a solution to my problem, and it worked great. I just want to know what caused the issue with the way I was originally going about it.
Here are some more details. The input is a 20x20px image from the MNIST dataset, and there are 5000 samples, so X, or A1, is 5000x400. There are 25 nodes in the single hidden layer. The output is a one-hot vector for the digits 0-9. y (not Y, which is the one-hot encoding of y) is a 5000x1 vector with values from 1 to 10.
Here was my original code for the cost function:
Y = zeros(m, num_labels);
for i = 1:m
Y(i, y(i)) = 1;
endfor
H = sigmoid(Theta2*[ones(1,m);sigmoid(Theta1*[ones(m, 1) X]'))
J = (1/m) * sum(sum((-Y*log(H]))' - (1-Y)*log(1-H]))')))
But then I found this:
A1 = [ones(m, 1) X];
Z2 = A1 * Theta1';
A2 = [ones(size(Z2, 1), 1) sigmoid(Z2)];
Z3 = A2*Theta2';
H = A3 = sigmoid(Z3);
J = (1/m)*sum(sum((-Y).*log(H) - (1-Y).*log(1-H), 2));
I see that this may be slightly cleaner, but what functionally causes my original code to get 304.88 and the other to get ~0.25? Is it the element-wise multiplication?
FYI, this is the same problem as this question if you need the formal equation written out.
Thanks for any help I can get! I really want to understand where I'm going wrong.
Transferred from the comments:
With a quick look, in J = (1/m) * sum(sum((-Y*log(H]))' - (1-Y)*log(1-H]))'))) there is definitely something going on with the parentheses, but probably in how you pasted it here rather than in the original code, as this would throw an error when you run it. If I understand correctly and Y, H are matrices, then in your first version Y*log(H) is matrix multiplication, while in the second version Y.*log(H) is an entrywise multiplication (not matrix multiplication, just c(i,j) = a(i,j)*b(i,j)).
Update 1:
Regarding your question in the comment:
From the first screenshot, you represent each value y_k^(i) as the entry Y(i,k) of the Y matrix and each value h(x^(i))_k as H(i,k). So basically, for each (i,k) you want to compute Y(i,k)*log(H(i,k)) + (1-Y(i,k))*log(1-H(i,k)). You can do this for all values at once and store the result in a matrix C. Then C = Y.*log(H) + (1-Y).*log(1-H), and each C(i,k) holds the value mentioned above. This is a .* operation because you want to apply it to each element (i,k) of the two matrices (in contrast to multiplying the matrices, which is something completely different). Afterwards, to sum all the values inside the 2D matrix C, you use the Octave function sum twice: sum(sum(C)) to sum both column-wise and row-wise (or, as @Irreducible suggested, just sum(C(:))).
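A tiny example (my own, with made-up numbers) of how different the two operators are:
Y = [1 0; 0 1];
H = [0.9 0.1; 0.2 0.8];
Y * log(H) % matrix multiplication: row-by-column sums of products
Y .* log(H) % element-wise product: just Y(i,j)*log(H(i,j)), which is what the cost function needs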
Note there may be other errors as well.

Plot distances between points matlab

I've made a plot of 10 points
10 10
248.628959661970 66.9462583977501
451.638770451973 939.398361884535
227.712826026548 18.1775336366957
804.449583613070 683.838613746355
986.104241895970 783.736480083219
29.9919502693899 534.137567882728
535.664190667238 885.359450931142
87.0772199008924 899.004898906140
990 990
with the first column as x-coordinates and the second column as y-coordinates,
leading to the following plot:
Using the following code: scatter(Problem.Points(:,1),Problem.Points(:,2),'.b')
I then also calculated the euclidean distances using Problem.DistanceMatrix = pdist(Problem.Points);
Problem.DistanceMatrix = squareform(Problem.DistanceMatrix);
I replaced the distances by 1*10^6 when they are larger than a certain value.
This led to the following table:
Then, I would like to plot the lines between the corresponding points, preferably with their distances, but only where the distance < 1*10^6.
Specifically, I want to plot the lines [1,2], [1,4], [1,7], [2,4], etc.
My question is, can this be done and how?
Assuming one set of your data is in something called xdata and the other in ydata and then the distances in distances, the following code should accomplish what you want.
hold on
for k = 1:length(xdata)
for j = 1:length(ydata)
if(distances(k,j) < 1e6)
plot([xdata(k) xdata(j)], [ydata(k) ydata(j)]);
end
end
end
You just need to iterate through your matrix and, if the value is less than 1e6, plot the line between the kth and jth points. This will, however, double-plot lines (it plots from k to j and also from j to k), but it is quick to code and easy to understand. I got the following plot with this.
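If the double plotting bothers you, a small variation (my own sketch, using the same assumed variable names) visits each pair only once by starting the inner loop at k+1:
hold on
for k = 1:length(xdata)
for j = k+1:length(xdata)
if(distances(k,j) < 1e6)
plot([xdata(k) xdata(j)], [ydata(k) ydata(j)]); % upper triangle only, so each pair is drawn once
end
end
end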
This should do the trick:
P = [
10.0000000000000 10.0000000000000;
248.6289596619700 66.9462583977501;
451.6387704519730 939.3983618845350;
227.7128260265480 18.1775336366957;
804.4495836130700 683.8386137463550;
986.1042418959700 783.7364800832190;
29.9919502693899 534.1375678827280;
535.6641906672380 885.3594509311420;
87.0772199008924 899.0048989061400;
990.0000000000000 990.0000000000000
];
P_len = size(P,1);
D = squareform(pdist(P));
D(D > 600) = 1e6;
scatter(P(:,1),P(:,2),'*b');
hold on;
for i = 1:P_len
pi = P(i,:);
for j = 1:P_len
pj = P(j,:);
d = D(i,j);
if ((d > 0) && (d < 1e6))
plot([pi(1) pj(1)],[pi(2) pj(2)],'-r');
end
end
end
hold off;
Final output:
On a side note, the part in which you replace the distance values exceeding a certain threshold (it looks like 600, judging from your distances matrix) with 1e6 can be avoided by just using that threshold directly in the loop that plots the lines. I mean... it's not wrong, but I just think it's an unnecessary step.
D = squareform(pdist(P));
% ...
if ((d > 0) && (d < 600))
plot([pi(1) pj(1)],[pi(2) pj(2)],'-r');
end
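Since the question also asks to show the distances, one possible addition (just a sketch on my part, assuming the P and D matrices defined above and the 600 threshold) is to label each plotted segment with its distance at the midpoint:
for i = 1:size(P,1)
for j = i+1:size(P,1) % upper triangle: each pair handled once
if ((D(i,j) > 0) && (D(i,j) < 600))
plot([P(i,1) P(j,1)],[P(i,2) P(j,2)],'-r');
mid = (P(i,:) + P(j,:)) / 2; % midpoint of the segment
text(mid(1), mid(2), sprintf('%.1f', D(i,j)), 'FontSize', 8);
end
end
end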
A friend of mine suggested using gplot
gplot(Problem.AdjM, Problem.Points(:,:), '-o')
With problem.points as the coordinates and Problem.AdjM as the adjacency matrix. The Adjacency matrix was generated like this:
Problem.AdjM=Problem.DistanceMatrix;
Problem.AdjM(Problem.AdjM==1000000)=0;
Problem.AdjM(Problem.AdjM>0)=1;
Since a distance of 1*10^6 was the replacement for a distance that is too large, I set the adjacency there to 0 and all the others to 1.
This led to the following plot, which was more or less what I wanted:
Since you people have been helping me in such a wonderful way, I just wanted to add this:
I added J. Mel's solution to my code, leading to two identical figures:
Since the figures show the same outcome, both methods should be fine. Furthermore, since Tommasso's and J. Mel's outcomes were equal earlier, Tommasso's code must also be correct.
Many thanks to both of you and all other people contributing!

matlab - optimize getting the angle between each vector and all others in a large array

I am trying to get the angle between every pair of vectors in a large array (1896378x4 - EDIT: this means I need 1.7981e+12 angles... TOO LARGE, but if there's room to optimize the code, let me know anyway). It's too slow - I haven't seen it finish yet. Here are the steps towards optimizing that I've taken:
First, logically what I (think I) want (just use Bt=rand(N,4) for testing):
[ro,col]=size(Bt);
angbtwn = zeros(ro-1); %too long to compute!! total non-zero = ro*(ro-1)/2
count=1;
for ii=1:ro-1
for jj=ii+1:ro
angbtwn(count) = atan2(norm(cross(Bt(ii,1:3),Bt(jj,1:3))), dot(Bt(ii,1:3),Bt(jj,1:3))).*180/pi;
count=count+1;
end
end
So I thought I'd try to vectorize it and get rid of the non-built-in functions:
[ro,col]=size(Bt);
% angbtwn = zeros(ro-1); %TOO LONG!
for ii=1:ro-1
allAxes=Bt(ii:ro,1:3);
repeachAxis = allAxes(ones(ro-ii+1,1),1:3);
c = [repeachAxis(:,2).*allAxes(:,3)-repeachAxis(:,3).*allAxes(:,2)
repeachAxis(:,3).*allAxes(:,1)-repeachAxis(:,1).*allAxes(:,3)
repeachAxis(:,1).*allAxes(:,2)-repeachAxis(:,2).*allAxes(:,1)];
crossedAxis = reshape(c,size(repeachAxis));
normedAxis = sqrt(sum(crossedAxis.^2,2));
dottedAxis = sum(repeachAxis.*allAxes,2);
angbtwn(1:ro-ii+1,ii) = atan2(normedAxis,dottedAxis)*180/pi;
end
angbtwn(1,:)=[]; %angle btwn vec and itself
%only upper left triangle are values...
Still too long, even to pre-allocate... So I tried sparse, but didn't implement it right:
[ro,col]=size(Bt);
%spalloc:
angbtwn = sparse([],[],[],ro,ro,ro*(ro-1)/2);%zeros(ro-1); %cell(ro,1)
for ii=1:ro-1
...same
angbtwn(1:ro-ii+1,ii) = atan2(normedAxis,dottedAxis)*180/pi; %WARNED: indexing = >overhead
% WHAT? Can't index sparse?? what's the point of spalloc then?
end
So if my logic can be improved, or if sparse is really the way to go, and I just can't implement it right, let me know where to improve. THANKS for your help.
Are you trying to get the angle between every pair of vectors in Bt? If Bt has 2 million vectors, that's a trillion pairs, each (apparently) requiring an inner product to get the angle between them. I don't think any kind of optimization is going to make this operation finish in a reasonable amount of time in MATLAB on a single machine.
In any case, you can turn this problem into a matrix multiplication between matrices of unit vectors:
N=1000;
Bt=rand(N,4); % for testing. A matrix of N (row) vectors of length 4.
[ro,col]=size(Bt);
magnitude = zeros(N,1); % the magnitude of each row vector.
units = zeros(size(Bt)); % the unit vectors
% Compute the unit vectors for the row vectors
for ii=1:ro
magnitude(ii) = norm(Bt(ii,:));
units(ii,:) = Bt(ii,:) / magnitude(ii);
end
angbtwn = acos(units * units') * 360 / (2*pi);
But you'll run out of memory during the matrix multiplication for largish N.
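If memory rather than time is the limiting factor, one workaround (just a sketch, with a made-up block size, reusing the units matrix and ro from above) is to compute the product in row blocks and reduce each block before moving on:
blockSize = 1000; % made-up tuning parameter
for startRow = 1:blockSize:ro
stopRow = min(startRow+blockSize-1, ro);
G = min(max(units(startRow:stopRow,:) * units', -1), 1); % clamp tiny rounding overshoot
blockAngles = acosd(G); % angles in degrees for this block of rows
% summarise or store only what you need from blockAngles here
end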
You might want to use pdist with the 'cosine' distance, which computes 1 - cos(angbtwn).
Another perk of this approach is that it does not compute n^2 values but exactly 0.5*n*(n-1) unique values :)
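A sketch of that approach (my own wording of it, using the first three columns as in the original loop): pdist's 'cosine' distance returns 1 - cos(angle) for each unique pair, so the angles follow directly from acosd:
Bt = rand(1000,4); % small test size
d = pdist(Bt(:,1:3), 'cosine'); % one value per unique pair, equal to 1 - cos(angle)
angbtwn = acosd(min(1 - d, 1)); % angles in degrees; min() guards against rounding overshoot
Note that for the full 1896378-row array this still produces about 1.8e+12 values, so the memory problem does not go away.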

Rectifying compute_curvature.m error in Toolbox Graph in Matlab

I am currently using the Toolbox Graph from the MATLAB File Exchange to calculate curvature on 3D surfaces and find it very helpful (http://www.mathworks.com/matlabcentral/fileexchange/5355). However, the following error message is issued in "compute_curvature" for certain surface descriptions, and the code fails to run to completion:
> Error in ==> compute_curvature_mod at 75
> dp = sum( normal(:,E(:,1)) .* normal(:,E(:,2)), 1 );
> ??? Index exceeds matrix dimensions.
This happens only sporadically, but there is no obvious reason why the toolbox works perfectly fine for some surfaces and not for others (of a similar topology). I also noticed that someone had asked about this bug back in November 2009 on File Exchange, but that the question had gone unanswered. The post states
"compute_curvature will generate an error on line 75 ("dp = sum(
normal(:,E(:,1)) .* normal(:,E(:,2)), 1 );") for SOME surfaces. The
error stems from E containing indices that are out of range which is
caused by line 48 ("A = sparse(double(i),double(j),s,n,n);") where A's
values eventually entirely make up the E matrix. The problem occurs
when the i and j vectors create the same ordered pair twice in which
case the sparse function adds the two s vector elements together for
that matrix location resulting in a value that is too large to be used
as an index on line 75. For example, if i = [1 1] and j = [2 2] and s
= [3 4] then A(1,2) will equal 3 + 4 = 7.
The i and j vectors are created here:
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
Just wanted to add that the error I mentioned is caused by the
flipping of the sign of the surface normal of just one face by
rearranging the order of the vertices in the face matrix"
I have tried debugging the code myself but have not had any luck. I am wondering if anyone here has solved the problem or could give me insight – I need the code to be sufficiently general-purpose in order to calculate curvature for a variety of surfaces, not just for a select few.
The November 2009 bug report on File Exchange traces the problem back to the behavior of sparse:
S = SPARSE(i,j,s,m,n,nzmax) uses the rows of [i,j,s] to generate an
m-by-n sparse matrix with space allocated for nzmax nonzeros. The
two integer index vectors, i and j, and the real or complex entries
vector, s, all have the same length, nnz, which is the number of
nonzeros in the resulting sparse matrix S . Any elements of s
which have duplicate values of i and j are added together.
The lines of code where the problem originates are here:
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
s = [1:m 1:m 1:m];
A = sparse(i,j,s,n,n);
Based on this information, removing the repeated indices, presumably using unique or similar, might solve the problem:
[B,I,J] = unique([i.' j.'],'rows');
i = B(:,1).';
j = B(:,2).';
s = s(I);
The full solution may look something like this:
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
s = [1:m 1:m 1:m];
[B,I,J] = unique([i.' j.'],'rows');
i = B(:,1).';
j = B(:,2).';
s = s(I);
A = sparse(i,j,s,n,n);
Since I do not have a detailed understanding of the algorithm, it is hard to tell whether removing these entries will have a negative side effect.
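The duplicate-summing behaviour itself is easy to reproduce with the numbers from the quoted bug report:
i = [1 1];
j = [2 2];
s = [3 4];
A = sparse(i, j, s, 2, 2);
full(A) % A(1,2) is 7, not 3 or 4, because duplicate (i,j) pairs are summed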

How can I speed up this call to quantile in Matlab?

I have a MATLAB routine with one rather obvious bottleneck. I've profiled the function, with the result that 2/3 of the computing time is used in the function levels:
The function levels takes a matrix of floats and splits each column into nLevels buckets, returning a matrix of the same size as the input, with each entry replaced by the number of the bucket it falls into.
To do this I use the quantile function to get the bucket limits, and a loop to assign the entries to buckets. Here's my implementation:
function [Y q] = levels(X,nLevels)
% "Assign each of the elements of X to an integer-valued level"
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
if isvector(q)
q=transpose(q);
end
Y = zeros(size(X));
for i = 1:nLevels
% "The variables g and l indicate the entries that are respectively greater than
% or less than the relevant bucket limits. The line Y(g & l) = i is assigning the
% value i to any element that falls in this bucket."
if i ~= nLevels % "The default; doesn't include the upper bound"
g = bsxfun(@ge,X,q(i,:));
l = bsxfun(@lt,X,q(i+1,:));
else % "For the final level we include the upper bound"
g = bsxfun(@ge,X,q(i,:));
l = bsxfun(@le,X,q(i+1,:));
end
Y(g & l) = i;
end
Is there anything I can do to speed this up? Can the code be vectorized?
If I understand correctly, you want to know how many items fell in each bucket.
Use:
n = hist(Y,nbins)
Though I am not sure that it will help in the speedup. It is just cleaner this way.
Edit: Following the comment:
You can use the second output parameter of histc:
[n,bin] = histc(...) also returns an index matrix bin. If x is a vector, n(k) = sum(bin==k). bin is zero for out of range values. If x is an M-by-N matrix, then
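For instance, a small made-up example of that second output (same numbers as the example further down):
x = [3 1 4 6 7 2];
edges = [1 3.5 7];
[n, bin] = histc(x, edges)
% n gives the counts per interval: [3 2 1]
% bin gives the bucket index of each element of x: [1 1 2 2 3 1]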
How about this:
function [Y q] = levels(X,nLevels)
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
Y = zeros(size(X));
for i = 1:numel(q)-1
Y = Y+ X>=q(i);
end
This results in the following:
>>X = [3 1 4 6 7 2];
>>[Y, q] = levels(X,2)
Y =
1 1 2 2 2 1
q =
1 3.5 7
You could also modify the logic line to ensure values are less than the start of the next bin. However, I don't think it is necessary.
I think you should use histc:
[~,Y] = histc(X,q)
As you can see in matlab's doc:
Description
n = histc(x,edges) counts the number of values in vector x that fall
between the elements in the edges vector (which must contain
monotonically nondecreasing values). n is a length(edges) vector
containing these counts. No elements of x can be complex.
I made a couple of refinements (including one inspired by Aero Engy in another answer) that have resulted in some improvements. To test them out, I created a random matrix of a million rows and 100 columns to run the improved functions on:
>> x = randn(1000000,100);
First, I ran my unmodified code, with the following results:
Note that of the 40 seconds, around 14 of them are spent computing the quantiles - I can't expect to improve this part of the routine (I assume that Mathworks have already optimized it, though I guess that to assume makes an...)
Next, I modified the routine to the following, which should be faster and has the advantage of being fewer lines as well!
function [Y q] = levels(X,nLevels)
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
if isvector(q), q = transpose(q); end
Y = ones(size(X));
for i = 2:nLevels
Y = Y + bsxfun(@ge,X,q(i,:));
end
The profiling results with this code are:
So it is 15 seconds faster, which represents a 150% speedup of the portion of the code that is mine, rather than MathWorks'.
Finally, following a suggestion of Andrey (again in another answer) I modified the code to use the second output of the histc function, which assigns entries to bins. It doesn't treat the columns independently, so I had to loop over the columns manually, but it seems to be performing really well. Here's the code:
function [Y q] = levels(X,nLevels)
p = linspace(0,1,nLevels+1);
q = quantile(X,p);
if isvector(q), q = transpose(q); end
q(end,:) = 2 * q(end,:); % push the top edge above the column maxima so they fall inside the last bucket instead of on its boundary
Y = zeros(size(X));
for k = 1:size(X,2)
[junk Y(:,k)] = histc(X(:,k),q(:,k));
end
And the profiling results:
We now spend only 4.3 seconds in code outside the quantile function, which is around a 500% speedup over what I wrote originally. I've spent a bit of time writing this answer because I think it's turned into a nice example of how you can use the MATLAB profiler and Stack Exchange in combination to get much better performance from your code.
I'm happy with this result, although of course I'll continue to be pleased to hear other answers. At this stage the main performance increase will come from increasing the performance of the part of the code that currently calls quantile. I can't see how to do this immediately, but maybe someone else here can. Thanks again!
You can sort the columns and divide+round the inverse indexes:
function Y = levels(X,nLevels)
% "Assign each of the elements of X to an integer-valued level"
[S,IX]=sort(X);
[grid1,grid2]=ndgrid(1:size(IX,1),1:size(IX,2));
invIX=zeros(size(X));
invIX(sub2ind(size(X),IX(:),grid2(:)))=grid1;
Y=ceil(invIX/size(X,1)*nLevels);
Or you can use tiedrank:
function Y = levels(X,nLevels)
% "Assign each of the elements of X to an integer-valued level"
R=tiedrank(X);
Y=ceil(R/size(X,1)*nLevels);
Surprisingly, both these solutions are slightly slower than the quantile+histc solution.