how to solve a very large overdetermined system of linear equations? - matlab

I am doing a project about image processing, and I need to solve the following set of equations:
Nx+Nz*( z(x+1,y)-z(x,y) )=0
Ny+Nz*( z(x,y+1)-z(x,y) )=0
and equations of the boundary (bottom and right side of the image):
Nx+Nz*( z(x,y)-z(x-1,y) )=0
Ny+Nz*( z(x,y)-z(x,y-1) )=0
where Nx, Ny, Nz are the components of the surface normal at the corresponding pixel and are already known. Now the problem is that (x,y) are pixel coordinates of an image, which typically has a size of, say, x=300 and y=200, so there are 300*200 = 60000 unknowns. I rewrite the equations in the form Mz = b, where M has size 120000x60000, z is a vector of length 60000 and b is a vector of length 120000. When I solve it with scipy.linalg.lstsq in Python, I run into memory errors.
I notice that M is very sparse, as it has only two non-zero entries (1 and -1) in each row. However, I don't know how to exploit this to solve the problem. Any ideas how I can solve it more efficiently in MATLAB or Python?
In Python I found a library that provides the lsmr method (as mentioned by someone in the comments). Besides using this algorithm to solve Mz = b, I want to know whether I need to store M and b in a sparse format as well. Right now I create a very large all-zero array at the start, then use a for loop over each pixel to set the corresponding entries to 1 or -1, and then apply lsmr to solve Mz = b directly. Would it help to construct M and b in one of the sparse formats? Right now most of the time is spent solving Mz = b; constructing M and b and doing all the other preprocessing takes negligible time by comparison.
thanks
Edit: this is the Python code I use to generate the matrix M and the vector b. It should be correct, but I am not sure whether there are better ways of rewriting the system of linear equations.
eq_no = 0
for pix in range(total_pix):
    row, col = y[pix], x[pix]
    if index_array[row,col] >= 0: # confirm (x,y) is inside boundary
        # check x-direction
        if index_array[row,col+1] >= 0: # (x+1,y) is inside boundary
            M[eq_no,pix] = -1
            M[eq_no,pix+1] = 1
            b[eq_no,0] = -normal_array[row,col,0]/normal_array[row,col,2]
            eq_no += 1
        # check y-direction
        next_pix = index_array[row+1,col]
        if next_pix >= 0:
            M[eq_no,pix] = -1
            M[eq_no,next_pix] = 1
            b[eq_no,0] = -normal_array[row,col,1]/normal_array[row,col,2]
            eq_no += 1
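For reference, a minimal MATLAB sketch of the sparse approach (the question asks about either language; in Python the corresponding pieces are scipy.sparse.coo_matrix and scipy.sparse.linalg.lsmr). Names such as num_eq and num_px are illustrative, not taken from the code above:
% collect (row, column, value) triplets instead of filling a dense M;
% each equation contributes exactly two nonzeros (+1 and -1)
rows = zeros(2*num_eq, 1);   % num_eq = number of equations (~120000 here)
cols = zeros(2*num_eq, 1);   % num_px = number of pixels / unknowns (~60000 here)
vals = zeros(2*num_eq, 1);
% ... fill rows/cols/vals (and b) with the same per-pixel loop as above ...
M = sparse(rows, cols, vals, num_eq, num_px);   % stores only the ~2*num_eq nonzeros
z = lsqr(M, b, 1e-8, 1000);                     % iterative sparse least squares
% z = M \ b;                                    % sparse QR least squares also works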


FFT of a real symmetric vector is not real and symmetric

I am having a hard time understanding what should be a simple concept. I have constructed a vector in MATLAB that is real and symmetric. When I take the FFT in MATLAB, the result has a significant imaginary component, even though the symmetry rules of the Fourier transform say that the FT of a real symmetric function should also be real and symmetric. My example code:
N = 1 + 2^8;
k = linspace(-1,1,N);
V = exp(-abs(k));
Vf1 = fft(fftshift(V));
Vf2 = fft(ifftshift(V));
Vf3 = ifft(fftshift(V));
Vf4 = ifft(ifftshift(V));
Vf5 = fft(V);
Vf6 = ifft(V);
disp([isreal(Vf1) isreal(Vf2) isreal(Vf3) isreal(Vf4) isreal(Vf5) isreal(Vf6)])
Result:
0 0 0 0 0 0
No combination of (i)fft and (i)fftshift results in a real symmetric vector. I've tried with both even and odd N (N = 2^8 vs. N = 1+2^8).
I did try looking at k+flip(k) and there are some residuals on the order of eps(1), but the residuals are also symmetric and the imaginary part of the FFT is not coming out as fuzz on the order of eps(1), but rather with magnitude comparable to the real part.
What blindingly obvious thing am I missing?
Blindingly obvious thing I was missing:
The FFT is not an integral over all space, so it assumes a periodic signal. Above, I am duplicating the last point in the period when I choose an even N, and so there is no way to shift it around to put the zero frequency at the beginning without fractional indexing, which does not exist.
A word about my choice of k. It is not arbitrary. The actual problem I am trying to solve is to generate a model FTIR interferogram which I will then FFT to get a spectrum. k is the distance that the interferometer travels which gets transformed to frequency in wavenumbers. In the real problem there will be various scaling factors so that the generating function V will yield physically meaningful numbers.
It's
Vf = fftshift(fft(ifftshift(V)));
That is, you need ifftshift in the time domain so that the samples are interpreted as those of a symmetric function, and then fftshift in the frequency domain to make the symmetry apparent again.
This only works for N odd. For N even, the concept of a symmetric function does not make sense: there is no way to shift the signal so that it is symmetric with respect to the origin (the origin would need to be "between two samples", which is impossible).
For your example V, the above code gives a Vf that is real and symmetric. A figure generated with semilogy(Vf), so that small as well as large values can be seen, confirms this. (Of course, you could relabel the horizontal axis so that the graph is centered at 0 frequency, as it should be; but either way the graph is symmetric.)
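As a quick check of this recipe (a sketch, using an odd number of samples on a grid that has a sample exactly at k = 0 and does not duplicate the endpoint of the period):
N  = 2^8 + 1;                      % odd number of samples
k  = 2*(-(N-1)/2:(N-1)/2)/N;       % symmetric grid, sample at 0, spacing 2/N, no duplicated endpoint
V  = exp(-abs(k));
Vf = fftshift(fft(ifftshift(V)));
disp(max(abs(imag(Vf))))           % only a roundoff-level imaginary part remains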
@Yvon is absolutely right with his comment about symmetry. Your input signal looks symmetric, but it isn't, because symmetry is defined relative to the origin 0.
Using linspace in Matlab for constructing signals is mostly a bad choice.
Trying to repair the results with fftshift is a bad idea too.
Use instead:
k = 2*(0:N-1)/N - 1;
and you will get the result you expect.
However the imaginary part of the transformed values will not be perfectly zero.
There is some numerical noise.
>> max(abs(imag(Vf5)))
ans =
2.5535e-15
Answer to Yvon's question:
Why?
>> N = 1+2^4
N = 17
>> x = linspace(-1,1,N)
x = -1.0000 -0.8750 -0.7500 -0.6250 -0.5000 -0.3750 -0.2500 -0.1250 0 0.1250 0.2500 0.3750 0.5000 0.6250 0.7500 0.8750 1.0000
>> y = 2*(0:N-1)/N - 1
y = -1.0000 -0.8824 -0.7647 -0.6471 -0.5294 -0.4118 -0.2941 -0.1765 -0.0588 0.0588 0.1765 0.2941 0.4118 0.5294 0.6471 0.7647 0.8824
– Yvon
Your example is not a symmetric (even) function, but an antisymmetric (odd) function. However, this makes no difference.
For an antisymmetric function of length N the following statement is true:
f[i] == -f[-i] == -f[N-i]
The index i runs from 0 to N-1.
Let us see what happens with i=2. Remember, counting starts at 0 and ends at 16.
x[2] = -0.75
-x[N-2] == -x[17-2] == -x[15] = -(0.875) = -0.875
x[2] != -x[N-2]
y[2] = -0.7647
-y[N-2] == -y[15] = -(0.7647) = -0.7647
y[2] == -y[N-2]
The problem is that the indexing of MATLAB vectors starts at 1, whereas modulo (periodic) indexing starts at origin 0.
This difference leads to many misunderstandings.
Another way of explaining why linspace(-1,+1,N) is not correct:
Imagine you have a vector which holds a single period of a periodic function,
for instance a cosine. This single period is one of an infinite number of periods.
The first value of your cosine vector must not be the same as the last value of your vector.
However, that is exactly what linspace(-1,+1,N) does.
Doing so results in a sequence where the last value of period 1 is the same as the first value of the following period 2. That is not what you want.
To avoid this mistake use t = 2*(0:N-1)/N - 1. The spacing t[i+1]-t[i] is 2/N and the last value has to be t[N-1] = 1 - 2/N, not 1.
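A quick numerical illustration of this (a sketch with a small N):
N = 8;
t = 2*(0:N-1)/N - 1        % -1.0000 -0.7500 -0.5000 -0.2500 0 0.2500 0.5000 0.7500
% the next grid point, t(end) + 2/N = 1, is the same point of the period as t(1) = -1,
% so it is deliberately not included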
Answer to Yvon's second comment
Whatever you put into the input vector of a DFT/FFT is, by theory, interpreted as one period of a periodic function.
But that is not the point.
The DFT performs an integration:
X(m) = Sum_{k=0}^{N-1} x(k) * exp(-i*2*pi*m*k/N)
The first value x(k=0) describes the amplitude of the first integration interval of length 1/N. The second value x(k=1) describes the amplitude of the second integration interval of length 1/N. And so on.
The very last integration interval of the symmetric function ends with the same value as the first sample. This means that the starting point of the last integration interval is at index k = N-1, i.e. at position 1 - 1/N (in units where the period is 1). Your input vector holds the starting points of the integration intervals.
Therefore the point of symmetry at position 1 (index k = N) is a point of the function, but it is not the starting point of an integration interval, and so it is not a member of the input vector.
You have a problem when implementing the concept "symmetry". A purely real, even (or "symmetric") function has a Fourier transform that is also real and even. "Even" is the symmetry with respect to the y-axis, or the t=0 line.
When implementing a signal in MATLAB, however, you always start from t=0. That is, there is no way to "define" the signal as starting before the origin of time.
Searching the Internet for a while led me to this:
Correct use of fftshift and ifftshift at input to fft and ifft.
As Luis has pointed out, you need to perform ifftshift before feeding the signal into fft. The reason has never been documented in MATLAB's documentation, only in that thread. For historical reasons, the outputs AND inputs of fft and ifft are swapped: instead of being ordered from -N/2 to N/2-1 (the natural order), the signal in the time or frequency domain is ordered from 0 to N/2-1 and then from -N/2 to -1. That means the correct way to code it is fft( ifftshift(V) ), but most people ignore this most of the time. The reason it gets silently ignored rather than causing huge problems is that most of the attention goes to the amplitude of the signal, not the phase. Since a circular shift does not affect the amplitude spectrum, this is not a problem (even for the MATLAB folks who wrote the documentation).
To check the amplitude equality -
Vf2 = fft(ifftshift(V));
Vf5 = fft(V);
Va2 = abs(fftshift(Vf2));
Va5 = abs(fftshift(Vf5));
>> min(abs(Va2-Va5)<1e-10)
ans =
1
To see how badly wrong the phase is -
Vp2 = angle(fftshift(Vf2));
Vp5 = angle(fftshift(Vf5));
Anyway, as I wrote in the comment, after copy-pasting your code into a fresh and clean MATLAB session, it gives 0 1 0 1 0 0.
To your question about even vs. odd N: my opinion is that when N is even, the signal is not symmetric, since there is an unequal number of points on either side of the time origin.
Just add the following line after "k = linspace(-1,1,N);":
k(end)=[];
It removes the last element of the array, so the sampled period no longer duplicates its endpoint and the array is symmetric in the DFT sense.
Also consider that isreal(complex(1,0)) is false!
The isreal function only checks the storage format, so 1+0i is not considered real in the example above.
You have to define your own function to check for (numerically) real numbers, like this:
myisreal=@(x) all((abs(imag(x))<1e-6*abs(real(x)))|(abs(x)<1e-8));
Finally, your source code should become something like this:
N = 1 + 2^8;
k = linspace(-1,1,N);
k(end)=[];
V = exp(-abs(k));
Vf1 = fft(fftshift(V));
Vf2 = fft(ifftshift(V));
Vf3 = ifft(fftshift(V));
Vf4 = ifft(ifftshift(V));
Vf5 = fft(V);
Vf6 = ifft(V);
myisreal=@(x) all((abs(imag(x))<1e-6*abs(real(x)))|(abs(x)<1e-8));
disp([myisreal(Vf1) myisreal(Vf2) myisreal(Vf3) myisreal(Vf4) myisreal(Vf5) myisreal(Vf6)]);

Optimizing code, removing "for loop"

I'm trying to remove outliers from a tick data series, following Brownlees & Gallo (2006), in case you are interested.
The code works fine, but given that I'm working with really long vectors (the biggest has 20 million observations, and after 20 hours it was still not done computing) I was wondering how to speed it up.
What I did until now is:
I changed the time and date format to numeric double and I saw that it saves quite some time in processing and A LOT OF MEMORY.
I preallocated memory for the vectors:
n = size(price,1);
x = price;
score = nan(n,1); %using tic and toc I saw that nan requires less time than zeros
trimmed_mean = nan(n,1);
sd = nan(n,1);
out_mat = nan(n,1);
Here is the loop I'd love to remove. I read that vectorizing would speed things up a lot, especially with long vectors.
for i = k+1:n-k
    trimmed_mean(i) = trimmean(x([i-k:i-1, i+1:i+k]),10,'round'); %trimmed mean computed on the 2k observations closest to i (i itself is excluded)
    score(i) = x(i) - trimmed_mean(i);
    sd(i) = std(x([i-k:i-1, i+1:i+k])); %same neighbourhood as the mean
    tmp = abs(score(i)) > (alpha .* sd(i) + gamma);
    out_mat(i) = tmp*1;
end
Here is what I was trying to do:
trimmed_mean = trimmean(regroup_matrix,10,'round',2);
score = bsxfun(@minus,x,trimmed_mean);
sd = std(regroup_matrix,0,2);
temp = abs(score) > (alpha .* sd + gamma);
out_mat = temp*1;
But given that I'm totally new to Matlab, I don't know how to properly construct the matrix of neighbouring observations. I just think it should be shaped like regroup_matrix = nan(n, 2*k).
EDIT: To be specific, what I am trying to do (and am not able to) is: given a column vector x of size (n,1), for each observation i in x I want to take the k neighbouring observations on each side of i (from i-k to i-1 and from i+1 to i+k) and put them as the rows of an (n, 2*k) matrix.
EDIT 2: I made a few changes to the code and I think I am getting closer to the solution. I posted another question specific to what I think is the problem now:
Matlab: Filling up matrix rows using moving intervals from a column vector without a for loop
What I am trying to do now is:
[n] = size(price,1);
x = price;
[j1]=find(x);
matrix_left=zeros(n, k,'double');
matrix_right=zeros(n, k,'double');
toc
matrix_left(j1(k+1:end),:)=x(j1-k:j1-1);
matrix_right(j1(1:end-k),:)=x(j1+1:j1+k);
matrix_group=[matrix_left matrix_right];
trimmed_mean=trimmean(matrix_group,10,'round',2);
score=bsxfun(@minus,x,trimmed_mean);
sd=std(matrix_group,0,2);
temp = abs(score) > (alpha .* sd + gamma);
outmat = temp*1;
I have problems with the creation of matrix_left and matrix_right.
j1, which I am using for indexing, is a column vector with the indices of price's observations. The output is simply
j1=[1:1:n]
price is a column vector of double with size(n,1)
For your reshape, you can do the following:
idxArray = bsxfun(@plus,(k:n)',[-k:-1,1:k]);
reshapedArray = x(idxArray);
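As a minimal sketch of how such an index matrix plugs into the vectorised statistics (assuming x, n, k, alpha and gamma as defined in the question; the first and last k observations are simply skipped here rather than padded):
idxArray = bsxfun(@plus, (k+1:n-k)', [-k:-1, 1:k]);   % neighbour indices, i itself excluded
neigh = x(idxArray);                                  % (n-2k)-by-2k matrix of neighbouring observations
tm  = trimmean(neigh, 10, 'round', 2);                % row-wise 10% trimmed mean
s   = std(neigh, 0, 2);                               % row-wise standard deviation
out = abs(x(k+1:n-k) - tm) > (alpha .* s + gamma);    % outlier flags for the interior observations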
Thanks to Jonas, who showed me the way to go, I came up with this:
idxArray_left=bsxfun(@plus,(k+1:n)',[-k:-1]); %matrix with index of left neighbours observations
idxArray_fill_left=bsxfun(@plus,(1:k)',[1:k]); %for observations from 1:k I take the right neighbouring observations, this way when computing mean and standard deviations there will be no problems.
matrix_left=[idxArray_fill_left; idxArray_left]; %Just join the two matrices and I have the complete matrix of left neighbours
idxArray_right=bsxfun(@plus,(1:n-k)',[1:k]); %same thing as left but opposite.
idxArray_fill_right=bsxfun(@plus,(n-k+1:n)',[-k:-1]);
matrix_right=[idxArray_right; idxArray_fill_right];
idx_matrix=[matrix_left matrix_right]; %complete index matrix, joining left and right indices
neigh_matrix=x(idx_matrix); %exactly as proposed by Jonas, I fill up a matrix of observations from 'x', following idx_matrix indexing
trimmed_mean=trimmean(neigh_matrix,10,'round',2);
score=bsxfun(@minus,x,trimmed_mean);
sd=std(neigh_matrix,0,2);
temp = abs(score) > (alpha .* sd + gamma);
outmat = temp*1;
Again, thanks a lot to Jonas. You really made my day!
Thanks also to everyone who had a look at the question and tried to help!

matlab - optimize getting the angle between each vector with all others in a large array

I am trying to get the angle between every vector in a large array (1896378x4 - EDIT: this means I need 1.7981e+12 angles... TOO LARGE, but if there's room to optimize the code, let me know anyway). It's too slow - I haven't seen it finish yet. Here are the steps towards optimizing that I've taken:
First, logically what I (think I) want (just use Bt=rand(N,4) for testing):
[ro,col]=size(Bt);
angbtwn = zeros(ro-1); %too long to compute!! total non-zero = ro*(ro-1)/2
count=1;
for ii=1:ro-1
for jj=ii+1:ro
angbtwn(count) = atan2(norm(cross(Bt(ii,1:3),Bt(jj,1:3))), dot(Bt(ii,1:3),Bt(jj,1:3))).*180/pi;
count=count+1;
end
end
So, I thought I'd try to vectorize it, and get rid of the non-built-in functions:
[ro,col]=size(Bt);
% angbtwn = zeros(ro-1); %TOO LONG!
for ii=1:ro-1
allAxes=Bt(ii:ro,1:3);
repeachAxis = allAxes(ones(ro-ii+1,1),1:3);
c = [repeachAxis(:,2).*allAxes(:,3)-repeachAxis(:,3).*allAxes(:,2)
repeachAxis(:,3).*allAxes(:,1)-repeachAxis(:,1).*allAxes(:,3)
repeachAxis(:,1).*allAxes(:,2)-repeachAxis(:,2).*allAxes(:,1)];
crossedAxis = reshape(c,size(repeachAxis));
normedAxis = sqrt(sum(crossedAxis.^2,2));
dottedAxis = sum(repeachAxis.*allAxes,2);
angbtwn(1:ro-ii+1,ii) = atan2(normedAxis,dottedAxis)*180/pi;
end
angbtwn(1,:)=[]; %angle btwn vec and itself
%only upper left triangle are values...
Still too long, even to pre-allocate... So I tried sparse, but couldn't implement it right:
[ro,col]=size(Bt);
%spalloc:
angbtwn = sparse([],[],[],ro,ro,ro*(ro-1)/2);%zeros(ro-1); %cell(ro,1)
for ii=1:ro-1
...same
angbtwn(1:ro-ii+1,ii) = atan2(normedAxis,dottedAxis)*180/pi; %WARNED: indexing = >overhead
% WHAT? Can't index sparse?? what's the point of spalloc then?
end
So if my logic can be improved, or if sparse is really the way to go, and I just can't implement it right, let me know where to improve. THANKS for your help.
Are you trying to get the angle between every pair of vectors in Bt? If Bt has 2 million vectors, that's about a trillion pairs, each (apparently) requiring an inner product to get the angle between them. I don't know that any kind of optimization is going to make this operation finish in a reasonable amount of time in MATLAB on a single machine.
In any case, you can turn this problem into a matrix multiplication between matrices of unit vectors:
N=1000;
Bt=rand(N,4); % for testing. A matrix of N (row) vectors of length 4.
[ro,col]=size(Bt);
magnitude = zeros(N,1); % the magnitude of each row vector.
units = zeros(size(Bt)); % the unit vectors
% Compute the unit vectors for the row vectors
for ii=1:ro
magnitude(ii) = norm(Bt(ii,:));
units(ii,:) = Bt(ii,:) / magnitude(ii);
end
angbtwn = acos(units * units') * 360 / (2*pi);
But you'll run out of memory during the matrix multiplication for largish N.
You might want to use pdist with the 'cosine' distance, which computes 1-cos(angbtwn).
Another perk of this approach is that it does not compute n^2 values but exactly n*(n-1)/2 unique values :)
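A sketch of that approach (pdist is in the Statistics Toolbox; it normalises internally, so it can be fed the first three components directly, as in the original loop):
cosDist = pdist(Bt(:,1:3), 'cosine');            % 1 - cos(angle), one value per unique pair
angbtwn = acosd(min(max(1 - cosDist, -1), 1));   % angles in degrees, clamped against roundoff
% squareform(angbtwn) expands this to the full symmetric matrix, if it fits in memory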

Solving a system of equations using Python/Scipy for a set of measurements

I have a physical measurement instrument (a force platform with load cells) which gives me three values, A, B and C. It happens, though, that these values - which should be orthogonal - are actually somewhat coupled, due to physical characteristics of the measuring device, which cause cross-talk between applied and returned values of force and torque.
Then, it is recommended that a calibration matrix be used to transform the measured values into a better estimate of the actual values, i.e. actual(Fz, Mx, My) ≈ C * measured(Fz, Mx, My) for some 3x3 calibration matrix C.
The problem is that it is necessary to perform a SET of measurements, so that the different measured(Fz, Mx, My) and actual(Fz, Mx, My) triples are least-squared together to get a C matrix that works best for the system as a whole.
I can solve Ax = b problems with scipy.linalg.lstsq, or even scipy.linalg.solve (which gives an exact solution), for ONE measurement, but how should I proceed to consider a set of different measurements, each of which, on its own, would yield a potentially different 3x3 matrix?
Any help is much appreciated, thanks for reading.
I posted a similar question containing just the mathematical part of this at math.stackexchange.com, and this answer solved the problem:
math.stackexchange.com/a/232124/27435
In short, the nine entries of the unknown 3x3 matrix are treated as a single unknown vector; each measurement contributes a 3x9 block (built as expand below), and stacking the blocks from all measurements gives one tall least-squares problem.
In case anyone has a similar problem in the future, here is an almost literal Scipy implementation of that answer (the first lines are initialization boilerplate):
import numpy
import scipy.linalg
### Origin of the coordinate system: upper left corner!
"""
1----------2
| |
| |
4----------3
"""
platform_width = 600
platform_height = 400
# positions of each load cell (one per corner)
loadcell_positions = numpy.array([[0, 0],
[platform_width, 0],
[platform_width, platform_height],
[0, platform_height]])
platform_origin = numpy.array([platform_width, platform_height]) * 0.5
# applying a known force at known positions and taking the measurements
measurements_per_axis = 5
total_load = 50
results = []
for x in numpy.linspace(0, platform_width, measurements_per_axis):
    for y in numpy.linspace(0, platform_height, measurements_per_axis):
        position = numpy.array([x,y])
        for loadpos in loadcell_positions:
            moments = platform_origin-loadpos * total_load
            load = numpy.array([total_load])
            result = numpy.hstack([load, moments])
            results.append(result)
results = numpy.array(results)
noise = numpy.random.rand(*results.shape) - 0.5
measurements = results + noise
# now build, for each measurement, a 3x9 block ("stuffing" the measurement into it)
# so that the nine entries of the 3x3 matrix become the unknowns of one linear system
expands = []
for n in xrange(measurements.shape[0]):
    k = results[n,:]
    m = measurements[n,:]
    expand = numpy.zeros((3,9))
    expand[0,0:3] = m
    expand[1,3:6] = m
    expand[2,6:9] = m
    expands.append(expand)
expands = numpy.vstack(expands)
# perform the actual regression
C = scipy.linalg.lstsq(expands, measurements.reshape((-1,1)))
C = numpy.array(C[0]).reshape((3,3))
# the result with pure noise (not actual coupling) should be
# very close to a 3x3 identity matrix (and is!)
print C
Hope this helps someone!

Rectifying compute_curvature.m error in Toolbox Graph in Matlab

I am currently using the Toolbox Graph from the MATLAB File Exchange to calculate curvature on 3D surfaces and find it very helpful (http://www.mathworks.com/matlabcentral/fileexchange/5355). However, the following error message is issued in "compute_curvature" for certain surface descriptions, and the code fails to run to completion:
> Error in ==> compute_curvature_mod at 75
> dp = sum( normal(:,E(:,1)) .* normal(:,E(:,2)), 1 );
> ??? Index exceeds matrix dimensions.
This happens only sporadically, but there is no obvious reason why the toolbox works perfectly fine for some surfaces and not for others (of a similar topology). I also noticed that someone had asked about this bug back in November 2009 on File Exchange, but that the question had gone unanswered. The post states
"compute_curvature will generate an error on line 75 ("dp = sum(
normal(:,E(:,1)) .* normal(:,E(:,2)), 1 );") for SOME surfaces. The
error stems from E containing indices that are out of range which is
caused by line 48 ("A = sparse(double(i),double(j),s,n,n);") where A's
values eventually entirely make up the E matrix. The problem occurs
when the i and j vectors create the same ordered pair twice in which
case the sparse function adds the two s vector elements together for
that matrix location resulting in a value that is too large to be used
as an index on line 75. For example, if i = [1 1] and j = [2 2] and s
= [3 4] then A(1,2) will equal 3 + 4 = 7.
The i and j vectors are created here:
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
Just wanted to add that the error I mentioned is caused by the
flipping of the sign of the surface normal of just one face by
rearranging the order of the vertices in the face matrix"
I have tried debugging the code myself but have not had any luck. I am wondering if anyone here has solved the problem or could give me insight – I need the code to be sufficiently general-purpose in order to calculate curvature for a variety of surfaces, not just for a select few.
The November 2009 bug report on File Exchange traces the problem back to the behavior of sparse:
S = SPARSE(i,j,s,m,n,nzmax) uses the rows of [i,j,s] to generate an
m-by-n sparse matrix with space allocated for nzmax nonzeros. The
two integer index vectors, i and j, and the real or complex entries
vector, s, all have the same length, nnz, which is the number of
nonzeros in the resulting sparse matrix S . Any elements of s
which have duplicate values of i and j are added together.
The lines of code where the problem originates are here:
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
s = [1:m 1:m 1:m];
A = sparse(i,j,s,n,n);
Based on this information, removal of the repeated indices, presumably using unique or something similar, might solve the problem:
[B,I,J] = unique([i.' j.'],'rows');
i = B(:,1).';
j = B(:,2).';
s = s(I);
The full solution may look something like this:
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
s = [1:m 1:m 1:m];
[B,I,J] = unique([i.' j.'],'rows');
i = B(:,1).';
j = B(:,2).';
s = s(I);
A = sparse(i,j,s,n,n);
Since I do not have a detailed understanding of the algorithm it is hard to tell whether the removal of entries will have a negative effect.
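As a small aside, a quick way to check in advance whether a given mesh will trigger the duplicate-pair problem described above (a sketch; face is the 3-by-m face list used by the toolbox):
i = [face(1,:) face(2,:) face(3,:)];
j = [face(2,:) face(3,:) face(1,:)];
nDup = numel(i) - size(unique([i.' j.'],'rows'),1);   % number of repeated directed edges
if nDup > 0
    warning('%d duplicate directed edges found; compute_curvature may fail on this mesh.', nDup);
end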