Why does the matrix inverse computed by incremental updating not match the pseudo-inverse? Matrices 'X' and 'r' are attached via the link below [duplicate] - matlab

I have a matrix D of size m*n and I am computing its pseudo-inverse using the formula inv(D'*D)*D', but it does not produce the same result as pinv(D). I need the term inv(D'*D) for an incremental operation; all of my accuracy depends on inv(D'*D), which is not coming out correct. Is there an alternative way to compute inv(D'*D) accurately? Can anyone help me, please?
% D is a 3x4 matrix that I copied from a blog just for demonstration purposes. My original matrix has the same problem, but it is too large to post here.
D = -[1/sqrt(2) 1 1/sqrt(2) 0;0 1/sqrt(2) 1 1/sqrt(2);-1/sqrt(2) 0 1/sqrt(2) 1];
B1 = pinv(D)
B2 = D'*inv(D*D')
B1 =
-0.353553390593274 0.000000000000000 0.353553390593274
-0.375000000000000 -0.176776695296637 0.125000000000000
-0.176776695296637 -0.250000000000000 -0.176776695296637
0.125000000000000 -0.176776695296637 -0.375000000000000
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND =
1.904842e-017.
B2 =
-0.250000000000000 0 0.500000000000000
-0.500000000000000 0 0
0.250000000000000 -0.500000000000000 0
0 0 -0.750000000000000
I need inv(D'*D) for an incremental operation. In my problem, at step 1 a new row is appended to the bottom of D, and at step 2 the first row of D is removed. So I want to find the inverse of the final D using the inverse I computed before these two steps. More precisely, have a look here:
B = inv(D'*D); % if I can compute this accurately, the rest is as follows
D1 = [D;Lr]; % Lr is the new row to be appended
BLr = B-((B*Lr'*Lr*B)/(1+Lr*B*Lr')); % row-addition (Sherman-Morrison) update
Fr = D1(1,:); % first row, to be removed
D2 = removerows(D1,1);
BFr = BLr+((BLr*Fr'*Fr*BLr)/(1-Fr*BLr*Fr')); % row-deletion (downdate) formula
B = BFr;
Y = BFr*D2;

The formulas (D^T D)^-1 D^T and D^T (D D^T)^-1 that you are using for the Moore-Penrose pseudoinverse are only valid if D has full column rank or full row rank, respectively.
That is not true in your case, as the warning "Matrix is close to singular" shows.
The MATLAB pinv command works for arbitrary D, even if the matrix has neither full row nor full column rank.
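A quick check on the example matrix (a small sketch, assuming D from the question is in the workspace):
r = rank(D)   % returns 2 for the example D
% full row rank would require r == size(D,1) (= 3)
% full column rank would require r == size(D,2) (= 4)
% since neither holds, neither closed-form formula applies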

Try running cond(D) on your matrix and see what the condition number is. The higher the number, the more ill-conditioned your matrix is. Similarly, you can run cond(D'*D). A matrix can be full rank and still be ill-conditioned. On paper, an ill-conditioned matrix is still invertible. However, when you attempt to invert an ill-conditioned matrix directly on a computer, small precision errors caused by quantization and other effects can cause wildly unpredictable results in the solution.
For the above-stated reason, there is usually a better (more numerically stable) way to achieve what you are after than computing the inverse directly. Many of these involve matrix decomposition techniques such as the SVD. If you help us understand why you need inv(D'*D), it will be easier to point you towards the appropriate alternative. For example, if you just need the pseudo-inverse, go ahead and use pinv(), even though it differs from your result using inv(). The pinv() function and the \ (mldivide) backslash operator are much more numerically stable tools than inv().
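To put numbers on this (a small sketch; exact values depend on platform and version):
cond(D)      % huge for the example D, since D is rank-deficient
cond(D'*D)   % roughly cond(D)^2: forming D'*D squares the condition number
The squaring is the key point: even when D itself is reasonably conditioned, inv(D'*D) operates on a matrix whose condition number is the square of cond(D).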

See the official documentation at http://www.mathworks.com/help/matlab/ref/pinv.html.
If A*x ~ b, the solution x = pinv(A)*b is the minimum-norm solution, while x = A\b generally is not. See the numerical example at the link above.
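A tiny underdetermined example of that difference (a small illustration of my own, not taken from the linked page):
A = [1 2 3]; b = 6;
x1 = pinv(A)*b   % minimum-norm solution, [3/7; 6/7; 9/7]
x2 = A\b         % a basic solution, typically [0; 0; 2]
norm(x1)         % about 1.60, smaller than norm(x2) == 2
Both solve A*x == b exactly here; pinv picks the solution of smallest norm.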

Related

3D matrix indexing using a 2D matrix

Could anyone shed some light on how this for loop can be replaced by a single command in MATLAB?
for i = 1 : size(w,3)
    x = w(:,:,i);
    w1(i,:) = x(B(i),:);
end
clear x
Here, w is a 3D (x by y by z) matrix and B (1 by z) is a vector containing the row to pick from each layer of w. This for loop takes about 150 seconds to execute when w is 500000 layers deep. I tried using:
Q = w(B,:,:);
Q = reshape(Q(1,:),[500000,2])';
This creates a matrix Q of size 500000 x 2 x 500000, and MATLAB threw an out-of-memory error. Any help would be appreciated!
You are creating intermediate variables (such as x) and using a for loop. The core idea of the following approach is to first pre-populate the indices used and then use linear indexing to access all the elements at once. Then, we can reshape to get the desired result.
ind = [B(1)*ones(size(w,2),1) (1:size(w,2)).' 1*ones(size(w,2),1)];
ind = [ind; [B(2)*ones(size(w,2),1) (1:size(w,2)).' 2*ones(size(w,2),1)]];
ind = [ind; [B(3)*ones(size(w,2),1) (1:size(w,2)).' 3*ones(size(w,2),1)]];
lin_ind = sub2ind(size(w), ind(:,1), ind(:,2), ind(:,3));
w1 = reshape(w(lin_ind),size(w,2),size(w,3)).'
On my system, this matches w1 computed with the loop given in your question. Note that you may need a for loop to pre-populate the indices; I wrote three explicit expressions only because I was experimenting with small matrices. The first three lines can in fact be written so that you don't need loops at all and the code still works for any size. I will leave that up to you.
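For reference, here is one loop-free way to build the same indices (a sketch of my own, not the answerer's code; it assumes w is x-by-y-by-z and B is 1-by-z, as in the question):
ny = size(w,2);  nz = size(w,3);
cols  = repmat((1:ny).', nz, 1);      % column index: 1..ny repeated per page
pages = kron((1:nz).', ones(ny,1));   % page index: ny copies of each page number
rows  = B(pages);                     % the row chosen for each page
lin_ind = sub2ind(size(w), rows(:), cols, pages);
w1 = reshape(w(lin_ind), ny, nz).';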

Matlab: Comparing two vectors with different lengths and different values?

Let's say I have two vectors A and B with different lengths (length(A) is not equal to length(B)), and the values in vector A are not the same as in vector B. I want to compare each value of B with the values of A, where "compare" means checking whether B(i) is almost the same as some value in A(1:end), for example B(i)-Tolerance < A(j) < B(i)+Tolerance.
How can I do this without using a for loop, since the data is huge?
I know ismember, intersect, repmat and find, but none of those functions really helps me here.
You may try a solution along these lines:
tol = 0.1;
N = 1000000;
a = randn(1, N)*1000;     % create a random vector a
b = a + tol*rand(1, N);   % b is at most "tol" away from a
a_bin = floor(a/tol);
b_bin = floor(b/tol);
result = ismember(b_bin, a_bin) | ...
         ismember(b_bin, a_bin-1) | ...
         ismember(b_bin, a_bin+1);
find(result==0)           % should be an empty matrix
The idea is to discretize the a and b variables to bins of size tol. Then, you ask whether b is found in the same bin as any element from a, or in the bin to the left of it, or in the bin to the right of it.
Advantages: I believe ismember is clever inside, first sorting the elements of a and then performing a sublinear (log(N)) search per element of b. This is unlike approaches that explicitly construct the differences of each element in b with the elements of a, where the complexity is linear in the number of elements of a.
Comparison: for N=100000 this runs in 0.04 s on my machine, compared to 20 s using linear search (timed using Alan's nice and concise tf = arrayfun(@(bi) any(abs(a - bi) < tol), b); solution).
Disadvantages: the effective tolerance ends up anywhere between tol and 1.5*tol. Whether you can live with that depends on your task (if the only concern is floating-point comparison, you can).
Note: whether this is a viable approach depends on the ranges of a and b and on the value of tol. If a and b can be very big and tol is very small, a_bin and b_bin will not be able to resolve individual bins (you would then have to work with integral types, again checking carefully that their ranges suffice). The solution with loops is the safer one, but if you really need speed, you can invest in optimizing the presented idea. Another option, of course, would be to write a mex extension.
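A minimal sketch of the integral-type variant mentioned above (my addition; it assumes the scaled values fit into int64):
a_bin = int64(floor(a/tol));   % note: conversion still goes through double,
b_bin = int64(floor(b/tol));   % so extreme ranges need more care
result = ismember(b_bin, a_bin) | ...
         ismember(b_bin, a_bin-1) | ...
         ismember(b_bin, a_bin+1);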
It sounds like what you are trying to do is have an ismember function for use on real valued data.
That is, check for each value B(i) in your vector B whether B(i) is within the tolerance threshold T of at least one value in your vector A.
This works out something like the following:
tf = false(1, length(b)); % the result vector: true if that element of b is in a
t = 0.01;                 % the tolerance threshold
for i = 1:length(b)
    % is the absolute difference between b(i) and any element of a
    % less than the threshold?
    matches = abs(a - b(i)) < t;
    % b(i) counts as found if it matches any of the elements of a
    tf(i) = any(matches);
end
Or, in short:
t = 0.01;
tf = arrayfun(@(bi) any(abs(a - bi) < t), b);
Regarding avoiding the for loop: while this might benefit from vectorization, you may also want to consider looking at parallelisation if your data is that huge. In that case having a for loop as in my first example can be handy since you can easily do a basic version of parallel processing by changing the for to parfor.
Here is a fully vectorized solution. Note that I would actually recommend the solution given by @Alan, as mine is not likely to work for big datasets.
[X, Y] = meshgrid(A, B);
M = abs(X - Y) < tolerance;
Now the logical index of the elements of A that are within the tolerance can be obtained with any(M), and the index for B is found by any(M,2).
bsxfun to the rescue
>> M = abs( bsxfun(@minus, A, B') );  % difference
>> M < tolerance
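With that M, the per-vector indices follow just as in the meshgrid answer (here rows of M correspond to elements of B and columns to elements of A):
idxA = any(M < tolerance, 1);   % elements of A close to some element of B
idxB = any(M < tolerance, 2);   % elements of B close to some element of A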
Another way to do what you want is with a logical expression.
Since A and B are vectors of different sizes you can't simply subtract and look for values that are smaller than the tolerance, but you can do the following:
Lmat = sparse((abs(repmat(A,[numel(B) 1])-repmat(B',[1 numel(A)])))<tolerance);
and you will get a sparse logical matrix with as many ones in it as equal elements (within tolerance). You could then count how many of those elements you have by writing:
Nequal = sum(sum(Lmat));
You could also get the indexes of the corresponding elements by writing:
[r,c] = find(Lmat);
then the following will hold (for all j in 1:numel(r)):
abs(B(r(j)) - A(c(j))) < tolerance
Finally, you should note that this way you get multiple counts in case there are duplicate entries in A or in B. It may be advisable to use the unique function first. For example:
A_new = unique(A);

Matlab fast neighborhood operation

I have a problem. I have a matrix A with integer values between 0 and 5,
for example:
x = randi(5,10,10)
Now I want to apply a filter, size 3x3, which gives me the most common value in each window.
I have tried 2 solutions:
fun = #(z) mode(z(:));
y1 = nlfilter(x,[3 3],fun);
which takes very long...
and
y2 = colfilt(x,[3 3],'sliding',#mode);
which also takes long.
I have some really big matrices and both solutions take a long time.
Is there any faster way?
+1 to @Floris for the excellent suggestion to use hist. It's very fast. You can do a bit better though. hist is based on histc, which can be used instead. histc is a compiled function, i.e., not written in MATLAB, which is why the solution is much faster.
Here's a small function that attempts to generalize what @Floris did (that solution also returns a vector rather than the desired matrix) and achieve what you're doing with nlfilter and colfilt. It doesn't require the input to have particular dimensions and uses im2col to efficiently rearrange the data. In fact, the first three lines and the call to im2col are virtually identical to what colfilt does in your case.
function a = intmodefilt(a,nhood)
% sliding-window mode filter for integer-valued matrices
[ma,na] = size(a);
% zero-pad so that the output has the same size as the input
aa(ma+nhood(1)-1,na+nhood(2)-1) = 0;
aa(floor((nhood(1)-1)/2)+(1:ma),floor((nhood(2)-1)/2)+(1:na)) = a;
% histogram each sliding neighborhood (one column of im2col's output);
% the index of the largest bin is the most common value
[~,a(:)] = max(histc(im2col(aa,nhood,'sliding'),min(a(:))-1:max(a(:))));
a = a-1;
Usage:
x = randi(5,10,10);
y3 = intmodefilt(x,[3 3]);
For large arrays, this is over 75 times faster than colfilt on my machine. Replacing hist with histc is responsible for a factor of two speedup. There is of course no input checking so the function assumes that a is all integers, etc.
Lastly, note that randi(IMAX,N,N) returns values in the range 1:IMAX, not 0:IMAX as you seem to state.
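As a side note (my addition, not part of the original answer): if values in 0:5 are actually what you need, randi accepts an [imin imax] range:
x = randi([0 5], 10, 10);   % values drawn from 0:5 rather than 1:5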
One suggestion would be to reshape your array so each 3x3 block becomes a column vector. If your initial array dimensions are divisible by 3, this is simple. If they aren't, you need to work a little bit harder. And you need to repeat this nine times, starting at different offsets into the matrix - I will leave that as an exercise.
Here is some code that shows the basic idea (using only functions available in FreeMat - I don't have Matlab on my machine at home...):
N = 100;
A = randi(0,5*ones(3*N,3*N));
B = reshape(permute(reshape(A,[3 N 3 N]),[1 3 2 4]), [ 9 N*N]);
hh = hist(B, 0:5); % histogram of each 3x3 block: bin with largest value is the mode
[mm mi] = max(hh); % mi will contain bin with largest value
figure; hist(B(:),0:5); title 'histogram of B'; % flat, as expected
figure; hist(mi-1, 0:5); title 'histogram of mi' % not flat?...
The strange thing, when you run this code, is that the distribution of mi is not flat, but skewed towards smaller values. When you inspect the histograms, you will see that this is because you will frequently have more than one bin with the "max" value in it. In that case, you get the first bin with the max count. This is obviously going to skew your results badly; something to think about. A much better filter might be a median filter - the one that has equal numbers of neighboring pixels above and below. That has a unique solution (while the mode can have up to four values for nine pixels - namely, four bins with two values each).
Something to think about.
Can't show you a mex example today (wrong computer); but there are ample good examples on the Mathworks website (and all over the web) that are quite easy to follow. See for example http://www.shawnlankton.com/2008/03/getting-started-with-mex-a-short-tutorial/
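One more option (my note, not from the original answer): if a median filter is acceptable, as suggested above, the Image Processing Toolbox provides it directly:
y_med = medfilt2(x, [3 3]);   % 3x3 sliding median filter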

How to generate a random Matlab vector with these constraints

I'm having trouble creating a random vector V in Matlab subject to the following set of constraints: (given parameters N,D, L, and theta)
The vector V must be N units long
The elements must have an average of theta
No 2 successive elements may differ by more than +/-10
D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or equation form are equally acceptable. Matlab is just a convenient prototyping tool for me, but the final algorithm will be in java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution from any standard distribution. I want a pseudo randomly generated sequence of values that satisfy the constraints given a parameter set.
The system I'm trying to approximate is a chain of N links of link length L where the end of the chain is D away from the other end in the direction of theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
I would start by creating a "valid" vector. That should be possible - say, compute it so that every entry has the same value.
Once you got that vector I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword - if the shuffle would violate one of your rules you just don't do it.
As transformations I come up with:
switch two entries
modify the value of one entry and adjust a second one to keep the 4th condition (theoretically you could just perturb two entries until the condition is fulfilled - but the chance of that happening is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. In theory you should be able to reach all valid vectors; in practice you could construct several "start" vectors so it won't take that long.
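A minimal accept/reject sketch of that loop (my illustration; it assumes a valid start vector V is already in hand and checks condition 4 to a tolerance rather than exactly):
for k = 1:200000                       % iteration count is a placeholder
    i = randi(N);  j = randi(N);
    d = 2*(rand - 0.5);                % opposite nudges preserve the mean
    Vtry = V;
    Vtry(i) = Vtry(i) + d;
    Vtry(j) = Vtry(j) - d;
    ok = all(abs(diff(Vtry)) <= 10) && ...            % condition 3
         abs(D - sum(L*cosd(Vtry - theta))) < 1e-6;   % condition 4
    if ok
        V = Vtry;                      % accept the shuffle, else reject it
    end
end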
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to a Wiener process, so I started with that; you can start with anything that is random yet reasonable. I then use it as the starting point for an optimization that tries to satisfy 2, 3 and 4. The closer your initial value is to a valid vector (satisfying all your conditions), the better the convergence.
function series = generate_series(D, L, N, theta)
s(1) = theta;
for i = 2:N
    s(i) = s(i-1) + randn(1,1);   % random-walk (Wiener-like) initial guess
end
f = @(x) objective(x, D, L, N, theta);
q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf);
[sf, val] = fminunc(f, s, q);
val
series = sf;

function value = objective(s, D, L, N, theta)
a = abs(mean(s) - theta);             % condition 2: average equals theta
b = abs(D - sum(L*cos(s - theta)));   % condition 4 (note: the question uses cosd)
c = 0;
for i = 2:N
    u = abs(s(i) - s(i-1));
    if u > 10                         % condition 3: successive difference
        c = c + u;
    end
end
value = a^2 + b^2 + c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?); see the questions from other commenters. You will still have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener process to act as an initialization.
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change in angle of 10 degrees (which I first convert to radians), such that the distance and direction from start to end and link length and number of links are specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and specified theta):
angles = zeros(N, 1);
for link = 2:N
    % each angle differs from the previous one by at most +/-5 degrees,
    % which keeps it within the +/-10-degree rule
    angles(link) = angles(link - 1) + (rand() - 0.5)*(10*pi/180);
end
Use genetic algorithm (or another optimization) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D_hat = sqrt(dx^2 + dy^2);    % distance actually achieved
theta_hat = atan2(dy, dx);    % direction actually achieved
The cost is now just the difference between the vector given by D_hat and theta_hat above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the max-change-of-10-degrees rule; perhaps a violation should simply make the cost function enormous? Perhaps there is a cleaner way to specify sequence constraints in optimization algorithms (I don't know how).
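One common way to do that (a sketch of my own; cost and angles refer to the cost-function variables above) is a quadratic penalty term:
violation = max(0, abs(diff(angles)) - 10*pi/180);   % amount over the 10-degree limit
cost = cost + 1e6 * sum(violation.^2);               % huge cost when the rule is broken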
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1, 1, 'int8')];
for ii = 2:N
    V(ii) = g();
    while abs(V(ii)-V(ii-1)) >= diffVal
        V(ii) = g();
    end
end
Inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cos(V-theta))
is a bit of a strange one... cos(V-theta) is a specific way to re-scale the data to the [-1 +1] interval, which the multiplication by L then scales to [-L +L]. On first sight, you'd expect the sum to average out to 0, and indeed the expected value of cos(x) for x uniform on [0 2*pi] is 0.
The expected value of |cos(x)|, however, is 2/pi (see here for example). Ignoring for the moment the fact that our limits are different from [0 2*pi], the expected value of sum(L*|cos(V-theta)|) would then reduce to the constant value of 2*N*L/pi.
How you can force this to equal some other constant D is beyond me...can you perhaps elaborate on that a bit more?