Weighted Lucas Kanade - Gaussian Function MATLAB

I implemented the basic Lucas-Kanade optical flow algorithm in MATLAB, using the algorithm from Wikipedia.
Since I want to improve this basic algorithm, I tried adding a weighting function which makes certain pixels in the neighbourhood more or less important (see also Wikipedia).
I basically calculated the following for the center pixel and every pixel in its neighbourhood.
For the center pixel and every neighbourhood pixel:
sigma = 10;
weight(s) = (1/(2*pi*sigma^2)) * exp(-((first-x)^2 + (second-y)^2) / (2*sigma^2))
Here x,y is the center pixel; it always stays the same. first,second is the current neighbourhood pixel.
Since I am using a 5x5 neighbourhood, (first-x) and (second-y) will always be one of 0, 1, -1, 2, -2.
I then apply the weight values in each part of the sum.
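For reference, here is the weight computation described above as a small sketch (sigma and the 5x5 window are taken from the question):
sigma = 10;
[dx, dy] = meshgrid(-2:2, -2:2);   % the offsets (first-x) and (second-y) over the 5x5 window
w = (1/(2*pi*sigma^2)) * exp(-(dx.^2 + dy.^2) / (2*sigma^2));   % 5x5 weight matrix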
Problem: with sigma = 10 I don't get a better result for the optical flow than without the weighting function.
Smaller sigmas don't help either; in fact, there is no difference at all between the output vectors with and without the Gaussian function.
Is there a way to improve this Gaussian function to actually make the vectors more accurate than without weighting?
Thank you so much.

I'm not sure how you apply the values, but weighting should normally make at least a small difference. Note, however, that with sigma = 10 on a 5x5 window the weights are nearly constant (exp(-8/200) ≈ 0.96 at the corners versus 1 at the center, up to the common scale factor), and an almost-constant weight nearly cancels out of the least-squares solution, so very little change is to be expected.
For better optical flow you could:
pre-smooth the images with a Gaussian (a sketch follows below)
use a spatiotemporal Lucas-Kanade method
or use a more advanced algorithm
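For the first suggestion, a minimal pre-smoothing sketch (the kernel size and sigma are assumptions; I1 and I2 stand for your two input frames):
G = fspecial('gaussian', [5 5], 1.5);            % Gaussian kernel (Image Processing Toolbox)
I1s = imfilter(im2double(I1), G, 'replicate');   % smooth frame 1
I2s = imfilter(im2double(I2), G, 'replicate');   % smooth frame 2
% run the Lucas-Kanade estimation on I1s and I2s instead of I1 and I2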

Related

How to apply an inverse optical flow vector to an image?

I have a small project on moving-object detection with a moving camera, in which I have to use the negative optical flow vector to compensate for ego motion. I have a video and some particular consecutive frames for which the average of the negative optical flow vector has to be computed. I have already calculated the optical flow between, say, the (k-1)th and kth frames. I have also calculated the average optical flow vector V = [u, v], where v is the average horizontal flow and u is the average vertical flow. Now I have to apply the inverse of the optical flow vector, i.e. -V, to the (k-1)th frame. I'm new to MATLAB and don't know much about it. Please help.
I have tried this code segment to do so, but the results aren't as expected:
function I1 = reverseOF(I, V)
% Attempt: move every pixel of I by the negated average flow V.
R = I(:,:,1);
G = I(:,:,2);
B = I(:,:,3);
[m, n] = size(rgb2gray(I));
for i = 1:m
    for j = 1:n
        v1 = [j i];      % current pixel position
        v2 = -V;         % negated average flow
        v3 = v1.*v2;     % element-wise product
        % note: both subscripts below use v3(1,2), as posted
        R(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,2)))) = R(i,j);
        G(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,2)))) = G(i,j);
        B(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,2)))) = B(i,j);
        I1(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,2)))) = I(i,j);
    end
end
I1 = cat(3, R, G, B);
I used the abs() function because otherwise errors were occurring like "attempted to access negative location; index must be a positive or logical".
Image A and Image B are the images that I have used to estimate the optical flow.
This is the result that I am obtaining after applying the above function.
Unfortunately, you can't do this easily. This is quite an advanced research problem, because obtaining the inverse of a vector field on a mesh grid is not an easy problem; it's actually quite hard.
Notice that your vector field (optical flow) starts on a mesh grid, but it doesn't end on a mesh grid; it ends at arbitrary subpixel positions. If you just negate this field, -V is not enough: the result won't be the inverse!
This is an open research problem; look for example at this 2010 paper, which addresses exactly this issue and proposes a method to create "pseudoinverses".
Now suppose you have that inverse because you computed it somehow. Your code is quite bad for applying it, and the workarounds (abs!) show (no offense) that you do not really understand what you are doing. For a known vector field {Vx,Vy} whose size equals the image size (if it doesn't, you can easily figure out how to interpolate it using interp2), the code would look something like:
newimg = zeros(size(I));
[ix, iy] = meshgrid(1:size(I,2), 1:size(I,1));             % column and row coordinates
newimg(:,:,1) = interp2(double(I(:,:,1)), ix+Vx, iy+Vy);   % this replaces your whole loop
newimg(:,:,2) = interp2(double(I(:,:,2)), ix+Vx, iy+Vy);
newimg(:,:,3) = interp2(double(I(:,:,3)), ix+Vx, iy+Vy);
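A hypothetical end-to-end use with the negated flow from the question (the file name and the flow components Vx, Vy are made up for illustration):
Ikm1 = im2double(imread('frame_k_minus_1.png'));    % assumed (k-1)th frame
[ix, iy] = meshgrid(1:size(Ikm1,2), 1:size(Ikm1,1));
warped = zeros(size(Ikm1));
for c = 1:size(Ikm1,3)
    warped(:,:,c) = interp2(Ikm1(:,:,c), ix - Vx, iy - Vy);   % NaN where samples leave the image
end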

How to compute distance and estimate quality of heterogeneous grids in Matlab?

I want to evaluate the grid quality where, as in the real case, all coordinates differ.
The signal is an ECG signal, where the average lifetime is 75 years.
My task is to evaluate the signal's age at the moment of measurement, which is an inverse problem.
I think a 2D approximation of the 3D case is hard (done here by Abo-Zahhad) with 3 leads (2 on the chest and one at the left leg - MIT-BIH arrhythmia database):
where f is a piecewise continuous function in R^2, \epsilon is the error matrix and A is a 2D matrix.
Now I evaluate the average grid distance along the x-axis (time) and along the y-axis (energy).
I think this can be done with MATLAB's Image Processing Toolbox, although I am not sure how complete the toolbox's approaches are.
I think a transform approach must be used in the setting of uneven and noncontinuous grids. One approach is the exact linear-time Euclidean distance transform of grid-line-sampled shapes by Joakim Lindblad et al.
The method presents a distance transform (DT) which assigns to each image point its smallest distance to a selected subset of image points.
This kind of approach is often the basis of algorithms for many methods in image analysis.
I tested bwdist (distance transform of a binary image) unsuccessfully: the chessboard option returns an empty square matrix, while the cityblock, euclidean and quasi-euclidean options return a full matrix.
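For reference, a minimal bwdist sketch on a toy binary mask (the mask size and seed positions are made-up values):
BW = false(64);                        % 64x64 empty mask
BW(20,30) = true; BW(50,10) = true;    % two seed points
D = bwdist(BW, 'euclidean');           % distance from every pixel to the nearest seed
imagesc(D); axis image; colorbar;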
Another pseudocode attempt:
% https://stackoverflow.com/a/29956008/54964
%// retrieve picture
imgRGB = imread('dummy.png');
%// detect lines
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
However, I think this approach will not work here, because the grids are not necessarily continuous and not necessarily ideal.
pdist, based on Lumbreras' answer
In the real examples, all coordinates differ, so the pdist hamming and jaccard options are always 1 with real data.
The euclidean, cityblock, minkowski, chebychev, mahalanobis, cosine, correlation and spearman options offer some description of the data.
However, these options make little sense to me for such full matrices.
I want to estimate how long the signal can live.
Sources
J. Müller and S. Siltanen. Linear and Nonlinear Inverse Problems with Practical Applications.
EIT with the D-bar method: discontinuous heart-and-lungs phantom. http://wiki.helsinki.fi/display/mathstatHenkilokunta/EIT+with+the+D-bar+method%3A+discontinuous+heart-and-lungs+phantom (visited 29 Feb 2016).
There is a function in MATLAB called pdist which computes the pairwise distance between all row elements of a matrix and lets you choose the type of distance you want to use (euclidean, cityblock, correlation). Are you after something like this? Not sure I understood your question!
cheers!
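For reference, a minimal pdist sketch on toy coordinates (the point values are made up):
P = [0 0; 1 0; 0 2];         % three grid points, one row per point
d = pdist(P, 'euclidean');   % condensed vector of pairwise distances
D = squareform(d);           % full symmetric distance matrix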
Simply put: do not do it in post-processing. Those artifacts can come from the raster images, from the viewer, and/or ... Do quality assurance in the signal generation/processing step.
It is much easier to evaluate the original signal than its rendered views.

Implement slightly different kernel smoothing density in matlab

I've been trying to implement this equation in MATLAB:
http://www.mathworks.com/matlabcentral/answers/uploaded_files/46747/Capture.PNG
Omega is some known region of the image u1.
and p1,i is:
http://www.mathworks.com/matlabcentral/answers/uploaded_files/46748/Capture1.PNG
Pay attention: u1 is an image (column ordered), and p1,i is supposed to be calculated at each pixel of this image.
Now, I've calculated p1,i in this way:
[f1,xi1] = ksdensity(u1(:), 1:255);         % density of the intensities, evaluated at 1..255
p1_u1 = reshape(f1(floor(u1(:))+1), M, N);  % density looked up per pixel (M, N = image size)
Now my problem is to calculate the former equation.
I've tried a for loop, but it's taking forever.
Any other suggestions? Maybe there's a way of using ksdensity and changing the value inside the integral?
Thanks!

fftshift before calculating fourier transform: Matlab

I am looking at some FFT code in a MATLAB project, where the FFT and inverse FFT are computed this way:
% Here image is a 2D image.
image_fft = fftshift(image,1);
image_fft = fftshift(image_fft,2);
image_fft = fft(image_fft,[],1);
image_fft = fft(image_fft,[],2);
image_fft = fftshift(image_fft,1);
image_fft = fftshift(image_fft,2);
% Some processing, then the same sequence of fftshift, ifft and
% fftshift to move back to the time domain
I tried to find some information online but am having trouble understanding why the fftshift needs to be done before computing the FFT.
Another question I have is whether this is really MATLAB-specific. For example, I am planning to port this code to C++ and use KISS FFT. Do I need to be wary of this?
The reason why people like to swap prior to the DFT is that it makes the center pixel of the image the one with zero phase shift. It often makes algorithms that depend on phase easier to think about and implement. It is not MATLAB-specific, and if you want to port an exact version of the code to another language, you'll need to perform the quadrant swap beforehand there too.
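As a sketch, a compact equivalent of the sequence in the question for a 2-D array img (ifftshift is the exact inverse of fftshift; the two coincide for even-sized images, which is why the code above gets away with using fftshift everywhere):
F = fftshift(fft2(ifftshift(img)));     % image domain -> centered spectrum
img2 = fftshift(ifft2(ifftshift(F)));   % centered spectrum -> image domain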
EDIT:
Let me give an example that I hope will clear things up. Say our image is the sum of a bunch of sinc functions at varying locations throughout the image. In the frequency domain, each of these sinc functions is a rect function with the same amplitude but with a different linear phase component that determines the sinc's location in the image domain. By swapping the image prior to taking the DFT, we make the linear phase component of the frequency-domain representation of the center pixel zero. Moreover, the linear phase components of the other sinc functions now become a function of their distance from the center pixel. If we didn't swap the image beforehand, the linear phase components of the rect functions would instead be a function of their distance from the top-left pixel of the image. This would be non-intuitive and would involve the same kind of phase-wrapping considerations that one sees when equating frequencies in the range (pi, 2*pi) rad/sample with the more intuitive (-pi, 0) rad/sample.
For images, it's better to use fft2. It's MATLAB's convention to arrange 2D FFTs with the DC component in the corners, presumably because of the row/column conventions. fftshift allows a more intuitive display of the FFT, with the DC component in the center.
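A minimal display sketch (assuming a grayscale image img of class double):
F = fft2(img);                                    % DC component lands at F(1,1)
Fc = fftshift(F);                                 % move DC to the center
imagesc(log1p(abs(Fc))); axis image; colorbar;    % log-magnitude spectrum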
I don't fully understand what the piece of code you copied is about, but here is an example of the FFT and inverse FFT of an image using MATLAB.
And a more detailed tutorial here.

Matlab image centroid simulation

I was given this task; I am a noob and need some pointers to get started with centroid calculation in MATLAB:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise, and plot the intensities. The position of the centroid changes due to the noise, and I need to bring it back to its original position by:
clipping a level to get rid of the noise
noise reduction by clipping or smoothing with a sliding average (LPF, an averaging filter of 3-5 samples)
calculating the means
or using a convolution filter kernel, i.e. matrix operations that represent the 2-D images
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked and give you pointers; you'll have to read the related documentation:
a) to produce a 2-D Gaussian, use meshgrid or ndgrid
b) to add noise to the image, look into rand, randn or randi, depending on what exactly you need
c) to plot the intensities, use imagesc
d) to find the centroid there are several ways; try searching SO further and you'll find many discussions. You can also check the TMW File Exchange for different implementations. A rough sketch of these pointers follows below.
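A minimal end-to-end sketch of pointers a)-d); every numeric value in it is an assumption for illustration, not part of the original task:
[x, y] = meshgrid(-32:31, -32:31);           % a) coordinate grid
g = exp(-((x-3).^2 + (y+2).^2) / (2*5^2));   % 2-D Gaussian centred at (3,-2)
gn = g + 0.05*randn(size(g));                % b) additive random noise
imagesc(gn); axis image; colorbar;           % c) plot the intensities
gc = max(gn - 0.1, 0);                       % clip low-level noise
cx = sum(x(:).*gc(:)) / sum(gc(:));          % d) intensity-weighted centroid (x)
cy = sum(y(:).*gc(:)) / sum(gc(:));          % d) intensity-weighted centroid (y)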