Combine convolution filters in MATLAB

Is there a way to take the low pass and high pass filters in the following code and combine them into a single kernel and apply one conv2() function?
Note: length(lfilter) = 21, length(hfilter) = 81.
What we are basically doing in the last step is removing the large objects from the image (after already removing the very small objects with a Gaussian blur).
properties (Constant)
minStar = 2; % min star radius
maxStar = 8; % max star radius
threshold = 12;
end
function filter2(this)
normalize = @(x) x/sum(x);
lfilter = normalize(exp(-((-ceil(5*this.minStar):ceil(5*this.minStar))/(2*this.minStar)).^2));
hfilter = normalize(exp(-((-ceil(5*this.maxStar):ceil(5*this.maxStar))/(2*this.maxStar)).^2));
this.low = conv2(lfilter',lfilter,this.raw,'same');
this.high = conv2(hfilter',hfilter,this.raw,'same');
this.filtered = this.low - this.high;
this.foreground = this.filtered > this.threshold;
end

Since the convolution operator is associative:
conv( a, conv(b,c) ) == conv( conv(a,b), c )
you should be able to combine the two kernels into one just by convolving them with each other.
In your case something like this should work:
new_kernel = conv2(lfilter',lfilter, conv2(hfilter',hfilter), 'same');
Convolution is commutative as well so the order in which you perform the convolutions shouldn't matter.
EDIT: as I explain in the comment below, the asker's method of performing four 1D convolutions ends up being faster than a single 2D convolution.
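As a quick numerical sanity check of the associativity argument (a sketch using arbitrary random kernels, not the filters above):
% Two small kernels and one larger array of test data
a = rand(5); b = rand(7); c = rand(64);
% Convolve in the two possible orders using full convolution
r1 = conv2(conv2(c, b, 'full'), a, 'full');
r2 = conv2(c, conv2(b, a, 'full'), 'full');
% The difference should be on the order of floating-point round-off
max(abs(r1(:) - r2(:)))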

I just got the answer on the MATLAB forums: http://www.mathworks.com/matlabcentral/answers/169713-combine-convolution-filters-bandpass-into-a-single-kernel
The gist is that you have to use padding to fill in both sides of the shorter filter, and then you can just combine the vectors.
Convolution is a linear operation so yes, you can combine the two filtering operations into one. Just make the filters the same size and add/subtract them. For example:
lfilter = normalize(exp(-((-ceil(5*this.minStar):ceil(5*this.minStar))/(2*this.minStar)).^2));
hfilter = normalize(exp(-((-ceil(5*this.maxStar):ceil(5*this.maxStar))/(2*this.maxStar)).^2));
padlength = (length(hfilter) - length(lfilter))/2;
lfilter = padarray(lfilter, [0 padlength]);
lhfilter = lfilter - hfilter;
this.filtered = conv2(lhfilter',lhfilter,this.raw,'same');
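An alternative way to apply the same linearity idea is with explicit 2D kernels (a sketch; it forms the full outer-product kernels from the padded, equal-length 1D profiles, so the single conv2 call is no longer separable, and it reproduces this.low - this.high from the original code up to floating-point error):
% Build the full 2D kernels from the padded 1D profiles
Kl = lfilter' * lfilter; % 2D low-pass kernel
Kh = hfilter' * hfilter; % 2D kernel capturing the large-scale structure
Kband = Kl - Kh; % band-pass kernel: difference of the two
% One non-separable convolution replaces the two separable ones minus each other
this.filtered = conv2(this.raw, Kband, 'same');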

How to check convolution theorem in MATLAB? My result is wrong

I am trying to check convolution theorem in MATLAB. I have a signal called sine_big_T. Then I have a filter called W. W and sine_big_T have the same length.
The convolution theorem says that fft(sine_big_T.*W) should be the same as the convolution of fft(sine_big_T) with fft(W).
I am quite confused about this theorem. fft(sine_big_T.*W) gives me an array of length length(sine_big_T). However, conv(fft(sine_big_T), fft(W)) gives me an array of length length(sine_big_T) + length(W) - 1. I have tried the command conv(fft(sine_big_T), fft(W), 'same'), but the result is still very different from fft(sine_big_T.*W).
T = 128;
big_T = 8*T;
small_T = T/8;
sine_big_T = zeros(1,129);
sine_small_T = zeros(1,129);
W = zeros(1,129);
for i = 0:T
sine_big_T(1, i+1) = sin(2*pi/big_T*i);
W(1, i + 1) = 1 - cos(2*pi/T * i);
end
figure
plot(1:129,fft(sine_big_T.*W));
I_fft = fft(sine_big_T);
W_fft = fft(W);
test = conv(I_fft, W_fft,'same');
figure
plot(1:length(I_fft), test)
From the theorem, the two plots should look the same. But the result is not even close. I think the way I use conv is not correct. What is the right way to verify the theorem?
Using conv with 'same' is correct. You are seeing two things:
fft defines the origin in the first array element, not the middle of the domain. This does not play well with conv. Use this instead:
test = ifftshift( conv( fftshift(I_fft), fftshift(W_fft), 'same' ) );
The fftshift function shifts the origin to the middle of the array, where it is good for conv with 'same', and ifftshift shifts the origin back to the first element.
Normalization. The FFT is typically normalized so that multiplication in the frequency domain is convolution in the spatial domain. Since you are computing convolution in the frequency domain, the normalization is off. To correct the normalization, plot
plot(1:129,fft(sine_big_T.*W)*length(W));
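Putting both points together (a sketch built on the code in the question; the exact form of the discrete theorem involves circular convolution, noted at the end):
% Shift the origin for conv and match the normalization
lhs = fft(sine_big_T .* W) * length(W);
rhs = ifftshift(conv(fftshift(fft(sine_big_T)), fftshift(fft(W)), 'same'));
figure
plot(1:length(lhs), real(lhs), 1:length(rhs), real(rhs), '--')
legend('length(W) * fft(sine\_big\_T.*W)', 'shifted conv of the FFTs')
% For an exact identity, the DFT version of the theorem uses circular convolution;
% with the Signal Processing Toolbox this is cconv:
% fft(sine_big_T.*W) == cconv(fft(sine_big_T), fft(W), length(W)) / length(W)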

Normalization 3D Image according to Slices in MATLAB

I have a matrix which is 256x192x80. I want to normalize all slices (the third dimension, 80, indexes the slices) without using a for loop.
The way I'm doing it with a for loop is below (im_dom_raw is our matrix):
normalized_raw = zeros(size(im_dom_raw));
for a=1:80
slice_raw = im_dom_raw(:,:,a);
slice_raw = slice_raw-min(slice_raw(:));
slice_raw = slice_raw/(max(slice_raw(:)));
normalized_raw(:,:,a) = slice_raw;
end
The code below implements your normalization approach without using loops. It's based on bsxfun.
% Shift all values to the positive side
slices_raw = bsxfun(@minus,im_dom_raw,min(min(im_dom_raw)));
% Normalize all values with respect to the slice maximum (with input from @Daniel)
normalized_raw2 = bsxfun(@rdivide,slices_raw,max(max(slices_raw)));
% A slightly faster approach would be
%normalized_raw2 = bsxfun(@times,slices_raw,max(max(slices_raw)).^-1);
% ... but it may differ slightly from your approach due to numerical round-off
% Comparison to your previous loop based implementation
sum(abs(normalized_raw(:)-normalized_raw2(:)))
The last line of code outputs
ans =
0
which (thanks to @Daniel) confirms that both approaches yield exactly the same results.
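On newer MATLAB versions (R2016b and later), implicit expansion lets you write the same normalization without bsxfun; a minimal sketch, equivalent to the code above:
% Shift each slice so its minimum is zero, then divide by its maximum
shifted = im_dom_raw - min(min(im_dom_raw)); % min(min(...)) is 1x1x80, one value per slice
normalized_raw2 = shifted ./ max(max(shifted)); % same for the per-slice maxima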

Vectorizing multiple gaussian calculations

I am attempting to vectorize some of my code that adds the intensity of many Gaussian distributions over an image. I currently loop over the function 'gaussIt2D' for each Gaussian; the function is vectorized for a single 2D Gaussian:
windowSize=10;
imSize=[512,512];
%pointsR is an nx2 array of coordinates [x1,y1;x2,y2;...;xn,yn]
pointsR=rand(100,2)*511+1;
%sigmaR is the standard deviation of the gaussian being created
sigmaR = 1;
outputImage=zeros(imSize);
for n=1:size(pointsR,1)
rangeX = floor(pointsR(n,1)-windowSize):ceil(pointsR(n,1)+ windowSize);
rangeX = rangeX(rangeX > 0 & rangeX <= imSize(1));
rangeY = floor(pointsR(n,2)-windowSize):ceil(pointsR(n,2)+windowSize);
rangeY = rangeY(rangeY > 0 & rangeY <= imSize(2));
outputImage(rangeX,rangeY) = outputImage(rangeX,rangeY) + gaussIt2D(rangeX(1),rangeX(end),rangeY(1),rangeY(end),sigmaR,pointsR(n,1),pointsR(n,2));
end
function [result] = gaussIt2D(xInit,xFinal,yInit,yFinal,sigma,xCenter,yCenter)
%Returns Gaussian intensity values for the region defined by [xInit:xFinal,yInit:yFinal] using the Gaussian properties sigma, xCenter, yCenter
[gridX,gridY]=ndgrid(xInit:xFinal,yInit:yFinal);
result=exp( -( (gridX-xCenter).^2 + (gridY-yCenter).^2 ) ./ (2*sigma.^2) );
end
I am trying to further vectorize this process by allowing the gaussIt2D function to accept a vector of x and y values and a vector of x and y centers and do all of them together. My thought process so far has been to stack the grids, replicate the centers, and do the element-wise Gaussian calculations. For a (simplified) example, if:
xInits = [1,2,3];
xFinals = [2,3,4];
xCenters = [1.2,2.8,3.1];
yInits = [1,2,3];
yFinals = [2,3,4];
yCenters = [1.5,2.4,3.6];
Then I was thinking to create grids and centers following the form:
gridX = [1,2
1,2
2,3
2,3
3,4
3,4]
xCenters = [1.2,1.2
1.2,1.2
2.8,2.8
2.8,2.8
3.1,3.1
3.1,3.1]
This could then be used in the same gaussian equation used in the original function. However, generating these arrays is tripping me up. What I have right now is:
function [result]=gaussIt2DVectorized(xInits,xFinals,yInits,yFinals,sigmas,xCenters,yCenters)
%Incomplete
%Returns Gaussian intensity values for the region defined by
%[xInit:xFinal,yInit:yFinal] using the values array:[sigma,centerX,centerY]
[gridX,gridY]=arrayfun('ndgrid',xInits:xFinals,yInits:yFinals);
xCenters = repelem(xCenters,numel(xInits(1):xFinals(1)), numel(yInits(1):yFinals(1)));
yCenters = repelem(yCenters,numel(xInits(1):xFinals(1)), numel(yInits(1):yFinals(1)));
result=exp( -( (gridX-xCenters).^2 + (gridY-yCenters).^2 ) ./ (2*sigmas^2) );
end
This doesn't actually work though, and I also anticipate difficulty accounting for ranges (i.e. xInit:xFinal) of different lengths.
Any help, tips, or alternate methods would be appreciated.
Thanks.
Since you cannot be sure that the grids will all be the same size, it's probably best to store them in a cell array rather than stacking them in a matrix. With cell arrays, you can still run your calculation without looping, using cellfun.
For example:
function [result] = gaussIt2D_better(xInits,xFinals,yInits,yFinals,sigmas,xCenters,yCenters)
[gridsX, gridsY] = arrayfun(@(x) ndgrid(xInits(x):xFinals(x), yInits(x):yFinals(x)),1:length(xInits),'UniformOutput',0);
f=@(gridX, gridY, xCenter, yCenter, sigma) exp( -( (gridX-xCenter).^2 + (gridY-yCenter).^2 ) ./ (2*sigma.^2) );
result=cellfun(f, gridsX, gridsY, num2cell(xCenters), num2cell(yCenters), num2cell(sigmas), 'UniformOutput',0);
end
Note that in this example, the value returned is a cell array with the same length as the input vectors, one result for each.
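For completeness, a hypothetical usage sketch (the variable names patches, xInits, etc. are mine; the accumulation back into outputImage still uses a short loop because the patches have different sizes):
n = size(pointsR, 1);
% Clip each window to the image, mirroring the range handling in the original loop
xInits = max(floor(pointsR(:,1) - windowSize), 1)';
xFinals = min(ceil(pointsR(:,1) + windowSize), imSize(1))';
yInits = max(floor(pointsR(:,2) - windowSize), 1)';
yFinals = min(ceil(pointsR(:,2) + windowSize), imSize(2))';
sigmas = repmat(sigmaR, 1, n);
% One call evaluates every Gaussian patch; the result is a cell array
patches = gaussIt2D_better(xInits, xFinals, yInits, yFinals, sigmas, pointsR(:,1)', pointsR(:,2)');
outputImage = zeros(imSize);
for k = 1:n
    outputImage(xInits(k):xFinals(k), yInits(k):yFinals(k)) = ...
        outputImage(xInits(k):xFinals(k), yInits(k):yFinals(k)) + patches{k};
end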

Matlab xcorr giving different values for different implementations of cross-correlation

I am trying to perform a cross-correlation but noticed that performing it in two different ways gives slightly different results.
I have a vector with some spikes ('dual_spikes') and I want to cross-correlate this with 'dips' (using xcorr in Matlab).
I noticed a difference when I perform this in two different ways:
perform an xcorr as normal with 'dual_spikes'
perform an xcorr with each individual spike, add them together, and normalise.
I do not know why there should be a difference. The following function illustrates it.
function [] = xcorr_fault()
dual_spikes = [zeros(1,200),ones(1,200),zeros(1,400),ones(1,100),zeros(1,100)];
dips = 1-[zeros(1,400),ones(1,1),zeros(1,599)];
plot(dips)
single_spike_1 = [zeros(1,200),ones(1,200),zeros(1,600)];
single_spike_2 = [zeros(1,800),ones(1,100),zeros(1,100)];
xcorr_dual = xcorr_div(dual_spikes,dips);
xcorr_single1 = xcorr_div(single_spike_1,dips);
xcorr_single2 = xcorr_div(single_spike_2,dips);
xcorr_single_all = (xcorr_single1+xcorr_single2)/max(xcorr_single1+xcorr_single2);
xcorr_dual_norm = xcorr_dual/max(xcorr_dual);
figure(1)
clf
hold all
plot(xcorr_dual_norm)
plot(xcorr_single_all)
legend('Single xcorr','xcorr with individual spikes')
function [xcorr_norm] = xcorr_div(lines,signal)
xcorr_signal = xcorr(signal,lines,'none');
xcorr_signal(xcorr_signal<1e-13) = NaN;
xcorr_bg = xcorr(ones(1,length(signal)),lines,'none');
xcorr_norm = xcorr_signal ./ xcorr_bg;
xcorr_norm(isnan(xcorr_norm)) = 1;
Note: the xcorr signal must be divided by a 'background' (bg) correlation so that only the dips are found. This happens in 'xcorr_div'.
Your function xcorr_div computes cross-correlation, then divides the result with the correlation with a uniform signal. The result is some sort of normalized cross-correlation (not the standard definition) that is not linear. Thus, you should not expect that the sum of the result is the result of the sum.
If you want to get the same result both ways, output both xcorr_signal and xcorr_bg from xcorr_div, sum each of those across the individual spikes, and then divide.
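A sketch of that suggestion (xcorr_div2 is a hypothetical name; dual_spikes, single_spike_1, single_spike_2 and dips are assumed to be defined as in the question, and the NaN handling from the original xcorr_div is omitted for brevity):
% Sum numerators and denominators across the individual spikes, then divide once
[~, s1, b1] = xcorr_div2(single_spike_1, dips);
[~, s2, b2] = xcorr_div2(single_spike_2, dips);
xcorr_single_all = (s1 + s2) ./ (b1 + b2); % should now match the first output of xcorr_div2(dual_spikes, dips)

function [xcorr_norm, xcorr_signal, xcorr_bg] = xcorr_div2(lines, signal)
% Same as xcorr_div, but also returns the raw correlation and the background
xcorr_signal = xcorr(signal, lines, 'none');
xcorr_bg = xcorr(ones(1, length(signal)), lines, 'none');
xcorr_norm = xcorr_signal ./ xcorr_bg;
end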

MATLAB Fourier descriptors: what's wrong?

I am using Gonzalez's frdescp function to get the Fourier descriptors of a boundary. With the code below, I get two totally different sets of numbers describing two shapes that are identical except for scale.
So what is wrong?
im = imread('c:\classes\a1.png');
im = im2bw(im);
b = bwboundaries(im);
f = frdescp(b{1}); % Fourier descriptors for the boundary of the first object (my pic only contains one object anyway)
% Normalization
f = f(2:20); % getting the first 20 & deleting the dc component
f = abs(f) ;
f = f/f(1);
Why do I get different descriptors for two identical circles that differ only in scale?
The problem is that the frdescp code (I used this code, which should be the same one you are referring to) is also written to center the Fourier descriptors.
If you want to describe your shape correctly, you must keep descriptors that are symmetric with respect to the one representing the DC component.
In order to solve your problem (and others like yours), I wrote the following two functions:
function descriptors = fourierdescriptor( boundary )
%I assume that the boundary is a N x 2 matrix
%Also, N must be an even number
np = size(boundary, 1);
s = boundary(:, 1) + i*boundary(:, 2);
descriptors = fft(s);
descriptors = [descriptors((1+(np/2)):end); descriptors(1:np/2)];
end
function significativedescriptors = getsignificativedescriptors( alldescriptors, num )
%num is the number of significative descriptors (in your example, is was 20)
%In the following, I assume that num and size(alldescriptors,1) are even numbers
dim = size(alldescriptors, 1);
if num >= dim
significativedescriptors = alldescriptors;
else
a = (dim/2 - num/2) + 1;
b = dim/2 + num/2;
significativedescriptors = alldescriptors(a : b);
end
end
Now you can use the above functions as follows:
im = imread('test.jpg');
im = im2bw(im);
b = bwboundaries(im);
b = b{1};
%force the number of boundary points to be even
if mod(size(b,1), 2) ~= 0
b = [b; b(end, :)];
end
%define the number of significative descriptors I want to extract (it must be even)
numdescr = 20;
%Now, you can extract all fourier descriptors...
f = fourierdescriptor(b);
%...and get only the most significative:
f_sign = getsignificativedescriptors(f, numdescr);
I just ran into the same problem as you.
According to this link, if you want invariance to scaling, make the comparison ratio-like, for example by dividing every Fourier coefficient by the DC coefficient: f*[1] = f[1]/f[0], f*[2] = f[2]/f[0], and so on. So you need the DC coefficient, but f(1) in your code is no longer the actual DC coefficient after the step "f = f(2:20); % getting the first 20 & deleting the dc component". The problem can be solved by keeping the value of the DC coefficient first; the adjusted code should look like this:
% Normalization
DC = f(1);
f = f(2:20); % getting the first 20 & deleting the dc component
f = abs(f) ; % use magnitudes to be invariant to translation & rotation
f = f/DC; % divide the Fourier coefficients by the DC-coefficient to be invariant to scale