Subtract neighbours from matrix - MATLAB

OK, so I suspect that this might be rather easy, but I cannot find any good help online except for bits and pieces.
I have an NxN matrix that I want to do a sort of edge detect on. I want to subtract a factor of all neighbouring values, if they exist.
So if my matrix consists of [5, 5, 5 ; 5, 10, 5 ; 5, 5, 5] I want it to return [4, 3, 4 ; 3, 8, 3 ; 4, 3, 4] (an incredibly rough estimate just to give an example).
I can see how it would be done with for-loops, but I'm thinking it may be doable in an easier and less taxing way. So far, nlfilter seems to be a possible way out, but I cannot seem to figure it out completely on my own.

The mathematical operation you're describing is called convolution.
Convolution basically amounts to replacing every pixel in an image with a weighted sum of itself and its neighbours. The weights are given in a (usually small) matrix called a kernel, or sometimes impulse response.
For edge detection I recommend either the Sobel or discrete Laplacian kernels.
The MATLAB function conv2 can do the image convolution for you.
kernel = [ 0  1  0
           1 -4  1
           0  1  0 ];                  % discrete Laplacian kernel
edges = conv2(image, kernel, 'same');  % 'same' keeps the output the same size as the input
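As a quick sanity check (my own example, not part of the original answer), applying that kernel to the matrix from the question with zero padding gives a strong (negative) response at the centre:

A = [5 5 5; 5 10 5; 5 5 5];
edges = conv2(A, kernel, 'same')
% edges =
%    -10     0   -10
%      0   -20     0
%    -10     0   -10

The sign convention differs from the question's example; scaling the off-centre weights (as in the next answer) gets you closer to it.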

You're probably looking for something like filter2(h,X)
Given your example, h would be something like
h = [ 0 -0.1 0;
-0.1 1 -0.1;
0 -0.1 0];
This takes the value at the center and subtracts 1/10 of each of its 4 neighbors. If you use filter2(h,X,'same'), where X is your original matrix, it will pad with zeros, which appears to be what you want to get the edge values right.
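For example (a quick check, assuming X is the matrix from the question), this reproduces the requested output exactly:

X = [5 5 5; 5 10 5; 5 5 5];
h = [ 0 -0.1 0; -0.1 1 -0.1; 0 -0.1 0];
filter2(h, X, 'same')
% ans =
%     4     3     4
%     3     8     3
%     4     3     4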


What does `filter2` do in this code?

function G = costfunction(im)
    % Sum of squared Laplacian-like filter responses over all 2D slices of im
    G = zeros(size(im,1), size(im,2));
    for ii = 1:size(im,3)
        G = G + (filter2([.5 1 .5; 1 -6 1; .5 1 .5], im(:,:,ii))).^2;
    end
end
Here, im is an input image (an RGB image).
What will this cost function return?
This bit:
filter2([.5 1 .5; 1 -6 1; .5 1 .5],im(:,:,ii))
applies a Laplace filter to one 2D slice of im. Usually, the Laplace filter is implemented as [0 1 0; 1 -4 1; 0 1 0] or [1 1 1; 1 -8 1; 1 1 1]. I guess whoever wrote this code couldn't decide between those two and took the average.
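If you want to verify that numerically (a small check of my own, not part of the original code), the average of the two standard kernels is indeed the kernel used here:

L4 = [0 1 0; 1 -4 1; 0 1 0];   % 4-connected Laplacian
L8 = [1 1 1; 1 -8 1; 1 1 1];   % 8-connected Laplacian
(L4 + L8) / 2                  % returns [.5 1 .5; 1 -6 1; .5 1 .5]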
The loop runs through each of the 2D slices in the 3D image im, and adds the square of each of the results together. If im is an RGB image, it will apply the filter to each of the color channels, and add the square of the results.
The Laplace operator gives a strong negative response on thin lines in the image, as well as responses (positive and negative) around the edges in an image. By taking the square, all responses are positive. Note that the cost function will be close to zero on edges, but high just inside and outside the edges.
Assuming that filter2 is the image-processing function (as tagged in the question), it performs 2D linear filtering: each 2D slice of im is filtered with the 2D FIR filter [.5 1 .5; 1 -6 1; .5 1 .5].
As for the return value, G starts out as zeros(size(im,1),size(im,2)) and accumulates the squared filtered slices.
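A hypothetical usage sketch (peppers.png is just MATLAB's bundled demo image, not something from the question):

im = im2double(imread('peppers.png'));  % any RGB image will do
G  = costfunction(im);
imagesc(G); colorbar;                   % high just inside/outside edges, close to zero on the edges themselves and on flat regions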

Understanding behaviour of MATLAB's convn

I'm doing convolution of some tensors.
Here is a small test in MATLAB:
ker= rand(3,4,2);
a= rand(5,7,2);
c=convn(a,ker,'valid');
c11=sum(sum(a(1:3,1:4,1).*ker(:,:,1)))+sum(sum(a(1:3,1:4,2).*ker(:,:,2)));
c(1,1)-c11 % not equal!
The third line performs an N-D convolution with convn, and I want to compare the result of the first row, first column of convn with computing the value manually. However, my manual computation does not match the result of convn.
So what is behind MATLAB's convn? Is my understanding of tensor convolution wrong?
You almost have it correct. There are two things slightly wrong with your understanding:
1. You chose valid as the convolution flag. This means that the output returned from the convolution is sized so that, when you use the kernel to sweep over the matrix, the kernel has to fit comfortably inside the matrix itself. Therefore, the first "valid" output that is returned is actually for the computation at location (2,2,1) of your matrix. This means that you can fit your kernel comfortably at this location, and this corresponds to position (1,1) of the output. To demonstrate, this is what a and ker look like for me using your above code:
>> a
a(:,:,1) =
0.9930 0.2325 0.0059 0.2932 0.1270 0.8717 0.3560
0.2365 0.3006 0.3657 0.6321 0.7772 0.7102 0.9298
0.3743 0.6344 0.5339 0.0262 0.0459 0.9585 0.1488
0.2140 0.2812 0.1620 0.8876 0.7110 0.4298 0.9400
0.1054 0.3623 0.5974 0.0161 0.9710 0.8729 0.8327
a(:,:,2) =
0.8461 0.0077 0.5400 0.2982 0.9483 0.9275 0.8572
0.1239 0.0848 0.5681 0.4186 0.5560 0.1984 0.0266
0.5965 0.2255 0.2255 0.4531 0.5006 0.0521 0.9201
0.0164 0.8751 0.5721 0.9324 0.0035 0.4068 0.6809
0.7212 0.3636 0.6610 0.5875 0.4809 0.3724 0.9042
>> ker
ker(:,:,1) =
0.5395 0.4849 0.0970 0.3418
0.6263 0.9883 0.4619 0.7989
0.0055 0.3752 0.9630 0.7988
ker(:,:,2) =
0.2082 0.4105 0.6508 0.2669
0.4434 0.1910 0.8655 0.5021
0.7156 0.9675 0.0252 0.0674
As you can see, at position (2,2,1) in the matrix a, ker can fit comfortably inside the matrix and if you recall from convolution, it is simply a sum of element-by-element products between the kernel and the subset of the matrix at position (2,2,1) that is the same size as your kernel (actually, you need to do something else to the kernel which I will reserve for my next point - see below). Therefore, the coefficient that you are calculating is actually the output at (2,2,1), not at (1,1,1). From the gist of it though, you already know this, but I wanted to put that out there in case you didn't know.
2. You are forgetting that for N-D convolution, you need to flip the mask in each dimension. If you remember from 1D convolution, the mask must be flipped horizontally. What I mean by flipped is that you simply place the elements in reverse order. An array of [1 2 3 4] for example would become [4 3 2 1]. In 2D convolution, you must flip both horizontally and vertically. Therefore, you would take each row of your matrix and place its elements in reverse order, much like the 1D case. Here, you would treat each row as a 1D signal and do the flipping. Once you accomplish this, you would take this flipped result, and treat each column as a 1D signal and do the flipping again.
Now, in your case for 3D, you must flip horizontally, vertically and temporally. This means that you would need to perform the 2D flipping for each slice of your matrix independently, you would then grab single columns in a 3D fashion and treat those as 1D signals. In MATLAB syntax, you would get ker(1,1,:), treat this as a 1D signal, then flip. You would repeat this for ker(1,2,:), ker(1,3,:) etc. until you are finished with the first slice. Bear in mind that we don't go to the second slice or any of the other slices and repeat what we just did. Because you are taking a 3D section of your matrix, you are inherently operating over all of the slices for each 3D column you extract. Therefore, only look at the first slice of your matrix, and so you need to do this to your kernel before computing the convolution:
ker_flipped = flipdim(flipdim(flipdim(ker, 1), 2), 3);
flipdim performs the flipping on a specified axis. In our case, we are doing it vertically, then taking the result and doing it horizontally, and then again doing it temporally. You would then use ker_flipped in your summation instead. Take note that it doesn't matter which order you do the flipping. flipdim operates on each dimension independently, and so as long as you remember to flip all dimensions, the output will be the same.
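To convince yourself of that last point, here is a small sanity check of my own (not from the original answer):

% Flipping in a different order gives the same flipped kernel
isequal(flipdim(flipdim(flipdim(ker, 1), 2), 3), ...
        flipdim(flipdim(flipdim(ker, 3), 2), 1))   % returns 1 (true)

In newer MATLAB releases, flip(ker, dim) does the same thing as flipdim(ker, dim).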
To demonstrate, here's what the output looks like with convn:
c =
4.1837 4.1843 5.1187 6.1535
4.5262 5.3253 5.5181 5.8375
5.1311 4.7648 5.3608 7.1241
Now, to determine what c(1,1) is by hand, you would need to do your calculation on the flipped kernel:
ker_flipped = flipdim(flipdim(flipdim(ker, 1), 2), 3);
c11 = sum(sum(a(1:3,1:4,1).*ker_flipped(:,:,1)))+sum(sum(a(1:3,1:4,2).*ker_flipped(:,:,2)));
The output of what we get is:
c11 =
4.1837
As you can see, this verifies what we get by hand with the calculation done in MATLAB using convn. If you want to compare more digits of precision, use format long and compare them both:
>> format long;
>> disp(c11)
4.183698205668000
>> disp(c(1,1))
4.183698205668001
As you can see, all of the digits are the same, except for the last one. That is attributed to numerical round-off. To be absolutely sure:
>> disp(abs(c11 - c(1,1)));
8.881784197001252e-16
... I think a difference on the order of 10^-16 is good enough to show that they're equal, right?
Yes, your understanding of convolution is wrong. Your formula for c11 is not convolution: you just multiplied matching indices and then summed. It's more of a dot-product operation (on tensors trimmed to the same size). I'll try to explain beginning with 1 dimension.
1-dimensional arrays
Entering conv([4 5 6], [2 3]) returns [8 22 27 18]. I find it easiest to think of this in terms of multiplication of polynomials:
(4+5x+6x^2)*(2+3x) = 8+22x+27x^2+18x^3
Use the entries of each array as coefficients of a polynomial, multiply the polynomials, collect like terms, and read off the result from coefficients. The powers of x are here to keep track of what gets multiplied and added. Note that the coefficient of x^n is found in the (n+1)th entry, because powers of x begin with 0 while the indices begin with 1.
2-dimensional arrays
Entering conv2([2 3; 3 1], [4 5 6; 0 -1 1]) returns the matrix
8 22 27 18
12 17 22 9
0 -3 2 1
Again, this can be interpreted as multiplication of polynomials, but now we need two variables: say x and y. The coefficient of x^n y^m is found in the (m+1, n+1) entry. The above output means that
(2+3x+3y+xy)*(4+5x+6x^2+0y-xy+x^2y) = 8+22x+27x^2+18x^3+12y+17xy+22x^2y+9x^3y-3xy^2+2x^2y^2+x^3y^2
3-dimensional arrays
Same story. You can think of the entries as coefficients of a polynomial in variables x,y,z. The polynomials get multiplied, and the coefficients of the product are the result of convolution.
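A minimal check of this picture in MATLAB (my own toy example, not from the question):

p = cat(3, 1, 2);     % represents 1 + 2z
q = cat(3, 3, 4);     % represents 3 + 4z
squeeze(convn(p, q))  % returns [3; 10; 8], i.e. 3 + 10z + 8z^2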
'valid' parameter
This keeps only the central part of the convolution: those coefficients in which all terms of the second factor have participated. For this to be nonempty, the second array should have dimensions no greater than the first. (This is unlike the default setting, for which the order of the convolved arrays does not matter.) Example:
conv([4 5 6], [2 3], 'valid') returns [22 27] (compare to the 1-dimensional example above). This corresponds to the fact that in
(4+5x+6x^2)*(2+3x) = 8+22x+27x^2+18x^3
the terms 22x and 27x^2 got contributions from both 2 and 3x.
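In code, the two calls compare like this:

conv([4 5 6], [2 3])            % full convolution: [8 22 27 18]
conv([4 5 6], [2 3], 'valid')   % central part only: [22 27]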

Matlab ordfilt2 or alternatives for weighted local max

I would like to compute the weighted maxima of a vector in MATLAB. By weighted maxima I mean the following:
Given a vector of 2*N+1 weights W={w[-N], w[-N+1] .. w[0] .. w[N]} and given an input sequence A, weighted maxima is a vector M where m[i]=max(w[-N]*a[i-N], w[-N+1]*a[i-N+1], ... w[N]*a[i+N])
So for example given a vector A= [1, 4, 12, 2, 4] and weights W=[0.5, 1, 0.5], the weighted maxima would be M=[2, 6, 12, 6, 4].
This can be done using ordfilt2, but ordfilt2 applies weights additively rather than multiplicatively.
I am actually working on 4D matrices, but any 1D solution would work, as the 4D weight matrix is separable.
My current solution is to generate shifted copies of the input array A, weight them according to the shift, and take the elementwise maximum over all the arrays. The shift is performed using circshift and is the bottleneck in the process. Generating shifted matrices "manually" through indexing turned out to be even slower.
Can you suggest any more efficient solution?
EDIT: For a positive A, M=exp(ordfilt2(log(A), length(W), ones(size(W)), log(W))) does the job, but still takes longer than the circshift solution above. I am still looking for more efficient solutions.
>> B = padarray(A, [0 floor(numel(W)/2)], 0); % Pad A with zeros on both sides
>> B = bsxfun(@times, B(bsxfun(@plus, 1:numel(B)-numel(W)+1, (0:numel(W)-1)')), W(:)); % Build sliding windows and apply the weights
>> M = max(B) % Compute the weighted local maxima
M =
2 6 12 6 4
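For comparison, here is a straightforward (but slower) loop version of the same weighted local maximum; this is my own sketch, assuming A is a row vector and W is a row vector of odd length 2*N+1, as in the question:

N = floor(numel(W)/2);
Apad = [zeros(1,N), A, zeros(1,N)];      % zero-pad both ends
M_loop = zeros(size(A));
for i = 1:numel(A)
    M_loop(i) = max(Apad(i:i+2*N) .* W); % weighted max over the window centred on A(i)
end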

Draw cumulative distribution function in MATLAB

I have two vectors of the same size. The first one can contain any distinct numbers, in any order; the second one is decreasing (but can contain repeated elements) and consists only of positive integers. For example:
a = [7 8 13 6];
b = [5 2 2 1];
I would like to plot them in the following way: on the x axis I have the points from vector a, and on the y axis I have the cumulative sum of the elements of vector b up to that point, divided by sum(b). Therefore I will have the points:
(7; 0.5) - 0.5 = 5/(5+2+2+1)
(8; 0.7) - 0.7 = (5+2)/(5+2+2+1)
(13; 0.9) ...
(6; 1) ...
I assume that this explanation might not help, so I included the image
Because this looks to me like a cumulative distribution function, I tried my luck with cdfplot, but with no success.
Another option would be to draw the image by plotting each line segment separately, but I hope that there is a better way of doing this.
I find the values on the x axis a little confusing. Leaving that aside for the moment, I think this does what you want:
b = [5 2 2 1];
stairs(cumsum(b)/sum(b));
set(gca,'Ylim',[0 1])
And if you really need those values on the x axis, simply rename the ticks of that axis:
a = [7 8 13 6];
set(gca,'xtick',1:length(b),'xticklabel',a)
Also, grid on will add a grid to the plot.
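Putting the pieces together (the same code as above, just collected into one script):

a = [7 8 13 6];
b = [5 2 2 1];
stairs(cumsum(b)/sum(b));
set(gca, 'Ylim', [0 1]);
set(gca, 'xtick', 1:length(b), 'xticklabel', a);
grid on;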

Smoothing out of rough plots

I want to draw some plots in MATLAB.
Details: For class 1, p(x|c1) is uniform for x in [2, 4] with the parameters a = 1 and b = 4. For class 2, p(x|c2) is exponential with parameter lambda = 1. Besides, p(c1) = p(c2) = 0.5. I would like to draw a sketch of the two class densities multiplied by P(c1) and P(c2) respectively, as a function of x, clearly showing the optimal decision boundary (or boundaries).
I have the solution to this problem; this is what the writer drew (and what I want to get), but there's no MATLAB code, so I want to do it all by myself.
And this is what I drew.
And this is the MATLAB code I wrote.
x=0:1:8;
pc1 = 0.5;
px_given_c1 = exppdf(x,1);
px_given_c2 = unifpdf(x,2,4);
figure;
plot(x,px_given_c1,'g','linewidth',3);
hold on;
plot(x,px_given_c2,'r','linewidth',3);
axis([0 8 0 0.5]);
legend('P(x|c_1)','P(x|c_2)');
figure;
plot(x,px_given_c1.*pc1,'g','linewidth',3);
hold on;
plot(x,px_given_c2.*(1-pc1),'r','linewidth',3);
axis([0 8 0 0.5]);
legend('P(x|c_1)P(c_1)','P(x|c_2)P(c_2)');
As you can see, they are almost similar, but I am having a problem with the uniform distribution, which is drawn in red. How can I change it?
You should probably change x=0:1:8; to something like x=0:1e-3:8; or even x=linspace(0,8,1000); to have finer plotting. This increases the number of points in the vectors (and therefore the number of line segments) MATLAB will use to plot.
Explanation: MATLAB works with line segments when it does plotting!
By writing x=0:1:8; you create the vector [0 1 2 3 4 5 6 7 8], which is of length 9, and by applying exppdf and unifpdf respectively you create two vectors of the same length derived from the original vector. So basically you get the vectors [exppdf(0) exppdf(1) ... exppdf(8)] and [unifpdf(0) unifpdf(1) ... unifpdf(8)].
When you issue the plot command afterwards, MATLAB plots only line segments (8 of them in this case, because there are 9 points):
from (x(1), px_given_c1(1)) to (x(2), px_given_c1(2)),
...
from (x(8), px_given_c1(8)) to (x(9), px_given_c1(9)).
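For example, the first figure from the question with this change applied (only the definition of x differs from the original script):

x = linspace(0, 8, 1000);   % 1000 points instead of 9
pc1 = 0.5;
px_given_c1 = exppdf(x, 1);
px_given_c2 = unifpdf(x, 2, 4);
figure;
plot(x, px_given_c1, 'g', 'linewidth', 3);
hold on;
plot(x, px_given_c2, 'r', 'linewidth', 3);
axis([0 8 0 0.5]);
legend('P(x|c_1)', 'P(x|c_2)');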