Second derivative filter - matlab

The filter f’ = [0 -1/2 0 1/2 0] gives an estimate of the first derivative of the image in the x direction. What is the corresponding second derivative filter f"?
Can someone give me a clue and guide me as to how I would go about this problem?

If you are talking about a 1-D signal, use [.5, -1, .5]. In the case of an image, what you are looking for is probably the "Laplacian" filter, but the actual second derivative is more complicated than that: it's not a single filter, and it's definitely not just a 1-D array.
The "second derivative" could be that filter applied to the x or the y direction. Their sum is the Laplacian. But there is also dxy/dxdy, which is the convolution by something like:
[[-1, 0, 1],
[ 0, 0, 0],
[ 1, 0,-1]]
You should also use something like
[[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1]]
when calculating the directional derivatives (this is the well-known Sobel kernel).
If you really want to understand how all of this works, look for the excellent book "Digital Filters" by Richard Hamming! Too many people use these oversimplified filters. Try to learn about windowing and Lanczos smoothing. There is also little reason for anyone not to be using something like Shigeru Ando's consistent gradient operators.
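As a quick sanity check, here is a minimal MATLAB sketch (the test signal is made up for illustration) showing that convolving the first-derivative filter with itself yields a second-derivative filter:
f1 = [-1/2, 0, 1/2];       % the first-derivative filter from the question
f2 = conv(f1, f1)          % gives [0.25, 0, -0.5, 0, 0.25], a second-difference filter
s  = sin(0.1 * (1:100));   % test signal; its true second derivative is -0.01*sin(0.1*n)
d2 = conv(s, f2, 'same');  % estimate of the second derivative (edges are unreliable)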

Related

Matlab Plotting Two Matrices and Marking Some X-Coordinates on It Based on Another Vector

Say I have two vectors I want to plot in MATLAB, and I have a third vector that I want to use to mark a small "X" on the plot where its values occur on one of the vectors. How do I do that?
To clarify, say I have a vector a = [1, 2, 3, 4, 5], another b = [1, 2, 3, 4, 5, 6], and an identifier vector c = [1, 4]. How do I plot these and show an X on a/b on the plot at x = 1 and x = 4?
To find the points that you want, you can use the ismember function, as shown below.
a = 1:5;
c = [1, 4];
idx = ismember(a, c);   % logical mask: true where an entry of a appears in c
hold on
plot(find(~idx), a(~idx), 'ro')   % values of a that do NOT match an entry of c
plot(find(idx),  a(idx),  'rx')   % values of a that match an entry of c
I'm not 100% sure this is what you want. You can leave some comments and I (or someone else) can give you a better answer.
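For the concrete example in the question, a minimal sketch (the marker style is just one choice, and it assumes the entries of c are valid indices into a):
a = [1, 2, 3, 4, 5];
b = [1, 2, 3, 4, 5, 6];
c = [1, 4];
hold on
plot(a)                                  % first vector
plot(b)                                  % second vector
plot(c, a(c), 'kx', 'MarkerSize', 10)    % X markers on a at x = 1 and x = 4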

Computing Image Saliency via Neural Network Classifier

Assume that we have a convolutional neural network trained to classify (w.l.o.g. grayscale) images, in TensorFlow.
Given the trained net and a test image, one can trace which pixels of it are salient or, "equivalently", which pixels are most responsible for the output classification of the image. A nice explanation, with implementation details in Theano, is given in this article.
Assume that for the first layer of convolutions, which is directly linked to the input image, we have the gradient of the classification function w.r.t. the parameters of every convolutional kernel.
How can one propagate the gradient back to the input layer, so as to compute a partial derivative on every pixel of the image?
Propagating and accumulating the gradient back would give us the salient pixels (those with large-magnitude derivatives).
To find the gradient w.r.t. the kernels of the first layer, so far I have:
Replaced the usual loss operator with the output layer operator.
Used the compute_gradients function.
All in all, it looks like:
opt = tf.train.GradientDescentOptimizer(1)
grads = opt.compute_gradients(output)     # list of (gradient, variable) pairs
grad_var = [grad for grad, var in grads]  # keep only the gradient tensors
g1 = sess.run([grad_var[0]])
Where, the "output" is the max of the output layer of the NN.
And g1, is a (k, k, 1, M) tensor, since I used M: k x k convolutional kernels on the first layer.
Now I need to find the correct way to propagate g1 back onto every input pixel, so as to compute their derivatives w.r.t. the output.
To compute the gradients, you don't need to use an optimizer, and you can directly use tf.gradients.
With this function, you can directly compute the gradient of output with respect to the image input, whereas the optimizer compute_gradients method can only compute gradients with respect to Variables.
The other advantage of tf.gradients is that you can specify the gradients of the output you want to backpropagate.
So here is how to get the gradients of an input image with respect to output[1, 1]:
we have to set the output gradients to 0 everywhere except at index [1, 1]
import numpy as np
import tensorflow as tf

input = tf.ones([1, 4, 4, 1])
filter = tf.ones([3, 3, 1, 1])
output = tf.nn.conv2d(input, filter, [1, 1, 1, 1], 'SAME')

# backpropagate a gradient of 1 only for output[0, 1, 1, 0]
grad_output = np.zeros((1, 4, 4, 1), dtype=np.float32)
grad_output[0, 1, 1, 0] = 1.

grads = tf.gradients(output, input, grad_output)

sess = tf.Session()
print(sess.run(grads[0]).reshape((4, 4)))
# prints [[ 1.  1.  1.  0.]
#         [ 1.  1.  1.  0.]
#         [ 1.  1.  1.  0.]
#         [ 0.  0.  0.  0.]]

Matlab code for positive definite -1/1 matrix

Could anybody tell me how to generate a random sign (-1/1) positive definite matrix in Matlab?
Update: Thanks to all who replied, that was very helpful
I am experimenting with compressed sensing using l1 Magic with different sensing matrices. Gaussian worked well, but with Bernoulli, l1 Magic gives me a "matrix must be positive definite" error; that's why I was asking my question.
A really good answer would require more knowledge about the exact requirements and context. From what I've read:
What you're asking for may be doable for non-symmetric matrices
As horchler pointed out,
A = [ 1, 0, 0;
      0, 1, 0;
     -1, 1, 1];
has all positive eigenvalues, hence is positive definite.
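A quick way to verify this in MATLAB (a sketch, with A as defined above):
eig(A)   % returns [1; 1; 1]: all eigenvalues are positive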
How to find these efficiently for large sized matrices seems to me a non-trivial problem, but I don't really know.
What you're asking for does not appear possible for symmetric matrices
Restricting entries to the set {-1, 1}, there are NO positive definite matrices of size 2x2, 3x3, 4x4, 5x5, or 6x6.
Restricting entries to the set {-1, 0, 1}, the ONLY positive definite matrix that I've found, by enumerating all possibilities, is the identity matrix! I'd conjecture it's impossible for any matrix size, but I don't know for sure.
Brute force enumeration of 2x2 symmetric matrices:
[-1, -1; -1, -1]   eigenvalues: -2, 0
[-1, -1; -1,  1]   eigenvalues: -1.4, 1.4
[-1,  1;  1, -1]   eigenvalues: -2, 0
[-1,  1;  1,  1]   eigenvalues: -1.4, 1.4
[ 1,  1;  1,  1]   eigenvalues: 0, 2
[ 1,  1;  1, -1]   eigenvalues: -1.4, 1.4
[ 1, -1; -1,  1]   eigenvalues: 0, 2
[ 1, -1; -1, -1]   eigenvalues: -1.4, 1.4
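The enumeration is easy to reproduce; a minimal MATLAB sketch:
vals = [-1, 1];
for a = vals
    for b = vals
        for d = vals
            M = [a, b; b, d];      % symmetric 2x2 with +-1 entries
            e = sort(eig(M));
            fprintf('[%2d %2d; %2d %2d]  eigenvalues: %5.1f, %5.1f\n', ...
                    a, b, b, d, e(1), e(2));
        end
    end
end
% every matrix has a non-positive eigenvalue, so none is positive definite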

Subtract neighbours from matrix

OK, so I suspect that this might be rather easy, but I cannot find any good help online except for bits and pieces.
I have an NxN matrix that I want to do a sort of edge detect on. I want to subtract a factor of all neighbouring values, if they exist.
So if my matrix consists of [5, 5, 5 ; 5, 10, 5 ; 5, 5, 5], I want it to return [4, 3, 4 ; 3, 8, 3 ; 4, 3, 4] (an incredibly rough estimate just to give an example).
I can see how it would be done with for-loops, but I'm thinking it may be doable in an easier and less taxing way. So far, nlfilter seems to be a possible way out, but I cannot seem to figure it out completely on my own.
The mathematical operation you're describing is called convolution.
Convolution basically amounts to replacing every pixel in an image with a weighted sum of itself and its neighbours. The weights are given in a (usually small) matrix called a kernel, or sometimes impulse response.
For edge detection I recommend either the Sobel or discrete Laplacian kernels.
The MATLAB function conv2 can do the image convolution for you.
kernel = [ 0  1  0;
           1 -4  1;
           0  1  0 ];                 % discrete Laplacian
edges = conv2(img, kernel, 'same');   % img is your input matrix
You're probably looking for something like filter2(h,X)
Given your example, h would be something like
h = [   0, -0.1,    0;
     -0.1,    1, -0.1;
        0, -0.1,    0];
This takes the value at the center and subtracts 1/10 of each of its 4 neighbors. If you use filter2(h,X,'same'), where X is your original matrix, it will pad with zeros, which appears to be what you want to get the edge values right.
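A quick check against the example in the question (a sketch):
X = [5, 5, 5; 5, 10, 5; 5, 5, 5];
h = [   0, -0.1,    0;
     -0.1,    1, -0.1;
        0, -0.1,    0];
filter2(h, X)   % returns [4, 3, 4; 3, 8, 3; 4, 3, 4], the desired output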

Derivative of Gaussian filter in Matlab

Is there a derivative of Gaussian filter function in Matlab? Would it be proper to convolve the Gaussian filter with [1 0 -1] to obtain the result?
As far as I know there is no built-in derivative of Gaussian filter. You can very easily create one yourself, as follows:
For 2D
G1 = fspecial('gaussian', [round(k*sigma), round(k*sigma)], sigma);   % 2-D Gaussian
[Gx,  Gy ] = gradient(G1);    % first derivatives
[Gxx, Gxy] = gradient(Gx);    % second derivatives
[Gyx, Gyy] = gradient(Gy);
Here k determines the size of the filter (depending on how much support you want).
For 1-D it's the same, but you don't have two gradient directions, just one. Also, you would create the Gaussian filter in another way, and I assume you already have your preferred method.
Here I gave you up to second order, but you can see the pattern to proceed to further orders.
The convolving filter you posted ([1 0 -1]) looks like a finite difference. Although yours is, I believe, conceptually right, the most correct and common way to do it is with [1 -1] or [-1 1]; with that 0 in the middle you skip the central sample when approximating the derivative. It may work as well (but bear in mind that it is an approximation that diverges from the true result at higher orders), but I usually prefer the method I posted above.
Note: If you are indeed interested in 2-D filters, the derivative of Gaussian family has the steerability property, meaning that you can easily create a filter for a derivative of Gaussian in any direction from the ones I gave you above. Supposing the direction you want is defined as
cos(theta), sin(theta)
Then the derivative of Gaussian in that direction is
Gtheta = cos(theta)*Gx + sin(theta)*Gy
If you reapply this recursively you can go to any order you like.
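For example, a minimal sketch (reusing Gx and Gy from above; k, sigma, and theta are made-up values) steering the first derivative to 45 degrees:
k = 6; sigma = 2;
G1 = fspecial('gaussian', [round(k*sigma), round(k*sigma)], sigma);
[Gx, Gy] = gradient(G1);
theta  = pi/4;                             % desired direction
Gtheta = cos(theta)*Gx + sin(theta)*Gy;    % derivative of Gaussian at angle theta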
Or just use the derivative of a Gaussian, which is no more difficult to compute than the Gaussian itself.
G(x,y) = exp(-(x^2+y^2)/(2*s^2))
d/dx G(x, y) = -x/s^2 G(x,y)
d/dy G(x, y) = -y/s^2 G(x,y)
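Sampling these formulas directly gives a usable filter; a minimal 1-D sketch (the 3*s support and the test signal are arbitrary choices):
I = sin(linspace(0, 4*pi, 200));   % example signal
s = 2;                             % standard deviation
x = -ceil(3*s):ceil(3*s);          % sample grid
g  = exp(-x.^2 / (2*s^2));         % 1-D Gaussian (unnormalized)
dg = -(x / s^2) .* g;              % its derivative, from the formula above
dI = conv(I, dg, 'same');          % smoothed derivative estimate of I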
function y = dgaus(x, n)
%DGAUS nth derivative of exp(-x.^2)
% Uses the recurrence f(k+1) = -2*x*f(k) - 2*k*f(k-1), where f(k) is the
% kth derivative of exp(-x.^2), tracking one odd- and one even-order term.
    odd  = 0*x;           % odd-order derivative, starts as 0
    even = exp(-x.^2);    % even-order derivative, starts as the function itself
    for order = 0:(floor(n/2) - 1)
        odd  = -4*order*odd - 2*x.*even;         % order 2m   -> 2m+1
        even = -(4*order + 2)*even - 2*x.*odd;   % order 2m+1 -> 2m+2
    end
    if mod(n, 2) == 0
        y = even;
    else
        y = -2*(n-1)*odd - 2*x.*even;            % final step to odd order n
    end
end
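A quick usage check (a sketch): dgaus(x, 2) should match the analytic second derivative of exp(-x.^2), which is (4*x.^2 - 2).*exp(-x.^2):
x   = linspace(-3, 3, 61);
y2  = dgaus(x, 2);                   % second derivative via the recurrence
ref = (4*x.^2 - 2) .* exp(-x.^2);    % analytic second derivative
max(abs(y2 - ref))                   % ~0 up to floating-point rounding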
end