I'm trying to replicate Photoshop's Smart Sharpen filter in MATLAB.
The filter has three modes: "gauss", "lens" and "motion"; I'm interested in "gauss" and "lens". My assumption is that the filter uses an "enhanced" unsharp-masking algorithm for "gauss" and "lens". According to the documentation (https://helpx.adobe.com/photoshop/using/adjusting-image-sharpness-blur.html), a form of edge detection is involved in the "lens" algorithm.
I've created a test image taken from here (https://blog.minhazav.dev/lowpass-highpass-band-reject-and-band-pass-filter/), applied the filter in Photoshop, and compared it to my own unsharp-mask highpass filter.
You can see the test pattern below.
I contains the original image normalized to [0,1]. My highpass-filtered image is generated like so:
sigma = 10;
g = fspecial("gaussian", 2*ceil(3*sigma)+1, sigma); % Gaussian kernel, odd size covering ~3*sigma per side
LP = imfilter(I, g, "replicate");                   % lowpass (blurred) image
HP_TEST = I - LP;                                   % highpass residual
SS contains the smart-sharpened image created in Photoshop (amount = 100, radius = 10).
If the PS algorithm were a standard unsharp mask, the highpass part of the image should be retrievable by:
HP_GT = SS - I;
In the plot below I'm comparing PS Smart Sharpen in "gauss" mode with a highpass-filtered version of the test pattern.
h = size(I,1);                    % image height, needed for the center row
figure, hold on
plot(HP_GT(ceil(h/2),:), "k");    % Photoshop's highpass component
plot(HP_TEST(ceil(h/2),:), "b");  % my highpass component
In theory I expected the results to be the same, but they are not. PS seems to do some additional processing: the peaks seem to be "inverted" at some point. Does anyone have an idea what might be going on?
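For completeness, here is the baseline I'm testing against: a plain unsharp mask. Note that mapping Photoshop's amount = 100 to a gain of 1.0 is my assumption:

amount = 1.0;                 % assumed equivalent of Photoshop's amount = 100
US = I + amount * (I - LP);   % classic unsharp mask, using LP from above
HP_US = US - I;               % equals amount*(I - LP) by construction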
I am interested in separating features in an image using the watershed algorithm. Following the MATLAB tutorial, I wrote a small proof-of-principle script that I can later use in my image analysis.
Im = imread('../../Pictures/testrec.png');
bw = im2bw(rgb2gray(Im)); %binarize: the ~bw logic below assumes a binary image
figure
imshow(bw,'InitialMagnification','fit'), title('bw')
%Compute the distance transform of the complement of the binary image.
D = bwdist(~bw);
figure
imshow(D,[],'InitialMagnification','fit')
title('Distance transform of ~bw')
%Complement the distance transform, and force pixels that don't belong to the objects to be Inf.
D = -D;
D(~bw) = Inf;
%Compute the watershed transform and display the resulting label matrix as an RGB image.
L = watershed(D);
L(~bw) = 0;
rgb = label2rgb(L,'jet',[.5 .5 .5]);
figure
imshow(rgb,'InitialMagnification','fit')
title('Watershed transform of D')
It appears that the feature separation is somewhat random, as can be seen from the elongated feature in the middle. However, there do not seem to be any parameters for the watershed algorithm that could be used to optimize its performance. Can you suggest how such a parameter could be introduced, or a better algorithm to process the data?
Bonus question: I am interested in first separating my image using bwconncomp, then selectively applying the watershed algorithm to only some of the regions. Assume I know which of the cc.PixelIdxList regions I want to apply the algorithm to; how do I get a new PixelIdxList with the separated components? A sketch of what I mean follows.
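To make the bonus question concrete, here is a rough sketch of what I have in mind (splitIdx is a hypothetical list of the components to be split):

cc = bwconncomp(bw);
splitIdx = [2 5];                                % hypothetical: components I want to split
newPixelIdxList = {};
for k = 1:cc.NumObjects
    if ismember(k, splitIdx)
        sub = false(cc.ImageSize);               % binary image containing only this component
        sub(cc.PixelIdxList{k}) = true;
        D = -bwdist(~sub);                       % negated distance transform
        D(~sub) = Inf;
        Lk = watershed(D);
        Lk(~sub) = 0;
        ccK = bwconncomp(Lk > 0);                % re-label the split pieces
        newPixelIdxList = [newPixelIdxList, ccK.PixelIdxList]; %#ok<AGROW>
    else
        newPixelIdxList{end+1} = cc.PixelIdxList{k}; %#ok<AGROW>
    end
end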
Watershed transformation cannot separate convex shapes.
There is no way to change that: a convex shape always results in one object.
Blobs that are very close to convex will always give poor watershed results.
The only reason you get that "somewhat random" result instead of a single basin is that a few pixels are a bit off the perimeter.
Watershed results are improved by pre- and post-processing, but that is very specific to the problem at hand; one common pre-processing step is sketched below.
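The depth threshold h in the following sketch is exactly the kind of tuning parameter you were asking for: it suppresses shallow minima in the distance transform so they no longer seed their own basins (assuming bw is your binary image):

h = 2;                         % minimum basin depth in pixels (tune per image)
D = -bwdist(~bw);              % negated distance transform
mask = imextendedmin(D, h);    % keep only minima deeper than h
D2 = imimposemin(D, mask);     % impose minima exactly at those markers
D2(~bw) = Inf;
L = watershed(D2);
L(~bw) = 0;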
I'm trying to determine the point spread function (PSF) of a certain microscope via deconvolution of a measurement of a test pattern (representing the blurred image) and a simulation of this measurement (representing the non-blurred image). The "images" themselves are line profiles taken from a microscopic image of the pattern (a height profile, so to say); in other words, I'm working in 1-D. (I then want to use the results on 2-D problems, just to mention that by the way.)
My question now: is there any way to utilize the deconvolution functions MATLAB already provides (deconvlucy, deconvwnr, deconvreg, ...) to get to the PSF (their input normally demands a blurred image and a PSF), or is there another approach to get the PSF out of these two data sets? I already tried taking the FFT of the "images", dividing them element-wise, and taking the inverse FFT of the quotient (PSF = ifft(fft(measured)./fft(sim)), note the element-wise ./), but the PSF from this ansatz does not yield the desired result when used in a later deconvolution.
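For reference, a regularized (Wiener-style) variant of that division, with a hand-tuned constant lambda; this is a sketch, not a definitive method:

lambda = 1e-3;                                             % hand-tuned regularization constant
Fm = fft(measured);                                        % blurred line profile
Fs = fft(sim);                                             % sharp (simulated) line profile
PSF = real(ifft(Fm .* conj(Fs) ./ (abs(Fs).^2 + lambda))); % regularized spectral division
PSF = fftshift(PSF);                                       % center the peak for inspection
PSF = PSF / sum(PSF);                                      % normalize to unit sum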
I have a gray image with noise. I am new to removing noise from images, so I don't know what type of noise it is or how to remove it. My aim is to convert the image to binary using a local threshold after removing the noise.
Does anybody have an idea what type of noise this is, and a method to remove it?
The image:
Typically, in microscopy, noise comes from two sources:
1) Gaussian/electronic noise
This type of noise comes from fluctuations in the detector due to quantum effects in the electronics. It is randomly generated and follows a Gaussian distribution, so a Gaussian filter may be optimal for removing it.
2) Shot noise
Photons arriving at the detector are converted to an electric signal through the photoelectric effect, and fluctuations in the number of photons arriving at the detector create shot noise, which you can hardly eliminate and which is usually predominant during acquisition. It follows a Poisson distribution, which looks much like a Gaussian, so a Gaussian filter might be appropriate here as well.
So, to come back to your question: a Gaussian filter does look like the most intuitive choice, although an average filter could be used as well. Here is some sample code you could try and play around with:
clear
close all
clc
A = imread('http://i.stack.imgur.com/IlqAi.jpg');
BW = im2bw(A,.9); %//Threshold image
h = fspecial('gaussian', [5 5],.8); %// Create gaussian filter
BW2 = imfilter(BW,h); %// Apply filter
imshow(BW2); %// Display image
which results in the following:
You can change the filter parameters (i.e. the size of the kernel and the sigma value) and see how they affect the outcome. Here are other filters you can use as well:
Median:
BW2 = medfilt2(BW,[3 3]); %// Median filter
or average:
h = fspecial('average', 3); %//Average filter
BW2 = imfilter(BW,h);
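Going back to the shot-noise point above, an optional refinement (my addition; strictly speaking it applies to raw photon counts rather than scaled images): a variance-stabilizing Anscombe transform makes Poisson noise approximately Gaussian before filtering.

A   = im2double(imread('http://i.stack.imgur.com/IlqAi.jpg'));
As  = 2*sqrt(A + 3/8);                  %// Anscombe transform (variance stabilization)
g   = fspecial('gaussian', [5 5], .8);
AsF = imfilter(As, g, 'replicate');     %// Gaussian filtering in the stabilized domain
Ad  = (AsF/2).^2 - 3/8;                 %// naive inverse transform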
You might be interested in this link on the Mathworks website that talks about removing noise in images.
Hope that helps!
My objective is to handle illumination and expression variations in an image. So I tried to implement MATLAB code that works with only the important information within the image, in other words, only the "USEFUL" information. To do that, it is necessary to remove all unimportant information from the image.
Reference: this paper
Let's see my steps:
1) Apply histogram equalization to get histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.
2) Apply SVD approximations to the histo_equalized_image. Before that, I applied the SVD decomposition ([L D R] = svd(histo_equalized_image)); the singular values are then used to build the derived image J = L * power(D, i) * R', where i varies between 1 and 2.
3) Finally, the derived image is combined with the original image: C = (MyGrayImage + a*J) / (1 + a), where a varies from 0 to 1.
4) But all the steps above are not able to perform well under varying conditions. So, finally, a wavelet transform is used to handle those variations (we use only the LL block). The low-frequency component contains the useful information, while unimportant information gets lost in this component; the LL component is insensitive to illumination changes and expression variations.
I wrote MATLAB code for that, and I would like to know whether my code is correct (and if not, how to correct it). Furthermore, I am interested to know whether these steps can be optimized. Can this method be improved? If yes, how? Please, I need help.
Now let's see my MATLAB code:
%Read the RGB image
image=imread('img.jpg');
%convert it to grayscale
image_gray=rgb2gray(image);
%convert it to double
image_double=im2double(image_gray);
%Apply histogram equalization
histo_equalized_image=histeq(image_double);
%Apply the svd decomposition
[U S V] = svd(histo_equalized_image);
%calculate the derived image
P=U * power(S, 5/4) * V';
%Linearly combine both images
J=(single(histo_equalized_image) + (0.25 * P)) / (1 + 0.25);
%Apply DWT
[c,s]=wavedec2(J,2,'haar');
a1=appcoef2(c,s,'haar',1); % I need only the LL block.
You need to define what you mean by "USEFUL" or "important" information, and only then design the processing steps.
Histogram equalization is a global transformation, so it gives different results on different images. You can run an experiment, as sketched below: take an image that benefits from histeq, make two copies of it, draw a black square (about 30% of the image area) into one and a white square into the other, then apply histeq and compare the results.
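A minimal sketch of that experiment (pout.tif is just a stand-in low-contrast test image that ships with the Image Processing Toolbox):

I  = im2double(imread('pout.tif'));       % any low-contrast grayscale image
I1 = I; I2 = I;
sq = floor(sqrt(0.30 * numel(I)));        % side of a square covering ~30% of the area
I1(1:sq, 1:sq) = 0;                       % black square
I2(1:sq, 1:sq) = 1;                       % white square
figure
subplot(1,3,1), imshow(histeq(I)),  title('histeq(original)')
subplot(1,3,2), imshow(histeq(I1)), title('histeq(+ black square)')
subplot(1,3,3), imshow(histeq(I2)), title('histeq(+ white square)')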
"The low-frequency component contains the useful information, while unimportant information gets lost in this component."
Really? Edges and shapes, which are (at least to me) quite important, live in the high frequencies. Again, we need a definition of "useful" information.
I cannot see a theoretical background for why and how your approach would work. Could you explain a little why you chose this method?
P.S. I'm not sure whether these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.
Does the 'gaussian' filter in MATLAB convolve the image with the Gaussian kernel? Also, how do you choose the parameters hsize (size of filter) and sigma? What do you base them on?
You first create the filter with fspecial and then filter the image with it using imfilter (which works on multidimensional images, as in the example). Note that imfilter performs correlation by default, but since the Gaussian kernel is symmetric, correlation and convolution give the same result here; you can also pass the 'conv' flag to imfilter to force convolution.
You specify sigma and hsize in fspecial.
Code:
%# Read an image
I = imread('peppers.png');
%# Create the gaussian filter with hsize = [5 5] and sigma = 2
G = fspecial('gaussian',[5 5],2);
%# Filter it
Ig = imfilter(I,G,'same');
%# Display
imshow(Ig)
@Jacob already showed you how to use the Gaussian filter in MATLAB, so I won't repeat that.
I would choose the filter size to be about 3*sigma in each direction (rounded to an odd integer). That way, the filter decays to nearly zero at the edges, and you won't get discontinuities in the filtered image.
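A one-liner version of that rule (sigma = 2 is just an example value):

sigma = 2;
hsize = 2*ceil(3*sigma) + 1;              % ~3*sigma per side, odd size (13 here)
G = fspecial('gaussian', hsize, sigma);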
The choice of sigma depends a lot on what you want to do. Gaussian smoothing is low-pass filtering, which means that it suppresses high-frequency detail (noise, but also edges), while preserving the low-frequency parts of the image (i.e. those that don't vary so much). In other words, the filter blurs everything that is smaller than the filter.
If you're looking to suppress noise in an image in order to enhance the detection of small features, for example, I suggest choosing a sigma that makes the Gaussian just slightly smaller than the feature.
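To make that concrete, a sketch (the /4 factor is my own rough pick, not a rule):

I = im2double(imread('pout.tif'));           % any grayscale test image
featureSize = 10;                            % approx. feature diameter in pixels
sigma = featureSize / 4;                     % hypothetical: Gaussian slightly smaller than the feature
G = fspecial('gaussian', 2*ceil(3*sigma)+1, sigma);
Is = imfilter(I, G, 'replicate');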
In MATLAB R2015a or newer, it is no longer necessary (or advisable from a performance standpoint) to use fspecial followed by imfilter since there is a new function called imgaussfilt that performs this operation in one step and more efficiently.
The basic syntax:
B = imgaussfilt(A,sigma) filters image A with a 2-D Gaussian smoothing kernel with standard deviation specified by sigma.
The size of the filter for a given Gaussian standard deviation (sigma) is chosen automatically, but it can also be specified manually:
B = imgaussfilt(A,sigma,'FilterSize',[3 3]);
The default is 2*ceil(2*sigma)+1.
Additional features of imgaussfilt are the ability to operate on gpuArrays, filtering in the frequency or spatial domain, and advanced image padding options. It looks a lot like IPP... hmmm. Plus, there's a 3-D version called imgaussfilt3.
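A minimal usage sketch (the name-value options shown are the documented ones):

I  = rgb2gray(imread('peppers.png'));
Ig = imgaussfilt(I, 2);                       % sigma = 2, default 9x9 kernel
Ig2 = imgaussfilt(I, 2, 'FilterSize', [13 13], ...
                  'Padding', 'replicate', ...
                  'FilterDomain', 'spatial'); % explicit size, padding, and domain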