Determination of PSF via deconvolution in MATLAB

I'm trying to determine the point spread function (PSF) of a certain microscope by deconvolving a measurement of a test pattern (representing the blurred image) with a simulation of that measurement (representing the non-blurred image). The "images" themselves are lines taken from a microscopic image of the pattern (a height profile, so to say), in other words, I'm working in 1D. (I then want to use the results on 2D problems, just to mention that in passing.)
My question: is there a way to use the deconvolution functions MATLAB already provides (deconvlucy, deconvwnr, deconvreg, ...) to obtain the PSF (normally their input is a blurred image and a PSF), or is there another approach to extract the PSF from these two data sets? I already tried taking the FFT of both "images", dividing them and applying the inverse FFT to the quotient (PSF = ifft(fft(measured)./fft(sim))), but the PSF from this ansatz does not yield the desired result when used in a later deconvolution.
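One common way to stabilize that spectral division is Wiener-style regularization instead of the raw quotient. A minimal sketch, assuming measured and sim are equal-length 1D line profiles (eps_reg is a hand-tuned constant, not something from the question):
measured = measured(:);                        % blurred line profile (column vector)
sim      = sim(:);                             % simulated, non-blurred profile
M = fft(measured);
S = fft(sim);
eps_reg = 1e-3 * max(abs(S))^2;                % guards against division by ~0
H = (M .* conj(S)) ./ (abs(S).^2 + eps_reg);   % Wiener-style spectral division
psf = real(ifft(H));
psf = fftshift(psf);                           % center the peak for inspection
psf = psf / sum(psf);                          % normalize the PSF to unit area
plot(psf);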

Related

Replicate smart sharpen in matlab

I'm trying to replicate the Photoshop Smart Sharpen filter in MATLAB.
The filter has three modes: "gauss", "lens" and "motion". I'm interested in "gauss" and "lens". My assumption is that the filter uses an "enhanced" unsharp masking algorithm for "gauss" and "lens". According to the documentation (https://helpx.adobe.com/photoshop/using/adjusting-image-sharpness-blur.html), a form of edge detection is involved for the "lens" algorithm.
I've created a test image taken from here (https://blog.minhazav.dev/lowpass-highpass-band-reject-and-band-pass-filter/), applied the filter in Photoshop and compared the result to my own unsharp-mask highpass filter.
You can see the test pattern below.
I contains the original image, normalized to [0,1]. My highpass-filtered image is generated like so:
sigma = 10;
g = fspecial("gaussian",2*ceil(3*sigma)+1,sigma);
LP = imfilter(I, g, "replicate");
HP_TEST = I-LP;
SS contains the smart-sharpened image created in Photoshop (amount=100, radius=10).
If the Photoshop algorithm were a standard unsharp mask, the highpass part of the image should be retrievable by:
HP_GT = SS-I;
In the plot below I'm comparing ps smartSharpen in "gauss" mode with a highpass filtered version
of the test pattern.
hold on;
plot(HP_GT(ceil(h/2),1:end),"k");
plot(HP_TEST(ceil(h/2),1:end),"b");
In theory I expected the results to be the same but they are not. PS seems to do some additional processing:
the peaks seem to be "inverted" at some point. Does anyone have an idea what might be going on?
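For reference, this is the plain unsharp-mask model implicitly assumed above (a sketch reusing I and HP_TEST from the snippets; amount = 1 corresponds to Photoshop's amount of 100):
amount = 1.0;                      % Photoshop "amount" of 100%
SS_usm = I + amount * HP_TEST;     % standard unsharp-mask sharpening
SS_usm = min(max(SS_usm, 0), 1);   % clip as an 8-bit export would
If Smart Sharpen in "gauss" mode were this plain unsharp mask, SS and SS_usm should coincide (up to clipping) and HP_GT would match HP_TEST; any systematic difference, such as the inverted peaks, points to extra processing on Photoshop's side.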

Why does iradon return negative pixel values?

I took 200 projections at a step angle of 1.8 degrees using LabVIEW software. The size of each image is 2748 x 2748 pixels, uint16. Then, using MATLAB, I load the projection images, do the flat field correction, resize the images by 1/3 and save them as a .mat file. Then I run the code below for the filtered backprojection.
interp = 'linear'; % set interpolation: nearest, linear, spline, pchip, v5cubic
filter = 'Hann';   % set filter: Ram-Lak, Shepp-Logan, Cosine, Hamming, Hann, None
for s = 1:916
    for i = 1:200
        a(i,:) = proj065(:,s,i);
    end
    a = a';
    %figure(3), imagesc(a)
    b = iradon(a, 1.8, interp, filter);
    imagesc(b);
    recon(:,:,s) = b;
    s
    clear a
end
If I use a filter in this code, I get negative pixel values.
But if I run the code without the filter, I get only positive pixel values.
Any idea why iradon returns negative pixel values in filtered back projection?
Thank you.
Nurul
Yes, the FBP (filtered back-projection) algorithm will do that. It can wrongly reconstruct voxels as having negative values, due to noise and discretization in the data. Generally there is nothing you can do about it other than clipping those values.
As my PhD is about tomography reconstruction algorithms I feel contractually obligated (joking) to suggest the use of iterative algorithms to possibly obtain better images (never worse, often considerably better). Check SART/SIRT or CGLS for this problem.
However, double-check how you pass the angles. In tomography the step size alone is not always enough to reconstruct an image; you generally need the actual projection angles. iradon interprets a scalar second argument as the incremental angle between projections, but it is clearer (and required for non-uniform sampling) to pass the full vector of angles.
In your case, theta should be theta = linspace(0, 360 - 360/200, 200) (i.e. 0:1.8:358.2), and you should call iradon(a, theta, ...).
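A minimal sketch of that call, assuming proj065 is the flat-field-corrected 916 x 916 x 200 projection stack from the question (the clipping of negative values at the end is optional):
theta = (0:199) * 1.8;                      % explicit projection angles in degrees
nSlices = size(proj065, 2);
for s = 1:nSlices
    sino = squeeze(proj065(:, s, :));       % one 916 x 200 sinogram per slice
    b = iradon(sino, theta, 'linear', 'Hann');
    b(b < 0) = 0;                           % optional: clip negative values
    if s == 1
        recon = zeros([size(b), nSlices]);  % preallocate once the output size is known
    end
    recon(:, :, s) = b;
end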

fftshift before calculating fourier transform: Matlab

I am looking at some FFT code in a matlab project and the FFT and inverse FFT is computed this way:
% Here image is a 2D image.
image_fft = fftshift(image,1);
image_fft = fftshift(image_fft,2);
image_fft = fft(image_fft,[],1);
image_fft = fft(image_fft,[],2);
image_fft = fftshift(image_fft,1);
image_fft = fftshift(image_fft,2);
% Some processing and then same sequence of fftshift, ifft and fftshift to move to
% time domain
I tried to find some information online but I'm having trouble understanding why the fftshift needs to be done before computing the FFT.
Another question I have is whether this is something really MATLAB specific. For example, I am planning to port this code to C++ and use KISS FFT. Do I need to be wary of this?
The reason people like to swap prior to the DFT is that it makes the center pixel of the image the one with zero phase shift. It often makes algorithms that depend on phase easier to think about and implement. It is not MATLAB specific, and if you want to port an exact version of the code to another language, you'll need to perform the quadswap beforehand too.
EDIT:
Let me give an example that I hope will clear things up. Let's say that our image is the sum of a bunch of sinc functions with varying locations throughout the image. In the frequency domain, each of these sinc functions is a rect function with the same amplitude but with a different linear phase component that determines the location of the sinc in the image domain. By swapping the image prior to taking the DFT, we make the linear phase component of the frequency domain representation of the center pixel be zero. Moreover, the linear phase components of the other sinc functions will now be a function of their distance from the center pixel. If we didn't swap the image beforehand, then the linear phase components of the rect functions would be a function of their distance from the pixel in the top-left of the image. This would be non-intuitive and would involve the same kind of phase wrapping considerations that one sees when equating the frequencies in the range (pi,2pi) rad/sample with the more intuitive (-pi,0) rad/sample.
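A quick toy example that shows this (even image size assumed, so fftshift and ifftshift coincide):
N = 8;
img = zeros(N);
img(N/2 + 1, N/2 + 1) = 1;        % impulse at the "center" pixel
F_plain   = fft2(img);            % linear phase ramp across the spectrum
F_shifted = fft2(ifftshift(img)); % impulse moved to (1,1): all-ones spectrum
max(abs(angle(F_plain(:))))       % pi: phase alternates across the spectrum
max(abs(angle(F_shifted(:))))     % ~0: the center pixel is now the zero-phase reference
Note that ifftshift is the exact inverse of fftshift; for even sizes, as in the code above, using fftshift before fft gives the same result.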
For images, it's better to use fft2. It's MATLAB's convention to arrange 2D FFTs with the DC component in the corners, presumably because of the row/array conventions. fftshift allows for a more intuitive display of the FFT with the DC in the center.
I don't fully understand what the piece of code you copied is about; here is an example of fft and inverse fft of an image using MATLAB.
And a more detailed tutorial here.

Image Enhancement using combination between SVD and Wavelet Transform

My objective is to handle illumination and expression variations in an image. So I tried to implement MATLAB code that works with only the important information within the image, in other words only the "USEFUL" information. To do that, it is necessary to remove all unimportant information from the image.
Reference: this paper
Let's see my steps:
1) Apply histogram equalization to get histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.
2) Apply SVD approximations to the histo_equalized_image. To do that, I first compute the SVD decomposition ([L D R] = svd(histo_equalized_image)); the singular values are then used to build the derived image J = L*power(D, i)*R', where i varies between 1 and 2.
3) Finally, the derived image is combined with the original image: C = (MyGrayImage + (a*J)) / (1 + a), where a varies from 0 to 1.
4) But all the steps above are not able to perform well under varying conditions. So finally, a wavelet transform should be used to handle those variations (we use only the LL image block). Low frequency component contains the useful information, also, unimportant information gets lost in this component. The (LL) component is insensitive to illumination changes and expression variations.
I wrote MATLAB code for that, and I would like to know if my code is correct or not (and if not, how to correct it). Furthermore, I would like to know whether I can optimize these steps. Can this method be improved, and if so, how?
Let's now look at my MATLAB code:
%Read the RGB image
image=imread('img.jpg');
%convert it to grayscale
image_gray=rgb2gray(image);
%convert it to double
image_double=im2double(image_gray);
%Apply histogram equalization
histo_equalized_image=histeq(image_double);
%Apply the svd decomposition
[U S V] = svd(histo_equalized_image);
%calculate the derived image
P=U * power(S, 5/4) * V';
%Linearly combine both images
J=(single(histo_equalized_image) + (0.25 * P)) / (1 + 0.25);
%Apply DWT
[c,s]=wavedec2(J,2,'haar');
a1=appcoef2(c,s,'haar',1); % I need only the LL block.
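As a side note, a minimal sketch (assuming the Wavelet Toolbox) of extracting the LL block at the same level as the decomposition, and of reconstructing an approximation image of the original size if that is what is needed:
lev = 2;
[c, s] = wavedec2(J, lev, 'haar');
LL   = appcoef2(c, s, 'haar', lev);     % LL coefficient block at level lev
Japp = wrcoef2('a', c, s, 'haar', lev); % approximation image, same size as J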
You need to define what you mean by "USEFUL" or "important" information, and only then decide on the steps.
Histogram equalization is a global transformation, which gives different results on different images. You can make an experiment: apply histeq to an image that benefits from it. Then make two copies of the original image, draw a black square (30% of the image area) into one and a white square into the other. Then apply histeq to both copies and compare the results (a sketch of this experiment follows below).
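A minimal sketch of that experiment (cameraman.tif is just a sample image shipped with the Image Processing Toolbox):
I = im2double(imread('cameraman.tif'));
[h, w] = size(I);
side = round(sqrt(0.30 * h * w));            % square covering ~30% of the area
I_black = I;  I_black(1:side, 1:side) = 0;   % copy with a black square drawn in
I_white = I;  I_white(1:side, 1:side) = 1;   % copy with a white square drawn in
figure;
subplot(2,3,1), imshow(I),               title('original');
subplot(2,3,2), imshow(I_black),         title('black square');
subplot(2,3,3), imshow(I_white),         title('white square');
subplot(2,3,4), imshow(histeq(I)),       title('histeq(original)');
subplot(2,3,5), imshow(histeq(I_black)), title('histeq(black)');
subplot(2,3,6), imshow(histeq(I_white)), title('histeq(white)');
The same global transformation produces visibly different gray-level mappings once a large dark or bright region is present, which is the point of the experiment.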
"Low frequency component contains the useful information, also, unimportant information gets lost in this component."
Really? Edges and shapes, which are (at least for me) quite important, are in the high frequencies. Again, we need a definition of "useful" information.
I cannot see the theoretical background for why and how your approach would work. Could you explain a little why you chose this method?
P.S. I'm not sure whether these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.

Matlab image centroid simulation

I was given this task. I am a noob and need some pointers to get started with centroid calculation in MATLAB:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise and plot the intensities. The position of the centroid now changes because of the noise, and I need to bring it back to its original position by:
- setting a clipping level to get rid of the noise (noise reduction by clipping or smoothing),
- applying a sliding average / low-pass filter (averaging filter of 3-5 samples) or a convolution filter kernel, i.e. matrix operations on the 2-D image,
- calculating the centroid from the means.
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked, give you pointers, and you'll have to read the related documentation:
a) to produce a 2-d Gaussian use meshgrid or ndgrid
b) to add noise to the image look into rand ,randn or randi, depending what exactly you need.
c) to plot the intensities use imagesc
d) to find the centroid there are several ways; try searching SO further, you'll find many discussions. You can also check the MathWorks File Exchange for different implementations (a sketch combining these pointers follows below).
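A minimal sketch that puts these pointers together (grid size, sigma, noise level and clipping threshold are arbitrary choices, not part of the original task):
N = 101;  sigma = 8;
[x, y] = meshgrid(1:N, 1:N);
x0 = 40;  y0 = 65;                                    % true peak position
I = exp(-((x - x0).^2 + (y - y0).^2) / (2*sigma^2));  % 2-D Gaussian
I_noisy = I + 0.1 * randn(N);                         % additive random noise
I_clipped = max(I_noisy - 0.2, 0);                    % clipping level to suppress noise
k = ones(3) / 9;                                      % 3x3 sliding-average kernel
I_smooth = conv2(I_clipped, k, 'same');               % smoothing by convolution
imagesc(I_smooth); axis image; colorbar;              % plot the intensities
w = I_smooth / sum(I_smooth(:));                      % intensity-weighted centroid
cx = sum(sum(w .* x));
cy = sum(sum(w .* y));
fprintf('centroid at (%.2f, %.2f), true peak at (%d, %d)\n', cx, cy, x0, y0);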