I learnt about the deconvlucy and deconvwnr techniques to remove motion blur, and they work pretty well on images with simulated blur. Hence, I tried these algorithms on real footage captured with a mobile phone. I also stabilised the video using the Movavi video editor.
And here is my code:
I = imread('mobile_blur13.png');
imshow(I);
lengthmin = 12;
lengthmax = 15;
thetamin = 331;
thetamax = 335;
figure;
for len = lengthmin:0.2:lengthmax          % 'len' avoids shadowing the built-in length()
    for theta = thetamin:0.5:thetamax
        PSF = fspecial('motion', len, theta);
        res = deconvlucy(I, PSF, 100);     % Lucy-Richardson, 100 iterations
        res2 = deconvreg(I, PSF);          % regularized filter (currently unused)
        noise_var = 0;                     % an NSR of 0 reduces Wiener to inverse filtering
        signal_var = var(double(I(:)));
        estimated_nsr = noise_var/signal_var;
        res1 = deconvwnr(I, PSF, estimated_nsr); % Wiener result (currently unused)
        %res = medfilt2(rgb2gray(res));
        f = imfilter(res, fspecial('average', [3 3])); % light smoothing of ringing artifacts
        imshow(f);
    end
end
But I am getting very bad results. May I know what I am doing wrong?
Here is an image:
Thanks in advance
Deblurring an image with a simulated blur is very different from deblurring an actual camera-captured photo.
Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. There are several methods to deal with this problem such as blind restoration or optical correction using stabilized lenses.
The solution is to use blind deconvolution and the deconvblind command.
https://www.mathworks.com/help/images/ref/deconvblind.html
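A minimal sketch, not a drop-in fix: the initial PSF below reuses the blur-parameter guesses from your loops, and the iteration count is arbitrary; both need tuning.
I = imread('mobile_blur13.png');
initPSF = fspecial('motion', 13, 333);     % rough guess taken from your search ranges
[J, estPSF] = deconvblind(I, initPSF, 30); % refines the image and the PSF together
figure, imshow(J)
Because deconvblind estimates the PSF from the data itself, it tolerates the arbitrary, unknown blur paths of real camera shake far better than sweeping fixed-length, fixed-angle kernels.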
The task is to inpaint the timestamp of the following picture. I use MATLAB's inpaintCoherent, but the result isn't satisfactory. I have attached the original image, the mask, and the inpainted image.
Here is the MATLAB code.
bill = imread('Billiards_ref.png');
mask = imread('mask.png');
bill_inpainted = inpaintCoherent(bill, logical(mask));
imshowpair(bill, bill_inpainted, 'montage')
What image pre or post-processing can I do to improve the quality of the inpainting?
The original image
The mask image
The inpainted image
You may want to consider blurring your mask image a bit prior to the use of inpaintCoherent. This will require some trial and error to see how much smoothing gives you the best image. Based on the images you have posted in your question, here is what I can suggest, and the results:
bill = imread('Billiards_ref.jpg'); % Image you included is a jpg file
mask = imread('mask.png');
% Create a blurred mask using a 2D Gaussian kernel with std dev std_dev (in pixel units)
std_dev = 1; % You may want to change this for different images depending on what the resolution of the original mask is
mask_bl = imgaussfilt(double(mask), std_dev);
bill_inpainted = inpaintCoherent(bill, logical(mask_bl));
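Note that logical() maps every nonzero pixel to true, so the Gaussian blur effectively grows the mask by a few pixels. To compare against the original, as in your own snippet:
imshowpair(bill, bill_inpainted, 'montage')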
I am trying to write a program that uses computer vision techniques to detect (and track) tiny blobs in a stream of very noisy images. The image stream comes from a dual X-ray imaging setup, which outputs left and right views (of different sizes because they are collimated differently). My data is of two types: one set of images is not so noisy, which I am just using to try different techniques with, and the other set is noisier, and this is where the detection ultimately needs to work. The image stream is at 60 Hz. This is an example of a raw image from the X-ray imager:
Here are some cropped out samples of the regions of interest. The blobs that need to be detected are the small black spots near the center of the image.
Initially I started off with simple contour/blob detection techniques in OpenCV, which were not very helpful. Eventually I moved on to techniques such as "opening" the image using morphological operators and subsequently performing a Laplacian of Gaussian (LoG) blob detection to detect areas of interest. This gave me better results for the low-noise versions of the images, but it fails on the high-noise ones: it gives me too many false positives. Here is a result from a low-noise image (note that the input image was inverted).
The code for my current LoG based approach in MATLAB goes as below:
while ~isDone(videoReader)
    frame = step(videoReader);
    roi_frame = imcrop(frame, [660 410 120 110]); % crop the region of interest
    I_roi = rgb2gray(roi_frame);
    I_roi = imcomplement(I_roi);              % invert so the dark blobs become bright
    I_roi = wiener2(I_roi, [5 5]);            % adaptive noise reduction
    background = imopen(I_roi, strel('disk', 3));
    I2 = imadjust(I_roi - background);        % background subtraction + contrast stretch
    K = imgaussfilt(I2, 5);
    level = graythresh(K);
    bw = im2bw(K, level);                     % binarize the smoothed image at the Otsu level
    sigma = 3;
    % Filter image with LoG
    I = double(bw);
    h = fspecial('log', sigma*30, sigma);
    Ifilt = -imfilter(I, h);
    % Threshold for points of interest
    Ifilt(Ifilt < 0.001) = 0;
    % Dilate to obtain local maxima
    Idil = imdilate(Ifilt, strel('disk', 50));
    % This is the final image
    P = (Ifilt == Idil) .* Ifilt;
end
Is there any way I can improve my current detection technique to make it work for images with a lot of background noise? Or are there techniques better suited for images like this?
The approach I would take:
-Average background subtraction
-Aggressive Gaussian smoothing (shape this filter based on your target object; off the top of my head, I think you want a sigma of about half the smallest cross-section of your object, but you may want to fiddle with this). The goal is to blur the noise as much as possible without completely losing your target objects (based on shape and size).
-Edge detection. Try to be specific to the object if possible (basically, look at what the object's edge looks like after Gaussian smoothing and set your edge detection to look for that width and contrast shift).
-You may consider running a closing operation here.
-Search the whole image for islands (fully enclosed regions), then filter based on size and then on shape.
My hunch is that, despite the incredibly low signal-to-noise ratio, the granularity of your noise is significantly smaller than your object size. (If your noise has both equivalent contrast and roughly the same size as your object, you are sunk and need to re-evaluate your acquisition, imo.)
Another note on your speed needs: large processing savings can be made by tracking last known positions and searching locally, and by knowing where new targets can enter the image.
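A rough MATLAB sketch of the pipeline above, assuming a single grayscale frame; every parameter is a guess to tune, 'xray_frame.png' is a hypothetical filename, and with the real stream you would build the background by averaging frames rather than blurring one:
frame = im2double(imread('xray_frame.png'));
if ndims(frame) == 3, frame = rgb2gray(frame); end % convert if RGB
bg     = imgaussfilt(frame, 30);            % stand-in for an averaged background
sub    = frame - bg;                        % background subtraction
smooth = imgaussfilt(sub, 2);               % sigma ~ half the blob cross-section
edges  = edge(smooth, 'Canny');             % tune to the smoothed edge profile
closed = imclose(edges, strel('disk', 2));  % closing to seal small gaps in contours
filled = imfill(closed, 'holes');           % islands = fully enclosed regions
stats  = regionprops(filled, 'Area', 'Eccentricity', 'Centroid');
keep   = [stats.Area] > 10 & [stats.Area] < 200 & [stats.Eccentricity] < 0.95;
blobCentroids = cat(1, stats(keep).Centroid); % candidate blob locations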
I am trying to detect the objects in the following image and calculate the centroids and orientation of each object in the image.
My approach so far has been to remove the background from the image and isolate the objects. However, the segmentation is not precise.
What other approaches can I take? Will SURF detection, using reference images, be a more accurate approach?
My attempt:
I = imread('image.jpg');
figure, imshow(I)
background = imopen(I, strel('disk', 15)); % estimate the uneven background
I2 = I - background;                       % subtract it
figure, imshow(I2);
I3 = imadjust(rgb2gray(I2));               % grayscale + contrast stretch
figure, imshow(I3);
level = graythresh(I3);                    % Otsu threshold
bw = im2bw(I3, level);
bw = bwareaopen(bw, 50);                   % drop specks under 50 pixels
figure, imshow(bw)
Nice start.
I would do the following:
1- Pre-process your image: apply some filters to remove noise, dilation and erosion for instance.
2- After calculating the thresholds, try to fill in the masks so that you close the "objects". I think imfill (http://www.mathworks.com/help/images/ref/imfill.html) will help you do this; a sketch follows below.
Also take a look at http://www.mathworks.com/help/images/image-enhancement-and-analysis.html
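A minimal sketch of steps 1-2 plus the centroid/orientation measurement you asked about, continuing from the bw image in your attempt; the structuring-element size is a guess to tune.
bw2 = imclose(bw, strel('disk', 5)); % dilation followed by erosion, bridges small gaps
bw2 = imfill(bw2, 'holes');          % fill the masks to close the objects
stats = regionprops(bw2, 'Centroid', 'Orientation');
centroids    = cat(1, stats.Centroid);
orientations = [stats.Orientation];  % degrees between each major axis and the x-axis
figure, imshow(bw2), hold on
plot(centroids(:,1), centroids(:,2), 'r+') % mark each object's centroid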
I'm having trouble separating cells in microscope images. When I apply a watershed transform I end up cutting up cells into many pieces and not merely separating them at the boundary/minimum.
I am using the bpass filter from http://physics.georgetown.edu/matlab/code.html.
bp = bpass(image, 1, 15);                 % bandpass filter (Georgetown code)
op = imopen(bp, strel('ball', 10, 700));  % nonflat opening to estimate background
bw = im2bw(bp - op, graythresh(bp - op)); % Otsu threshold on the subtracted image
bw = bwmorph(bw, 'majority', 10);         % smooth the mask
bw = imclearborder(bw);                   % drop cells touching the border
D = bwdist(~bw);                          % distance transform inside the mask
D = -D;
D(~bw) = -Inf;
L = watershed(D);
mask = im2bw(L, 1/255);                   % any nonzero label -> foreground
Any ideas would be greatly appreciated! You can see that my cells are being split apart too much in the final mask.
Here is the kind of image I'm trying to watershed. It's a 16-bit image, so it looks all black.
Starting fluorescent image
Final image mask:
I separated the cells manually here:
Finding the centers of the cells should be relatively straightforward: find the local maxima of the intensity. Use these points as seeds for the watershed; you might find this tutorial useful, and a sketch follows after the list below.
Some morphological operations you might find useful are:
- imimposemin - forces a seed point to be a local minimum when computing the watershed transform.
- imregionalmax - finds the local maxima of an intensity image.
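A sketch of marker-controlled watershed combining the two, assuming 'image' and 'bw' from your snippet are in the workspace; the smoothing sigma is a guess to tune.
smoothed = imgaussfilt(im2double(image), 4); % suppress noise before peak finding
seeds = imregionalmax(smoothed) & bw;        % markers: bright peaks inside the cells
D = -bwdist(~bw);                            % negative distance transform, as before
D = imimposemin(D, seeds | ~bw);             % minima allowed only at seeds/background
L = watershed(D);                            % one catchment basin per seed
L(~bw) = 0;                                  % discard labels outside the foreground
Because each cell now contributes exactly one regional minimum, the watershed should cut only along the boundaries between touching cells instead of fragmenting them.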
I have a video that I am using background subtraction and motion segmentation on. The floor in the video is black, so when I get the silhouette, the feet and parts of the legs are cut off. Is there a way around this? This is what it looks like.
This is the background image.
This is a piece of my code:
clear all
close all
clc
% Read the video into the video object Mov.
MM = mmreader('kassie_test_video.wmv'); % mmreader is deprecated; newer releases use VideoReader
% Read in all video frames.
Mov = read(MM);
% Get the number of frames.
FrameNum = MM.NumberOfFrames;
% load 'object_data.mat'
BackgroundImage = Mov(:,:,:,98); % background image
% Set the sampling rate as well as the threshold for the binary image.
downSamplingRate = MM.FrameRate;
%%
index = 1;
clear IM
clear Images
sf = 10; ef = sf + 30;
for ii = sf:ef
    % Extract the next frame.
    Im = im2double(Mov(:,:,:,ii));
    % Background subtraction.
    Ib = rgb2gray(abs(Im - im2double(BackgroundImage)));
    % Conversion to a binary image.
    Thresh = graythresh(Ib);
    Ib = im2bw(Ib, Thresh);
    se = strel('square', 1); % note: a 1x1 square has no effect; use a larger size to actually erode
    Ib = imerode(Ib, se);    % erode the image
    Ib = medfilt2(Ib);       % median filtering
    Ib = imfill(Ib, 'holes');% fill the holes in the image
    imshow(Ib, [])
end
There is a limit to what can be achieved in computer vision using only pixel processing, without incorporating higher-level semantic information. It appears that the only thing telling you the legs are missing is your high-level knowledge of how a body should look. The real question here is: is there any real information in the pixels? If the legs happen to be exactly the same color as the background, there is not much you can do unless you incorporate high-level semantic information.