I have phase-contrast microscopy images that need to be segmented. They seem very difficult to segment due to the lack of contrast between the objects and the background (image 1). I used the function adapthisteq to increase the visibility of the cells (image 2). Is there any way I can improve the segmentation of the cells?
normalImage = imread(fileName);
channlImage = rgb2gray(normalImage);
histogramEq = adapthisteq(channlImage,'NumTiles',[50 50],'ClipLimit',0.1);
saturateInt = imadjust(histogramEq);
binaryImage = im2bw(saturateInt,graythresh(saturateInt));
binaryImage = 1 - binaryImage;
normalImage - raw image
histogramEq - increased visibility image
binaryImage - binarized image
Before applying the threshold, I would separate the different patterns from the background by using a white top-hat. See the result here. Then you stretch the histogram.
Then you can apply what you did.
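For reference, here is a minimal MATLAB sketch of that suggestion, reusing channlImage from the question's code; the disk radius of 15 is only a guess and should be tuned to be a bit larger than the typical cell diameter.
se = strel('disk', 15);               % assumed radius - tune to the cell size
topHat = imtophat(channlImage, se);   % white top-hat keeps bright structures smaller than se
stretched = imadjust(topHat);         % stretch the histogram
% then reapply the original pipeline on the top-hat result
histogramEq = adapthisteq(stretched, 'NumTiles', [50 50], 'ClipLimit', 0.1);
binaryImage = im2bw(histogramEq, graythresh(histogramEq));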
I would like to build on FiReTiTi's answer. I have the code below and some screenshots. I did this using OpenCV 3.0.0.
import cv2
x = 'test.jpg'
img = cv2.imread(x, 1)
cv2.imshow("img",img)
#----converting the image to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
cv2.imshow('gray', gray)
#----binarization of image
ret,thresh = cv2.threshold(gray,250,255,cv2.THRESH_BINARY)
cv2.imshow("thresh",thresh)
#----performing adaptive thresholding
athresh=cv2.adaptiveThreshold(thresh, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
cv2.imshow('athresh', athresh)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(7, 7))
#----morphological operation
closing = cv2.morphologyEx(athresh, cv2.MORPH_CLOSE, kernel)
cv2.imshow('closing', closing)
#----masking the obtained result on the grayscale image
result = cv2.bitwise_and(gray, gray, mask= closing)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images:
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV, like cv2.BackgroundSubtractorMOG2(), only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some light-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background even with a less light-sensitive color channel. This is likely due to the camera's auto white balance; you can see that some of the background colors change when you step into view.
The next step I took was thresholding off of this image. The optimal threshold values may not always be the same; you'll have to adjust them to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob, and I filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works with any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensuring good contrast of the person against the wall, then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np
# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig);
    img = np.multiply(img, new);
    img = img.astype(np.uint8);
    return img;

# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg;
    rh = fg - bg;
    # pick minimum; this works because of uint wrapping
    low = np.minimum(lh, rh);
    return low;
# load image
bg = cv2.imread("back.jpg");
fg = cv2.imread("person.jpg");
fg_original = fg.copy();
# blur
bg = cv2.blur(bg,(5,5));
fg = cv2.blur(fg,(5,5));
# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB);
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB);
bl, ba, bb = cv2.split(bg_lab);
fl, fa, fb = cv2.split(fg_lab);
# subtract
d_b = diff(bb, fb);
d_a = diff(ba, fa);
# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255);
d_a = rescale(d_a, np.max(d_a), 255);
# combine
combined = np.maximum(d_b, d_a);
# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255);
# opening and closing
kernel = np.ones((3,3), np.uint8);
# closing
thresh = cv2.dilate(thresh, kernel, iterations = 2);
thresh = cv2.erode(thresh, kernel, iterations = 2);
# opening
thresh = cv2.erode(thresh, kernel, iterations = 2);
thresh = cv2.dilate(thresh, kernel, iterations = 3);
# contours
# note: this 3-value return signature is for OpenCV 3.x;
# in OpenCV 4.x use: contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# filter contours by size
big_cntrs = [];
marked = fg_original.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > 15000:
        print(area);
        big_cntrs.append(contour);
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3);
# create a mask of the contoured image
mask = np.zeros_like(fb);
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1);
# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations = 1);
# crop out
out = np.zeros_like(fg_original) # Extract out the object and place into output image
out[mask == 255] = fg_original[mask == 255];
# show
cv2.imshow("combined", combined);
cv2.imshow("thresh", thresh);
cv2.imshow("marked", marked);
# cv2.imshow("masked", mask);
cv2.imshow("out", out);
cv2.waitKey(0);
Since it is very easy to find datasets containing a lot of human bodies, I suggest you implement neural network segmentation techniques to extract the human body accurately. Please check this link to see a similar example.
I have two images. One is a background image and the other has the same background but with a foreground object in it. I want to extract the foreground object from the background. A simple subtraction operation in MATLAB will not suffice, as it subtracts the RGB values of the background image from those of the foreground image (as in the code below).
im1 = imread('output/frame-1.jpg')
im2 = imread('output/frame-7.jpg')
%# subtract
deltaImage = im1 - im2;
imshow(deltaImage)
So if the background is white and the foreground object is blue, the output (i.e. deltaImage) shows the foreground object in orange on a black background. However, the output I want is the foreground object in its original blue color on a black background. How can I get this? I tried to do it using the code below, but the output image is incorrect.
im1 = imread('foreground.jpg')
im2 = imread('background.jpg')
[m n k]=size(im2);
deltaImage = zeros(m,n,3);
fprintf('%d %d %d.\n',m,n,k);
for l=1:k
    for i=1:m-1
        for j=1:n-1
            if im1(i:j:l)~=im2(i:j:l)
                deltaImage(i,j,l) = im1(i,j,l);
            end
        end
    end
end
imshow(deltaImage)
Background image
Foreground image
Output image (here I want the color of the man to be blue)
You can use deltaImage to create a mask (an image of zeros and ones) that multiplies the foreground. However, note that you will have artifacts associated with lossy image compression (.jpeg). These can be reduced, to some extent, if you use a threshold, such as the average difference or a specific value you choose. Try this:
im1 = double(imread('~/Downloads/foreground.jpg'));
im2 = double(imread('~/Downloads/background.jpg'));
% compute the difference of the averages of the 3 channels
deltaImage = mean(im2,3) - mean(im1,3);
% threshold at a multiple (~3) of the mean difference, or uncomment the
% line below to use a fixed threshold, like 128
mask = deltaImage>3*mean(deltaImage(:));
% mask = deltaImage>128;
% assuming all original images are 8-bit, produce a result also in 8-bit format
result = uint8(cat(3, im1(:,:,1).*mask, im1(:,:,2).*mask, im1(:,:,3).*mask));
imshow(result)
And this is the result you should get:
Again, the weird-looking pixels around the main object are artifacts of lossy image compression (.jpeg); you should try working with lossless formats like .png.
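As a side note (my addition, not part of the original answer), saving the result losslessly in MATLAB is just a matter of choosing a .png file name:
imwrite(result, 'result.png');   % PNG is lossless, so no new compression artifacts are introduced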
I want to apply a temporal median filter to a depth-map video to ensure temporal consistency and prevent the flickering effect.
Thus, I am trying to apply the filter on all video frames at once by:
First loading all frames,
%%% Read video sequence
numfrm = 5;
infile_name = 'depth_map_1920x1088_80fps.yuv';
width = 1920; %xdim
height = 1088; %ydim
fid_in = fopen(infile_name, 'rb');
[Yd, Ud, Vd] = yuv_import(infile_name,[width, height],numfrm);
fclose(fid_in);
then creating a 3-D depth matrix (height x width x number-of-frames),
%%% Build a stack of images from the video sequence
stack = zeros(height, width, numfrm);
for i=1:numfrm
    RGB = yuv2rgb(Yd{i}, Ud{i}, Vd{i});
    RGB = RGB(:, :, 1);
    stack(:,:,i) = RGB;
end
and finally applying the 1-D median filter along the third direction (time)
temp = medfilt1(stack);
However, for some reason this is not working. When I try to view each frame, I get white images.
frame1 = temp(:,:,1);
imshow(frame1);
Any help would be appreciated!
My guess is that this is actually working, but frame1 is of class double and contains values, e.g., between 0 and 255. As imshow represents double images on a [0,1] scale by default, you obtain a white, saturated image.
I would therefore suggest:
caxis auto
after imshow to fix the display problem.
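A minimal sketch of that fix; the commented imshow(frame1, []) alternative is my own addition and has the same effect:
frame1 = temp(:,:,1);
imshow(frame1);
caxis auto                 % rescale the display range to the data's actual min/max
% equivalent alternative: let imshow choose the display range itself
% imshow(frame1, []);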
Best,
I have a segmented image.
When I apply the bwperim function on it, I get the output below.
I want a thin perimeter line, just one pixel thick. This is essential for further processing work. What is the best approach?
Please suggest.
======
BoundingBox
%%% ComputeBoundingBox
%%%
function [stats, statsAlreadyComputed] = ...
        ComputeBoundingBox(imageSize,stats,statsAlreadyComputed)
% [minC minR width height]; minC and minR end in .5.
if ~statsAlreadyComputed.BoundingBox
    statsAlreadyComputed.BoundingBox = 1;
    [stats, statsAlreadyComputed] = ...
        ComputePixelList(imageSize,stats,statsAlreadyComputed);
    num_dims = numel(imageSize);
    for k = 1:length(stats)
        list = stats(k).PixelList;
        if (isempty(list))
            stats(k).BoundingBox = [0.5*ones(1,num_dims) zeros(1,num_dims)];
        else
            min_corner = min(list,[],1) - 0.5;
            max_corner = max(list,[],1) + 0.5;
            stats(k).BoundingBox = [min_corner (max_corner - min_corner)];
        end
    end
end
That is happening because your image picked up quantization errors when it was saved. Did you save your image using a lossy compression algorithm, like JPEG? If you want to preserve the intensities so that they don't change when you save the image, use a lossless compression algorithm, like PNG.
To remove these "noisy" effects, first threshold your image so that the quantization errors are eliminated and those pixels are set to completely white, then try using bwperim again. In other words, do something like this:
im = im2bw(imread('http://i.stack.imgur.com/dagEc.png'));
im_noborder = imclearborder(im);
out = bwperim(im_noborder);
imshow(out);
The first line of code reads your image directly from StackOverflow, and we use im2bw to threshold it. The image was originally grayscale, so we want to convert it to black and white only; this also removes any quantization artifacts, since anything higher than 128 is thresholded to white. The next line uses imclearborder to remove the white border that surrounds your shape, because the image you uploaded has a white border around it for some reason. Once we remove this border, we apply bwperim and show the image.
This is the image I get:
I have a retinal fundus image which has a white border along the corners. I am trying to remove the borders on all four sides of the image. This is a pre-processing step and my image looks like this:
fundus http://snag.gy/XLGkC.jpg
It is an RGB image; I took the green channel and created a mask using logical indexing. I searched for pixels which were all black in the image, and eroded the mask to remove the white edge pixels. However, I am not sure how to retrieve the final image, without the white pixel border, using the mask that I have. This is my code, and any help would be appreciated:
maskIdx = rgb(:,:,2) == 0;   % rgb is the original image
se = strel('disk',3);        % erode by 3 pixels using a disk structuring element
im2 = imerode(maskIdx, se);
newrgb = rgb(im2);           % gives a vector - not the same size as the original image
Solved it myself. This is what I did with some help.
I first computed the mask for all three color channels combined. This is because the masks computed from the individual channels are not identical, and residual pixels would be left in the final image if I used the mask from only one of the channels on the original image:
mask = (rgb(:,:,1) == 0) & (rgb(:,:,2) == 0) & (rgb(:,:,3) == 0);
Next, I used a disk structuring element with a radius of 9 pixels to dilate my mask:
se = strel('disk', 9);
maskIdx = imdilate(mask,se);
EDIT: An arbitrary structuring element can also be used. I used: se = strel(ones(9,9))
Then, with the new mask, I multiplied the original image with the new dilated mask:
newImg(:,:,1) = rgb(:,:,1) .* uint8(maskIdx); % image was of double data-type
newImg(:,:,2) = rgb(:,:,2) .* uint8(maskIdx);
newImg(:,:,3) = rgb(:,:,3) .* uint8(maskIdx);
Finally, I subtracted the computed color-mask from the original image to get my desired border-removed image:
finalImg = rgb - newImg;
Result:
image http://snag.gy/g2X1v.jpg