Separate foreground and background, or change the background values of an image, in MATLAB

How can I change the background color of this image to black, or change its RGB values so the background becomes black? I want to keep only the original leaf.
Leaf image

In order to change the background color to black you'll need the following:
1. Calculate the background mask by using a threshold. The threshold can either be found automatically, by using the function graythresh, or manually, by looking at the image histogram (see the histogram sketch below).
2. Perform thresholding with the value from stage 1 in order to find the foreground mask. Also, pick the largest connected component and perform noise cleaning (an imclose operation).
3. Calculate the BG mask as the complement of the FG mask, and zero out the corresponding locations in the original input image.
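If you go the manual route, here is a minimal sketch (reusing the file name and value channel from the code example below) for inspecting the histogram:
I = imread('YaEwk.jpg');
im = rgb2hsv(I);
im = mat2gray(im(:,:,3));
%plot the histogram and look for the valley between the leaf and background peaks
figure, imhist(im);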
Code example:
I = imread('YaEwk.jpg');
%converts to hsv colorspace, takes the 3rd dimension and normalizes it.
im = rgb2hsv(I);
im = mat2gray(im(:,:,3));
%determines a threshold to distinguish between the leaf and its surroundings.
T = graythresh(im);
%defines FG as all the values below the threshold
%Also, keeps just the biggest connected component and performs noise
%reduction.
FG = im < T;
FG = bwareafilt(FG,1);
FG = imclose(FG,strel('disk',2));
%defines the background as the opposite of the foreground
BG = ~FG;
I(repmat(BG,1,1,3)) = 0;
%smooth the output
I(:,:,1) = medfilt2(I(:,:,1));
I(:,:,2) = medfilt2(I(:,:,2));
I(:,:,3) = medfilt2(I(:,:,3));
Result:

Related

How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images;
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV such as cv2.BackgroundSubtractorMOG2() only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some lighting-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background even with a less light-sensitive color channel. This is likely due to the auto white balance on the camera; you can see that some of the background colors change when you step into view.
The next step I took was thresholding this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used openCV's findContours function to get the segmentation points of each blob and I filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works for any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensuring good contrast of the person against the wall, then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np

# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig);
    img = np.multiply(img, new);
    img = img.astype(np.uint8);
    return img;

# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg;
    rh = fg - bg;
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh);
    return low;

# load images
bg = cv2.imread("back.jpg");
fg = cv2.imread("person.jpg");
fg_original = fg.copy();

# blur
bg = cv2.blur(bg,(5,5));
fg = cv2.blur(fg,(5,5));

# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB);
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB);
bl, ba, bb = cv2.split(bg_lab);
fl, fa, fb = cv2.split(fg_lab);

# subtract
d_b = diff(bb, fb);
d_a = diff(ba, fa);

# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255);
d_a = rescale(d_a, np.max(d_a), 255);

# combine
combined = np.maximum(d_b, d_a);

# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255);

# opening and closing
kernel = np.ones((3,3), np.uint8);

# closing
thresh = cv2.dilate(thresh, kernel, iterations = 2);
thresh = cv2.erode(thresh, kernel, iterations = 2);

# opening
thresh = cv2.erode(thresh, kernel, iterations = 2);
thresh = cv2.dilate(thresh, kernel, iterations = 3);

# contours
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);

# filter contours by size
big_cntrs = [];
marked = fg_original.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > 15000:
        print(area);
        big_cntrs.append(contour);
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3);

# create a mask of the contoured image
mask = np.zeros_like(fb);
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1);

# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations = 1);

# crop out
out = np.zeros_like(fg_original) # Extract out the object and place into output image
out[mask == 255] = fg_original[mask == 255];

# show
cv2.imshow("combined", combined);
cv2.imshow("thresh", thresh);
cv2.imshow("marked", marked);
# cv2.imshow("masked", mask);
cv2.imshow("out", out);
cv2.waitKey(0);
Since it is very easy to find datasets containing a lot of human bodies, I suggest you implement neural-network segmentation techniques to extract the human body cleanly. Please check this link to see a similar example.

Background and foreground color changes

Hello, I need to make the background color black and the foreground color white. As you can see, I did this by converting the image to 2 dimensions. I want to make these color changes in 3 dimensions, since we are not allowed to convert it to black and white. Is there any way to do this?
logo=imread('logo.png');
subplot(2,2,1);
imshow(logo);
b=rgb2gray(logo);
subplot(2,2,2);
imshow(b);
c=im2bw(b,0.92)
subplot(2,2,3);
imshow(c);
c = 1-c;
subplot(2,2,4);
imshow(c);
Preface:
To set a pixel to white or black, each layer (channel) of that pixel needs to be set to an intensity value of 0 (black) or 255 (white).
White Pixel → rgb(255,255,255)
Black Pixel → rgb(0,0,0)
The colon can be used to obtain all the indices in the 3rd dimension (grab all the layers). To grab one RGB-pixel in the top-left corner of the image:
RGB_Pixel = Image(1,1,:);
Method 1:
If you wish to retain the three colour channels, you can use matrix indexing to change the white background to black. Matrix indexing can also be used to change anywhere that isn't white to white. This method may, unfortunately, break down if you have a coloured pixel that contains a 255 intensity in one of its channels. That doesn't seem to be the case for your image, though. You can use Method 2 for a safer approach.
logo = imread('logo.png');
[Image_Height,Image_Width,Depth] = size(logo);
new_logo = zeros(Image_Height,Image_Width,Depth,'uint8');
new_logo(logo == 255) = 0;
new_logo(logo ~= 255) = 255;
imshow(new_logo);
Method 2:
Method 2 checks each pixel (RGB triplet) using a pair of nested for-loops that scan through the entire image. If the RGB intensities of the pixel are rgb(255,255,255), the pixel is set to 0 (black); if they are anything else, the pixel is set to 255 (white). The ~ismember() function is used to check whether the RGB pixel contains no 255 intensity (i.e. is not white).
logo = imread('logo.png');
%Grabbing the size of the image%
[Image_Height,Image_Width,~] = size(logo);
for Row = 1: Image_Height
    for Column = 1: Image_Width
        %Grabbing RGB pixel%
        RGB_Pixel = logo(Row,Column,:);
        if(~ismember(255,RGB_Pixel))
            %RGB pixel is coloured, change to white%
            logo(Row,Column,:) = 255;
        else
            %RGB pixel contains white (255), change to black%
            logo(Row,Column,:) = 0;
        end
    end
end
imshow(logo);
Using the repmat() function, as the comment above suggested, is also a great solution. It may well be the quickest method, since you already have the code that generates one layer from the greyscale image.
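For illustration, a minimal sketch of that repmat()-based approach, assuming the binary image c produced by the question's own im2bw/inversion steps:
logo = imread('logo.png');
c = im2bw(rgb2gray(logo),0.92);           %white background -> 1, logo -> 0
c = 1 - c;                                %invert: logo -> 1, background -> 0
new_logo = repmat(uint8(c)*255,[1 1 3]);  %replicate the single layer into 3 channels
imshow(new_logo);                         %white foreground on a black background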
Ran using MATLAB R2019b

How to overlap color figure on a gray one with colorbar in MATLAB?

I'm trying to color-code some parameter values on a gray image in MATLAB. I'm wondering how to show the gray image, color the parameter values on some pixels of that gray image, and finally draw a colorbar beside the image showing the range of the parameter values.
Unsuccessful code so far:
I = Igray; % gray image
Icc = zeros(size(I,1),size(I,2),3); % color coded image
Icc(:,:,1) = I;
Icc(:,:,2) = I;
Icc(:,:,3) = I;
Icc('some address in the image',3) = 'some number between 0 and 255';
imshow(Icc,[])
colorbar % colorbar showing colored parts spectrum
Result image that I need:
Try something like this:
I = Igray; % gray image
RGB = [1.0,0.7,0.2]; % color for patch
x = 30:50;
y = 70:90;
% If your gray image is in the range [0,255], make it range [0,1]
I = double(I)/255;
Icc = repmat(I,[1,1,3]);
block = I(y,x);
Icc(y,x,1) = 1 - ((1-block) * (1-RGB(1)));
Icc(y,x,2) = 1 - ((1-block) * (1-RGB(2)));
Icc(y,x,3) = 1 - ((1-block) * (1-RGB(3)));
imshow(Icc)
I'm sure there is a prettier way to write this, but this way it shows the intent.
You're basically multiplying the grey values with the RGB color you want the patch to be. By inverting the patch and the color first, and inverting the result, the multiplication makes the patch brighter, not darker. That way you get the effect you want, where the dark parts show the color too. If you multiply directly without inverting first, black stays black and doesn't show the color.
You'll have to figure out how to coordinate with the color bar after that. There are commands in MATLAB to set the limits of the color bar; you can find those by reading the documentation.
The color bar you show uses the PARULA color map. You could do this to find the right RGB value to color your patch:
T; % value to color your patch in, in range [0,1]
cm = parula(256);
RGB = interp1(cm, 1 + T*255, 'linear') % rows of cm are indexed 1..256, so map T from [0,1] to [1,256]
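Putting this together with a colorbar, here is a minimal sketch (my own addition, not from the original answer) that displays the overlay and a bar spanning the parameter range, assuming the parameter values lie in [0,1]:
imshow(Icc)
colormap(parula(256))  % same map used to pick the patch colours
caxis([0 1])           % colorbar limits = range of the parameter values
colorbar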

Extract foreground from background in matlab

I have 2 images. One is a background image and the other has the same background but with a foreground object in it. I want to extract the foreground object from the background. A simple subtraction in MATLAB will not suffice, as it subtracts the RGB values of the background image from those of the foreground image (as in the code below).
im1 = imread('output/frame-1.jpg')
im2 = imread('output/frame-7.jpg')
%# subtract
deltaImage = im1 - im2;
imshow(deltaImage)
So if the background color is white and the foreground object is blue, the output (i.e. deltaImage) shows the foreground object in orange on a black background. However, the output I want is the foreground object in blue (i.e. its original color) on a black background. How can I get this? I tried to do it using the code below, but the output image is incorrect.
im1 = imread('foreground.jpg')
im2 = imread('background.jpg')
[m n k]=size(im2);
deltaImage = zeros(m,n,3);
fprintf('%d %d %d.\n',m,n,k);
for l=1:k
    for i=1:m-1
        for j=1:n-1
            if im1(i:j:l)~=im2(i:j:l)
                deltaImage(i,j,l) = im1(i,j,l);
            end
        end
    end
end
imshow(deltaImage)
Background IMAGE
Foreground Image
Output Image (Here I want color of man to be blue)
You can use deltaImage to create a mask (an image of zeros and ones) that multiplies the foreground. However, note that you will have artifacts associated with lossy image compression (.jpeg). These can be reduced, to some extent, if you use a threshold, like the average difference or a specific value you want. Try this:
im1 = double(imread('~/Downloads/foreground.jpg'));
im2 = double(imread('~/Downloads/background.jpg'));
Compute the difference of the averages of the 3 channels:
deltaImage = mean(im2,3) - mean(im1,3);
Then threshold at a multiple (~3) of the mean difference, or uncomment the line below to use a specific threshold, like 128:
mask = deltaImage>3*mean(deltaImage(:));
% mask = deltaImage>128;
Then, assuming all original images are in 8-bit format, produce a result also in 8-bit format:
result = uint8(cat(3, im1(:,:,1).*mask, im1(:,:,2).*mask, im1(:,:,3).*mask));
imshow(result)
And this is the result you should get:
Again, the weird-looking pixels around the main object are artifacts of lossy image compression (.jpeg); you should try working with lossless formats like .png.
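If those artifacts remain a problem, one option (my own addition, not part of the original answer) is to morphologically open the mask before applying it, similar to the imclose cleanup used in the leaf example above:
% Sketch: suppress isolated artifact pixels in the mask before masking.
mask_clean = imopen(mask, strel('disk',2));
result = uint8(cat(3, im1(:,:,1).*mask_clean, im1(:,:,2).*mask_clean, im1(:,:,3).*mask_clean));
imshow(result)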

Make a pixel transparent in Matlab

I have imported an image into MATLAB, and before I display it, how would I make the background of the image transparent? For example, I have a red ball on a white background; how would I make the white pixels of the image transparent so that only the red ball is visible?
You need to make sure the image is saved in the 'png' format. Then you can use the 'Alpha' parameter of a png file, which is a matrix that specifies the transparency of each pixel individually. It is essentially a boolean matrix that is 0 if the pixel is fully transparent and 1 if it is fully opaque. This can be done easily with a for loop as long as the color that you want to be transparent is always the same value (i.e. 255 for uint8). If it is not always the same value, then you could define a threshold, or a range of values, for which a pixel should be transparent.
Update:
First generate the alpha matrix by iterating through the image: (assuming you want white to be transparent) whenever the pixel is white, set the alpha matrix at that pixel to 0 (fully transparent); everywhere else it stays 1 (fully opaque).
% X is your image
[M,N] = size(X);
% Start fully opaque
A = ones(M,N);
% Iterate through X to assign A
for i=1:M
    for j=1:N
        if(X(i,j) == 255) % Assuming uint8, 255 would be white
            A(i,j) = 0;   % Make the transparent color (white) fully transparent
        end
    end
end
Then use this newly created alpha matrix (A) to save the image as a ".png"
imwrite(X,'your_image.png','Alpha',A);
Note that for loops in MATLAB should be avoided at all costs because they are slow. Rewriting code to remove the loops is commonly referred to as "vectorizing" code. In the case of ademing2's answer, it could be done as follows:
A = ones(size(X));
A(X == 255) = 0;
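For an RGB image, here is a hedged sketch (my assumption, since the snippets above treat X as a single-channel image) that only marks pixels transparent when all three channels are white, or nearly white:
% X is an MxNx3 uint8 image; treat near-white pixels as background.
white = all(X >= 250, 3);      % MxN logical mask of (near-)white pixels
A = ones(size(white));         % fully opaque by default
A(white) = 0;                  % white background becomes fully transparent
imwrite(X, 'your_image.png', 'Alpha', A);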