Wrong background subtraction - MATLAB

I'm trying to subtract the background of an image using two images.
Image A is the background and image B is an image with objects over the background.
I'm normalizing the images, but I don't get the expected result.
Here's the code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = ((a - min(a(:)))./(max(a(:))-min(a(:))));
resB = ((b - min(b(:)))./(max(b(:))-min(b(:))));
resAbs = abs(resB-resA);
imshow(resAbs);
The resulting image is completely dark. Thanks to the answer from user saeed masoomi, I realized this was because of the data type, so now I have the following code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = imsubtract(resB,resA);
imshow(resAbs,[]);
The resulting image is not well filtered and there are parts of image B that don't appear, even though they should.
If I try doing this without normalizing, I still have the same problem.
The only difference between images A and B is the arms, which appear only in image B, so they should appear without any cut.
Can you see anything wrong? Maybe I should filter with a threshold?

Do not normalize the two images. Background subtraction is typically done with identical camera settings, so the two images are directly comparable. If the background image doesn't have a bright object in it, normalizing like you do would brighten it w.r.t. the second image. The intensities are no longer comparable, and you'd see differences where there were none.
If you recorded the background image with different camera settings (different exposure time, illumination, etc.), then background subtraction is a lot more complicated than you think. You'd have to apply an optimization scheme to make the two images comparable, such that their difference is sparse. You'd have to look through the literature for that; it's not at all trivial.
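To the asker's follow-up about thresholding: if the two shots really are directly comparable, a plain difference followed by a threshold is often enough. A minimal sketch, assuming identical camera settings; the 0.1 threshold is an assumption you would tune:
a = im2double(rgb2gray(im));   % background
b = im2double(rgb2gray(im2));  % background + objects
d = abs(b - a);                % absolute difference, values in [0, 1]
mask = d > 0.1;                % assumed threshold; tune for your images
imshow(b .* mask);             % keep only the pixels that changed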

Hi, please pay attention to your data type: images in MATLAB are stored as unsigned 8-bit integers (uint8, 0 to 255), so there is no 0.1 or 0.2 or any other fractional value; a result like 1.2 is stored as 1.
In uint8, your computation goes wrong, like below:
mx = uint8(255);    % maximum, stored as uint8
mn = uint8(20);     % minimum, stored as uint8
data = uint8(40);   % data value, stored as uint8
normalized = (data - mn) / (mx - mn)   % all arithmetic happens in uint8
The output will be:
normalized =

  uint8

   0
Oops! You might expect this output to be 0.0851, but it isn't, because the data type is uint8, so the output is 0. I guess that's why all your data is zero (the result image is dark). To prevent this mistake, MATLAB has a handy function named im2double, which converts uint8 to double and rescales all data to the range 0 to 1.
I2 = im2double(I) converts the intensity image I to double precision, rescaling the data if necessary. I can be a grayscale intensity image, a truecolor image, or a binary image.
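For instance, redoing the uint8 example above after im2double gives the value you expected:
mx = im2double(uint8(255));           % 1.0000
mn = im2double(uint8(20));            % 0.0784
data = im2double(uint8(40));          % 0.1569
normalized = (data - mn) / (mx - mn)  % 0.0851, as expected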
So we can rewrite your code like below:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = abs(imsubtract(resB,resA)); % subtract the double versions, not the uint8 originals
imshow(resAbs,[])
If the output image is still dark, check whether the two images actually differ, using the code below:
if isempty(nonzeros(resAbs))
    disp('The two images are identical -> something is wrong')
else
    disp('The two images differ -> normal')
end

Related

MATLAB - Removing image background using normalization

I have an image like this:
My goal is to get the output of background normalization shown at this link.
Following the above link, I did the following:
(1). I first dilate the image to get the background
(2). Then try to remove it via normalization.
I got the background:
However, when I try to do the normalized division, I get this:
(black borders added to make the boundary of the image clear)
This is my code:
image = imread('image.png');
image = rgb2gray(image);
se = offsetstrel('ball',9,9);
dilatedI = imdilate(image,se);
output = imdivide(image,dilatedI);
imshow(output,[]);
using
imshow(output)
just gives a black image.
I thought it might be a type conversion issue, but based on the resources mentioned earlier, I am uncertain if it is the case...
Any advice would be appreciated
Just make sure you don't do integer division! Your images are of an integer type, and integer division in MATLAB rounds to the nearest integer (e.g. 2/5 returns 0 and 4/5 returns 1) instead of returning a floating-point number. Just convert to double before dividing:
image = imread('https://i.stack.imgur.com/bIVRT.png');
% image = rgb2gray(image); % the image hosted online is not RGB
se = offsetstrel('ball',21,21);
dilatedI = imdilate(image,se);
output = imdivide(double(image),double(dilatedI));
figure
subplot(121)
imshow(image);
subplot(122)
imshow(output);

Not getting appropriate image after background subtraction

I take two frames from my video. One of them is the background and the next is the frame to which I applied background subtraction. The third image is the result after background subtraction. Here I am only getting the shirt of the person rather than the whole body.
Code for background subtraction:
v = VideoReader('test.mp4');
n = get(v,'NumberOfFrames');
back = read(v,30);
y = read(v,150);
imshow([y;back;y-back]);
White probably has a higher value (in each channel, maybe? I don't know the format of your data), so you get negative values, which I guess are then clipped to 0 (black). See how your shirt turns green as the red (from the board in the background) is subtracted from it.
You have to mask out the background by checking what has changed and then remove everything that hasn't changed.
Maybe something like:
diff = y - back;
mask = diff ~= 0;    % 1 wherever an element of diff is nonzero
noback = mask .* y;  % element-wise multiplication masks out the background
A little example I wrote:
back = rand(4)
y = back
y(5) = 0.6 %put something in front of the background
y(7) = 0.7 %put something in front of the background
mask = zeros(4)
mask(find(y-back)) = 1 %set values that are different in y to 1
noback = mask.*y %elementwise multiplication to mask out the background
You may have to use something other than find for the mask, because the image will not be 100% the same, but this should show the general approach.
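For instance, a tolerance-based mask might look like the sketch below; the 0.05 tolerance is an assumption you would tune for your footage:
tol = 0.05;                            % assumed tolerance, tune as needed
yd = im2double(y);
backd = im2double(back);
mask = any(abs(yd - backd) > tol, 3);  % 1 where any channel changed enough
noback = yd .* repmat(mask, [1 1 3]);  % zero out the unchanged background
imshow(noback);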

Extract foreground from background in matlab

I have 2 images. One is a background image and the other has the same background but with some foreground object. I want to extract the foreground object from the background. A simple subtraction in MATLAB will not suffice, as it subtracts the RGB values of the background image from those of the foreground image (as in the code below).
im1 = imread('output/frame-1.jpg')
im2 = imread('output/frame-7.jpg')
%# subtract
deltaImage = im1 - im2;
imshow(deltaImage)
So if the background color is white and the foreground object is blue, the output (i.e. deltaImage) shows the foreground object in orange on a black background. However, the output I want is the foreground object in blue (i.e. its original color) on a black background. How can I get this? I tried to do it using the code below, but the output image is incorrect.
im1 = imread('foreground.jpg');
im2 = imread('background.jpg');
[m, n, k] = size(im2);
deltaImage = zeros(m, n, 3);
fprintf('%d %d %d.\n', m, n, k);
for l = 1:k
    for i = 1:m
        for j = 1:n
            if im1(i,j,l) ~= im2(i,j,l)   % comma indexing; i:j:l was a typo
                deltaImage(i,j,l) = im1(i,j,l);
            end
        end
    end
end
imshow(uint8(deltaImage))
Background image
Foreground image
Output image (here I want the color of the man to be blue)
You can use deltaImage to create a mask (an image of zeros and ones) that multiplies the foreground. However, note that you will have artifacts associated with lossy image compression (.jpeg). These can be reduced, to some extent, if you use a threshold, like the average difference or a specific value you choose. Try this:
im1 = double(imread('~/Downloads/foreground.jpg'));
im2 = double(imread('~/Downloads/background.jpg'));
Compute the difference of the averages of the 3 channels:
deltaImage = mean(im2,3) - mean(im1,3);
Then threshold at about 3 times the mean difference, or uncomment the line below to use a specific threshold, like 128:
mask = deltaImage>3*mean(deltaImage(:));
% mask = deltaImage>128;
Then, assuming all original images are in 8-bit format, produce a result also in 8-bit format:
result = uint8(cat(3, im1(:,:,1).*mask, im1(:,:,2).*mask, im1(:,:,3).*mask));
imshow(result)
And this is the result you should get:
Again, the weird-looking pixels around the main object are artifacts of lossy image compression (.jpeg); you should try working with lossless formats like .png.

How do I convert the whole image to grayscale except for a sub image which should be in color?

I have an image and a subimage which is cropped out of the original image.
Here's the code I have written so far:
val1 = imread(img);
val2 = imread(img_w);
gray1 = rgb2gray(val1);%grayscaling both images
gray2 = rgb2gray(val2);
matchingval = normxcorr2(gray1,gray2);%normalized cross correlation
[max_c,imax]=max(abs(matchingval(:)));
After this I am stuck. I have no idea how to convert the whole image to grayscale except for the subimage, which should stay in color.
How do I do this?
Thank you.
If you know the coordinates within your image, you can always just use rgb2gray on the section of interest.
For instance, I tried this on an image just now:
im(500:1045,500:1200,1)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,2)=rgb2gray(im(500:1045,500:1200,1:3));
im(500:1045,500:1200,3)=rgb2gray(im(500:1045,500:1200,1:3));
Here I took the rows (500 to 1045), columns (500 to 1200), and the RGB depth (1 to 3) of the image and applied rgb2gray to just that region. I did it three times because the output of rgb2gray is a 2D matrix while a color image is a 3D matrix, so I needed to change it layer by layer.
This worked for me, making only part of the image gray but leaving the rest in color.
The issue you might have, though, is this: a color image has 3 dimensions while a grayscale image needs only 2. Combining them means the grayscale data must live in a 3D matrix.
Depending on what you want to do, this technique may or may not help.
Judging from your code, you are reading the image and the subimage in MATLAB. What you need to know are the coordinates of where you extracted the subimage. Once you do that, simply take your original colour image, convert that to grayscale, then duplicate this image in the third dimension three times. You need to do this so that you can place colour pixels in this image.
For RGB images, grayscale images have the RGB components to all be the same. Duplicating this image in the third dimension three times creates the RGB version of the grayscale image. Once you do that, simply use the row and column coordinates of where you extracted the subimage and place that into the equivalent RGB grayscale image.
As such, given your colour image that is stored in img and your subimage stored in imgsub, and specifying the rows and columns of where you extracted the subimage in row1,col1 and row2,col2 - with row1,col1 being the top left corner of the subimage and row2,col2 is the bottom right corner, do this:
img_gray = rgb2gray(img);
img_gray = cat(3, img_gray, img_gray, img_gray);
img_gray(row1:row2, col1:col2,:) = imgsub;
To demonstrate this, let's try this with an image in MATLAB. We'll use the onion.png image that's part of the image processing toolbox in MATLAB. Therefore:
img = imread('onion.png');
Let's also define our row1,col1,row2,col2:
row1 = 50;
row2 = 90;
col1 = 80;
col2 = 150;
Let's get the subimage:
imgsub = img(row1:row2,col1:col2,:);
Running the above code, this is the image we get:
I took the same example as rayryeng's answer and solved it by HSV conversion.
The basic idea is to set the second layer, i.e. the saturation layer, to 0 (so that the pixels are grayscale), then rewrite that layer with the original saturation values only for the subimage area, so that those pixels alone keep their saturation.
Code:
img = imread('onion.png');
img = rgb2hsv(img);
sPlane = zeros(size(img(:,:,1)));
sPlane(50:90,80:150) = img(50:90,80:150,2);
img(:,:,2) = sPlane;
img = hsv2rgb(img);
imshow(img);
Output: (Same as rayryeng's output)
Related Answer with more details here

iPhone colour image analysis

I am looking for some ideas about an approach that will let me analyze an image and determine how greenISH or brownISH or whiteISH it is... I am emphasizing ISH here because I am interested in capturing ALL the shades of these colours. So far, I have done the following:
I have my UIImage, I have a CGImageRef, and I actually have the colour of the pixel itself (its RGB and alpha). What I don't know is how to quantify this and determine all the green shades, blues, browns, yellows, purples, etc. I can process each and every pixel and determine its basic RGB, but I need some help quantifying the colours over a whole image.
Thanks for your ideas...
Alex.
One fairly good solution is to switch from RGB colour space to one of the Y colour spaces, such as YUV, YCrCb or any of those. In all cases the Y channel represents brightness and the other two channels together represent colour, relative to brightness. You probably want to factor brightness out, possibly with the caveat that all colours below a certain darkness are to be excluded, so getting Y separately is a helpful first step in itself.
Converting from RGB to YUV is achieved with a simple linear combination. Straight from Wikipedia and a thousand other sources:
y = 0.299*r + 0.587*g + 0.114*b;
u = -0.14713*r - 0.28886*g + 0.436*b;
v = 0.615*r - 0.51499*g - 0.10001*b;
Assuming you're keeping r, g and b in the range [0, 1], your first test might be:
if(y < 0.05)
{
// this colour is very dark, so it's considered to be as
// far as we allow from any colour we're interested in
}
To decide how close your colour then is to, say, green, work out the u and v components of the green you're interested in, as a proportion of the y:
r = b = 0;
g = 1;
y = 0.299*r + 0.587*g + 0.114*b = 0.587;
u = -0.14713*r - 0.28886*g + 0.436*b = -0.28886;
v = 0.615*r - 0.51499*g - 0.10001*b = -0.51499;
proportionOfU = u / y = -0.4921;
proportionOfV = v / y = -0.8773;
Subsequently, work out the proportions of U and V for incoming colours and compare them (e.g. with 2D planar distance) to those you've computed for the reference colour. Closer values are more similar. How you scale and use that metric depends on your application.
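As a rough sketch of that comparison (written in MATLAB to match the other examples here; the sample colour is made up, and the reference values are the ones worked out for green above):
rgb = [0.2, 0.8, 0.3];                 % an example incoming colour, [r g b] in [0, 1]
y = 0.299*rgb(1) + 0.587*rgb(2) + 0.114*rgb(3);
if y < 0.05
    dist = Inf;                        % too dark to classify reliably
else
    u = -0.14713*rgb(1) - 0.28886*rgb(2) + 0.436*rgb(3);
    v = 0.615*rgb(1) - 0.51499*rgb(2) - 0.10001*rgb(3);
    greenUV = [-0.4921, -0.8773];      % reference proportions for pure green
    dist = norm([u/y, v/y] - greenUV); % 2D planar distance: smaller = more green-ish
end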
Notice that as y goes toward 0, the computed proportions become increasingly less precise because of the limited range of the input data, and are undefined when y is 0. Conceptually that's because all colours look exactly the same when there's no light on them. Checking that y is above at least a certain minimum value is the pragmatic way of working around this issue. This also means that you're not going to get sensible results if you try to say "how black is this picture?", though again that's because of the ambiguity between a surface that doesn't reflect any light and a surface that doesn't have any light falling upon it.