Discord.py Image Editing with Python Imaging Library only works for some pictures?

I've tried an image-editing effect that should recolor a picture with little black dots, but it only works for certain images and I honestly don't know why. Any ideas?
#url = member.avatar_url
#print(url)
#response = requests.get(url=url, stream=True).raw
#imag = Image.open(response)
imag = Image.open("unknown.png")
#out = Image.new('I', imag.size)
i = 0
width, height = imag.size
for x in range(width):
    i+=1
    for y in range(height):
        if i ==5:
            # changes every 5th pixel to a certain brightness value
            r,g,b,a = imag.getpixel((x,y))
            print(imag.getpixel((x,y)))
            brightness = int(sum([r,g,b])/3)
            print(brightness)
            imag.putpixel((x, y), (brightness,brightness,brightness,255))
            i= 0
        else:
            i += 1
            imag.putpixel((x,y),(255,255,255,255))
imag.save("test.png")
The comments are what I would've used if my tests had worked. Using local PNGs also doesn't always work.

Your image that doesn't work doesn't have an alpha channel but your code assumes it does. Try forcing in an alpha channel on opening like this:
imag = Image.open("unknown.png").convert('RGBA')
See also What's the difference between a "P" and "L" mode image in PIL?
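If it helps to see what is going on, here is a minimal sketch (the "unknown.png" filename just mirrors the question) that prints the mode PIL actually loaded before forcing RGBA:
from PIL import Image

imag = Image.open("unknown.png")
print(imag.mode)             # e.g. 'P', 'L', 'RGB' or 'RGBA'; getpixel() returns an int or a 3- or 4-tuple depending on this
imag = imag.convert('RGBA')  # after this, getpixel() always returns a 4-tuple (r, g, b, a)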
A couple of other ideas too:
looping over images with Python for loops is slow and inefficient - in general, try to find a vectorised Numpy alternative
you have an alpha channel but set it to 255 (i.e. opaque) everywhere, so in reality, you may as well not have it and save roughly 1/4 of the file size
your output image is RGB with all 3 components set identically - that is really a greyscale image, so you could create it as such and your output file will be 1/3 the size
So, here is an alternative rendition:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image and ensure neither palette nor alpha
im = Image.open('paddington.png').convert('RGB')
# Make into Numpy array
na = np.array(im)
# Calculate greyscale image as mean of R, G and B channels
grey = np.mean(na, axis=-1).astype(np.uint8)
# Make white output image
out = np.full(grey.shape, 255, dtype=np.uint8)
# Copy across selected pixels
out[1::6, 1::4] = grey[1::6, 1::4]
out[3::6, 0::4] = grey[3::6, 0::4]
out[5::6, 2::4] = grey[5::6, 2::4]
# Revert to PIL Image
Image.fromarray(out).save('result.png')
That transforms this:
into this:
If you are happy to calculate the greyscale with the usual weighted method, rather than averaging R, G and B equally, you could change to this:
im = Image.open('paddington.png').convert('L')
and remove the line that does the averaging, using the array na directly as grey (an 'L' mode image is already single-channel):
grey = np.mean(na, axis=-1).astype(np.uint8)
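Putting that together, a minimal sketch of the greyscale-mode variant (same hypothetical 'paddington.png' input as above):
#!/usr/bin/env python3
from PIL import Image
import numpy as np
# Load image as single-channel greyscale using PIL's usual weighted conversion
im = Image.open('paddington.png').convert('L')
# The array is already 2-D greyscale, so no averaging is needed
grey = np.array(im)
# Make white output image
out = np.full(grey.shape, 255, dtype=np.uint8)
# Copy across selected pixels
out[1::6, 1::4] = grey[1::6, 1::4]
out[3::6, 0::4] = grey[3::6, 0::4]
out[5::6, 2::4] = grey[5::6, 2::4]
# Revert to PIL Image and save
Image.fromarray(out).save('result.png')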

Related

How to separate human body from background in an image

I have been trying to separate the human body in an image from the background, but all the methods I have seen don't seem to work very well for me.
I have collected the following images;
The image of the background
The image of the background with the person in it.
Now I want to cut out the person from the background.
I tried subtracting the image of the background from the image with the person using res = cv2.subtract(background, foreground) (I am new to image processing).
Background subtraction methods in OpenCV like cv2.BackgroundSubtractorMOG2() only work with videos or image sequences, and the contour detection methods I have seen are only for solid shapes.
And grabCut doesn't quite work well for me because I would like to automate the process.
Given the images I have (Image of the background and image of the background with the person in it), is there a method of cutting the person out from the background?
I wouldn't recommend a neural net for this problem. That's a lot of work for something like this where you have a known background. I'll walk through the steps I took to do the background segmentation on this image.
First I shifted into the LAB color space to get some light-resistant channels to work with. I did a simple subtraction of foreground and background and combined the a and b channels.
You can see that there is still significant color change in the background even with a less light-sensitive color channel. This is likely due to the auto white balance on the camera; you can see that some of the background colors change when you step into view.
The next step I took was thresholding this image. The optimal threshold values may not always be the same; you'll have to adjust to a range that works well for your set of photos.
I used OpenCV's findContours function to get the segmentation points of each blob and filtered the available contours by size. I set a size threshold of 15000. For reference, the person in the image had a pixel area of 27551.
Then it's just a matter of cropping out the contour.
This technique works for any good thresholding strategy. If you can improve the consistency of your pictures by turning off auto settings and ensure good contrast of the person against the wall then you can use simpler thresholding strategies and get good results.
Just for fun:
Edit:
I forgot to add in the code I used:
import cv2
import numpy as np
# rescale values
def rescale(img, orig, new):
    img = np.divide(img, orig);
    img = np.multiply(img, new);
    img = img.astype(np.uint8);
    return img;
# get abs(diff) of all hue values
def diff(bg, fg):
    # do both sides
    lh = bg - fg;
    rh = fg - bg;
    # pick minimum # this works because of uint wrapping
    low = np.minimum(lh, rh);
    return low;
# load image
bg = cv2.imread("back.jpg");
fg = cv2.imread("person.jpg");
fg_original = fg.copy();
# blur
bg = cv2.blur(bg,(5,5));
fg = cv2.blur(fg,(5,5));
# convert to lab
bg_lab = cv2.cvtColor(bg, cv2.COLOR_BGR2LAB);
fg_lab = cv2.cvtColor(fg, cv2.COLOR_BGR2LAB);
bl, ba, bb = cv2.split(bg_lab);
fl, fa, fb = cv2.split(fg_lab);
# subtract
d_b = diff(bb, fb);
d_a = diff(ba, fa);
# rescale for contrast
d_b = rescale(d_b, np.max(d_b), 255);
d_a = rescale(d_a, np.max(d_a), 255);
# combine
combined = np.maximum(d_b, d_a);
# threshold
# check your threshold range, this will work for
# this image, but may not work for others
# in general: having a strong contrast with the wall makes this easier
thresh = cv2.inRange(combined, 70, 255);
# opening and closing
kernel = np.ones((3,3), np.uint8);
# closing
thresh = cv2.dilate(thresh, kernel, iterations = 2);
thresh = cv2.erode(thresh, kernel, iterations = 2);
# opening
thresh = cv2.erode(thresh, kernel, iterations = 2);
thresh = cv2.dilate(thresh, kernel, iterations = 3);
# contours
_, contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
# filter contours by size
big_cntrs = [];
marked = fg_original.copy();
for contour in contours:
    area = cv2.contourArea(contour);
    if area > 15000:
        print(area);
        big_cntrs.append(contour);
cv2.drawContours(marked, big_cntrs, -1, (0, 255, 0), 3);
# create a mask of the contoured image
mask = np.zeros_like(fb);
mask = cv2.drawContours(mask, big_cntrs, -1, 255, -1);
# erode mask slightly (boundary pixels on wall get color shifted)
mask = cv2.erode(mask, kernel, iterations = 1);
# crop out
out = np.zeros_like(fg_original) # Extract out the object and place into output image
out[mask == 255] = fg_original[mask == 255];
# show
cv2.imshow("combined", combined);
cv2.imshow("thresh", thresh);
cv2.imshow("marked", marked);
# cv2.imshow("masked", mask);
cv2.imshow("out", out);
cv2.waitKey(0);
Since it is very easy to find datasets containing lots of human bodies, I suggest you implement neural network segmentation techniques to extract the human body cleanly. Please check this link to see a similar example.
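For instance, a minimal sketch along those lines using torchvision's pretrained DeepLabV3 (not the method from the linked example; the 'person.jpg' filename just follows the question, class index 15 is 'person' in the VOC-style label set this model predicts, and the weights argument assumes a recent torchvision):
import torch
from torchvision import models, transforms
from PIL import Image

# load a segmentation network pretrained with VOC-style labels (21 classes)
model = models.segmentation.deeplabv3_resnet50(weights='DEFAULT')
model.eval()

# standard ImageNet normalisation expected by the pretrained weights
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('person.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)       # shape (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)['out'][0]        # shape (num_classes, H, W)

person = (logits.argmax(0) == 15)          # boolean mask of 'person' pixels
mask = person.numpy().astype('uint8') * 255
Image.fromarray(mask).save('person_mask.png')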

Matlab - Removing image background using normalization

I have an image like this:
My goal is to get the output under background normalization at this link.
Following the above link, I did the following:
(1) I first dilate the image to get the background.
(2) Then I try to remove it via normalization.
I got the background:
However, when I try to do the normalized division, I get this :
(black borders added to make clear of the boundary of the image)
this is my code:
image = imread('image.png');
image = rgb2gray(image);
se = offsetstrel('ball',9,9);
dilatedI = imdilate(image,se);
output = imdivide(image,dilatedI);
imshow(output,[]);
using
imshow(output)
just gives a black image.
I thought it might be a type conversion issue, but based on the resources mentioned earlier, I am uncertain if it is the case...
Any advice would be appreciated
Just make sure you don't do integer division! Your images are an integer type, so the division is rounded to whole numbers (here mostly 0 or 1), not a floating-point result. Just convert to double before dividing:
image = imread('https://i.stack.imgur.com/bIVRT.png');
%image = rgb2gray(image); % not needed, the image hosted online is not RGB
se = offsetstrel('ball',21,21);
dilatedI = imdilate(image,se);
output = imdivide(double(image),double(dilatedI));
figure
subplot(121)
imshow(image);
subplot(122)
imshow(output);

Wrong background subtraction

I'm trying to subtract the background of an image with two images.
Image A is the background and image B is an image with things over the background.
I'm normalizing the images but I don't get the expected result.
Here's the code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = ((a - min(a(:)))./(max(a(:))-min(a(:))));
resB = ((b - min(b(:)))./(max(b(:))-min(b(:))));
resAbs = abs(resB-resA);
imshow(resAbs);
The resulting image is completely dark. Thanks to the answer from user saeed masoomi, I realized that this was because of the data type, so now I have the following code:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = imsubtract(resB,resA);
imshow(resAbs,[]);
The resulting image is not well filtered: there are parts of image B that should appear but don't.
If I try doing this without normalizing, I still have the same problem.
The only difference between image A and image B is the arms, which appear only in image B, so they should show up without being cut off.
Can you see something wrong? Maybe I should filter with a threshold?
Do not normalize the two images. Background subtraction is typically done with identical camera settings, so the two images are directly comparable. If the background image doesn't have a bright object in it, normalizing like you do would brighten it w.r.t. the second image. The intensities are no longer comparable, and you'd see differences where there were none.
If you recorded the background image with different camera settings (different exposure time, illumination, etc) then background subtraction is a lot more complicated than you think. You'd have to apply an optimization scheme to make the two images comparable, such that their difference is sparse. You'd have to look through the literature for that, it's not at all trivial.
Hi, please pay attention to your data type. Images in MATLAB are stored as unsigned 8-bit integers (0 to 255), so there is no 0.1 or 0.2 or any other float value; if a computation yields 1.2 the stored result is 1.
With uint8 data the computation goes wrong, as below:
max=uint8(255); %uint8
min=uint8(20); %uint8
data=uint8(40); %uint8
normalized=(data-min)/(max-min) %uint8
output will be
normalized =
uint8
0
Oops, you might expect this output to be 0.0851, but it is not, because the data type is uint8, so the output is 0. I guess this is why all your data is zero (and the result image is dark). To prevent this mistake MATLAB has a handy function named im2double, which converts uint8 to double and rescales the data to lie between 0 and 1:
I2 = im2double(I) converts the intensity image I to double precision, rescaling the data if necessary. I can be a grayscale intensity image, a truecolor image, or a binary image.
So we can rewrite your code as below:
a = rgb2gray(im);
b = rgb2gray(im2);
resA = im2double(a);
resB = im2double(b);
resAbs = abs(imsubtract(resA,resB)); %edited
imshow(resAbs,[])
edited
So if the output image is still dark, check whether the two images actually contain different pixels with the code below:
if(isempty(nonzeros(resAbs)))
    disp('The two images are identical -> something is wrong')
else
    disp('The two images differ -> normal')
end

Not getting appropriate image after background subtraction

I take two frames from my video. One of them is the background and the other is the frame to which I applied background subtraction. The third image is the result after background subtraction. Here I am only getting the shirt of the person rather than the whole body.
Code for background subtraction:
v = VideoReader('test.mp4');
n = get(v,'NumberOfFrames');
back = read(v,30);
y = read(v,150);
imshow([y;back;y-back]);
White probably has a higher value (in each channel, maybe? I don't know what format your data is in), so you get negative values which I guess are then clipped to 0 (black). See how your shirt turns green as you subtract the red from it (the board in the background).
You have to mask out the background by checking what has changed and then remove everything that hasn't changed.
maybe something like
diff = y - back
if ( element of diff unequal 0) then set element to 1
noback = diff .* y
a little example I wrote:
back = rand(4)
y = back
y(5) = 0.6 %put something in front of the background
y(7) = 0.7 %put something in front of the background
mask = zeros(4)
mask(find(y-back)) = 1 %set values that are different in y to 1
noback = mask.*y %elementwise multiplication to mask out the background
You may have to use something other than find for the mask, because the two images will not be 100% identical; for example, threshold abs(y - back) against a small tolerance instead of requiring an exact difference. But this should show the general approach.

How to smooth the perimeter output of bwperim?

I have a segmented image
When I apply bwperim function on this I get the output as below
I want a thin perimeter line, just one pixel thick; this is essential for further processing work. What is the best approach?
Please suggest.
======
BoundingBox
%%% ComputeBoundingBox
%%%
function [stats, statsAlreadyComputed] = ...
        ComputeBoundingBox(imageSize,stats,statsAlreadyComputed)
% [minC minR width height]; minC and minR end in .5.
if ~statsAlreadyComputed.BoundingBox
    statsAlreadyComputed.BoundingBox = 1;
    [stats, statsAlreadyComputed] = ...
        ComputePixelList(imageSize,stats,statsAlreadyComputed);
    num_dims = numel(imageSize);
    for k = 1:length(stats)
        list = stats(k).PixelList;
        if (isempty(list))
            stats(k).BoundingBox = [0.5*ones(1,num_dims) zeros(1,num_dims)];
        else
            min_corner = min(list,[],1) - 0.5;
            max_corner = max(list,[],1) + 0.5;
            stats(k).BoundingBox = [min_corner (max_corner - min_corner)];
        end
    end
end
That is happening because your image had quantization error when you were saving the image. Did you save your image using a lossy compression algorithm, like JPEG? If you want to preserve the intensities so that they don't change when you save the image, use a lossless compression algorithm, like PNG.
To eliminate these "noisy" effects, threshold your image first to eliminate any quantization errors so that you can set these pixels to completely white, then try using bwperim again. In other words, do something like this:
im = im2bw(imread('http://i.stack.imgur.com/dagEc.png'));
im_noborder = imclearborder(im);
out = bwperim(im_noborder);
imshow(out);
The first line of code reads your image directly from Stack Overflow and uses im2bw to threshold it. The image was originally grayscale, so we want to convert it to black and white only; this also removes any quantization artifacts, as it thresholds anything higher than 128. The next line removes, with imclearborder, the white border that surrounds your shape, because the image you uploaded has a white border around it for some reason. Once we remove this border, we apply bwperim and show the image.
This is the image I get: