I am taking an image file and thumbnailing and cropping it with the following PIL code:
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image_size = image.size
thumb = image.crop( (0, 0, size[0], size[1]) )
offset_x = max( (size[0] - image_size[0]) / 2, 0 )
offset_y = max( (size[1] - image_size[1]) / 2, 0 )
thumb = ImageChops.offset(thumb, offset_x, offset_y)
thumb.convert('RGBA').save(filename, 'JPEG')
This works great, except that when the image doesn't have the same aspect ratio as the target size, the difference is filled in with black (or maybe an alpha channel?). I'm OK with the filling; I'd just like to be able to select the fill color -- or better yet, an alpha channel.
Output example:
How can I specify the fill color?
I altered the code just a bit to let you specify your own background color, including transparency.
The code loads the specified image into a PIL.Image object, generates a thumbnail at the given size, and then pastes it into another, full-sized surface.
(Note that the tuple used for color can also be any RGBA value, I have just used white with an alpha/transparency of 0.)
# assuming 'from PIL import Image' is preceding
thumbnail = Image.open(filename)
# generating the thumbnail from given size
thumbnail.thumbnail(size, Image.ANTIALIAS) # use Image.LANCZOS in Pillow 10+, where ANTIALIAS was removed
offset_x = max((size[0] - thumbnail.size[0]) // 2, 0) # integer division so paste() gets ints
offset_y = max((size[1] - thumbnail.size[1]) // 2, 0)
offset_tuple = (offset_x, offset_y) #pack x and y into a tuple
# create the image object to be the final product
final_thumb = Image.new(mode='RGBA', size=size, color=(255, 255, 255, 0))
# paste the thumbnail into the full sized image
final_thumb.paste(thumbnail, offset_tuple)
# save (the PNG format will retain the alpha band unlike JPEG)
final_thumb.save(filename, 'PNG')
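If you need to keep the JPEG format instead (which cannot store an alpha band), a minimal variation is to composite onto an opaque RGB background; the white fill colour below is just an example:
from PIL import Image

thumbnail = Image.open(filename)
thumbnail.thumbnail(size, Image.ANTIALIAS) # use Image.LANCZOS in Pillow 10+
offset_tuple = ((size[0] - thumbnail.size[0]) // 2, (size[1] - thumbnail.size[1]) // 2)
# opaque background in plain RGB, since JPEG has no alpha band
final_thumb = Image.new(mode='RGB', size=size, color=(255, 255, 255))
final_thumb.paste(thumbnail, offset_tuple)
final_thumb.save(filename, 'JPEG')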
It's a bit easier to paste your resized thumbnail image onto a new image that is the colour (and alpha value) you want.
You can create an image and specify its colour as an RGBA tuple like this:
Image.new('RGBA', size, (255, 0, 0, 255))
Here there is no transparency, as the alpha band is set to 255, but the background will be red. Pasting onto this image, we can create thumbnails with any colour, like this:
If we set the alpha band to 0, we can paste onto a transparent image, and get this:
Example code:
from PIL import Image

image = Image.open('1_tree_small.jpg')
size = (50, 50)
image.thumbnail(size, Image.ANTIALIAS) # use Image.LANCZOS in Pillow 10+
# new = Image.new('RGBA', size, (255, 0, 0, 255)) # without alpha, red
new = Image.new('RGBA', size, (255, 255, 255, 0)) # with alpha
new.paste(image, ((size[0] - image.size[0]) // 2, (size[1] - image.size[1]) // 2))
new.save('saved4.png')
Based on this post: Converting image grayscale pixel values to alpha values, how could I change an image's transparency based on grayscale values with Pillow (6.2.2)?
I would like the brighter a pixel is, the more transparent it becomes. Thus, pixels that are black or close to black would not be transparent.
I found the following script, which works fine for white pixels, but I don't know how to modify it in order to handle grayscale values. Maybe there is a better or faster way; I'm a real newbie in Python.
from PIL import Image
img = Image.open('Image.jpg')
img_out = img.convert("RGBA")
datas = img.getdata()
target_color = (255, 255, 255)
newData = list()
for item in datas:
    newData.append((
        item[0], item[1], item[2],
        max(
            abs(item[0] - target_color[0]),
            abs(item[1] - target_color[1]),
            abs(item[2] - target_color[2]),
        )
    ))
img_out.putdata(newData)
img_out.save('ConvertedImage', 'PNG')
This is what I finally did:
from PIL import Image, ImageOps
img = Image.open('Image.jpg')
img = img.convert('RGBA') # RGBA = RGB + alpha
mask = ImageOps.invert(img.convert('L')) # 8-bit grey
img.putalpha(mask)
img.save('ConvertedImage', 'PNG')
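If you want to control how strongly brightness maps to transparency, rather than using the straight linear inversion above, you can remap the mask before applying it. A minimal sketch, assuming Pillow; the gamma value of 0.5 is purely illustrative:
from PIL import Image, ImageOps

img = Image.open('Image.jpg').convert('RGBA')
mask = ImageOps.invert(img.convert('L'))
# remap the 8-bit mask through a gamma curve; 0.5 is an illustrative value
mask = mask.point(lambda v: int(255 * (v / 255) ** 0.5))
img.putalpha(mask)
img.save('ConvertedImage.png', 'PNG')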
I need to get a color from a pixel on the screen and convert its color space. The problem I have is that the color values are not the same when comparing the values against the Digital Color Meter app.
// create a 1x1 image at the mouse position
if let image: CGImage = CGDisplayCreateImage(disID, rect: CGRect(x: x, y: y, width: 1, height: 1))
{
    let bitmap = NSBitmapImageRep(cgImage: image)
    // get the color from the bitmap and convert its colorspace to sRGB
    var color = bitmap.colorAt(x: 0, y: 0)!
    color = color.usingColorSpace(.sRGB)!
    // print the RGB values
    let red = color.redComponent, green = color.greenComponent, blue = color.blueComponent
    print("r:", Int(red * 255), " g:", Int(green * 255), " b:", Int(blue * 255))
}
My code (converted to sRGB): 255, 38, 0
Digital Color Meter (sRGB): 255, 4, 0
How do you get a color from a pixel on the screen with the correct color space values?
Update:
If you don’t convert the color’s colorspace to anything (or convert it to calibratedRGB), the values match the Digital Color Meter’s values when it’s set to “Display native values”.
My code (not converted): 255, 0, 1
Digital Color Meter (set to: Display native values): 255, 0, 1
So why, when the color’s values match the native values in the DCM app, does converting the color to sRGB and comparing it to the DCM’s sRGB values not match? I also tried converting to other colorspaces, and they’re always different from the DCM.
OK, I can tell you how to fix it and match DCM; you'll have to decide whether this is correct, a bug, etc.
It seems the color returned by colorAt() has the same component values as the bitmap's pixel but a different color space - rather than the original device color space it is a generic RGB one. We can "correct" this by building a color in the bitmap's space:
let color = bitmap.colorAt(x: 0, y: 0)!
// need a pointer to a C-style array of CGFloat
let compCount = color.numberOfComponents
let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
defer { comps.deallocate() } // release the buffer when this scope exits
// get the components
color.getComponents(comps)
// construct a new color in the device/bitmap space with the same components
let correctedColor = NSColor(colorSpace: bitmap.colorSpace,
                             components: comps,
                             count: compCount)
// convert to sRGB
let sRGBcolor = correctedColor.usingColorSpace(.sRGB)!
I think you'll find that the values of correctedColor track DCM's native values, and those of sRGBcolor track DCM's sRGB values.
Note that we are constructing a color in the device space, not converting a color to the device space.
HTH
I have phase-contrast microscopy images that need to be segmented. It seems very difficult to segment them due to the lack of contrast between the objects and the background (image 1). I used the function adapthisteq to increase the visibility of the cells (image 2). Is there any way I can improve the segmentation of the cells?
normalImage = imread(fileName);
channlImage = rgb2gray(normalImage);
histogramEq = adapthisteq(channlImage,'NumTiles',[50 50],'ClipLimit',0.1);
saturateInt = imadjust(histogramEq);
binaryImage = im2bw(saturateInt,graythresh(saturateInt));
binaryImage = 1 - binaryImage;
normalImage - raw image
histogramEq - increased visibility image
binaryImage - binarized image
Before applying the threshold, I would separate the patterns from the background by using a white top-hat. See the result here. Then you stretch the histogram.
Then you can apply what you did.
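For concreteness, here is a minimal sketch of that idea in Python/OpenCV (the answer above describes the steps without code); the filename and the structuring-element size are assumptions you will want to tune:
import cv2

# load the microscopy image as grayscale ('cells.png' is a placeholder name)
gray = cv2.imread('cells.png', cv2.IMREAD_GRAYSCALE)
# white top-hat: the image minus its morphological opening, which keeps small bright patterns
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15)) # size is an assumption
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
# stretch the histogram to the full 0-255 range
stretched = cv2.normalize(tophat, None, 0, 255, cv2.NORM_MINMAX)
# then apply the threshold as before
ret, binary = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)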
I would like to build on FiReTiTi's answer. I have the code below and some screenshots. I have done this using OpenCV 3.0.0.
import cv2
x = 'test.jpg'
img = cv2.imread(x, 1)
cv2.imshow("img",img)
#----converting the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('gray', gray)
#----binarization of image
ret, thresh = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
cv2.imshow("thresh", thresh)
#----performing adaptive thresholding
athresh = cv2.adaptiveThreshold(thresh, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
cv2.imshow('athresh', athresh)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(7, 7))
#----morphological operation
closing = cv2.morphologyEx(athresh, cv2.MORPH_CLOSE, kernel)
cv2.imshow('closing', closing)
#----masking the obtained result on the grayscale image
result = cv2.bitwise_and(gray, gray, mask=closing)
cv2.imshow('result', result)
cv2.waitKey(0) # keep the windows open until a key is pressed
cv2.destroyAllWindows()
I have a retinal fundus image which has a white border along the corners. I am trying to remove the borders on all four sides of the image. This is a pre-processing step and my image looks like this:
fundus http://snag.gy/XLGkC.jpg
It is an RGB image, and I took the green channel, and created a mask using logical indexing. I searched for pixels which were all black in the image, and eroded the mask to remove the white edge pixels. However, I am not sure how to retrieve the final image, without the white pixel border using the mask that I have. This is my code, and any help would be appreciated:
maskIdx = rgb(:,:,2) == 0; % rgb is the original image
se = strel('disk',3); % erode 3-pixel using a disk structuring element
im2 = imerode(maskIdx, se);
newrgb = rgb(im2); % gives a vector - not the same size as original im
Solved it myself. This is what I did with some help.
I first computed the mask for all three color channels combined. This is because the mask is not the same for each channel, and residual pixels would be left in the final image if I used the mask from only one of the channels:
mask = (rgb(:,:,1) == 0) & (rgb(:,:,2) == 0) & (rgb(:,:,3) == 0);
Next, I used a disk structuring element with a radius of 9 pixels to dilate my mask:
se = strel('disk', 9);
maskIdx = imdilate(mask,se);
EDIT: An arbitrary structuring element can also be used. I used: se = strel(ones(9,9))
Then I multiplied the original image by the dilated mask:
newImg(:,:,1) = rgb(:,:,1) .* uint8(maskIdx); % image was of double data-type
newImg(:,:,2) = rgb(:,:,2) .* uint8(maskIdx);
newImg(:,:,3) = rgb(:,:,3) .* uint8(maskIdx);
Finally, I subtracted the computed color-mask from the original image to get my desired border-removed image:
finalImg = rgb - newImg;
Result:
image http://snag.gy/g2X1v.jpg
So I need to take an image I made in PIL and convert it to a pixmap to be displayed in a drawable.
How do I convert from PIL to pixmap and keep the images alpha?
Currently I have this code written:
def gfx_draw_tem2(self, r, x, y):
    #im = Image.open("TEM/TEM cropped.png")
    im = Image.new("RGBA", (r*2, r*2), (255, 255, 255, 255))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 50)) # alpha at 255 for test2.png
    im.save("test.png")
    im_data = im.tostring() # im.tobytes() in newer Pillow
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    pixmap2, mask = pixbuf.render_pixmap_and_mask()
    self.pixmap.draw_drawable(self.white_gc, pixmap2, 0, 0, x-r, y-r, -1, -1)
Here are the images I created from im.save("test.png"):
http://imgur.com/43spsBG,lqowten#0
Notice the first picture has an alpha of 255 (full) and the second has an alpha of 50.
However, when I convert the images to a pixmap with my current code, I lose the transparency effect.
Thanks for your help,
Ian
EDIT: I have narrowed it down a little bit with more testing. I am losing the alpha of my image when converting the pixbuf to a pixmap.
Okay figured it out.
The trick here is not to convert the pixbuf to a pixmap using pixbuf.render_pixmap_and_mask().
Instead, I took the self.pixmap that I draw onto my drawable and called draw_pixbuf() on it.
Here is the new code I used.
def gfx_draw_tem2(self, r, x, y):
    im = Image.new("RGBA", (r*2, r*2), (1, 1, 1, 0))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 140))
    im_data = im.tostring() # im.tobytes() in newer Pillow
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    self.pixmap.draw_pixbuf(self.white_gc, pixbuf, 0, 0, x, y, -1, -1, gdk.RGB_DITHER_NORMAL, 0, 0)
Hope this helps someone.