Converting PIL image to GTK pixmap with alpha - gtk

So I need to take an image I made in PIL and convert it to a pixmap to be displayed in a drawable.
How do I convert from PIL to a pixmap and keep the image's alpha?
Currently I have this code written:
def gfx_draw_tem2(self, r, x, y):
    #im = Image.open("TEM/TEM cropped.png")
    im = Image.new("RGBA", (r*2, r*2), (255, 255, 255, 255))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 50))  # alpha at 255 for test2.png
    im.save("test.png")
    im_data = im.tostring()
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    pixmap2, mask = pixbuf.render_pixmap_and_mask()
    self.pixmap.draw_drawable(self.white_gc, pixmap2, 0, 0, x-r, y-r, -1, -1)
Here are the images I created from im.save("test.png"):
http://imgur.com/43spsBG,lqowten#0
Notice the first picture has an alpha of 255 (full) and the second has an alpha of 50.
However, when I convert the images to a pixmap with my current code, I lose the transparency effect.
Thanks for your help,
Ian
EDIT: I have narrowed it down a little bit with more testing. I am losing the alpha of my image when converting the pixbuf to a pixmap.

Okay, figured it out.
The trick is not to convert the pixbuf to a pixmap using pixbuf.render_pixmap_and_mask() (a pixmap has no alpha channel of its own, only the 1-bit clip mask).
Instead, I took the self.pixmap that I draw onto my drawable and called draw_pixbuf() on it directly.
Here is the new code I used.
def gfx_draw_tem2(self, r, x, y):
    im = Image.new("RGBA", (r*2, r*2), (1, 1, 1, 0))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 140))
    im_data = im.tostring()
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    self.pixmap.draw_pixbuf(self.white_gc, pixbuf, 0, 0, x, y, -1, -1, gdk.RGB_DITHER_NORMAL, 0, 0)
Hope this helps someone.
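(A side note for anyone on a newer Pillow: Image.tostring() has been removed there, so the equivalent line, assuming Pillow 3.0 or later, would be:
im_data = im.tobytes()  # replaces the removed tostring() on newer Pillow
The rest of the pixbuf code stays the same.)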

Related

Combining image channels in CImg

In CImg, I have split an RGBA image apart into multiple single-channel images, with code like:
CImg<unsigned char> input("foo.png");
CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1), b = input.get_channel(2), a = input.get_channel(3);
Then I try to swizzle the channel order:
CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
output.channel(0) = g;
output.channel(1) = b;
output.channel(2) = r;
output.channel(3) = a;
When I save the image out, however, it turns out grayscale, apparently based on the alpha channel value; for example, this input:
becomes this output:
How do I specify the image color format so that CImg saves into the correct color space?
Simply assigning to a channel does not work like that: CImg's channel() is an in-place operation that reduces the image to that single channel and returns a reference to the whole image, so each of those assignments replaces output entirely (which is why the saved file ends up looking like the alpha channel, the last one assigned). A better approach is to copy the pixel data with std::copy:
std::copy(g.begin(), g.end(), &output.atX(0, 0, 0, 0));
std::copy(b.begin(), b.end(), &output.atX(0, 0, 0, 1));
std::copy(r.begin(), r.end(), &output.atX(0, 0, 0, 2));
std::copy(a.begin(), a.end(), &output.atX(0, 0, 0, 3));
This results in an output image like:
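As an aside, the same channel swizzle in Python with Pillow (a sketch only, using the same "foo.png" input as in the question) would be:
from PIL import Image

img = Image.open('foo.png').convert('RGBA')
r, g, b, a = img.split()                 # split into single-band images
out = Image.merge('RGBA', (g, b, r, a))  # reorder the channels
out.save('swizzled.png')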

Pillow - Adding transparency depending on grayscale values

Based on this post: Converting image grayscale pixel values to alpha values, how could I change an image's transparency based on grayscale values with Pillow (6.2.2)?
I would like the brighter a pixel is, the more transparent it should be; thus, pixels that are black or close to black would not be transparent.
I found the following script that works fine for white pixels, but I don't know how to modify it in order to handle grayscale values. Maybe there is a better or faster way; I'm a real newbie in Python.
from PIL import Image

img = Image.open('Image.jpg')
img_out = img.convert("RGBA")
datas = img.getdata()
target_color = (255, 255, 255)

newData = list()
for item in datas:
    newData.append((
        item[0], item[1], item[2],
        max(
            abs(item[0] - target_color[0]),
            abs(item[1] - target_color[1]),
            abs(item[2] - target_color[2]),
        )
    ))

img_out.putdata(newData)
img_out.save('ConvertedImage', 'PNG')
This is what I finally did:
from PIL import Image, ImageOps
img = Image.open('Image.jpg')
img = img.convert('RGBA') # RGBA = RGB + alpha
mask = ImageOps.invert(img.convert('L')) # 8-bit grey
img.putalpha(mask)
img.save('ConvertedImage', 'PNG')
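For comparison, the same mapping can also be written explicitly per pixel; this is just a sketch of the idea (alpha = 255 minus the pixel's luminance, using the same ITU-R 601-2 weights PIL uses for 'L' conversion), and the ImageOps.invert version above is the simpler way to do it:
from PIL import Image

img = Image.open('Image.jpg').convert('RGBA')
newData = []
for r, g, b, a in img.getdata():
    luminance = int(0.299 * r + 0.587 * g + 0.114 * b)  # brightness of this pixel
    newData.append((r, g, b, 255 - luminance))           # brighter -> more transparent
img.putdata(newData)
img.save('ConvertedImage', 'PNG')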

Accurately get a color from pixel on screen and convert its color space

I need to get a color from a pixel on the screen and convert its color space. The problem I have is that the color values are not the same when comparing the values against the Digital Color Meter app.
// create a 1x1 image at the mouse position
if let image: CGImage = CGDisplayCreateImage(disID, rect: CGRect(x: x, y: y, width: 1, height: 1))
{
    let bitmap = NSBitmapImageRep(cgImage: image)
    // get the color from the bitmap and convert its colorspace to sRGB
    var color = bitmap.colorAt(x: 0, y: 0)!
    color = color.usingColorSpace(.sRGB)!
    // print the RGB values
    let red = color.redComponent, green = color.greenComponent, blue = color.blueComponent
    print("r:", Int(red * 255), " g:", Int(green * 255), " b:", Int(blue * 255))
}
My code (converted to sRGB): 255, 38, 0
Digital Color Meter (sRGB): 255, 4, 0
How do you get a color from a pixel on the screen with the correct color space values?
Update:
If you don't convert the color's colorspace to anything (or convert it to calibratedRGB), the values match the Digital Color Meter's values when it's set to "Display native values".
My code (not converted): 255, 0, 1
Digital Color Meter (set to: Display native values): 255, 0, 1
So why, when the color's values match the native values in the DCM app, does converting the color to sRGB and comparing it to the DCM's values (in sRGB) not match? I also tried converting to other colorspaces, and they are always different from the DCM.
OK, I can tell you how to fix it so that it matches DCM; you'll have to decide whether this is correct, a bug, etc.
It seems the color returned by colorAt() has the same component values as the bitmap's pixel but a different color space - rather than the original device color space it is a generic RGB one. We can "correct" this by building a color in the bitmap's space:
let color = bitmap.colorAt(x: 0, y: 0)!
// need a pointer to a C-style array of CGFloat
let compCount = color.numberOfComponents
let comps = UnsafeMutablePointer<CGFloat>.allocate(capacity: compCount)
// get the components
color.getComponents(comps)
// construct a new color in the device/bitmap space with the same components
let correctedColor = NSColor(colorSpace: bitmap.colorSpace,
                             components: comps,
                             count: compCount)
// convert to sRGB
let sRGBcolor = correctedColor.usingColorSpace(.sRGB)!
I think you'll find that the values of correctedColor track DCM's native values, and those of sRGBcolor track DCM's sRGB values.
Note that we are constructing a color in the device space, not converting a color to the device space.
HTH

color replacement in image for iphone application

Basically I want to implement a color replacement feature for my paint application.
Below are the original and the expected output.
Original:
After changing the wall color selected by the user, along with some threshold for the replacement:
I have tried two approaches but could not get them working as expected.
Approach 1:
A queue-based flood fill algorithm for color replacement,
but with it I got the output below; it was terribly slow and the wall shadow was not preserved.
Approach 2:
So I tried to look at another option and found the post below on SO:
How to change a particular color in an image?
but I could not understand the logic and am not sure about my code implementation from step 3 onwards.
Please find below the code for each step, along with my understanding.
1) Convert the image from RGB to HSV using cvCvtColor (we only want to
change the hue).
IplImage *mainImage = [self CreateIplImageFromUIImage:[UIImage imageNamed:@"original.jpg"]];
IplImage *hsvImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
IplImage *threshImage = cvCreateImage(cvGetSize(mainImage), IPL_DEPTH_8U, 3);
cvCvtColor(mainImage,hsvImage,CV_RGB2HSV);
2) Isolate a color with cvThreshold specifying a
certain tolerance (you want a range of colors, not one flat color).
cvThreshold(hsvImage, threshImage, 0, 100, CV_THRESH_BINARY);
3) Discard areas of color below a minimum size using a blob detection
library like cvBlobsLib. This will get rid of dots of the similar
color in the scene. Do I need to specify the original image or the threshold image here?
CBlobResult blobs = CBlobResult(threshImage, NULL, 0);
blobs.Filter( blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 10);
4) Mask the color with cvInRangeS and use the
resulting mask to apply the new hue.
I am not sure how this function helps with the color replacement, and I am not able to understand the arguments to be provided.
5) cvMerge the new image with the
new hue with an image composed by the saturation and brightness
channels that you saved in step one.
I understand that cvMerge will merge the three H, S and V channels, but how can I use the output of the above three steps?
So basically I am stuck with the OpenCV implementation; if possible, please guide me through the OpenCV implementation or any other solution to try out.
Finally I am able to achieve some of the desired output using the JavaCV code below, and the same has been ported to OpenCV too.
This solution has 2 problems:
1. It doesn't do edge detection; I think I can achieve that using contours.
2. The replaced color has a flat hue and saturation, which should instead be set based on the source pixel's hue/sat difference, but I am not sure how to achieve that; maybe use cvAddS instead of cvSet.
IplImage image = cvLoadImage("sample.png");
CvSize cvSize = cvGetSize(image);
IplImage hsvImage = cvCreateImage(cvSize, image.depth(), image.nChannels());
cvCvtColor(image, hsvImage, CV_BGR2HSV); // convert the loaded BGR image to HSV before splitting
IplImage hChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage sChannel = cvCreateImage(cvSize, image.depth(), 1);
IplImage vChannel = cvCreateImage(cvSize, image.depth(), 1);
cvSplit(hsvImage, hChannel, sChannel, vChannel, null);
IplImage cvInRange = cvCreateImage(cvSize, image.depth(), 1);
CvScalar source = new CvScalar(72/2, 0.07*255, 66, 0); // source color to replace (HSV)
CvScalar from = getScaler(source, false);
CvScalar to = getScaler(source, true);
cvInRangeS(hsvImage, from, to, cvInRange); // mask of pixels within the threshold range
IplImage dest = cvCreateImage(cvSize, image.depth(), image.nChannels());
IplImage temp = cvCreateImage(cvSize, IPL_DEPTH_8U, 2);
cvMerge(hChannel, sChannel, null, null, temp);
cvSet(temp, new CvScalar(45, 255, 0, 0), cvInRange); // destination hue and sat, applied only where the mask is set
cvSplit(temp, hChannel, sChannel, null, null);
cvMerge(hChannel, sChannel, vChannel, null, dest);
cvCvtColor(dest, dest, CV_HSV2BGR);
cvSaveImage("output.png", dest);
Method for calculating the threshold range:
CvScalar getScaler(CvScalar seed, boolean plus) {
    if (plus) {
        return CV_RGB(seed.red() + (seed.red() * thresold),
                      seed.green() + (seed.green() * thresold),
                      seed.blue() + (seed.blue() * thresold));
    } else {
        return CV_RGB(seed.red() - (seed.red() * thresold),
                      seed.green() - (seed.green() * thresold),
                      seed.blue() - (seed.blue() * thresold));
    }
}
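For anyone porting this to Python, a rough cv2 sketch of the same hue-replacement idea is below; it keeps each pixel's own saturation and value instead of flattening them with cvSet, which addresses problem 2 above. The hue range used for the mask is an assumption chosen only to illustrate the inRange step.
import cv2
import numpy as np

img = cv2.imread("sample.png")               # BGR, as with cvLoadImage
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# mask of pixels whose hue lies near the source hue (range values are assumptions)
lower = np.array([30, 40, 40], dtype=np.uint8)
upper = np.array([42, 255, 255], dtype=np.uint8)
mask = cv2.inRange(hsv, lower, upper)

# replace only the hue channel where the mask is set; S and V keep their per-pixel values
hsv[mask > 0, 0] = 45

out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imwrite("output.png", out)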
I know this answer will be useful to someone someday.
Try this out in your viewDidLoad() override method for iOS.
image in the code snippet below should come from your UIImageView.
The seed points are also fixed here; you can make them dynamic based on a user tap event.
cv::Mat mask = cv::Mat::zeros(image.rows + 2, image.cols + 2, CV_8U);
imageView.image = [self UIImageFromCVMat:image];
cv::cvtColor(image, image, cv::COLOR_BGR2RGB);
try {
    if (seed.x > 0 && seed.y > 0) {
        // flood-fill from each fixed seed point with the replacement color,
        // allowing a small tolerance (loDiff/upDiff of 2 per channel)
        cv::floodFill(image, mask, seed,  cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed2, cv::Scalar(50, 155, 20), 0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
        cv::floodFill(image, mask, seed3, cv::Scalar(50, 155, 0),  0, cv::Scalar(2, 2, 2), cv::Scalar(2, 2, 2), 8);
    }
} catch (cv::Exception &ex) {
    // ignore flood-fill failures (e.g. seed outside the image)
}
cv::cvtColor(image, image, cv::COLOR_RGB2BGR);
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.image = [self UIImageFromCVMat:image];

Fill Color of PIL Cropping/Thumbnailing

I am taking an image file and thumbnailing and cropping it with the following PIL code:
image = Image.open(filename)
image.thumbnail(size, Image.ANTIALIAS)
image_size = image.size
thumb = image.crop( (0, 0, size[0], size[1]) )
offset_x = max( (size[0] - image_size[0]) / 2, 0 )
offset_y = max( (size[1] - image_size[1]) / 2, 0 )
thumb = ImageChops.offset(thumb, offset_x, offset_y)
thumb.convert('RGBA').save(filename, 'JPEG')
This works great, except that when the image doesn't have the same aspect ratio, the difference is filled in with a black color (or maybe an alpha channel?). I'm OK with the filling; I'd just like to be able to select the fill color -- or better yet, an alpha channel.
Output example:
How can I specify the fill color?
I altered the code just a bit to allow you to specify your own background color, including transparency.
The code loads the image specified into a PIL.Image object, generates the thumbnail from the given size, and then pastes the image into another, full sized surface.
(Note that the tuple used for color can also be any RGBA value, I have just used white with an alpha/transparency of 0.)
# assuming 'from PIL import Image' precedes this
thumbnail = Image.open(filename)
# generating the thumbnail from given size
thumbnail.thumbnail(size, Image.ANTIALIAS)
offset_x = max((size[0] - thumbnail.size[0]) / 2, 0)
offset_y = max((size[1] - thumbnail.size[1]) / 2, 0)
offset_tuple = (offset_x, offset_y) #pack x and y into a tuple
# create the image object to be the final product
final_thumb = Image.new(mode='RGBA',size=size,color=(255,255,255,0))
# paste the thumbnail into the full sized image
final_thumb.paste(thumbnail, offset_tuple)
# save (the PNG format will retain the alpha band unlike JPEG)
final_thumb.save(filename,'PNG')
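If the result has to stay a JPEG (which cannot store an alpha band), one option is to flatten the RGBA thumbnail onto a solid background color before saving; a minimal sketch, reusing final_thumb from the code above (the output filename is just a placeholder):
# flatten the RGBA thumbnail onto a white background so it can be saved as JPEG,
# since JPEG does not support transparency
background = Image.new('RGB', final_thumb.size, (255, 255, 255))
background.paste(final_thumb, mask=final_thumb.split()[3])  # use the alpha band as the paste mask
background.save('thumbnail.jpg', 'JPEG')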
It's a bit easier to paste your re-sized thumbnail image onto a new image that is the colour (and alpha value) you want.
You can create an image and specify its colour as an RGBA tuple like this:
Image.new('RGBA', size, (255, 0, 0, 255))
Here there is no transparency, as the alpha band is set to 255, but the background will be red. Using this image to paste onto, we can create thumbnails with any colour, like this:
If we set the alpha band to 0, we can paste onto a transparent image, and get this:
Example code:
import Image
image = Image.open('1_tree_small.jpg')
size=(50,50)
image.thumbnail(size, Image.ANTIALIAS)
# new = Image.new('RGBA', size, (255, 0, 0, 255)) #without alpha, red
new = Image.new('RGBA', size, (255, 255, 255, 0)) #with alpha
new.paste(image,((size[0] - image.size[0]) / 2, (size[1] - image.size[1]) / 2))
new.save('saved4.png')