How to convert a 1- or 4-bit deep PNG into an 8-bit PNG image? - python-imaging-library

image = Image.open(file)       # file is a 1-bit image
image = image.convert('P')     # image becomes 8-bit palette mode
colors = [0, 0, 0, 128, 0, 0, 0, 128, 0, 128, 128, 0, 0, 0, 128]
image.putpalette(colors)       # why does the image become 4-bit after this?
image.save("1.png", format='PNG')  # the saved PNG ends up 4-bit?
I need an 8-bit color PNG; can you help me?
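One possible workaround (not from the original post, and assuming the Pillow/PIL behaviour of choosing the smallest palette bit depth that still fits the palette when saving a PNG): pad the palette out to the full 256 entries so the writer keeps 8 bits per pixel.

from PIL import Image

image = Image.open(file).convert('P')   # 8-bit palette mode in memory

colors = [0, 0, 0, 128, 0, 0, 0, 128, 0, 128, 128, 0, 0, 0, 128]
colors += [0] * (768 - len(colors))     # pad to 256 RGB triples (768 values)
image.putpalette(colors)

image.save("1.png", format='PNG')       # 256-color palette -> 8-bit PNG

Depending on the Pillow version, passing bits=8 to save() may achieve the same thing.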

Related

Combining image channels in CImg

In CImg, I have split an RGBA image apart into multiple single-channel images, with code like:
CImg<unsigned char> input("foo.png");
CImg<unsigned char> r = input.get_channel(0), g = input.get_channel(1), b = input.get_channel(2), a = input.get_channel(3);
Then I try to swizzle the channel order:
CImg<unsigned char> output(input.width(), input.height(), 1, input.channels());
output.channel(0) = g;
output.channel(1) = b;
output.channel(2) = r;
output.channel(3) = a;
When I save the image out, however, it turns out grayscale, apparently based on the alpha channel value (example input and output images omitted).
How do I specify the image color format so that CImg saves into the correct color space?
Simply copying a channel does not work like that; a better approach is to copy the pixel data with std::copy:
std::copy(g.begin(), g.end(), &output.atX(0, 0, 0, 0));
std::copy(b.begin(), b.end(), &output.atX(0, 0, 0, 1));
std::copy(r.begin(), r.end(), &output.atX(0, 0, 0, 2));
std::copy(a.begin(), a.end(), &output.atX(0, 0, 0, 3));
This results in an output image with the channels swizzled as intended (example image omitted).

How to replace the appropriate colors with my own pallette in MATLAB?

I am using MATLAB 2015. I want to reduce the image's color count. An RGB image will be segmented using the k-means algorithm, and then the mean cluster colors will be replaced with the colors I have.
The 10 colors are:
black - [0, 0, 0],
yellow - [255, 255, 0],
orange - [255, 128, 0],
white - [255, 255, 255],
pink - [255, 153, 255],
lavender - [120, 102, 255],
brown - [153, 51, 0],
green - [0, 255, 0],
blue - [0, 0, 255],
red - [255, 0, 0].
I have succeeded in clustering the image. Each cluster's color should now be replaced with the nearest of these colors. How can I change those colors after clustering?
In case you don't succeed in finding a way with MATLAB, you can remap the colours in an image at the command line with ImageMagick, which is installed on most Linux distros and is available for OSX and Windows too.
First, you would make a swatch of the colours in your palette. You only need to do this once, obviously:
convert xc:black xc:yellow xc:"rgb(255,128,0)" \
xc:white xc:"rgb(255,153,255)" xc:"rgb(120,102,255)" \
xc:"rgb(153,51,0)" xc:lime xc:blue xc:red \
+append colormap.png
That produces a small swatch image (shown enlarged in the original post).
Now, let's assume you have an image like this colorwheel (colorwheel.png):
and you want to apply your palette (i.e. remap the colours to those in your swatch):
convert colorwheel.png +dither -remap colormap.png result.png
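If you would rather stay inside a script than shell out to ImageMagick, the nearest-colour replacement described in the question can be sketched like this (illustrative Python/NumPy, not from the original answer; the same vectorised distance computation carries over to MATLAB almost line for line):

import numpy as np

# The 10 target colours from the question (black taken as [0, 0, 0])
palette = np.array([
    [0, 0, 0], [255, 255, 0], [255, 128, 0], [255, 255, 255],
    [255, 153, 255], [120, 102, 255], [153, 51, 0],
    [0, 255, 0], [0, 0, 255], [255, 0, 0],
], dtype=float)

def remap_to_palette(img):
    # img is an H x W x 3 uint8 array; each pixel is replaced by the
    # nearest palette colour (squared Euclidean distance in RGB).
    flat = img.reshape(-1, 3).astype(float)
    dists = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    nearest = dists.argmin(axis=1)
    return palette[nearest].reshape(img.shape).astype(np.uint8)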

Cairo only render to specific color component

I am using Cairo and would like to render to one color component at a time. For example, if I render a set of blue rectangles and then render a set of red rectangles, I want the areas where they overlap to be purple rather than red.
Using set_source_rgb(ctx, 0.0, 1.0, 0.0) doesn't work, because it overwrites the other channels with zeros. Using transparency doesn't work either, as it affects all channels equally. I would like a way to render to only one channel.
Is that possible? Thank you.
Use CAIRO_OPERATOR_ADD instead of CAIRO_OPERATOR_OVER (the default):
#include <cairo.h>

int main() {
    cairo_surface_t *s = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 20, 20);
    cairo_t *cr = cairo_create(s);
    cairo_set_operator(cr, CAIRO_OPERATOR_ADD);

    /* Render blue */
    cairo_set_source_rgb(cr, 0, 0, 1);
    cairo_rectangle(cr, 0, 0, 15, 15);
    cairo_fill(cr);

    /* Render red */
    cairo_set_source_rgb(cr, 1, 0, 0);
    cairo_rectangle(cr, 5, 5, 15, 15);
    cairo_fill(cr);

    cairo_surface_write_to_png(s, "out.png");
    cairo_destroy(cr);
    cairo_surface_destroy(s);
    return 0;
}
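For reference, the same additive compositing can be sketched with the pycairo bindings (my own translation, not part of the original answer):

import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 20, 20)
ctx = cairo.Context(surface)
ctx.set_operator(cairo.OPERATOR_ADD)   # add channel values instead of painting over them

ctx.set_source_rgb(0, 0, 1)            # blue rectangle
ctx.rectangle(0, 0, 15, 15)
ctx.fill()

ctx.set_source_rgb(1, 0, 0)            # red rectangle; the overlap ends up purple
ctx.rectangle(5, 5, 15, 15)
ctx.fill()

surface.write_to_png("out.png")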

Converting PIL image to GTK pixmap with alpha

So I need to take an image I made in PIL and convert it to a pixmap to be displayed in a drawable.
How do I convert from PIL to pixmap and keep the image's alpha?
Currently I have this code written:
def gfx_draw_tem2(self, r, x, y):
    #im = Image.open("TEM/TEM cropped.png")
    im = Image.new("RGBA", (r*2, r*2), (255, 255, 255, 255))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 50))  # alpha at 255 for test2.png
    im.save("test.png")
    im_data = im.tostring()
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    pixmap2, mask = pixbuf.render_pixmap_and_mask()
    self.pixmap.draw_drawable(self.white_gc, pixmap2, 0, 0, x-r, y-r, -1, -1)
Here are the images I created from im.save("test.png"):
http://imgur.com/43spsBG,lqowten#0
Notice the first picture has an alpha of 255 (full) and the second has an alpha of 50.
However, when I convert the images to a pixmap with my current code, I lose the transparency effect.
Thanks for your help,
Ian
EDIT: I have narrowed it down a little bit with more testing. I am losing the alpha of my image when converting the pixbuf to a pixmap.
Okay, figured it out.
The trick here is not to convert the pixbuf to a pixmap using pixbuf.render_pixmap_and_mask().
Instead, I took the self.pixmap that I draw onto my drawable and called draw_pixbuf() on it.
Here is the new code I used:
def gfx_draw_tem2(self, r, x, y):
    im = Image.new("RGBA", (r*2, r*2), (1, 1, 1, 0))
    draw = ImageDraw.Draw(im)
    for i in range(0, r*2):
        for j in range(0, r*2):
            if self.in_circle(i, j, r):
                draw.point((i, j), fill=(100, 50, 75, 140))
    im_data = im.tostring()
    pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8, im.size[0], im.size[1], 4*im.size[0])
    self.pixmap.draw_pixbuf(self.white_gc, pixbuf, 0, 0, x, y, -1, -1, gdk.RGB_DITHER_NORMAL, 0, 0)
Hope this helps someone.
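A small caveat if you run this against a current Pillow instead of the old PIL: Image.tostring() has been removed in favour of Image.tobytes(), so the pixbuf lines become:

im_data = im.tobytes()  # Pillow 3.0+; older PIL used im.tostring()
pixbuf = gdk.pixbuf_new_from_data(im_data, gdk.COLORSPACE_RGB, True, 8,
                                  im.size[0], im.size[1], 4 * im.size[0])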

Formatting CIColorCube data

Recently, I've been trying to set up a CIColorCube on a CIImage to create a custom effect. Here's what I have now:
uint8_t color_cube_data[8*4] = {
    0, 0, 0, 1,
    255, 0, 0, 1,
    0, 255, 0, 1,
    255, 255, 0, 1,
    0, 0, 255, 1,
    255, 0, 255, 1,
    0, 255, 255, 1,
    255, 255, 255, 1
};
NSData *cube_data = [NSData dataWithBytes:color_cube_data length:8*4*sizeof(uint8_t)];
CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
[filter setValue:beginImage forKey:kCIInputImageKey];
[filter setValue:@2 forKey:@"inputCubeDimension"];
[filter setValue:cube_data forKey:@"inputCubeData"];
outputImage = [filter outputImage];
I've checked out the WWDC 2012 Core Image session, and what I have still doesn't work. I've also checked the web, and there are very few resources available on this issue. My code above just returns a black image.
In Apple's developer library, it says:
This filter applies a mapping from RGB space to new color values that are defined in inputCubeData. For each RGBA pixel in inputImage the filter uses the R, G, and B values to index into a three-dimensional texture represented by inputCubeData. inputCubeData contains floating point RGBA cells that contain linear premultiplied values. The data is organized into inputCubeDimension number of xy planes, with each plane of size inputCubeDimension by inputCubeDimension. Input pixel components R and G are used to index the data in x and y respectively, and B is used to index in z. In inputCubeData the R component varies fastest, followed by G, then B.
However, this makes no sense to me. How does my inputCubeData need to be formatted?
The accepted answer is incorrect. While the cube data is indeed supposed to be scaled to [0 .. 1], it's supposed to be float, not int.
float color_cube_data[8*4] = {
    0.0, 0.0, 0.0, 1.0,
    1.0, 0.0, 0.0, 1.0,
    0.0, 1.0, 0.0, 1.0,
    1.0, 1.0, 0.0, 1.0,
    0.0, 0.0, 1.0, 1.0,
    1.0, 0.0, 1.0, 1.0,
    0.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0
};
(Technically, you don't have to put the ".0" on each number, the compiler knows how to handle it.)
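As a concrete illustration of the ordering the documentation describes (R varies fastest, then G, then B, with premultiplied alpha), here is a small Python sketch, not from the original thread, that builds an identity cube of arbitrary dimension as packed floats; the resulting bytes are what you would wrap in an NSData for inputCubeData:

import struct

def identity_color_cube(dimension):
    # Float RGBA entries in the order CIColorCube expects:
    # R varies fastest, then G, then B (one x/y plane per blue slice).
    values = []
    for b in range(dimension):
        for g in range(dimension):
            for r in range(dimension):
                values += [r / (dimension - 1),
                           g / (dimension - 1),
                           b / (dimension - 1),
                           1.0]   # alpha (premultiplied)
    return struct.pack('%df' % len(values), *values)

cube_bytes = identity_color_cube(2)   # 2*2*2 entries * 4 floats = 128 bytes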
I found the issue... I have updated my question in case anyone has the same problem!
The input array values had to be divided by 255 first, i.e. scaled into the 0-1 range.
The original used 255:
uint8_t color_cube_data[8*4] = {
    0, 0, 0, 1,
    255, 0, 0, 1,
    0, 255, 0, 1,
    255, 255, 0, 1,
    0, 0, 255, 1,
    255, 0, 255, 1,
    0, 255, 255, 1,
    255, 255, 255, 1
};
It should look like this instead:
uint8_t color_cube_data[8*4] = {
    0, 0, 0, 1,
    1, 0, 0, 1,
    0, 1, 0, 1,
    1, 1, 0, 1,
    0, 0, 1, 1,
    1, 0, 1, 1,
    0, 1, 1, 1,
    1, 1, 1, 1
};
Your problem is that you are using the value 1 (which is nearly zero) for the alpha channel; the maximum for uint8_t is 255.
See the example below:
CIFilter *cubeHeatmapLookupFilter = [CIFilter filterWithName:@"CIColorCube"];
int dimension = 4; // Must be a power of 2, max of 128
int cubeDataSize = 4 * dimension * dimension * dimension;
// A full cube needs cubeDataSize bytes; only the first few entries are shown here:
unsigned char cubeDataBytes[4*4*4*4] = {
    0, 0, 0, 0,
    255, 0, 0, 170,
    255, 250, 0, 200,
    255, 255, 255, 255
};
NSData *cube_data = [NSData dataWithBytes:cubeDataBytes length:(cubeDataSize * sizeof(char))];

// Applying the filter
[cubeHeatmapLookupFilter setValue:myImage forKey:@"inputImage"];
[cubeHeatmapLookupFilter setValue:cube_data forKey:@"inputCubeData"];
[cubeHeatmapLookupFilter setValue:@(dimension) forKey:@"inputCubeDimension"];
Here is a link to the full project: https://github.com/knerush/heatMap