Put a custom palette and custom framerate in animated gif using python - python-imaging-library

I'm trying to write a series of numpy arrays as an animated GIF. I need strict control over the colormap or palette (which color is associated with each integer value in the array), so that it matches the indices in the arrays.
I found imageio.mimwrite. It can set the frame rate and use compression, which seems great:
imageio.mimwrite('test.gif', ims, duration=0.2, subrectangles=True)
but I haven't found a way of setting a custom palette; only the number of colors seems to be settable...
I know I could write the images to disk first and then work on them with imageio, but I would prefer not to.
Using Pillow, I can save the GIF with a custom palette:
im_list = [Image.fromarray(...) for ...]
for i in im_list:
    i.putpalette(...)
im_list[0].save(filename, save_all=True, append_images=im_list[1:])
But I haven't found a way of setting both the palette and the frame rate at the same time...
Any ideas?
Thanks!

In case it can help someone, here is a piece of code that uses PIL to save a palettised animated GIF with custom durations:
from PIL import Image
# image_list: list of 2-D numpy uint8 arrays (palette indices)
# duration: list of per-frame durations, in milliseconds
# loop: number of loops, 0 for infinite
# colormap_np: n-by-3 uint8 array
pil_ims = [Image.fromarray(i, mode='P') for i in image_list]
pil_ims[0].save(
    'test.gif',
    save_all=True,
    append_images=pil_ims[1:],
    duration=duration,
    loop=0,
    palette=colormap_np.tobytes()
)
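For anyone copy-pasting, here is a self-contained variant of the above; the frames and colormap are made up for illustration, and the palette is attached to each frame with putpalette (built from raw index bytes via frombytes, which avoids relying on fromarray's mode argument):

```python
import numpy as np
from PIL import Image

# Made-up data for illustration: three 64x64 frames of palette indices
frames = [np.full((64, 64), i, dtype=np.uint8) for i in range(3)]
# n-by-3 uint8 colormap: index 0 = black, 1 = red, 2 = green
colormap_np = np.array(
    [[0, 0, 0], [255, 0, 0], [0, 255, 0]], dtype=np.uint8)

# Build P-mode images from the raw index bytes and attach the palette
pil_ims = []
for f in frames:
    im = Image.frombytes('P', (f.shape[1], f.shape[0]), f.tobytes())
    im.putpalette(colormap_np.tobytes())
    pil_ims.append(im)

pil_ims[0].save(
    'test.gif',
    save_all=True,
    append_images=pil_ims[1:],
    duration=[200, 200, 400],  # per-frame duration, in milliseconds
    loop=0,                    # 0 = loop forever
)
```

Reloading the file with Image.open and seeking through the frames is a quick way to confirm both the colours and the per-frame durations survived the round trip.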

Related

PIL simple image paste - image changing color

I'm trying to paste an image onto another, using:
original = Img.open('original.gif')
tile_img = Img.open('tile_image.jpg')
area = 0, 0, 300, 300
original.paste(tile_img, area)
original.show()
This works except the pasted image changes color to grey.
Image before:
Image after:
Is there a simple way to retain the same pasted image color? I've tried reading the other questions and the documentation, but I can't find any explanation of how to do this.
Many thanks
I believe all GIF images are palettised - that is, rather than containing an RGB triplet at each location, they contain an index into a palette of RGB triplets. This saves space and improves download speed - at the expense of only allowing 256 unique colours per image.
If you want to treat a GIF (or palettised PNG file) as RGB, you need to ensure you convert it to RGB on opening, otherwise you will be working with palette indices rather than RGB triplets.
Try changing the first line to:
original = Img.open('original.gif').convert('RGB')
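A minimal sketch of the difference, with synthetic stand-ins for the question's files: pasting into a P-mode canvas goes through palette indices (which can garble the colours), while converting the canvas to RGB first keeps the pasted colours intact.

```python
from PIL import Image

# Stand-ins for the question's files: a grey palettised canvas
# (like a GIF) and an RGB tile of a known colour
original = Image.new('L', (300, 300), 128).convert('P')
tile_img = Image.new('RGB', (100, 100), (200, 30, 30))

# Pasting straight into the P-mode image maps the tile through
# palette indices, which can garble its colours
wrong = original.copy()
wrong.paste(tile_img, (0, 0))

# Converting the canvas to RGB first preserves the tile exactly
fixed = original.convert('RGB')
fixed.paste(tile_img, (0, 0))
assert fixed.getpixel((0, 0)) == (200, 30, 30)
```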

Export to vector graphics fails with large number of data-points

I want to export some MATLAB plots as vector graphics for presentations. In most cases this works using the print command, for example:
set(0,'defaultAxesTickLabelInterpreter','Latex')
set(0,'defaultTextInterpreter','Latex')
t=linspace(0,6,6000);
s=sin(t);
figure
for spl = 1:16
    subplot(4,4,spl);
    plot(t,s,'k')
end
print('Sinetest','-dpdf');
but as soon as the number of data points (or the expected file size) becomes too big, for example with t=linspace(0,6,7000);, the method fails: instead of a scalable vector graphic, an ugly pixelated monster is saved in the .pdf file. I've tried other file formats, for example .emf, .eps and .svg (SVG is what I actually need) instead of .pdf, but it's always the same problem. Reducing the number of data points works in this example, but not in general for me.
Is there any option or work around?
The solution is to specify that the painter renderer should be used:
print('Sinetest','-dpdf', '-painters');
If you save to a vector graphics file and if the figure RendererMode
property is set to 'auto', then print automatically attempts to use
the Painters renderer. If you want to ensure that your output format
is a true vector graphics file, then specify the Painters renderer.
Note that this may result in long rendering times as mentioned in the docs:
Sometimes, saving a file with the '-painters' option can cause longer
rendering times [...]

Converting PIL image to VIPS image

I'm working on some large histological images using the vips image library. Together with each image I have an array of coordinates. I want to make a binary mask which masks out the part of the image within the polygon created by the coordinates. I first tried to do this using vips's draw functions, but this is very inefficient and takes forever (in my real code the images are about 100000 x 100000 px and the arrays of polygons are very large).
I then tried creating the binary mask using PIL, and this works great. My problem is converting the PIL image into a vips image. They both have to be vips images to be able to use the multiply command. I also want to write and read from memory, as I believe this is faster than writing to disk.
In the im_PIL.save(memory_area,'TIFF') command I have to specify an image format, but since I'm creating a new image, I'm not sure what to put here.
The Vips.Image.new_from_memory(..) command returns: TypeError: constructor returned NULL
from gi.overrides import Vips
from PIL import Image, ImageDraw
import io
# Load the image into a Vips-image
im_vips = Vips.Image.new_from_file('images/image.tif')
# Coordinates for my mask
polygon_array = [(368, 116), (247, 174), (329, 222), (475, 129), (368, 116)]
# Making a new PIL image of only 1's
im_PIL = Image.new('L', (im_vips.width, im_vips.height), 1)
# Draw polygon to the PIL image filling the polygon area with 0's
ImageDraw.Draw(im_PIL).polygon(polygon_array, outline=1, fill=0)
# Write the PIL image to memory ??
memory_area = io.BytesIO()
im_PIL.save(memory_area,'TIFF')
memory_area.seek(0)
# Read the PIL image from memory into a Vips-image
im_mask_from_memory = Vips.Image.new_from_memory(memory_area.getvalue(), im_vips.width, im_vips.height, im_vips.bands, im_vips.format)
# Close the memory buffer ?
memory_area.close()
# Apply the mask with the image
im_finished = im_vips.multiply(im_mask_from_memory)
# Save image
im_finished.tiffsave('mask.tif')
You are saving from PIL in TIFF format, but then using the vips new_from_memory constructor, which is expecting a simple C array of pixel values.
The easiest fix is to use new_from_buffer instead, which will load an image in some format, sniffing the format from the string. Change the middle part of your program like this:
# Write the PIL image to memory in TIFF format
memory_area = io.BytesIO()
im_PIL.save(memory_area,'TIFF')
image_str = memory_area.getvalue()
# Read the PIL image from memory into a Vips-image
im_mask_from_memory = Vips.Image.new_from_buffer(image_str, "")
And it should work.
The vips multiply operation on two 8-bit uchar images will make a 16-bit (ushort) image, which will look very dark, since your values will only span 0 - 255 of the ushort range. You could either cast it back to uchar again (append .cast("uchar") to the multiply line) before saving, or use 255 instead of 1 for your PIL mask.
You can also move the image from PIL to VIPS as a simple array of bytes. It might be slightly faster.
You're right, the draw operations in vips don't work well with very large images in Python. It's not hard to write a thing in vips to make a mask image of any size from a set of points (just combine lots of && and < with the usual winding rule), but using PIL is certainly simpler.
You could also consider having your poly mask as an SVG image. libvips can load very large SVG images efficiently (it renders sections on demand), so you just magnify it up to whatever size you need for your raster images.
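The raw-bytes route mentioned above can be sketched like this; the image size is made up, and the final vips calls are shown as comments since they assume pyvips (or the gi Vips overrides) is installed:

```python
from PIL import Image, ImageDraw

# Build the polygon mask in PIL, then expose its raw pixel buffer,
# which is exactly what vips's new_from_memory expects: a plain
# C array of height * width * bands uchar values, row-major.
w, h = 640, 480  # made-up size for the sketch
polygon_array = [(368, 116), (247, 174), (329, 222), (475, 129), (368, 116)]

im_PIL = Image.new('L', (w, h), 255)                   # 255 = keep pixel
ImageDraw.Draw(im_PIL).polygon(polygon_array, fill=0)  # 0 = mask out

raw = im_PIL.tobytes()  # one band of uchar pixels, no file format at all
assert len(raw) == w * h

# With pyvips you would then wrap the buffer without any TIFF round-trip:
# mask = pyvips.Image.new_from_memory(raw, w, h, 1, 'uchar')
# im_finished = im_vips.multiply(mask).cast('uchar')
```

This skips the TIFF encode/decode entirely, which is where the small speed win over the new_from_buffer route would come from.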

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8 bit-depth version of the image.
The thing is: The image that Matlab and ImageJ shows me are different, using the same source file.
I assume that one of them is performing some sort of conversion in the grey levels of it when reading or before displaying. But which one of them?
And in that case, how can I calibrate them so that they display the same image?
The following image shows a comparison of the image read.
In the case of the imageJ, I just opened the application and opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I modify it so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the scaled result is clipped to the 8-bit range and converted back to unsigned 8-bit integers, and 3) I didn't use the variable name image because there is already a function with that name. ;)
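For reference, here is the same window/level formula sketched in numpy (the CT values are made up, and reading the tags from the DICOM header is left out):

```python
import numpy as np

def window_level(img, center, width):
    """Map the window [center - width/2, center + width/2] onto 0-255.

    Same linear formula as the MATLAB snippet above; values outside
    the window are clipped to 0 or 255.
    """
    scaled = 255.0 * ((img.astype(np.float64) - (center - 0.5)) / (width - 1) + 0.5)
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Made-up Hounsfield values, windowed with a soft-tissue-like setting:
# air (-1000) clips to 0 and dense bone (2000) clips to 255
ct = np.array([[-1000, 0], [40, 2000]], dtype=np.int16)
out = window_level(ct, center=40, width=400)
```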
A normalized CT image (e.g. after the modality LUT transformation) has intensity values ranging from -1024 to over +2000 Hounsfield units (HU), so an image processing filter should work within this data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most medical viewers apply window leveling to create a view of the image where the anatomy of interest has the proper contrast (mapping the image data of interest to 256 or fewer shades of gray). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. A single CT image can also carry multiple window level pairs, each being a view of a particular anatomy of interest. So using view data for image processing, instead of the actual image data, may not produce consistent results.

Extending palette of indexed images in MATLAB

I extracted the color palette of an indexed image - a 256x3 matrix - and duplicated it into a 512x3 matrix with the same values in each half. What I want to do is steganography: when the secret message bit is 0, I refer to one half of the palette, otherwise to the other half. This way we could get lossless steganography in indexed images!
But when I try to save the image as a bitmap with the new color map, it says bmp/gif files cannot have more than 256 entries in the color palette!
[im,map]=imread('mandril_color.gif');
nmap=zeros(512,3);
nmap(1:256,1:3)=map(1:256,1:3);
nmap(257:512,1:3)=map(1:256,1:3);
imwrite(im,nmap,'palette1.gif');
The above was my code to test whether saving an image with an extended palette works or not; unfortunately, it does not. How can I avoid this problem and have a custom palette with more than 256 values?
The standard for .bmp and .gif files only supports color palettes with at most 256 entries. There is no way around that.
To store more than 256 colors, you can use a format that isn't palette-based, for example .jpg. Make sure you choose lossless compression, since otherwise your message will be scrambled.