PIL simple image paste - image changing color - python-imaging-library

I'm trying to paste an image onto another, using:
from PIL import Image as Img

original = Img.open('original.gif')
tile_img = Img.open('tile_image.jpg')
area = 0, 0, 300, 300
original.paste(tile_img, area)
original.show()
This works except the pasted image changes color to grey.
Image before:
Image after:
Is there a simple way to retain the same pasted image color? I've tried reading the other questions and the documentation, but I can't find any explanation of how to do this.
Many thanks

I believe all GIF images are palettised - that is, rather than containing an RGB triplet at each location, they contain an index into a palette of RGB triplets. This saves space and improves download speed - at the expense of only allowing 256 unique colours per image.
If you want to treat a GIF (or palettised PNG file) as RGB, you need to ensure you convert it to RGB on opening, otherwise you will be working with palette indices rather than RGB triplets.
Try changing the first line to:
original = Img.open('original.gif').convert('RGB')
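For reference, a minimal sketch of the corrected snippet (file names as above; the output name is just a placeholder):
from PIL import Image as Img

original = Img.open('original.gif').convert('RGB')  # convert palette indices to RGB triplets
tile_img = Img.open('tile_image.jpg')
original.paste(tile_img, (0, 0, 300, 300))          # box size must match the 300x300 tile
original.show()
original.save('combined.png')                       # placeholder output name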

Related

How can I increase the bit depth of an image while keeping it transparent?

I am using UIGraphicsBeginImageContext(canvasRect.size) to export images, but because UIGraphicsBeginImageContext only creates an 8-bit context, the exported image loses the original image's color representation and ends up looking blurry.
Therefore, I changed the code to UIGraphicsBeginImageContextWithOptions(canvasRect.size, true, 1.0).
The image now exports cleanly with no loss of color representation, but transparency is no longer preserved because opaque was set to true.
Please let me know if you know how to increase the bit depth while keeping the image's transparency.
Please also let me know if there is any method other than UIGraphicsBeginImageContextWithOptions that can export an image while preserving its color representation.

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8 bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they come from the same source file.
I assume that one of them is performing some sort of conversion of the grey levels when reading or before displaying the image. But which one of them?
And in that case, how can I calibrate them so that they display the same image?
The following image shows a comparison of the image read.
In the case of ImageJ, I just opened the application and then opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I adjust things so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the standard DICOM window/level formula):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the modality LUT transformation) has intensity values ranging from -1024 to over 2000 Hounsfield units (HU), so an image-processing filter should work within this data range. On the other hand, an RGB display driver can only show 256 shades of grey. To overcome this limitation, most medical viewers apply window leveling to create a view of the image in which the anatomy of interest has the proper contrast for the RGB display driver (mapping the image data of interest to 256 or fewer shades of grey). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, each pair being essentially one view of the anatomy of interest. So using this view data for image processing, instead of the actual image data, may not produce consistent results.
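For comparison, the same window/level mapping can be sketched in Python with pydicom and NumPy (the file name follows the question; note that Window Center/Width can be multi-valued in some files):
import numpy as np
import pydicom

ds = pydicom.dcmread('I1400001')                         # same file as in the question
img = ds.pixel_array.astype(np.float64)

c = float(ds.WindowCenter)                               # (0028,1050); may be multi-valued
w = float(ds.WindowWidth)                                # (0028,1051)

scaled = 255.0 * ((img - (c - 0.5)) / (w - 1.0) + 0.5)   # same formula as the MATLAB code above
scaled = np.clip(scaled, 0, 255).astype(np.uint8)        # clip to the 8-bit display range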

Image resize issue

This appears to be a trivial problem but the result is strange, and I am totally lost as to where I am going wrong. There is an input RGB image which needs to be converted to grayscale and resized to 1000 x 1000 pixels. This is how I have done it:
img=imread('flowers.jpg');
flowers_gray=rgb2gray(img);
flowers_resize=imresize(flowers_gray,[1000 1000]);
but strangely the output image is not 1000 by 1000 pixels. Moreover, MATLAB did not save the image in grayscale mode (I tried the Save As option and File ---> Export Setup),
and the size was also incorrect, since when I opened the saved image with
img1=imread('flowers_resize.jpg')
s=size(img1)
it gave
s=586 665 3
And the image flowers_resize.jpg is saved in the image folder with a white border surrounding it. So I went to the Paint toolbox, selected the image A1, manually deleted the surrounding background, and resized the image. But alas, it saved the image with 3 color channels rather than in grayscale mode, although the size was then correct! Can somebody please point out the correct way to resize to 1000 by 1000 pixels and save in grayscale mode without the white border surrounding the saved output file? Thank you.
When you export from the figure window, you are saving the entire figure, including the white space around the image.
Instead, use the imwrite command. In your case:
imwrite(A1,'flowers_resize.jpg','jpg');
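For comparison, the same resize-and-save pipeline can be sketched in Python with Pillow (file names assumed as in the question); writing the pixel data directly means no figure border is ever introduced:
from PIL import Image

img = Image.open('flowers.jpg').convert('L')   # 'L' = single-channel grayscale
img = img.resize((1000, 1000))                 # exactly 1000 x 1000 pixels
img.save('flowers_resize.png')                 # PNG keeps the single grayscale channel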

Extending palette of indexed images in MATLAB

I extracted the color palette of an indexed image - a 256x3 matrix - and duplicated it into a 512x3 matrix with identical values in each half. What I want to do is steganography: when the secret message bit is 0, I want to refer to one half of the palette, otherwise to the other half. In this way we can get lossless steganography in indexed images!
But when I try to save the image as bitmap with the new color map, it says bmp/gif files cannot have more than 256 entries in the color palette!
[im,map]=imread('mandril_color.gif');
nmap=zeros(512,3);
nmap(1:256,1:3)=map(1:256,1:3);
nmap(257:512,1:3)=map(1:256,1:3);
imwrite(im,nmap,'palette1.gif');
The above was my code just to test whether saving an image with an extended palette works or not; unfortunately it did not. How can I avoid this problem and have a custom palette with more than 256 values?
The standard for .bmp and .gif only supports color palettes of length 256. There is no way around that for you.
To use color palettes with more than 256 entries, you can use .jpg, for example. Make sure you choose lossless compression, since otherwise, your message will be scrambled.

How to convert a black and white photo that was originally colored, back to its original color?

I've converted a colored photo to black and white, and bolded the edges. Now I need to convert it back to its original color with the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
If what you are trying to do is to apply some filter to the B/W image and then use the result together with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b*, if you need the lightness to be perceptually uniform with respect to human perception of differences) and apply your filter only to the lightness channel.
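A rough sketch of that idea in Python with scikit-image (the file name and the particular edge filter are just placeholders):
import numpy as np
from skimage import io, color, filters

rgb = io.imread('photo.jpg') / 255.0                # placeholder input file
lab = color.rgb2lab(rgb)                            # L = lightness, a/b carry the colour

edges = filters.sobel(color.rgb2gray(rgb))          # stand-in for the "bolded edges" filter
lab[..., 0] = np.clip(lab[..., 0] - 60.0 * edges, 0, 100)  # darken L along edges only

out = np.clip(color.lab2rgb(lab), 0, 1)             # a and b (the colour) are untouched
io.imsave('bolded_colour.png', (out * 255).astype(np.uint8))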