PIL opens only first channel of TIF image - python-imaging-library

I'm trying to open (and then process) a 3-channel Tif image (8-bits) created with ImageJ.
from PIL import Image
import numpy as np

im = Image.open('spinal.tif')
im.show()
shows me a PNG with only the first channel
n = np.array(im)
print(n.shape)
gives me (400, 450), so only the first channel is taken into account
How could I work on the different channels? Many thanks
Info on my tif file from ImageJ:
Title: spinal.tif
Width: 1986.4042 microns (450)
Height: 1765.6926 microns (400)
Size: 527K
Resolution: 0.2265 pixels per micron
Voxel size: 4.4142x4.4142x1 micron^3
ID: -466
Bits per pixel: 8 (grayscale LUT)
Display ranges
1: 0-255
2: 0-255
3: 0-255
Image: 1/3 (c:1/3 - 64_spinal_20x lame2.ndpis #1)
Channels: 3
Composite mode: "grayscale"
The file is temporarily available here :
https://filesender.renater.fr/?s=download&token=ab39ca56-24c3-4993-ae78-19ac5cf916ee

I finally found a workaround using the scikit-image library.
This correctly opens the 3 channels (matplotlib didn't, nor did PIL).
Once I have the array, I can go back to PIL using Image.fromarray to resume the processing.
import numpy as np
from skimage.io import imread
from PIL import Image
img = imread('spinal.tif')
im_pil = Image.fromarray(img)
im_pil.show()
print(np.array(im_pil).shape)
This shows the composite image, and the correct (400, 450, 3) shape.
I can then get the different channels with Image.getchannel(channel), as in:
im_BF = im_pil.getchannel(0)
im_BF.show()
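Since the data is already a NumPy array at that point, the channels can equally be sliced straight out of it; a minimal sketch of the equivalent (variable names are just illustrative):
import numpy as np

arr = np.array(im_pil)       # (400, 450, 3), as printed above
first_channel = arr[..., 0]  # same pixels as im_pil.getchannel(0), but as a NumPy array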
Thank you to the contributors who tried to solve my issue (I saw that the file was downloaded several times). There might be a better way to process these multi-channel TIF images with PIL, but this looks like it works!

Your image is not a 3-channel RGB image. Rather, it is 3 separate images, each one a single greyscale channel. You can see that with ImageMagick:
magick identify spinal.tif
spinal.tif[0] TIFF 450x400 450x400+0+0 8-bit Grayscale Gray 545338B 0.000u 0:00.000
spinal.tif[1] TIFF 450x400 450x400+0+0 8-bit Grayscale Gray 0.000u 0:00.000
spinal.tif[2] TIFF 450x400 450x400+0+0 8-bit Grayscale Gray 0.000u 0:00.000
Or with tiffinfo which comes with libtiff:
TIFF Directory at offset 0x8 (8)
Subfile Type: (0 = 0x0)
Image Width: 450 Image Length: 400
Resolution: 0.22654, 0.22654 (unitless)
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: min-is-black
Samples/Pixel: 1
Rows/Strip: 400
Planar Configuration: single image plane
ImageDescription: ImageJ=1.53f
images=3
channels=3
mode=grayscale
unit=micron
loop=false
TIFF Directory at offset 0x545014 (850f6)
Subfile Type: (0 = 0x0)
Image Width: 450 Image Length: 400
Resolution: 0.22654, 0.22654 (unitless)
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: min-is-black
Samples/Pixel: 1
Rows/Strip: 400
Planar Configuration: single image plane
ImageDescription: ImageJ=1.53f
images=3
channels=3
mode=grayscale
unit=micron
loop=false
TIFF Directory at offset 0x545176 (85198)
Subfile Type: (0 = 0x0)
Image Width: 450 Image Length: 400
Resolution: 0.22654, 0.22654 (unitless)
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: min-is-black
Samples/Pixel: 1
Rows/Strip: 400
Planar Configuration: single image plane
ImageDescription: ImageJ=1.53f
images=3
channels=3
mode=grayscale
unit=micron
loop=false
If it is meant to be 3-channel RGB, rather than 3 separate greyscale channels, you need to save it differently in ImageJ. I cannot advise on that.
If you want to combine the 3 channels into a single image on the command-line, you can do that with ImageMagick:
magick spinal.tif -combine spinal-RGB.png
If you want to read it with PIL/Pillow, you need to treat it as an image sequence:
from PIL import Image, ImageSequence
with Image.open("spinal.tif") as im:
    for frame in ImageSequence.Iterator(im):
        print(frame)
which gives this:
<PIL.TiffImagePlugin.TiffImageFile image mode=L size=450x400 at 0x11DB64220>
<PIL.TiffImagePlugin.TiffImageFile image mode=L size=450x400 at 0x11DB64220>
<PIL.TiffImagePlugin.TiffImageFile image mode=L size=450x400 at 0x11DB64220>
Or, if you want to assemble the channels into RGB, something more like this:
from PIL import Image
# Open image and hunt down separate channels
with Image.open("spinal.tif") as im:
    R = im.copy()
    im.seek(1)
    G = im.copy()
    im.seek(2)
    B = im.copy()

# Merge the three separate channels into single RGB image
RGB = Image.merge("RGB", (R, G, B))
RGB.save('result.png')
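If you would rather end up with a NumPy array (whatever the number of channels), you could also stack the frames; a minimal sketch, assuming the same 3-page file as above:
import numpy as np
from PIL import Image, ImageSequence

with Image.open("spinal.tif") as im:
    frames = [np.array(frame) for frame in ImageSequence.Iterator(im)]
channels = np.stack(frames, axis=-1)

print(channels.shape)   # (400, 450, 3) for this file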

Related

Pillow - Adding transparency depending on grayscale values

Based on this post: Converting image grayscale pixel values to alpha values, how could I change an image's transparency based on grayscale values with Pillow (6.2.2)?
I would like a pixel to be more transparent the brighter it is. Thus, pixels that are black or close to black would not be transparent.
I found the following script that works fine for white pixels, but I don't know how to modify it in order to handle grayscale values. Maybe there is a better or faster way; I'm a real newbie in Python.
from PIL import Image
img = Image.open('Image.jpg')
img_out = img.convert("RGBA")
datas = img.getdata()
target_color = (255, 255, 255)
newData = list()
for item in datas:
    newData.append((
        item[0], item[1], item[2],
        max(
            abs(item[0] - target_color[0]),
            abs(item[1] - target_color[1]),
            abs(item[2] - target_color[2]),
        )
    ))
img_out.putdata(newData)
img_out.save('ConvertedImage', 'PNG')
This is what I finally did:
from PIL import Image, ImageOps
img = Image.open('Image.jpg')
img = img.convert('RGBA') # RGBA = RGB + alpha
mask = ImageOps.invert(img.convert('L')) # 8-bit grey
img.putalpha(mask)
img.save('ConvertedImage', 'PNG')
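For larger images, a vectorised NumPy version of the same idea avoids a per-pixel Python loop; a sketch assuming the same "brighter means more transparent" rule:
import numpy as np
from PIL import Image

img = Image.open('Image.jpg').convert('RGB')
alpha = 255 - np.array(img.convert('L'))                   # invert the 8-bit grey: bright -> transparent
rgba = np.dstack([np.array(img), alpha]).astype(np.uint8)
Image.fromarray(rgba, 'RGBA').save('ConvertedImage', 'PNG')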

Measuring Contrast of an Image using Python

I have code for brightness, and I'm currently looking into measuring contrast:
from PIL import Image
from math import sqrt
imag = Image.open("../Images/noise.jpg")
imag = imag.convert ('RGB')
imag.show()
X,Y = 0,0
pixelRGB = imag.getpixel((X,Y))
R,G,B = pixelRGB
brightness = sum([R,G,B])/3 ##0 is dark (black) and 255 is bright (white)
print(brightness)
print(R,G,B)
Surely contrast could be something similar to this code. Any ideas would be great, thanks.
Different folks have different ideas of contrast... one method is to look at the difference between the brightest and darkest pixel in the image, another is to look at the standard deviation of the pixels away from the mean. There are other statistics too. Note that it requires looking at all the pixels in the image - not just the first.
The simplest way to look at the statistics of an image is to use PIL's ImageStat function:
#!/usr/bin/env python3
from PIL import Image, ImageStat
# Load image
im = Image.open('image.png')
# Calculate statistics
stats = ImageStat.Stat(im)
for band, name in enumerate(im.getbands()):
    print(f'Band: {name}, min/max: {stats.extrema[band]}, stddev: {stats.stddev[band]}')
So, if I create a greyscale image like this with ImageMagick:
magick -size 1024x768 gradient:"rgb(64,64,64)-rgb(200,200,200)" -depth 8 image.png
and run the above code, I get:
Band: L, min/max: (64, 200), stddev: 39.31443755161709
If I create a magenta-black gradient:
magick -size 1024x768 gradient:magenta-black -depth 8 image.png
and run the code, I get:
Band: R, min/max: (0, 255), stddev: 73.68457550034924
Band: G, min/max: (0, 0), stddev: 0.0
Band: B, min/max: (0, 255), stddev: 73.68457550034924
If the min and max are close, the contrast is low. If the min and max are widely spaced, the contrast is high. Likewise the standard deviation, as it measures how "spread out" the pixels are across the histogram.
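If you want to collapse those statistics into a single number, two standard choices are Michelson contrast, (max - min) / (max + min), and RMS contrast, which is essentially the standard deviation. A minimal sketch building on ImageStat (the file name is just a placeholder):
from PIL import Image, ImageStat

im = Image.open('image.png').convert('L')   # work on a single luminance band
stats = ImageStat.Stat(im)

lo, hi = stats.extrema[0]
michelson = (hi - lo) / (hi + lo) if (hi + lo) else 0.0   # 0 = flat image, 1 = full-range contrast
rms = stats.stddev[0]                                     # larger = more spread-out histogram
print(f'Michelson: {michelson:.3f}, RMS (stddev): {rms:.2f}')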

Convert JPG image from grayscale to RGB using imagemagick

I am trying to convert gray scale images to RGB using the imagemagick command-line tools.
It is working fine for PNG images, using:
convert image.png -define png:color-type=2 result.png
(taken from an answer to "How to convert gray scale png image to RGB from comand line using image magick")
Although checking with identify -format %r result.png will still return DirectClass Gray, I can see it worked using gdalinfo as there are now 3 bands / channels listed:
gdalinfo [successfully converted PNG]:
Driver: PNG/Portable Network Graphics
Files: result.png
Size is 567, 479
Coordinate System is `'
Image Structure Metadata:
INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 479.0)
Upper Right ( 567.0, 0.0)
Lower Right ( 567.0, 479.0)
Center ( 283.5, 239.5)
Band 1 Block=567x1 Type=Byte, ColorInterp=Red
Band 2 Block=567x1 Type=Byte, ColorInterp=Green
Band 3 Block=567x1 Type=Byte, ColorInterp=Blue
However, it seems the -define option only works for PNG images.
My question: How can I achieve the same effect for JPG images?
When I try the above command for JPG, it does not work:
convert image.jpg -define jpg:color-type=2 result.jpg
gdalinfo [unsuccessfully converted JPG]:
Driver: JPEG/JPEG JFIF
Files: result.jpg
Size is 1500, 1061
Coordinate System is `'
Metadata:
EXIF_ColorSpace=1
EXIF_PixelYDimension=2480
...
EXIF_YCbCrPositioning=1
EXIF_YResolution=(300)
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 1061.0)
Upper Right ( 1500.0, 0.0)
Lower Right ( 1500.0, 1061.0)
Center ( 750.0, 530.5)
Band 1 Block=1500x1 Type=Byte, ColorInterp=Gray
Overviews: 750x531, 375x266, 188x133
Image Structure Metadata:
COMPRESSION=JPEG
The PNG colour types do not apply to JPEG images, so you can't use the same technique. Try forcing the colorspace to sRGB, and/or setting the type to TrueColour so you don't get a palettised image:
convert input.jpg -colorspace sRGB -type truecolor result.jpg
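If ImageMagick is not a hard requirement, the same conversion can also be done with Pillow; a minimal sketch (file names are placeholders):
from PIL import Image

# Force a 3-channel truecolour image; saving as JPEG then writes 3 components
Image.open('image.jpg').convert('RGB').save('result.jpg')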
ImageMagick refuses to process the PNG, but ffmpeg does not:
ffmpeg -loglevel warning -i ${file} -sws_flags "lanczos+accurate_rnd+full_chroma_int+full_chroma_inp+bitexact" -y -pix_fmt rgb8 test.png
The reported depth of the resulting file is 8-bit.
(Also useful for RGBA → RGB.)

Image Compression using imfinfo function in Matlab

I am trying to calculate the compression ratio of a given image. My matlab code is as follows:
temp = imfinfo('flowers.jpg');
compression_ratio = (temp.Width * temp.Height * temp.BitDepth) / temp.FileSize;
The imfinfo displays the following:
FileSize: 11569
Format: 'jpg'
FormatVersion: ''
Width: 430
Height: 430
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: ''
NumberOfSamples: 1
CodingMethod: 'Huffman'
CodingProcess: 'Sequential'
Comment: {}
Running the above code gives me a compression ratio of about 120, which is huge and does not seem right. Is there something I'm doing wrong? I went through a document from MIT and they showed that the product of Width, Height, and BitDepth should be divided by 8 and then divided by the FileSize. Why divide by 8?
The division by factor of 8 is to convert bits to bytes.
According to the Matlab documentation for imfinfo
the FileSize parameter is the size of the compressed file, in bytes.
The compression ratio is defined as:
uncompressed size of image in bytes/compressed size of file in bytes
imfinfo gives you the pixel width, height, and bits per pixel (bit depth). From that you can compute the uncompressed size in bits, and divide by 8 to get bytes.
For the uncompressed image, you have 430*430*8/8 = 184,900 bytes.
The size of the compressed image is 11569 bytes.
So the compression ratio is actually 184,900/11569 or 15.98, not an unreasonable value for JPEG.
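The same arithmetic in Python, just to make the bit-to-byte conversion explicit (values taken from the imfinfo output above):
width = height = 430
bit_depth = 8                  # bits per pixel
file_size = 11569              # compressed size in bytes, from imfinfo

uncompressed_bytes = width * height * bit_depth / 8   # divide by 8: bits -> bytes
print(uncompressed_bytes / file_size)                  # ~15.98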

Distorted image when converting RGB into YUV. How to fix?

From a .ppm image, I extracted the R, G, B data and saved it in the following structure:
RGB RGB RGB RGB RGB RGB RGB..........
corresponding to each pixel position.
Then I used a formula to convert R, G, B into Y, U, V. For each pixel, I obtained the corresponding YUV values:
R0G0B0 -> Y0U0V0, R1G1B1 -> Y1U1V1, ...
I saved the data following the YUV422 data format: YUV422 shares U and V values between two pixels, so these values are transmitted to the PC image buffer only once for every two pixels.
R0G0B0 R1G1B1 -> Y0 U Y1 V
a. How do I calculate U and V from U0, U1 and V0, V1?
b. In my case, I used this formula:
U = (U0 + U1)/2; V = (V0 + V1)/2;
The obtained data was saved in the following structure to create the .yuv file:
YUYV YUYV YUYV YUYV YUYV YUYV......
But when I used YUV tools to read the new .yuv file, the image was not similar to the original. What did I do wrong here?
The formula you have employed is the right one, but a small correction is needed in the arrangement of the output data for YUV422 planar (horizontal sampling):
luminance data of Width * Height samples, followed by Cb data of Width * Height / 2 samples and Cr data of Width * Height / 2 samples.
It should be as mentioned below:
[Y1 Y2 Y3 ... (Width * Height)] [Cb1 Cb2 .... (Width * Height/2)] [Cr1 Cr2 ...(Width * Height/2)]
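A minimal NumPy sketch of that planar layout, assuming 8-bit full-range BT.601 coefficients (the exact constants depend on the standard your YUV viewer expects) and an even image width:
import numpy as np

def rgb_to_yuv422_planar(rgb):
    """rgb: uint8 array of shape (H, W, 3) with W even. Returns planar YUV422 bytes."""
    r, g, b = (rgb[..., i].astype(np.float32) for i in range(3))

    # One common RGB -> YUV conversion (BT.601, full range)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0

    # Horizontal 2:1 subsampling: average each pair, as in U = (U0 + U1)/2 above
    u = (u[:, 0::2] + u[:, 1::2]) / 2.0
    v = (v[:, 0::2] + v[:, 1::2]) / 2.0

    def to_u8(plane):
        return np.clip(np.round(plane), 0, 255).astype(np.uint8)

    # Planar order: full Y plane, then the subsampled Cb plane, then the subsampled Cr plane
    return to_u8(y).tobytes() + to_u8(u).tobytes() + to_u8(v).tobytes()
Writing the returned bytes to a file gives a .yuv that planar-aware viewers can open, provided you tell the viewer the correct width, height, and planar 4:2:2 format.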