Calculating area in ha from TIFF file using either Rasterio or Pillow

I'm new to image processing and need a bit of help.
My goal is to calculate the area, in hectares, of my TIFF image of a farm field. When I check the image data in Preview, it tells me the resolution is 72 pixels/inch.
The field sits diagonally in the image, so as a simple proxy I currently count only the pixels that are not white, which should give me all the pixels that are part of the field itself.
My first issue is accessing the DPI information from my Python 3 script. With Pillow, image.info['dpi'] returns (1, 1), which doesn't seem right. With Rasterio, I am unsure how to get the DPI information at all.
Once that is accessible, how would I go about calculating the area in hectares?
Any help would be greatly appreciated!
Here's my current attempt:
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # the TIFF is large, so disable Pillow's decompression-bomb check
im = Image.open('/home/ubuntu/poly-quickcount/server/data/1/1/geotiff/champ.tif')

# Count every pixel that is not pure white, i.e. part of the field
numOfPixels = 0
for pixel in im.getdata():
    if pixel != (255, 255, 255):
        numOfPixels += 1

dpi = im.info['dpi']  # gives (1, 1) for me, which doesn't seem right
area = numOfPixels * dpi[0] * dpi[1] / 10000  # this conversion is the part I'm unsure about
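For what it's worth, here is my rough understanding of the Rasterio route, assuming the GeoTIFF is georeferenced in a projected CRS whose units are metres; in that case the pixel size comes from the geotransform and the DPI is irrelevant. This is only a sketch of what I'm aiming for, not code I have validated:

import rasterio

with rasterio.open('/home/ubuntu/poly-quickcount/server/data/1/1/geotiff/champ.tif') as src:
    xres, yres = src.res  # pixel width/height in CRS units (metres for a projected CRS)
    bands = src.read()    # NumPy array of shape (bands, rows, cols)

# A pixel belongs to the field if at least one band is not pure white (255)
field_mask = ~(bands == 255).all(axis=0)
num_pixels = int(field_mask.sum())

# One pixel covers xres * yres square metres; 1 ha = 10,000 m^2
area_ha = num_pixels * xres * yres / 10000.0
print(area_ha)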

Related

How to open a txt file of IR temperatures as an image in matlab or other analysis software

I am using a therm-app camera to take infra-red photos of bats. I would like to draw around parts of the bat and find the hottest, coldest and average temperature and do further analysis.
The software that comes with the camera doesn't let me draw polygons, so I would like to load the image in another program such as MATLAB or maybe ImageJ (also happy to use Python or something else if that would work).
The camera creates 4 files total:
I have a .jpg file, but when I open it in MATLAB it just appears as a normal image, and I'm not sure how to accurately get the temperatures from it. I used the following to open it:
im=imread('C:\18. Bats\20190321_064039.jpg');
imshow(im);
I also have three other files: two are metadata (e.g. date-time, emissivity settings, etc.) and one is a text file.
The text file appears to show the temperature of every pixel in the image.
e.g. for a photo that had a minimum temperature of 15° and a maximum of 20°, it would be a text file with a minimum value of 1500 and a maximum value of 2000:
1516 1530 1530 1540 1600 1600 1600 1600 1536 1536 ........
This file looks very useful. I'm just wondering if there is some way I can open it as an image, probably in a program like MATLAB, which I think has image analysis tools, so that I could draw around certain parts of the image (e.g. the wing of the bat) and find the average, max, min, etc.
Has anyone had experience with this type of thing? Can I just assign colours to numbers somehow? Or maybe other people have done it already and there is a much easier way. I will keep searching on the internet and try to find out.
Alternatively, maybe I need to open the .jpg image, draw around different parts, write a program to find out which pixels I drew around, find these in the txt file and then do the averaging etc.? Or somehow link the values in the text file to the .jpg file.
Sorry if this is the wrong place to ask; I can't find an image processing site on Stack Exchange.
All help is really appreciated, and I will continue searching on the internet in the meantime.
The following worked in the end; it was much, much easier than I thought it would be. I'm now a big fan of MATLAB, having thought this could take days to do.
Just pasting it here in case it is useful to someone else. I'm sure there is a more elegant way to write the code, but this is the first time I've used MATLAB in 20 years :p Use at your own risk; I haven't double-checked that I'm getting the correct results yet (though I will before I use it for anything important).
Edit: since writing this I've found that the output .txt file of temperatures actually contains sensor temperatures, which need to be corrected for emissivity and background temperature to obtain the target temperatures. (One way to do this is to use the software that comes free with the camera to create new output .csv files of temperatures and use those instead.)
Thanks to bla who put me on the right track with dlmread.
M = dlmread('C:\18. Bats\20190321_064039\20190321_064039_temps.txt'); % read in the text file as a matrix (call it M)
% note that the file seems to be a list of temperature values for each pixel
% e.g. 1934 1935 1935 1960 2000 2199...
M = rot90(M, 1); % rotate M anti-clockwise by 1*90 degrees (all the pictures were saved sideways for some reason, so rotate for easier viewing)
a = min(M(:)); % find the minimum temperature in the image
b = max(M(:)); % find the maximum temperature in the image
M = imresize(M, 1.64); % resize the image to fit the screen (imresize returns a new matrix, so assign the result or the resize has no effect)
imshow(M, [a b]); % show the image, scaling the colours so that white is the highest temperature in the image (b) and black is the lowest (a)
h = drawpolygon('FaceAlpha', 0); % let the user draw a polygon around the region of interest (ROI)
% (this pauses the code until the polygon is drawn)
maskOfROI = h.createMask(); % binary mask over the image: pixels inside the polygon (ROI) are 1, pixels outside are 0
selectedValues = M(maskOfROI); % get the image values for all pixels where the mask value is 1 (i.e. all pixels within the polygon)
averageTemperature = mean(selectedValues); % mean of selectedValues (i.e. mean of the temperatures inside the polygon)
maxTemperature = max(selectedValues); % max of selectedValues
minTemperature = min(selectedValues); % min of selectedValues
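Since I mentioned I'd also be happy with Python, here is a rough non-interactive equivalent using NumPy and Matplotlib. It's only a sketch: the polygon vertices are hard-coded placeholders rather than drawn by hand, and it divides by 100 because the raw values are temperatures multiplied by 100.

import numpy as np
from matplotlib.path import Path

# Load the sensor temperatures; raw values are degrees * 100
temps = np.loadtxt(r'C:\18. Bats\20190321_064039\20190321_064039_temps.txt') / 100.0
temps = np.rot90(temps)  # the pictures were saved sideways

# Build a boolean mask from a polygon; these (x, y) vertices are placeholders
poly = Path([(10, 10), (80, 10), (80, 60), (10, 60)])
ys, xs = np.mgrid[0:temps.shape[0], 0:temps.shape[1]]
points = np.column_stack([xs.ravel(), ys.ravel()])
mask = poly.contains_points(points).reshape(temps.shape)

selected = temps[mask]
print(selected.mean(), selected.max(), selected.min())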

How to shrink or manage an image's size in bytes

Python 3.6.6, Pillow 5.2.0
The Google Vision API has a size limit of 10485760 bytes.
When I'm working with a PIL Image and save it to bytes, it is hard to predict what the resulting size will be. Sometimes when I try to resize it to a smaller height and width, the image's size in bytes gets bigger.
I've tried experimenting with modes and formats, to understand their impact on size, but I'm not having much luck getting consistent results.
So I start out with a rawImage that is Bytes obtained from some user uploading an image (meaning I don't know much about what I'm working with yet).
import io
import sys
from PIL import Image

rawImageSize = sys.getsizeof(rawImage)
if rawImageSize >= 10485760:
    imageToShrink = Image.open(io.BytesIO(rawImage))
    ## do something to the image here to shrink it
    # ... mystery code ...
    ## ideally, the minimum amount of shrinkage necessary to get it under 10485760
    rawBuffer = io.BytesIO()
    # possibly convert to RGB first
    shrunkImage.save(rawBuffer, format='JPEG')  # PNG files end up bigger after this resizing (!?)
    rawImage = rawBuffer.getvalue()
    print(sys.getsizeof(rawImage))
To shrink it I've tried getting a shrink ratio and then simply resizing it:
shrinkRatio = 10485760.0 / float(rawImageSize)
imageWidth, imageHeight = imageToShrink.size
shrunkImage = imageToShrink.resize((int(imageWidth * shrinkRatio),
                                    int(imageHeight * shrinkRatio)), Image.LANCZOS)
Of course I could use a sufficiently small, somewhat arbitrary thumbnail size instead. I've thought about iterating thumbnail sizes until a combination takes me below the maximum byte-size threshold. I'm guessing the byte size varies based on the color depth and mode and (?) I got from the end user that uploaded the original image. And that brings me to my questions:
Can I predict the size in bytes a PIL Image will be before I convert it for consumption by Google Vision? What is the best way to manage that size in bytes before I convert it?
First of all, you probably don't need to go right up to the 10 MB limit imposed by the Google Vision API. In most cases a much smaller file will be just fine, and faster.
In addition to that, you may want to keep in mind that the aspect ratio might lead to different results. See this: https://www.mlreader.com/prepare-image-for-google-vision-api
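To make that concrete, here is one possible approach as a sketch (the helper name and the quality steps are my own choices, not a tested recipe): re-encode the image as JPEG at decreasing quality, and only start halving the dimensions if quality reduction alone isn't enough.

import io
from PIL import Image

MAX_BYTES = 10485760  # Google Vision limit from the question

def shrink_to_limit(raw_image, max_bytes=MAX_BYTES):
    # Hypothetical helper: returns JPEG bytes smaller than max_bytes
    if len(raw_image) < max_bytes:
        return raw_image
    im = Image.open(io.BytesIO(raw_image)).convert('RGB')  # JPEG requires RGB
    # First try lowering the JPEG quality
    for quality in (95, 85, 75, 65):
        buf = io.BytesIO()
        im.save(buf, format='JPEG', quality=quality)
        if buf.tell() < max_bytes:
            return buf.getvalue()
    # Still too big: halve the dimensions until it fits
    while im.width > 1 and im.height > 1:
        im = im.resize((im.width // 2, im.height // 2), Image.LANCZOS)
        buf = io.BytesIO()
        im.save(buf, format='JPEG', quality=75)
        if buf.tell() < max_bytes:
            return buf.getvalue()
    return buf.getvalue()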

Find the size of 1 pixel in my CMOS camera

I have a small problem with finding the pixel size of an image. I need to find the size of nano- and micro-particles in my black-and-white image. I used regionprops to get the area and then the diameter, so I now know the values in pixels. How do I convert to the micrometre or nanometre scale? Do I need to take the sensor size (6.5 µm × 6.5 µm) of my camera into account?
I use MATLAB for image processing.
Thank you
There is a function called imfinfo which returns a struct. In this struct you may find three fields (it depends on the coder used for the image format) called XResolution, YResolution and ResolutionUnit. Using these three fields you can easily get the pixel size. For example, if XResolution=10, YResolution=10 and ResolutionUnit='meter', then there are 10 pixels per metre in each direction, so each pixel is 0.1 m × 0.1 m = 100 cm² (a bit unrealistic, I know :)).
I hope this helps and that your image file contains the XResolution and YResolution information in its header.
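For anyone doing the same check in Python instead of MATLAB, Pillow exposes similar resolution metadata. A rough sketch, assuming the file header actually carries it (the filename is hypothetical):

from PIL import Image

im = Image.open('particles.tif')  # hypothetical filename
xres, yres = im.info.get('dpi', (0, 0))  # pixels per inch, if the header has it
if xres and yres:
    # 1 inch = 25.4 mm = 25,400 micrometres, so pixel width = 25400 / DPI
    pixel_width_um = 25400.0 / xres
    pixel_height_um = 25400.0 / yres
    print(pixel_width_um, pixel_height_um)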

Tesseract OCR failed to recognize full height numbers

I have tested with sample text, both alphanumeric and digits-only. I am using digits mode.
How do I recognize digits like in the following image:
I think it is because the digits span the full height of the image.
I have also tried converting it to .jpg using some online tools (not code)
I am using pytesseract 0.1.6, but I think this is Tesseract problem.
Here is my code:
# Python 2 imports (this was written against pytesseract 0.1.6)
import urllib
from StringIO import StringIO
from PIL import Image
from pytesseract import image_to_string

mapping = {}  # result cache (not shown in the original snippet)

def classify(hash):
    socket = urllib.urlopen(hash)
    image = StringIO(socket.read())
    socket.close()
    image = Image.open(image)
    number = image_to_string(image, config='digits')
    mapping[hash] = number
    return number

classify('any url')
I think you've got two problems here.
The first is that the text is rather small. You can scale the image up by making it 2x as tall and 2x as wide (preferably using antialiasing or cubic interpolation to try to make the letters clearer).
Second, there isn't enough white around the edges of the numbers for Tesseract to know that it's actually an edge. So you need to add some blank whitespace around what you've already got.
You can do that manually using Photoshop or GIMP or ImageMagick or whatever, to validate that it'll actually help. If you then need to process a bunch of images, you'll probably want to use PIL and ImageOps to automate it.
How do I resize an image using PIL and maintain its aspect ratio?
If you make the new sizes bigger rather than smaller, PIL will grow the image rather than shrink it. Grow it by 2x or 3x in both width and height rather than by 20%, as that will cause artifacts.
Here's one way to add extra white border:
http://effbot.org/imagingbook/imageops.htm#tag-ImageOps.expand
This question might help you with adding the extra whitespace also:
In Python, Python Image Library 1.1.6, how can I expand the canvas without resizing?
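Here's a minimal sketch of both steps with Pillow (the filename and border width are placeholders, not values tested against your image):

from PIL import Image, ImageOps

im = Image.open('digits.png').convert('L')  # hypothetical filename
# Grow the image 2x in both dimensions with cubic interpolation
im = im.resize((im.width * 2, im.height * 2), Image.BICUBIC)
# Add a white border so Tesseract can find the character edges
im = ImageOps.expand(im, border=10, fill=255)
im.save('digits_padded.png')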
The input image is too small for recognition. Here is my solution:
Upsample the image
Add constant borders
Apply adaptive-threshold
Set configuration to digits
Upsampling the image is required for accurate recognition. Adding constant borders will center the digits. Applying an adaptive threshold will make the features (digit strokes) more prominent. The result will be:
When you read the text:
049
Code:
import cv2
import pytesseract

# Load the image and convert it to grayscale
img = cv2.imread("0cLW9.png")
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) Upsample the image 2x
(h, w) = gry.shape[:2]
gry = cv2.resize(gry, (w * 2, h * 2))

# 2) Add constant white borders to center the digits
gry = cv2.copyMakeBorder(gry, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value=255)

# 3) Apply an adaptive threshold to emphasize the digit strokes
thr = cv2.adaptiveThreshold(gry, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 81, 12)

# 4) Run Tesseract in digits mode
txt = pytesseract.image_to_string(thr, config="digits")
print(txt)
cv2.imshow("thr", thr)
cv2.waitKey(0)
You can achieve the same result using other pre-processing methods.

Dicom: Matlab versus ImageJ grey level

I am processing a group of DICOM images using both ImageJ and Matlab.
In order to do the processing, I need to find spots that have grey levels between 110 and 120 in an 8-bit-depth version of the image.
The thing is: the images that MATLAB and ImageJ show me are different, even though they use the same source file.
I assume that one of them performs some sort of conversion of the grey levels when reading or before displaying. But which one?
And in that case, how can I calibrate them so that they display the same image?
The following image shows a comparison of the image read.
In the case of ImageJ, I just opened the application and then opened the DICOM image.
In the second case, I used the following MATLAB script:
[image] = dicomread('I1400001');
figure (1)
imshow(image,[]);
title('Original DICOM image');
So which one is changing the original image, and if that's the case, how can I adjust things so that both versions look the same?
It appears that by default ImageJ uses the Window Center and Window Width tags in the DICOM header to perform window and level contrast adjustment on the raw pixel data before displaying it, whereas the MATLAB code is using the full range of data for the display. Taken from the ImageJ User's Guide:
16 Display Range of DICOM Images
With DICOM images, ImageJ sets the initial display range based on the Window Center (0028, 1050) and Window Width (0028, 1051) tags. Click Reset on the W&L or B&C window and the display range will be set to the minimum and maximum pixel values.
So, setting ImageJ to use the full range of pixel values should give you an image to match the one displayed in MATLAB. Alternatively, you could use dicominfo in MATLAB to get those two tag values from the header, then apply window/leveling to the data before displaying it. Your code will probably look something like this (using the formula from the first link above):
img = dicomread('I1400001');
imgInfo = dicominfo('I1400001');
c = double(imgInfo.WindowCenter);
w = double(imgInfo.WindowWidth);
imgScaled = 255.*((double(img)-(c-0.5))/(w-1)+0.5); % Rescale the data
imgScaled = uint8(min(max(imgScaled, 0), 255)); % Clip the edges
Note that 1) double is used to convert to double precision to avoid integer arithmetic, 2) the data is assumed to be unsigned 8-bit integers (which is what the result is converted back to), and 3) I didn't use the variable name image because there is already a function with that name. ;)
A normalized CT image (e.g. after the modality LUT transformation) has intensity values ranging from -1024 to over +2000 Hounsfield units (HU), so an image processing filter should work within this data range. On the other hand, an RGB display driver can only display 256 shades of gray. To overcome this limitation, most typical medical viewers apply window leveling to create a view of the image in which the anatomy of interest has the proper contrast for display (mapping the image data of interest to 256 or fewer shades of gray). One way to define the window level settings is to use the Window Center (0028,1050) and Window Width (0028,1051) tags. Also, a single CT image can have multiple window level values, each pair being a different view of the anatomy of interest. So using the windowed view data for image processing, instead of the actual image data, may not produce consistent results.
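For completeness, here is the same window/level mapping from the MATLAB snippet above as a Python sketch with pydicom, assuming the header carries scalar WindowCenter and WindowWidth values (some files store lists there):

import numpy as np
import pydicom

ds = pydicom.dcmread('I1400001')
img = ds.pixel_array.astype(np.float64)
c = float(ds.WindowCenter)  # assumes a scalar tag
w = float(ds.WindowWidth)

# Standard DICOM window/level mapping to 8-bit gray
scaled = 255.0 * ((img - (c - 0.5)) / (w - 1) + 0.5)
scaled = np.clip(scaled, 0, 255).astype(np.uint8)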