libvips - generate tiles starting from specific zoom level and specific coordinates - openmaptiles

I am using the Windows CLI version of libvips. I want to generate map tiles for Leaflet from an 8000 px x 6000 px image. The image is an old map of my town that I want to display on my website, but I am stuck on generating the tiles.
How do I tell libvips to generate tiles from zoom level 10 to 15? With the command
dzsave input.jpg outputdir --layout google
I get tiles for zoom levels 0 to 5.
And a second question:
How do I set the bounds of my map? The tiles generated by the command above cover the whole world.

The libvips CLI lets you run any save operation (like jpegsave, tiffsave or dzsave) as part of the write step of a command. You select the saver with the filename suffix and you can pass any parameters in square brackets at the end of the filename (be careful not to use any spaces).
So these two commands do the same thing:
vips jpegsave x.jpg y.jpg --Q 90
vips copy x.jpg y.jpg[Q=90]
The copy command will run jpegsave for you (it sees the .jpg suffix) and set Q to 90.
You can select dzsave with the .dz suffix. If your image is 50,000 x 50,000 pixels, you can save just the centre 50% with:
vips crop my-huge-map.jpg x.dz[layout=google] 12500 12500 25000 25000
I'm not sure what you mean by "layers 10 to 15". Do you only want the low-res layers? Just do a shrink by some factor before running dzsave.
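For example (a sketch: shrinking by 16 drops the four finest pyramid levels, so pick whatever factor matches the resolution you actually need):
vips shrink my-huge-map.jpg x.dz[layout=google] 16 16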

Related

How to open a txt file of IR temperatures as an image in matlab or other analysis software

I am using a therm-app camera to take infra-red photos of bats. I would like to draw around parts of the bat and find the hottest, coldest and average temperature and do further analysis.
The software that comes with the camera doesn't let me draw polygons, so I would like to load the image into another program such as MATLAB or maybe ImageJ (I'm also happy to use Python or something else if that would work).
The camera creates four files in total:
First there is a .jpg file; however, when I open it in MATLAB it just appears as a normal image, and I am not sure how to accurately get the temperatures from it. I used the following to open it:
im=imread('C:\18. Bats\20190321_064039.jpg');
imshow(im);
I also have three other files: two are metadata (e.g. date-time, emissivity settings, etc.) and one is a text file.
The text file appears to show the temperature of every pixel in the image.
e.g. for a photo with a minimum temperature of 15° and a maximum of 20°, the text file would have a minimum value of 1500 and a maximum value of 2000:
1516 1530 1530 1540 1600 1600 1600 1600 1536 1536 ........
This file looks very useful. I am wondering if there is some way I can open it as an image, probably in a program like MATLAB, which I think has image analysis tools, so that I could draw around certain parts of the image (e.g. the wing of the bat) and find the average, max, min, etc.
Has anyone had experience with this type of thing? Can I just assign colours to numbers somehow? Or maybe other people have done it already and there is a much easier way. I will keep searching on the internet and try to find out.
Alternatively, maybe I need to open the .jpg image, draw around different parts, write a program to find out which pixels I drew around, find those pixels in the txt file and then do the averaging etc.? Or somehow link the values in the text file to the .jpg file.
Sorry if this is the wrong place to ask; I can't find an image-processing site on Stack Exchange.
All help is really appreciated; I will continue searching the internet in the meantime.
The following worked in the end; it was much easier than I thought it would be. I am now a big fan of MATLAB, as I thought this could take days to do.
Just pasting it here in case it is useful to someone else. I'm sure there is a more elegant way to write the code; this is the first time I've used MATLAB in 20 years :p Use at your own risk: I haven't yet double-checked that I'm getting the correct results (though I will before I use it for anything important).
Edit: since writing this I've found that the output .txt file actually contains sensor temperatures, which need to be corrected for emissivity and background temperature to obtain the target temperatures. (One way to do this is to use the free software that comes with the camera to create new .csv output files of temperatures and use those instead; a sketch of one common correction also follows the code below.)
Thanks to bla who put me on the right track with dlmread.
M = dlmread('C:\18. Bats\20190321_064039\20190321_064039_temps.txt'); % read the text file into a matrix M (the semicolon stops MATLAB echoing the whole matrix)
% note that the file seems to be a list of temperature values for each pixel
% e.g. 1934 1935 1935 1960 2000 2199...
M = rot90(M, 1); % rotate M anti-clockwise by 90 degrees (all the pictures were saved sideways for some reason, so rotate for easier viewing)
a = min(M(:)); % find the minimum temperature in the image
b = max(M(:)); % find the maximum temperature in the image
% note: imresize(M, 1.64) on its own does nothing because its result is
% discarded; it is omitted here, and the polygon mask drawn below must in
% any case match the size of the displayed matrix M
imshow(M, [a b]); % show the image, scaling the colours so that white is the highest temperature in the image (b) and black is the lowest (a)
h = drawpolygon('FaceAlpha', 0); % let the user draw a polygon around the region of interest (ROI); execution pauses until the polygon is finished
maskOfROI = h.createMask(); % binary mask over the image: pixels inside the polygon (ROI) are 1, pixels outside are 0
selectedValues = M(maskOfROI); % the image values for all pixels where the mask is 1, i.e. all pixels within the polygon
averageTemperature = mean(selectedValues); % mean of the temperatures inside the polygon
maxTemperature = max(selectedValues); % max of the temperatures inside the polygon
minTemperature = min(selectedValues); % min of the temperatures inside the polygon
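Regarding the emissivity caveat in the edit above: one common first-order correction goes through the Stefan-Boltzmann law. This is only a sketch and an assumption on my part, so verify it against the vendor software's .csv output before trusting it (the function name and the Celsius convention are mine):

import numpy as np

def corrected_temp(t_sensor_c, t_background_c, emissivity):
    # convert to Kelvin, remove the reflected background component in
    # radiance (T^4) space, then convert back to Celsius
    t_sensor_k = np.asarray(t_sensor_c) + 273.15
    t_background_k = t_background_c + 273.15
    t4 = (t_sensor_k**4 - (1 - emissivity) * t_background_k**4) / emissivity
    return t4**0.25 - 273.15

# e.g. corrected_temp(18.50, 20.0, 0.95) for a raw sensor reading of 1850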

Detecting a line in a JPEG image

I'm new to Swift and image processing, and I haven't found an existing program that does what I want. I have thousands of pages of questionnaires, but the OMR (Optical Mark Recognition) freeware I use fails to detect the boxes. That is because the questionnaires were printed either by me or by the participants in the study, yielding images that differ in scale and rotation. Deskewing the images is not sufficient. Luckily, there is a horizontal line somewhere near the top of each page. So the algorithm would look something like this:
Select all the JPEG to transform (done)
Enter the coordinates of the target line (done)
For each JPEG image:
3a. Load the image (NSData? not UIImage, since it is an app)
3b. Uncompress the image
3c. Detect the line on top of the page
3d. Calculate and apply the angle and the translation (I found free Java source code that does this)
3e. Save the image under a modified name
I need your help with steps 3a and 3b. For step 3c, should I use a Canny edge detector followed by a Hough transform?
Any thoughts would be appreciated.
---- EDIT ----
Here is an image describing the problem. In the upper part (Patient #1), the top horizontal line runs from (294, 242) to (1437, 241). In the lower part (Patient #2), the top horizontal line runs from (299, 230) to (1439, 230). This seems like a small difference, but the OMR looks at the ROIs (i.e. boxes) at fixed coordinates. In other scanned images the difference may be even greater, and the top line may not be horizontal (e.g. (X1, Y1) = (320, 235) and (X2, Y2) = (1480, 220)).
My idea is to get a template for the check boxes (the OMR does it) and the coordinates of the top line once and for all (I can get them with Paint or whatever), then align all the images to this template (using their top line) before running the OMR. A scaling, a rotation and a translation may be needed. In other words, all the images should be perfectly stackable on the template image for the OMR to perform correctly.
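For step 3d, here is a sketch of deriving that transform in Python with OpenCV (my project is in Swift, but the same call exists in OpenCV's C++ API; the filename is hypothetical and the endpoint coordinates are the ones quoted above). estimateAffinePartial2D restricts the fit to exactly scale + rotation + translation:

import cv2
import numpy as np

# line endpoints on the template and on a freshly scanned page
template_pts = np.float32([[294, 242], [1437, 241]]).reshape(-1, 1, 2)
detected_pts = np.float32([[299, 230], [1439, 230]]).reshape(-1, 1, 2)

# 4-DOF similarity transform mapping the detected line onto the template's line
matrix, _ = cv2.estimateAffinePartial2D(detected_pts, template_pts)

page = cv2.imread("patient2.jpg")  # hypothetical filename
aligned = cv2.warpAffine(page, matrix, (page.shape[1], page.shape[0]))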
--- EDIT Dec 26th ---
I've translated OpenCV's probabilistic Hough transform into Swift (from the open C++ code on GitHub). Unfortunately, the segments detected are too short (i.e. the entire line segment is not captured). I'm wondering: does it make sense to use a Canny edge detector before the Hough transform to detect a single segment of a black line on a white page?
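For reference, here is a sketch of the standard Canny + probabilistic Hough pipeline in Python/OpenCV that I am trying to reproduce (the parameter values are guesses to tune for the scans; minLineLength and maxLineGap appear to be the knobs that suppress short fragments):

import cv2
import numpy as np

img = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=1000,  # reject fragments shorter than ~1000 px
                        maxLineGap=20)       # bridge small breaks in the printed line

# keep near-horizontal segments and take the topmost one
flat = [l[0] for l in lines] if lines is not None else []
horizontal = [l for l in flat if abs(l[3] - l[1]) < 10]
if horizontal:
    x1, y1, x2, y2 = min(horizontal, key=lambda l: min(l[1], l[3]))
    angle = np.degrees(np.arctan2(int(y2) - int(y1), int(x2) - int(x1)))  # rotation to undo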

Parsing for RGB regions over thousands of high resolution image files

I have lots of high-resolution image files that contain regions of colour, basically blobs with different RGB values. For every image I need to produce a text file containing the coordinates of one pixel in every blob. Because I have so many files, the script needs to be fast. I have already written some Scala code for the task, except it only saves a location for one blob per RGB value, meaning that if I have two blobs of the same colour that are not connected, it will only save the location of the first one found.
One solution is to copy each image's pixels and colours into a map and, whenever I find a blob, flood-delete it (flood fill, except deleting instead of filling) and keep parsing the new map. However, I think this would make the run time horribly slow, because I would have to traverse the entire image to build the map before even starting the parse. Thoughts? Am I going about this all wrong?
Thanks.
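For reference, a single pass of connected-component labelling per image avoids the flood-delete bookkeeping entirely and stays linear in the number of pixels. A sketch in Python rather than Scala (the same idea ports directly; the colour-packing scheme and file handling are my own assumptions):

import numpy as np
from PIL import Image
from scipy import ndimage

def blob_coordinates(path):
    img = np.asarray(Image.open(path).convert("RGB"))
    # pack R, G and B into one integer so identical colours compare in one operation
    packed = (img[..., 0].astype(np.int32) << 16) \
           | (img[..., 1].astype(np.int32) << 8) \
           | img[..., 2].astype(np.int32)
    coords = []
    for colour in np.unique(packed):  # skip the background colour here if needed
        labels, n = ndimage.label(packed == colour)  # separates disconnected blobs
        for blob_id, box in enumerate(ndimage.find_objects(labels), start=1):
            ys, xs = np.nonzero(labels[box] == blob_id)
            # one representative pixel per blob, in full-image coordinates
            coords.append((box[1].start + int(xs[0]), box[0].start + int(ys[0])))
    return coords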

Line detection using PIL

Given an image consisting of black lines (a few pixels wide) on a white background, what is a good way to find coordinates along the lines, say for every 10th pixel or so? I am considering using PIL for the task, but other Python- or Java-based libraries would also be OK.
Ideally the coordinates would point to the middle of the line, but as the lines are narrow, it's enough that they point somewhere inside the line.
A very short line or a point should be identified with at least one coordinate.
Usually, the Hough transform is used to find lines. It gives you the parameters describing each line (which can easily be converted between different representations), and you can sample that parameterisation to get your sample points. See http://en.wikipedia.org/wiki/Hough_transform and https://stackoverflow.com/questions/tagged/hough-transform+python
The only Python implementation I found is this one, http://coding-experiments.blogspot.co.at/2011/05/ellipse-detection-in-image-by-using.html, which actually searches for ellipses.
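PIL itself has no built-in Hough transform, so here is a sketch of the sampling step using OpenCV instead: detect each line as (rho, theta), then walk along it every 10 pixels (the filename and thresholds are assumptions):

import cv2
import numpy as np

img = cv2.imread("lines.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)

points = []
for rho, theta in (lines[:, 0] if lines is not None else []):
    # (x0, y0) is the point on the line closest to the origin;
    # (dx, dy) is a unit vector along the line
    x0, y0 = rho * np.cos(theta), rho * np.sin(theta)
    dx, dy = -np.sin(theta), np.cos(theta)
    for t in range(-2000, 2000, 10):  # one sample every 10 pixels
        x, y = int(round(x0 + t * dx)), int(round(y0 + t * dy))
        if 0 <= x < img.shape[1] and 0 <= y < img.shape[0]:
            points.append((x, y))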

How do I turn an image into a vector on the iPhone like Adobe Illustrator's Live Trace?

I want to be able to create vector files like Illustrator does on the iPhone. Does anyone know of an algorithm?
For each pixel, try to grow a region by testing its neighbours for colour similarity against a threshold. Keep growing until no more expansion is possible because of the threshold, then make a path using the outermost border pixels. Now repeat for the other pixels in the original raster image that were not already included in your previous expansions.
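To make that concrete, a minimal sketch of the growing step in Python (nothing iPhone-specific; the threshold, similarity metric and filename are assumptions). Tracing the border of the returned region gives the vector path:

from collections import deque
import numpy as np
from PIL import Image

def grow_region(img, seed, threshold=30):
    # breadth-first flood fill from seed, accepting 4-connected neighbours
    # whose colour is within `threshold` (sum of per-channel absolute
    # differences) of the seed colour
    h, w, _ = img.shape
    visited = np.zeros((h, w), dtype=bool)
    visited[seed] = True
    queue = deque([seed])
    base = img[seed].astype(int)
    region = []
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                    and np.abs(img[ny, nx].astype(int) - base).sum() <= threshold):
                visited[ny, nx] = True
                queue.append((ny, nx))
    return region

img = np.asarray(Image.open("photo.png").convert("RGB"))  # hypothetical filename
region = grow_region(img, (0, 0))  # grow from the top-left pixel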