QGIS 3.16 finding the width and height of bounding boxes - qgis

Using QGIS 3.16, I have drawn polygons around shapes and found their areas in square meters (m²). Next I created a shapefile of bounding boxes around these polygons. When I click on the newly created bounding-box shapefile and open the attribute table, I expect to see height, width, area, and perimeter in meters or square meters. The units displayed in the image below are unknown. How do I convert all these measurements into meters or square meters?

To answer your question, it's necessary to know the EPSG code of your data's coordinate reference system (CRS).
Your data may be in WGS84 or another geographic coordinate system, in which case measurements are in degrees.
You can recalculate your measures in the Field Calculator using the transform() function, which reprojects your geometry into a projected coordinate system whose units are meters. For example:
transform($geometry,'EPSG:4326','EPSG:32634')
where the first EPSG code is your data's CRS and the second is the target CRS.
Once you understand this step, you can calculate:
BBox area:
area( bounds( transform($geometry,'EPSG:4326','EPSG:32634')))
the same formula without transform:
area( bounds( $geometry))
BBox height:
bounds_height( transform($geometry,'EPSG:4326','EPSG:32634'))
the same formula without transform:
bounds_height( $geometry)
BBox width:
bounds_width( transform($geometry,'EPSG:4326','EPSG:32634'))
the same formula without transform:
bounds_width( $geometry)
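As a rough sanity check outside QGIS, you can approximate a bounding box's width and height in meters directly from its lat/lon corners with the haversine formula (a sketch; the corner coordinates below are hypothetical, and a true reprojection as above is more accurate):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical bounding box corners in EPSG:4326 (degrees)
min_lon, min_lat, max_lon, max_lat = 21.0, 44.0, 21.1, 44.1

width_m = haversine_m(min_lat, min_lon, min_lat, max_lon)   # along the bottom edge
height_m = haversine_m(min_lat, min_lon, max_lat, min_lon)  # along the left edge
```

Note that the top edge of the box is shorter than the bottom edge at these latitudes, which is exactly why degree-based width/height values are not directly meaningful as distances.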

Related

How can I measure the distance from vector coordinate to a raster pixel?

I have a file of geocoded point data (not pictured) that overlays a 30m cell size raster with the pixels of interest shown in green (image below).
For each point I want to calculate the distance to nearest green pixel. I tried raster to point (an attempt to convert each pixel to a point), but this process takes a long time to complete (days). Are there other viable options for me?
Is there something I can first do to the raster to preprocess it in order to make it a smaller file (dropping pixels if they are not pixels of interest) before attempting the raster to point conversion?
One way to do this is to reduce the raster to only the pixels of interest. For now, I'm using the workflow below. Although it takes some time, it works.
Reproject raster and/or point data, if necessary
Reclassify the raster (No Data applied to the non-interest pixels)
Raster to point
Near tool for distance to nearest point
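The reclassify/raster-to-point/near workflow above can be sketched in Python without any intermediate files (a minimal sketch with a made-up grid; coordinates are in raster cell units, so the 30 m cell size converts them to real distances):

```python
import math

# Hypothetical binary raster: 1 = pixel of interest ("green"), 0 = everything else
raster = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
cell_size = 30.0  # meters per pixel

# Steps 2-3 of the workflow: keep only the pixels of interest as points
interest = [(r, c) for r, row in enumerate(raster)
            for c, v in enumerate(row) if v == 1]

def nearest_interest_m(row, col):
    """Distance (meters) from a point's cell to the nearest pixel of interest."""
    return min(math.hypot(row - r, col - c) for r, c in interest) * cell_size

# Hypothetical geocoded point falling in cell (0, 3)
print(nearest_interest_m(0, 3))  # → 60.0
```

For real data sizes, a spatial index such as scipy.spatial.cKDTree over the interest pixels makes the nearest-neighbor query scale far better than this brute-force minimum.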

Proper storage of a uniform, XYZ plane, in Postgres PostGIS

Problem Statement:
Consider a gray scale image taken of a plane. Each pixel is an intensity value (Z).
I have the position in physical space of the top left most pixel. I also have the offset of each X and Y pixel from left to right (X), and up to down (Y). Consider the image to be perfectly non-distorted, so the offset is uniform for every row.
I would like to store this image in a PostGIS database in a way that allows the user to query a polygon from an image.
Current Implementation:
Currently I believe I am doing something stupid: using Python, I create a MultiPoint geometry and store each point as (X Y Z). Since the image is uniform, I am repeating a lot of data (the X and Y of every point). This works when querying the database for a polygon, but it stores an excessive amount of extra data.
Question:
What is the best way to store such data and allow the user to query a polygon from the image?
Is PostGIS Geometry the correct datatype?
Is the multipoint approach reasonable?
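For reference, the multipoint approach described above might look like this (a sketch; the origin, offsets, and intensities are made up, and only the WKT construction is shown, not the database insert itself):

```python
# Build a MULTIPOINT Z WKT string for one image, as in the approach above.
# Origin, offsets, and pixel intensities below are hypothetical.
origin_x, origin_y = 100.0, 200.0   # physical position of the top-left pixel
dx, dy = 0.5, -0.5                  # uniform per-pixel offsets (Y grows downward)
pixels = [[10, 20], [30, 40]]       # 2x2 gray-scale intensities (Z)

points = []
for row, line in enumerate(pixels):
    for col, z in enumerate(line):
        x = origin_x + col * dx
        y = origin_y + row * dy
        points.append(f"{x} {y} {z}")

wkt = "MULTIPOINT Z ((" + "), (".join(points) + "))"
print(wkt)
```

In SQL this string can then be passed to ST_GeomFromText(). Note that the PostGIS raster type exists precisely for uniform gridded data and avoids repeating X/Y for every pixel, which may be worth evaluating against the multipoint approach.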

calculating the scores using matlab

I am working on calculating the scores for an air rifle paper target. I'm able to calculate the distance from the center of the image to the center of the bullet hole in pixels.
Here's my code:
I = imread('Sample.jpg');
RGB = imresize(I, 0.9);
imshow(RGB);
bw = im2bw(RGB, graythresh(getimage));
figure, imshow(bw);
bw2 = imfill(bw, 'holes');
s = regionprops(bw2, 'centroid');
centroids = cat(1, s.Centroid);
% size() returns [rows cols] but centroids are [x y], so flip before subtracting
dist_from_center = norm(fliplr(size(bw)) / 2 - centroids, 2);
hold(imgca, 'on');
plot(imgca, centroids(:,1), centroids(:,2), 'r*');
hold(imgca, 'off');
numberOfPixels = numel(I);
Number_Of_Pixel = numel(RGB);
This is the raw image with one bullet hole.
This is the result I am having.
This is the paper target I'm using to get the score.
Can anyone suggest how to calculate the score from this?
See my walkthrough of your problem in Python.
It's a very fun problem you have.
I assumed you already have a way of getting the binary hole mask (since you gave us the image).
Some scores are wrong because of target-centering issues in the given image.
Given hole-mask, find 2D shot center
I assume that the actual images would include several holes instead of one.
Shot locations are extracted by computing the local maxima of the distance transform of the binary hole image. Since the distance transform outputs, for each pixel, the distance to the nearest border, the centermost pixels show up as local maxima.
The local-maximum technique I used computes a maximum filter of the image with a given size (10 for me) and keeps the pixels where filtered == original.
You have to remove the 0-valued "maxima", but apart from that it's a nice trick to remember, since it works in N dimensions by using an N-dimensional maximum filter.
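The filtered == original trick can be sketched as follows (a minimal sketch with a tiny made-up distance map; on real images scipy.ndimage.maximum_filter would replace the hand-rolled filter):

```python
import numpy as np

def maximum_filter(img, size=3):
    """Brute-force maximum filter: each pixel becomes the max of its size x size window."""
    pad = size // 2
    padded = np.pad(img, pad, mode='constant', constant_values=0)
    out = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            out = np.maximum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

# Hypothetical distance-transform output with one hole centered at (2, 2)
dist = np.array([
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 2, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
])

filt = maximum_filter(dist, size=3)
# Keep pixels where filtered == original, dropping the 0-valued "maxima"
centers = np.argwhere((filt == dist) & (dist > 0))
print(centers)  # → [[2 2]]
```

The same idea extends to N dimensions by padding and sliding the window over every axis, which is why the trick is worth remembering.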
Given a 2D position of shot center, compute the score
You need to transform your coordinate system from Cartesian (x, y) to polar (distance, angle).
Image from MathWorks to illustrate the math.
To use the center of image as reference point, offset each position by the image center vector.
Discarding the angle, your score is directly linked to the distance from center.
Your score is an integer that you need to compute based on distance:
As I understand it, you score 10 at distance 0 and the score decreases down to 0 points.
This means the scoring function is
border_space = 10  # px, distance between each ring; up to you to find it :)
score = 10 - (distance // border_space)  # integer division
with the added constraint that the score cannot be negative:
score = max(10 - (distance // border_space), 0)
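Wrapped up as a function (a sketch; the 10 px ring spacing is the hypothetical placeholder value mentioned above):

```python
def score(distance_px, border_space=10):
    """Score a shot: 10 at the center, one point lost per ring, floor at 0."""
    return max(10 - int(distance_px // border_space), 0)

print(score(0))    # → 10
print(score(34))   # → 7  (fourth ring out)
print(score(250))  # → 0  (off the target)
```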
Really do look through my IPython notebook; it's very visual.
Edit: regarding the distance conversion.
Your target-practice image is in pixels, but these pixel distances can be mapped to millimeters: you probably know your target's size in centimeters (it's regulation size, right?), so you can set up a conversion rate:
target_size_mm = 1000 # 1 meter = 1000 millimeters
target_size_px = 600 # to be measured for each image
px_to_mm_ratio = target_size_mm / target_size_px
object_size_px = 102 # any value you want to measure really
object_size_mm = object_size_px * px_to_mm_ratio
Every time you're thinking about a facet of your problem, ask: "Is what I'm looking at in pixels or in millimeters?" Try to conceptually separate the code that uses pixels from the code that uses millimeters.
It is coding best practice to avoid these assumptions where you can, so that if you get a bunch of images from different cameras with different properties, you can convert everything to a common format (millimeters) and treat the data uniformly afterwards.

Matlab image processing - problems with recognizing circles [duplicate]

I have an image that includes circular, ellipsoidal, square, and similar objects. I want to keep only the circular objects. I applied a filter using the Solidity and Eccentricity levels of the objects, but I could not remove the square objects. Square objects without sharp corners have nearly the same Solidity and Eccentricity as circular objects.
My question is: is there any other parameter or way to detect square objects?
You can compare the area of the mask to its perimeter using the following formula:
ratio = 4 * pi * Area / (Perimeter^2)
For circles this ratio should be very close to one; for other shapes it should be significantly lower.
See this tutorial for an example.
The rationale behind this formula: circles are optimal in their perimeter-to-area ratio, giving the maximum area for a given perimeter. Given the perimeter, you can estimate the radius of the equivalent circle from Perimeter = 2*pi*R; using this estimated R you can compute the "equivalent circle area" as eqArea = pi*R^2. Now you only need to check the ratio between the actual area of the shape and this equivalent area.
Note: since Area and Perimeter of objects in mask are estimated based on the pixel-level discretization these estimates may be quite crude especially for small shapes. Consider working with higher resolution masks if you notice quantization/discretization errors.
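The ratio can be checked analytically (a quick sketch: for a circle it is exactly 1, while for a square it is 4*pi*s^2 / (4s)^2 = pi/4 ≈ 0.785, well below 1):

```python
import math

def circularity(area, perimeter):
    """4*pi*Area / Perimeter^2: 1.0 for a perfect circle, lower for other shapes."""
    return 4 * math.pi * area / perimeter ** 2

r, s = 3.0, 3.0  # arbitrary circle radius and square side
circle = circularity(math.pi * r**2, 2 * math.pi * r)  # exact circle values
square = circularity(s**2, 4 * s)                      # exact square values
```

A threshold somewhere between pi/4 and 1 would therefore separate squares from circles, subject to the discretization caveat above.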
There is also a Hough transform (imfindcircles) for finding circles within an image, which is what you needed in the first place.

Dividing a geographic region

I have a geographic region defined by its bottom-left and top-right coordinates. How can I divide this region into areas of 20x20 km? In practice the shape of the earth is not flat but round, so the bounding box is just an approximation; it is not even truly rectangular. Say the bottom-left coordinate is (x1, y1) and the top-right coordinate is (x2, y2): the length from x1 to x2 at y1 is different from the length from x1 to x2 at y2. How can I overcome this issue?
Actually, I have to create a spatial meshgrid for this region using MATLAB's meshgrid function, so that the grid cells have an area of 20x20 km.
meshgrid(x1:deltaX:x2,y1:deltaY:y2)
As you can see, I can have only one deltaX and one deltaY. I want to choose deltaX and deltaY such that the increments create a grid of 20x20 km cells. However, deltaX and deltaY are supposed to vary with location. Any suggestions?
For example, say deltaX = del1. The distance from (x1, y1) to (x1+del1, y1) is 20 km, but the distance from (x1, y2) to (x1+del1, y2) is less than 20 km. The meshgrid call above does create a mesh, but the distances are not consistent. Any ideas how to overcome this issue?
Bear in mind that 20 km on the surface of the earth is a really short distance, about 0.003 radians, so the area you're looking at could be approximated as flat for anything non-scientific. Assuming it is scientific...
To get something other than uniform steps from meshgrid, you should create a function that takes your desired (x, y) as input and maps it relative to (x_0, y_0) and (x_max, y_max) in your units of choice. Here's a small function demonstrating the idea of using a function for meshgrid steps:
step = @(x) log10(x);  % anonymous function (inline() is deprecated)
[x, y] = meshgrid(step(1:10), step(1:10));
image(255 * x .* y)
colormap(gray(255))
So how do you determine what the function should be? That's hard to answer exactly without a little more information about what your data set looks like, how you're interacting with it, and what your accuracy requirements are. If you have access to the actual location at every point, you should vary one dimension at a time (if your data grid is aligned with your latitude grid, for example) and use a curve fit with model-selection techniques (Akaike/Bayes information criteria) to find the best function for your data.
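One concrete option for the latitude-dependent spacing is to build the grid row by row instead of with a single meshgrid call (a sketch assuming a spherical earth of radius 6371 km, with x as longitude and y as latitude in degrees; the region bounds below are hypothetical):

```python
import math

EARTH_RADIUS_KM = 6371.0
CELL_KM = 20.0

# One degree of latitude spans a roughly constant distance; one degree of
# longitude shrinks with cos(latitude).
delta_y = math.degrees(CELL_KM / EARTH_RADIUS_KM)  # latitude step, same everywhere

def delta_x(lat_deg):
    """Longitude step (degrees) giving ~20 km of east-west distance at this latitude."""
    return delta_y / math.cos(math.radians(lat_deg))

def grid_rows(x1, y1, x2, y2):
    """Yield (lat, [lons]) rows; each row uses its own longitude spacing."""
    lat = y1
    while lat <= y2:
        dx = delta_x(lat)
        lons = []
        lon = x1
        while lon <= x2:
            lons.append(lon)
            lon += dx
        yield lat, lons
        lat += delta_y

# Hypothetical region: bottom-left (10E, 40N) to top-right (12E, 41N)
rows = list(grid_rows(10.0, 40.0, 12.0, 41.0))
```

The result is a ragged grid rather than the rectangular array meshgrid produces, which is the price of keeping every cell close to 20x20 km. For rigorous geodesic distances, something like pyproj's Geod class would replace the spherical approximation here.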