Problem Statement:
Consider a grayscale image taken of a plane. Each pixel is an intensity value (Z).
I have the physical-space position of the top-left pixel. I also have the per-pixel offset from left to right (X) and from top to bottom (Y). Consider the image to be perfectly undistorted, so the offset is uniform for every row.
I would like to store this image in a PostGIS database in a way that allows the user to query a polygon from the image.
Current Implementation:
Currently I believe I am doing something stupid: using Python, I create a MULTIPOINT geometry and store each pixel as a point (X Y Z). Since the image is uniform, I am repeating a lot of data (the X and Y for every point). This works when querying the database for a polygon, but it stores an excessive amount of redundant data.
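For reference, the multipoint approach can be sketched like this in pure Python. The origin, offsets, and intensity values below are made-up assumptions, and the table name in the commented INSERT is hypothetical:

```python
# Sketch: build a 3D MULTIPOINT WKT string from a pixel grid.
# origin_x/origin_y, dx/dy and the intensity rows are illustrative values,
# not taken from the original post.

def multipoint_wkt(origin_x, origin_y, dx, dy, intensities):
    """intensities is a list of rows; each pixel becomes one 'X Y Z' point."""
    pts = []
    for row, values in enumerate(intensities):
        for col, z in enumerate(values):
            x = origin_x + col * dx
            y = origin_y + row * dy  # dy is negative for top-to-bottom rows
            pts.append(f"{x} {y} {z}")
    return "MULTIPOINT Z (" + ", ".join(pts) + ")"

wkt = multipoint_wkt(100.0, 200.0, 0.5, -0.5, [[10, 20], [30, 40]])
# Hypothetical insert, e.g. with psycopg2:
#   INSERT INTO image_points (geom) VALUES (ST_GeomFromText(%s, 4326))
```

This makes the redundancy visible: the X/Y coordinates are fully derivable from the origin and offsets, which is exactly why a raster-style storage avoids repeating them per pixel.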
Question:
What is the best way to store such data and allow the user to query
a polygon from the image?
Is PostGIS Geometry the correct datatype?
Is the multipoint approach reasonable?
Related
I have a file of geocoded point data (not pictured) that overlays a 30m cell size raster with the pixels of interest shown in green (image below).
For each point I want to calculate the distance to nearest green pixel. I tried raster to point (an attempt to convert each pixel to a point), but this process takes a long time to complete (days). Are there other viable options for me?
Is there something I can do to the raster first to make it a smaller file (dropping pixels that are not pixels of interest) before attempting the raster-to-point conversion?
One way to do this is by reducing the raster to only the pixels of interest. For now, I'm using the workflow below. Although it takes some time, it works.
Reproject raster and/or point data, if necessary
Reclassify the raster (No Data applied to the non-interest pixels)
Raster to point
Near tool for distance to nearest point
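The steps above can be sketched on a toy grid; the 30 m cell size matches the question, but the sample arrays and function name are my own assumptions:

```python
import math

# Toy sketch of steps 2-4: reclassify (keep only pixels of interest),
# convert those pixels to points, then take the nearest-neighbour
# distance for a query point.

CELL = 30.0  # raster cell size in metres

# 1/0 grid: 1 marks a pixel of interest (a "green" pixel)
grid = [
    [0, 0, 1],
    [0, 0, 0],
    [1, 0, 0],
]

# "Reclassify + raster to point": drop non-interest pixels,
# take the centre of each remaining cell as the point location
interest_pts = [
    ((c + 0.5) * CELL, (r + 0.5) * CELL)
    for r, row in enumerate(grid)
    for c, v in enumerate(row)
    if v == 1
]

def nearest_distance(px, py):
    """Brute-force nearest distance; a KD-tree would scale far better."""
    return min(math.hypot(px - x, py - y) for x, y in interest_pts)

d = nearest_distance(15.0, 15.0)
```

For real datasets, building a spatial index (e.g. a KD-tree) over the interest points instead of the brute-force `min` is what turns "days" into minutes.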
Using QGIS 3.16, I have drawn polygons around shapes and computed their areas in square meters (m²). Next I created a shapefile of bounding boxes around these polygons. When I click on that bounding-box shapefile and open the attribute table, I expect to see height, width, area, and perimeter in meters or m². The units displayed in the image below are unknown. How do I convert all these measurements into meters or m²?
To answer your question, it is necessary to know the EPSG code of your data's coordinate reference system (CRS).
Your data may be in WGS84 or another geographic coordinate system, in which case the measurement unit is degrees (or grads).
You can recalculate your measurements in the field calculator using the transform() function, which reprojects your data into a projected coordinate system whose unit is meters. For example:
transform($geometry,'EPSG:4326','EPSG:32634')
where the first EPSG code is your data's CRS and the second is the target CRS.
Once you understand this step, you can calculate:
BBox area:
area(bounds(transform($geometry,'EPSG:4326','EPSG:32634')))
The same formula without transform:
area(bounds($geometry))
BBox height:
bounds_height(transform($geometry,'EPSG:4326','EPSG:32634'))
The same formula without transform:
bounds_height($geometry)
BBox width:
bounds_width(transform($geometry,'EPSG:4326','EPSG:32634'))
The same formula without transform:
bounds_width($geometry)
I have written 3D analysis code and I want to split (crop) a 3D mesh into two parts with a 2D plane. The final result I need is to find out which nodes are on the left side and which are on the right side. What you see in the image below are the nodes of the 3D object (bumpy). Do you know what method or library I need to use to solve this problem?
my problem
Here is my data structure from the 2D plane:
Column 1: Face
Column 2: X coordinate
Column 3: Y coordinate
Column 4: Z coordinate
Column 5: Finite Element Value
data structure
The data structure of the 3D mesh contains the same data as the table above. Thanks so much!
We know the plane's XYZ coordinates, so I tried using comparisons to find the nodes whose coordinates are larger or smaller than the plane's coordinates:
Find the 3D model coordinates that are >= the cut-plane x, y, z coordinates:
[r] = find((Name_OT(:,1) >= x) & (Name_OT(:,2) >= y) & (Name_OT(:,3) >= z));
The blue line is the plane, and the coloured points are the result from my code. The ideal result would be the coloured nodes filled in completely, but here the coloured nodes have a big hole (gap).
not good result
You need to first decide whether you want to segment your data by a (linear) plane or not. If you prefer to keep a curved surface, please follow my earlier comment and edit your question.
To get a plane fit to the data for your cut, you can use fit.
Based on the plane fit, you can get the plane's normal vector: read the coefficients of the fit result, as described in the documentation. Using that normal vector, you can rotate all your data so that the plane is normal to the z axis (the rotation is a matrix multiplication). From there, you can use logical indexing to segment your data set.
You can of course also compute the normal component of each data point relative to the cut plane and decide on a side that way. You still need fit; from that point it is basic matrix manipulation. An n×1 vector can multiply a 1×n vector in MATLAB, so projectors can also be constructed from basic matrix manipulation. At a glance, though, this method is computationally inefficient.
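A minimal sketch of the signed-distance idea, written in Python for illustration (in MATLAB the same split is one line of logical indexing). The plane point and normal below are placeholders standing in for the coefficients you would read from the fit result:

```python
# Split nodes by the sign of the signed distance (p - p0) . n to a plane
# through point p0 with normal n. This replaces the component-wise
# comparison against x, y, z, which is what produced the hole/gap.

def split_by_plane(points, plane_point, normal):
    """Return (left, right) node lists by sign of (p - p0) . n."""
    left, right = [], []
    px, py, pz = plane_point
    nx, ny, nz = normal
    for p in points:
        d = (p[0] - px) * nx + (p[1] - py) * ny + (p[2] - pz) * nz
        (left if d < 0 else right).append(p)
    return left, right

# Illustrative nodes and a cut plane x = 1 (normal along +x)
nodes = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.5, 1.0, -1.0)]
left, right = split_by_plane(nodes, plane_point=(1.0, 0.0, 0.0),
                             normal=(1.0, 0.0, 0.0))
```

Unlike the `>=` test on each coordinate separately, the sign of the dot product classifies every node consistently with respect to the actual cutting plane.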
Working on the Pacific Ocean, I am dealing with huge polygons covering the whole area. Some of them are quite simple and are defined by 4 points in my shapefile.
However, when I import them into SQL Server 2008 R2 as new geographies, due to the shape of the Earth I end up with curved lines, whereas I would like the north and south boundaries to stick to specific latitudes: for example, the north boundary should follow the 30N parallel from 120E to 120W.
How can I force my polygons to follow the latitudes? Converting them to geometry could have been an option, but since I will need to do some length and area calculations, I need to keep them as geography.
Do I need to add additional vertices along my boundaries to force the polygon to stay on a specific latitude? What should the interval between vertices be?
Thanks for your help
Sylvain
You have already answered this yourself. Geography types connect vertices with geodesics, so a long edge between two points on the same latitude will bow away from that parallel to match the Earth's curvature. Therefore, if you need to "anchor" the boundary along a specific latitude, you will need to manually insert points. As for the interval, there's no right or wrong; a little experimentation (and considering how strictly you want the edge to hug the line) will give you the result you desire. One vertex per degree should do it, and might even be a little overkill.
That said, I do question why you would want to anchor them to create a projected "straight" line as this will skew the results of length and area calculations, the bigger the polygon, the bigger the skew.
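A small sketch of the densification step, assuming one vertex per degree and an eastward walk from 120E across the antimeridian to 120W; the function name and step size are my own:

```python
# Generate (lon, lat) vertices along a parallel, one per degree, walking
# eastwards and wrapping longitude at 180 so the path crosses the
# antimeridian rather than going the long way round.

def densify_latitude(lat, lon_start, lon_end, step=1.0):
    """Walk eastwards from lon_start, wrapping at 180, until lon_end.
    Exact float comparison is safe here because steps are whole degrees."""
    lons = []
    lon = lon_start
    while True:
        lons.append(lon)
        if lon == lon_end:
            break
        lon += step
        if lon > 180.0:
            lon -= 360.0
    return [(lon, lat) for lon in lons]

# Northern boundary: 30N from 120E eastwards to 120W
boundary = densify_latitude(30.0, 120.0, -120.0)
```

These vertices replace the single long edge; the geography type then only curves between consecutive one-degree vertices, which keeps the boundary visually on the parallel.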
Hi!
I have a point cloud representing walls and the floor of an indoor scene.
I projected the points of the floor onto a plane, so it is now a "2D point cloud".
All z-coordinates are zero.
I have to deal with missing parts. My idea now is to fill the holes.
Is there a way to transform the points into image space to create a binary mask? I would like to use techniques like imfill in MATLAB...
Thanks
Edit:
To make it clearer, I will explain a simple example. I have points in 2D. After computing a triangulation, I can access each triangle. For each triangle I create a binary mask with poly2mask(), and I write each mask into a final image.
Here is an example:
Now I can use morphological operations on the image.
E.g., here is a more complex example, where the triangulation gives me bad results:
To fill the hole on the right side, I could use a morphological operation.
My problem: the points can be negative, and the distance between triangle vertices can be very small. E.g., a triangle with x coordinates (1.12, 1.14, 1.12) will collapse to pixel 1 in image space.
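One way around this, sketched in Python (the poly2mask-style rasterisation itself is omitted): shift the coordinates by the bounding-box minimum to handle negative values, then divide by a chosen resolution before rounding, so that x = 1.12 and x = 1.14 land in different pixels. The 0.01 resolution is an assumption you would tune to your point spacing:

```python
# Map world coordinates to pixel indices with an affine transform:
# translate by the bounding-box minimum (handles negative coordinates),
# then scale by 1/resolution before rounding to integer pixel indices.

def to_pixel(points, resolution=0.01):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)  # origin of the image space
    return [
        (round((x - x0) / resolution), round((y - y0) / resolution))
        for x, y in points
    ]

# Closely spaced triangle with negative y coordinates
tri = [(1.12, -0.50), (1.14, -0.50), (1.12, -0.48)]
pix = to_pixel(tri)
```

With plain rounding all three vertices would map to the same pixel; after scaling by the resolution they occupy distinct pixels, so the per-triangle masks no longer degenerate.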