Cluster analysis of a RasterLayer

Is there a way to analyse clusters of a RasterLayer directly? Converting my raster into a matrix does not work. I have used kmeans so far, after turning my raster into a matrix, but it still does not work. I also used r <- getValues(r) to turn my raster into a matrix of values, but that does not work either. Another problem is that all my values are NA once I turn my raster into a matrix, so I don't know how to handle this.
My raster looks like this:
class : RasterLayer
dimensions : 23320, 37199, 867480680 (nrow, ncol, ncell)
resolution : 0.02, 0.02 (x, y)
extent : 341668.9, 342412.9, 5879602, 5880069 (xmin, xmax, ymin, ymax)
crs : +proj=utm +zone=33 +ellps=WGS84 +units=m +no_defs
source : r_tmp_2022-07-13_141214_9150_15152.grd
names : layer
values : 2.220446e-16, 1 (min, max)
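The usual pipeline here is: extract the cell values, drop the NA cells, cluster the remaining values, then write the labels back at the original cell positions. A minimal sketch of that idea in Python (numpy only, with a toy two-center Lloyd iteration standing in for kmeans(); in R the equivalent steps would roughly be getValues(), !is.na(), kmeans(), and setValues()):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Minimal Lloyd's algorithm on a 1-D array of cell values."""
    # Deterministic init: spread initial centers across the value range.
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Simulated raster values: mostly NA (np.nan), plus two value populations.
cells = np.full(100, np.nan)
cells[:30] = 0.1 + 0.01 * np.arange(30)   # population near 0.1-0.4
cells[50:70] = 0.9                         # population near 0.9

valid = ~np.isnan(cells)                   # drop NA cells before clustering
labels, centers = kmeans_1d(cells[valid], k=2)

# Write labels back into a full-size array; NA cells stay unclassified (-1).
classified = np.full(cells.shape, -1, dtype=int)
classified[valid] = labels
```

The key point for the NA problem is the `valid` mask: cluster only the non-NA values, and keep the mask so the labels can be mapped back to the original cells.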

Related

as.polygons(SpatRaster, values=FALSE) seems to dissolve cells when it should not

Maybe there is something I do not understand. According to the help page, as.polygons() applied to a SpatRaster with the option values = FALSE should not dissolve cells. But:
library(terra)
# terra 1.5.21
r <- rast(ncols=2, nrows=2, vals=1)
as.polygons(r) # correctly gives a dissolved 1x1 polygon:
# class : SpatVector
# geometry : polygons
# dimensions : 1, 1 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
# names : lyr.1
# type : <int>
# values : 1
as.polygons(r, values=FALSE) # improperly (?) gives a dissolved 1x1 polygon:
# class : SpatVector
# geometry : polygons
# dimensions : 1, 0 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
whereas it should give an undissolved polygon, such as the one obtained with dissolve=FALSE (but without the values):
as.polygons(r,dissolve=FALSE)
# class : SpatVector
# geometry : polygons
# dimensions : 4, 1 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
As you noted, the documentation is incorrect. If you do not want the cells to be dissolved, you need to use dissolve=FALSE.
If you do not want to dissolve, and do not want the values, you can do
library(terra)
r <- rast(ncols=2, nrows=2, vals=1)
p <- as.polygons(r, dissolve=FALSE, values=FALSE)
# or
p <- as.polygons(rast(r))
p
# class : SpatVector
# geometry : polygons
# dimensions : 4, 0 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
The latter works the way it does, despite the default dissolve=TRUE, because there is nothing to dissolve on: rast(r) has no values. If you want just the extent you can do
as.polygons(r, extent=TRUE)
# class : SpatVector
# geometry : polygons
# dimensions : 1, 0 (geometries, attributes)
# extent : -180, 180, -90, 90 (xmin, xmax, ymin, ymax)
# coord. ref. : lon/lat WGS 84
That is a (much more) efficient approach that is otherwise equivalent to dissolving (aggregating) all cells.

Extract coordinates of raster cells that overlap with multilinestring

I'm working with R and I have a raster (population) with population density data on each cell and a multilinestring (border_365_366) that represents an international border. I'd like to extract the coordinates of the raster cells that overlap with the international border.
Does anyone know how to extract this? I think one of the major issues here is that I'm working with a multilinestring instead of a data.frame with coordinates.
> class(border_365_366)
[1] "sf" "data.frame"
> class(population)
[1] "RasterLayer"
attr(,"package")
[1] "raster"
> border_365_366
Simple feature collection with 1 feature and 0 fields
geometry type: MULTILINESTRING
dimension: XY
bbox: xmin: 27.32998 ymin: 57.52933 xmax: 28.21014 ymax: 59.46253
geographic CRS: WGS 84
geometry
1 MULTILINESTRING ((27.66656 ...
> population
class : RasterLayer
dimensions : 21600, 34926, 754401600 (nrow, ncol, ncell)
resolution : 0.01030751, 0.01046373 (x, y)
extent : -180, 180, -120.053, 105.9636 (xmin, xmax, ymin, ymax)
crs : +proj=longlat +datum=WGS84 +no_defs
names : pop_new
values : 0, 107475 (min, max)
I was able to solve this issue by converting the multilinestring to a LINESTRING:
border_365_366<- st_cast(border_365_366,'LINESTRING')

Why does my perspective implementation fail in displaying my cube's faces?

I wrote a program that takes as input some points, expressed in 3D coordinates, and draws them on a 2D canvas. I use perspective projection, homogeneous coordinates and similar triangles to do that. However, my program does not work and I don't know why.
I followed two tutorials, and I think I understood the geometrical definitions and properties I read. However, my implementation fails... I will reference both courses as I go, to make your reading more comfortable :).
Overview : geometrical reminders
The perspective projection follows this workflow (cf. these two courses; I give pertinent links (HTML anchors) to the latter further down in this post):
1. Definition of the points to draw, expressed in the world's coordinate system.
2. Definition of the projection matrix, a transformation matrix that converts a point expressed in the world coordinate system into a point expressed in the camera's coordinate system (NB: I believe this matrix can be understood as being the 3D object "camera").
3. Product of these points with this matrix (as defined in the adequate part below in this document): multiplying the world-expressed points converts them to the camera's coordinate system. Note that points and matrix are expressed in 4D (homogeneous coordinates).
4. Use of the similar-triangles concept to project the camera-expressed points onto the canvas (only computation is done at this step, using their 4D coordinates). After this operation the points are expressed in 3D; the 4th coordinate is dropped because it is no longer useful, and the 3rd coordinate is computed but only needed to handle z-fighting (which I don't want to do).
5. Last step: rasterization, which actually draws the pixels on the canvas (further computation AND display happen at this step).
First, the problem
Well, I want to draw a cube but the perspective doesn't work. The projected points seem to be drawn without perspective.
What result I expect
The result I expect is the cube displayed in the "Image" part of the PNG below:
What I'm outputting
The faces of my cube look odd, as if perspective were not applied.
I guess I know why I'm having this problem...
I think my projection matrix (i.e. the camera) doesn't have the right coefficients. I'm using a very simple projection matrix, without the concepts of fov and near and far clipping planes (as you can see below).
Indeed, to get the expected result (as defined above), the camera should, if I'm not mistaken, be placed at the center (on the x and y axes) of the cube expressed in the world coordinate system, and at the center (on the x and y axes) of the canvas, which is (I assume) placed 1 z-unit in front of the camera.
The Scastie (snippet)
NB : since X11 is not activated on Scastie, the window I want to create won't be shown.
https://scastie.scala-lang.org/N95TE2nHTgSlqCxRHwYnxA
Entries
Perhaps the problem lies in the inputs? Here they are.
Cube's points
Ref. : myself
val world_cube_points : Seq[Seq[Double]] = Seq(
Seq(100, 300, -4, 1), // top left
Seq(100, 300, -1, 1), // top left z+1
Seq(100, 0, -4, 1), // bottom left
Seq(100, 0, -1, 1), // bottom left z+1
Seq(400, 300, -4, 1), // top right
Seq(400, 300, -1, 1), // top right z+1
Seq(400, 0, -4, 1), // bottom right
Seq(400, 0, -1, 1) // bottom right z+1
)
Transformation (Projection) matrix
Ref. : https://www.scratchapixel.com/lessons/3d-basic-rendering/perspective-and-orthographic-projection-matrix/building-basic-perspective-projection-matrix , End of the Part. "A Simple Perspective Matrix"
Note that I'm using the simplest perspective projection matrix : I don't use concept of fov, near and far clipping planes.
new Matrix(Seq(
Seq(1, 0, 0, 0),
Seq(0, 1, 0, 0),
Seq(0, 0, -1, 0),
Seq(0, 0, -1, 0)
))
Consequence of this matrix: each point P(x;y;z;w) multiplied by this matrix becomes P'(x;y;-z;-z).
Second, the first operation my program performs: a simple product of a point with a matrix.
Ref. : https://github.com/ssloy/tinyrenderer/wiki/Lesson-4:-Perspective-projection#homogeneous-coordinates
/**
* Matrix in the shape of (use of homogeneous coordinates) :
* c00 c01 c02 c03
* c10 c11 c12 c13
* c20 c21 c22 c23
* 0 0 0 1
*
* @param content the content of the matrix
*/
class Matrix(val content : Seq[Seq[Double]]) {
/**
* Computes the product between a point P(x ; y ; z) and the matrix.
*
* @param point a point P(x ; y ; z ; 1)
* @return a new point P'(
* x * c00 + y * c10 + z * c20
* ;
* x * c01 + y * c11 + z * c21
* ;
* x * c02 + y * c12 + z * c22
* ;
* 1
* )
*/
def product(point : Seq[Double]) : Seq[Double] = {
(0 to 3).map(
i => content(i).zip(point).map(couple2 => couple2._1 * couple2._2).sum
)
}
}
Then, use of similar triangles
Ref. 1/2 : Part. "Of the Importance of Converting Points to Camera Space" of https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
Ref. 2/2 : https://github.com/ssloy/tinyrenderer/wiki/Lesson-4:-Perspective-projection#time-to-work-in-full-3d
NB : at this step, the inputs are points expressed in the camera's coordinate system (i.e. they are the result of the previously defined product with the previously defined matrix).
class Projector {
/**
* Computes the coordinates of the projection of the point P on the canvas.
* The canvas is assumed to be 1 unit forward the camera.
* The computation uses the definition of the similar triangles.
*
* @param points the points P we want to project on the canvas. Their coordinates must be expressed in the
* coordinate system of the camera before using this function.
* @return the points P', projections of the points P.
*/
def drawPointsOnCanvas(points : Seq[Seq[Double]]) : Seq[Seq[Double]] = {
points.map(point => {
point.map(coordinate => {
coordinate / point(3)
}).dropRight(1)
})
}
}
Finally, the drawing of the projected points, onto the canvas.
Ref. : Part. "From Screen Space to Raster Space" of https://www.scratchapixel.com/lessons/3d-basic-rendering/computing-pixel-coordinates-of-3d-point/mathematics-computing-2d-coordinates-of-3d-points
import java.awt.Graphics
import javax.swing.JFrame
/**
* Assumed to be 1 unit forward the camera.
* Contains the drawn points.
*/
class Canvas(val drawn_points : Seq[Seq[Double]]) extends JFrame {
val CANVAS_WIDTH = 820
val CANVAS_HEIGHT = 820
val IMAGE_WIDTH = 900
val IMAGE_HEIGHT = 900
def display = {
setTitle("Perlin")
setSize(IMAGE_WIDTH, IMAGE_HEIGHT)
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE)
setVisible(true)
}
override def paint(graphics : Graphics): Unit = {
super.paint(graphics)
drawn_points.foreach(point => {
if(!(Math.abs(point.head) <= CANVAS_WIDTH / 2 && Math.abs(point(1)) <= CANVAS_HEIGHT / 2)) {
println("WARNING : the point (" + point.head + " ; " + point(1) + ") can't be drawn in this canvas.")
} else {
val normalized_drawn_point = Seq((point.head + (CANVAS_WIDTH / 2)) / CANVAS_WIDTH, (point(1) + (CANVAS_HEIGHT / 2)) / CANVAS_HEIGHT)
graphics.fillRect((normalized_drawn_point.head * IMAGE_WIDTH).toInt, ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt, 5, 5)
graphics.drawString(
"P(" + (normalized_drawn_point.head * IMAGE_WIDTH).toInt + " ; "
+ ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt + ")",
(normalized_drawn_point.head * IMAGE_WIDTH).toInt - 50, ((1 - normalized_drawn_point(1)) * IMAGE_HEIGHT).toInt - 10
)
}
})
}
}
Question
What's wrong with my program? I understood the geometrical concepts explained by both of these tutorials, which I read carefully. I'm pretty sure my product works. I think either the rasterization or the inputs (the matrix) could be wrong...
Note that I'm using the simplest perspective projection matrix : I don't use concept of fov, near and far clipping planes.
I think that your projection matrix is too simple. By dropping the near and far clipping planes, you are dropping perspective projection entirely.
You do not have to perform the z-clipping step, but you need to define a view frustum to get perspective to work. I believe that your projection matrix defines a cubic "view frustum", hence no perspective.
See http://www.songho.ca/opengl/gl_projectionmatrix.html for a discussion of how the projection matrix works.
Quoting the Scratchapixel page:
... If we substitute these numbers in the above equation, we get:
y' = y / -z
where y' is the y coordinate of P'. This is probably one of the simplest and most fundamental relations in computer graphics, known as the z or perspective divide. The exact same principle applies to the x coordinate. ...
And in your code:
def drawPointsOnCanvas(points : Seq[Seq[Double]]) : Seq[Seq[Double]] = {
points.map(point => {
point.map(coordinate => {
coordinate / point(3)
^^^^^^^^
...
The (3) index is the 4th component of point, i.e. its W-coordinate, not its Z-coordinate. Perhaps you meant coordinate / point(2)?
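The perspective divide being discussed can be checked numerically: with the questioner's matrix, the transformed point's 4th (w) component equals -z, and dividing by it projects onto the z = -1 plane. A small Python sketch (the matrix is the one from the question; the point values are illustrative):

```python
# Perspective divide: after multiplying by the projection matrix,
# divide the components by the resulting w, which here equals -z.
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, -1, 0],
     [0, 0, -1, 0]]  # same simple matrix as in the question

def project(p):
    # Row-times-column product, like the questioner's Matrix.product.
    q = [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]
    w = q[3]                       # equals -z for this matrix
    return [q[0] / w, q[1] / w]    # screen-plane coordinates

# A point four times as far from the camera projects four times
# closer to the canvas center:
near = project([100, 300, -1, 1])  # -> [100.0, 300.0]
far = project([100, 300, -4, 1])   # -> [25.0, 75.0]
```

The divide must use the w of the *transformed* point; dividing the untransformed point by its own 4th component (which is 1) performs no perspective foreshortening at all.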

Centroid of Triangulated Surface in 3D

Suppose the surface of an irregular 3D shape (e.g. a stone) is triangulated.
I have:
Vertices. x,y,z coordinate of each point (pointCloud).
Faces: the vertex indices of each triangle.
Area of each triangle
Volume of whole shape
With this information, how can I find the exact coordinates of the centroid of the whole triangulated surface?
You can compute the centroid of the surface by accumulating the centroids of each triangle weighted by each triangle's mass (its area), then dividing by the total mass at the end. In pseudocode:
mg : vector3d <- (0,0,0)
m : real <- 0
For each triangle t
m <- m + area(t)
mg <- mg + area(t) * centroid(t)
End for
surfaceCentroid <- mg / m
where:
centroid(t) = 1/3 (p1+p2+p3)
area(t) = 1/2 * || cross(p2-p1, p3-p1) ||
Now if what you want is the centroid of the volume enclosed by the surface, the algorithm is different, you need to decompose the volume into tetrahedra and accumulate tetrahedra centroids as follows:
mg : vector3d <- (0,0,0)
m : real <- 0
For each triangle t = (p1,p2,p3)
m <- m + signed_volume(O,p1,p2,p3)
mg <- mg + signed_volume(O,p1,p2,p3) * centroid(O,p1,p2,p3)
End for
volumeCentroid <- (1/m) * mg
where
O=(0,0,0) and
centroid(p1,p2,p3,p4) = 1/4 (p1+p2+p3+p4)
signed_volume(p1,p2,p3,p4) = 1/6 * dot(p2-p1, cross(p3-p1, p4-p1))
The formula works even when O is outside the surface, because the signed volumes of the tetrahedra parts outside the surface exactly cancel out (if you love math, another way of thinking about the algorithm is applying Stokes' theorem to the volume computation).
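Both accumulations transcribe directly into code. A Python sketch of the two pseudocode loops above (numpy only), checked on a tetrahedron with one vertex at the origin, whose volume centroid is the mean of its four vertices:

```python
import numpy as np

def surface_centroid(vertices, faces):
    """Area-weighted average of triangle centroids (centroid of the shell)."""
    mg, m = np.zeros(3), 0.0
    for i, j, k in faces:
        p1, p2, p3 = vertices[i], vertices[j], vertices[k]
        area = 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))
        mg += area * (p1 + p2 + p3) / 3.0
        m += area
    return mg / m

def volume_centroid(vertices, faces):
    """Signed-tetrahedron accumulation against the origin O = (0,0,0)."""
    mg, m = np.zeros(3), 0.0
    for i, j, k in faces:
        p1, p2, p3 = vertices[i], vertices[j], vertices[k]
        v = np.dot(p1, np.cross(p2, p3)) / 6.0   # signed_volume(O,p1,p2,p3)
        mg += v * (p1 + p2 + p3) / 4.0           # centroid(O,p1,p2,p3), O = 0
        m += v
    return mg / m

# Tetrahedron with vertices at the origin and the three unit points.
V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
F = [(1, 2, 3), (0, 2, 1), (0, 3, 2), (0, 1, 3)]  # consistently oriented faces

vc = volume_centroid(V, F)   # -> [0.25, 0.25, 0.25]
sc = surface_centroid(V, F)  # symmetric: all three coordinates equal
```

Note the two centroids differ: the shell centroid weights by area, the solid centroid by signed volume, and for this tetrahedron they do not coincide.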

Converting 3D point clouds to range image

I have many 3D point clouds gathered by a Velodyne sensor, e.g. (x, y, z) in meters.
I'd like to convert 3D point clouds to range image.
First, I have the transformation from Cartesian to spherical coordinates:
r = sqrt(x*x + y*y + z*z)
azimuth angle = atan2(x, z)
elevation angle = asin(y/r)
Now, how can I convert the 3D points to a range image using these transformations in MATLAB?
There are about 180,000 points in total and I want an 870*64 range image.
Azimuth angle range: (-180, 180); elevation angle range: (-15, 15).
Divide up your azimuth and elevation into M and N ranges respectively. Now you have M*N "bins" (M = 870, N = 64).
Then (per bin) accumulate a histogram of points that project into that bin.
Finally, pick a representative value from each bin for the final range image. You could pick the average value (noisy, fast) or fit some distribution and then use that to pick the value (more precise, slow).
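The binning described above amounts to histogram-style indexing. A Python sketch (numpy; the question asks for MATLAB, but the steps are identical: the questioner's angle conventions, M = 870, N = 64, and the nearest return kept per bin as the representative value):

```python
import numpy as np

def range_image(xyz, M=870, N=64,
                az_lim=(-180.0, 180.0), el_lim=(-15.0, 15.0)):
    """Project (x, y, z) points into an N x M range image.
    Each pixel keeps the minimum range that falls into its az/el bin."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.degrees(np.arctan2(x, z))        # questioner's convention
    el = np.degrees(np.arcsin(y / r))

    # Map angles to integer bin indices, clipping to the image bounds.
    cols = ((az - az_lim[0]) / (az_lim[1] - az_lim[0]) * M).astype(int)
    rows = ((el - el_lim[0]) / (el_lim[1] - el_lim[0]) * N).astype(int)
    cols = np.clip(cols, 0, M - 1)
    rows = np.clip(rows, 0, N - 1)

    img = np.full((N, M), np.inf)
    np.minimum.at(img, (rows, cols), r)      # keep nearest return per bin
    img[np.isinf(img)] = 0.0                 # empty bins -> 0
    return img

# Synthetic scan: one point straight ahead (az = 0, el = 0) at 5 m range.
pts = np.array([[0.0, 0.0, 5.0]])
img = range_image(pts)
```

Taking the minimum per bin is one choice of representative value; replacing `np.minimum.at` with an accumulating sum and count gives the per-bin average mentioned above.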
The pointcloud2image code available from the MATLAB File Exchange can help you directly convert a point cloud (in x,y,z format) to a 2D raster image.