How to use v.patch or v.clean and v.clip? - qgis

I want to know whether I am using GRASS GIS correctly. I'm having trouble merging these two shapefiles:
http://www.gisdeveloper.co.kr/download/admin_shp/EMD_201902.zip
http://www.gisdeveloper.co.kr/download/admin_shp/LI_201902.zip
How would you go about the process of dealing with shapefiles which have errors like this?
I've tried importing and cleaning using both QGIS and GRASS but I always end up with warnings like this:
WARNING: Number of centroids exceeds number of areas: 32665 > 20038
WARNING: Number of incorrect boundaries: 62688
WARNING: Number of centroids outside area: 12461
WARNING: Number of duplicate centroids: 3210
I've tried changing the snapping threshold for v.in.ogr, but it doesn't seem to make a difference.
When I try doing v.patch it looks like this: https://i.imgur.com/u6Sqom5.png
I'd like to end up with something that looks like this, but on one layer with no overlap, so that there is a 1-to-1 relationship with every space on the map: https://i.imgur.com/5VtWSsR.png

You can use QGIS (SAGA tool) to merge the layers and then import the new layer into the GRASS environment:
Vector ‣ Data Management Tools ‣ Merge Vector Layers
Alternatively, you can build a bash pipeline that automates this by importing the layers into GRASS and patching/cleaning them there; see the sketch below.
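A minimal sketch of such a pipeline in GRASS 7 syntax (the map names and the snap tolerance are placeholders you would need to adapt to your data and its units):
v.in.ogr input=EMD_201902.shp output=emd snap=0.001
v.in.ogr input=LI_201902.shp output=li snap=0.001
v.patch -e input=emd,li output=admin_patched
v.clean input=admin_patched output=admin_clean tool=break,rmdupl,rmsa
v.out.ogr input=admin_clean type=area output=admin_clean.shp format=ESRI_Shapefile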
I hope it helps you :)

Related

How to copy data from a 1 x 1 double timeseries to a 1-D Lookup Table in MATLAB

I have been asked to look at a MATLAB project. I'll link screenshots to clarify the problem. I need to create a 1-D Lookup Table with the data from a 1 x 1 double timeseries from another model that has been supplied. One problem is that there are a lot of data points (12500). Is it possible to copy these points across without having to drag the mouse down over the whole 12500 points? Someone has actually tried this dragging-the-mouse-over-all-the-points method and said it didn't work anyway, but I don't really want to try it myself, as it would be way too cumbersome for my liking, even if it did work.
Here is an example of what the 1 x 1 double timeseries looks like (just using 5 points instead of 12500 for simplicity's sake):
Here is the model with the 1D look up table highlighted in blue at the left:
Here is what the 1-D lookup table looks like when opened:
Any insight appreciated.
I've worked out how to copy the data from the timeseries table (actually from the input to this, which is a 1 x 1 struct), but the problem is that the values have no commas between them and the 1-D lookup table requires commas.
Note that the problem has now been solved using Excel, although not by the method I was trying to make work in the question. An answer has been posted which may work, but I'm not sure if I will go to the trouble of attempting to implement it at this stage. However, if need be and all being well, I will either do that or delete the question.
You can import a look-up table object (a Simulink.LookupTable object) from either the MATLAB workspace or directly from Excel.
If you want to automate it, it basically comes down to these steps:
"
Open the model containing the lookup table block and in the Modeling tab, select Model Settings.
In the Model Properties dialog box, in the Callbacks tab, click PostLoadFcn callback in the model callbacks list.
... the next time you open the model, Simulink® invokes the callback and imports the data.
"
After a little thought I feel like the answer is actually fairly trivial: it just can't be done. The input field of the lookup table just can't hold 12500 points, most of which have 9 or so decimal places, so there's no way to do this.
This isn't to say there's no way to put the data into the model: this can be done using Excel.

Depth map merging or Point Cloud Merging

My goal is to create a single 3-D point cloud based on 2 pairs of images (AB, BC) and their projection matrices. Each image comes from the same camera (not video) at 3 distinct positions.
I use the "standard process": point matching (SIFT or SURF), keeping inliers only, finding the positions, doing the bundle adjustment, and rectifying the images. Up to now everything works well.
Next I use the MATLAB function "disparity" to create the 2 disparity maps, one for each pair of images.
Next I create 2 separate 3-D point clouds, one for each pair of images, using the projection matrices.
But how can I merge the 2 point clouds coming from AB and BC? Apparently, the 3-D coordinates depend on the "DisparityRange" parameter of the disparity function.
Did I miss a step in the process?
Thanks in advance for any help
Alvaro
Please see the answer on MATLAB Answers
Problem solved.
The problem was that I processed wide-baseline stereo images as if they were short-baseline. Fatal Error!
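For anyone landing here, below is a hedged MATLAB sketch (Computer Vision Toolbox assumed) of one common way to merge the two clouds once the relative pose between the pairs is known; it is not the method from the linked MATLAB Answers post. xyzAB and xyzBC are placeholder N-by-3 point sets reconstructed from pairs AB and BC (expressed in the frames of cameras A and B), and R_BA, t_BA stand for the pose of camera B relative to camera A from your pose-estimation step:
ptCloudAB = pointCloud(xyzAB);
ptCloudBC = pointCloud(xyzBC);
% rigid3d uses the post-multiply convention, so you may need the transpose of
% your rotation depending on how the pose was estimated
tform = rigid3d(R_BA, t_BA);                   % rigidtform3d in newer releases
ptCloudBCinA = pctransform(ptCloudBC, tform);  % express the BC cloud in A's frame
merged = pcmerge(ptCloudAB, ptCloudBCinA, 0.01);  % merge on a 1 cm grid
pcshow(merged);
This only works if both reconstructions are metrically consistent (same scale); if the poses come from an essential-matrix decomposition, the translation scale also has to be resolved before merging.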

Cannot get clustering output Mahout

I am running kmeans in Mahout and as output I get the folders clusters-x, clusters-x-final and clusteredPoints.
If I understood well, clusters-x are the centroid locations in each of the iterations, clusters-x-final are the final centroid locations, and clusteredPoints should be the points being clustered, with a cluster id and a weight which represents the probability of belonging to that cluster (depending on the distance between the point and its centroid). On the other hand, clusters-x and clusters-x-final contain the cluster centroids, the number of elements, the feature values of the centroid and the radius of the cluster (the distance between the centroid and its farthest point).
How do I examine these outputs?
I used the cluster dumper successfully for clusters-x and clusters-x-final from the terminal, but when I used it on clusteredPoints, I got an empty file. What seems to be the problem?
And how can I get these values from code? I mean, the centroid values and the points belonging to each cluster?
For clusteredPoints I used IntWritable as the key and WeightedPropertyVectorWritable for the value, in a while loop, but it skips the loop as if there were no elements in clusteredPoints.
This is even stranger because the file that I get with clusterDumper is empty.
What could be the problem?
Any help would be greatly appreciated!
I believe your interpretation of the data is correct (I've only been working with Mahout for ~3 weeks, so someone more seasoned should probably weigh in on this).
As far as linking points back to the input that created them I've used NamedVector, where the name is the key for the vector. When you read one of the generated points files (clusteredPoints) you can convert each row (point vector) back into a NamedVector and retrieve the name using .getName().
Update in response to comment
When you initially read your data into Mahout, you convert it into a collection of vectors, which you then write to a file (points) for use in the clustering algorithms later. Mahout gives you several Vector types which you can use, but it also gives you access to a Vector wrapper class called NamedVector which allows you to identify each vector.
For example, you could create each NamedVector as follows:
NamedVector nVec = new NamedVector(
    new SequentialAccessSparseVector(vectorDimensions),
    vectorName);
Then you write your collection of NamedVectors to file with something like:
SequenceFile.Writer writer = new SequenceFile.Writer(...);
VectorWritable writable = new VectorWritable();
// the next two lines will be in a loop, but I'm omitting it for clarity
writable.set(nVec);
writer.append(new Text(nVec.getName()), writable);
You can now use this file as input to one of the clustering algorithms.
After having run one of the clustering algorithms with your points file, it will have generated yet another points file, but it will be in a directory named clusteredPoints.
You can then read in this points file and extract the name you associated to each vector. It'll look something like this:
IntWritable clusterId = new IntWritable();
WeightedPropertyVectorWritable vector = new WeightedPropertyVectorWritable();
while (reader.next(clusterId, vector)) {
    NamedVector nVec = (NamedVector) vector.getVector();
    // you now have access to the original name using nVec.getName()
}
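For completeness, here is roughly how the reader used in that loop can be opened (a sketch only; the path is an assumption and depends on where your kmeans output lives, since kmeans writes one or more part files under clusteredPoints):
Configuration conf = new Configuration();
Path clusteredPoints = new Path("kmeans-output/clusteredPoints/part-m-00000");
SequenceFile.Reader reader = new SequenceFile.Reader(FileSystem.get(conf), clusteredPoints, conf);
// ... the while (reader.next(clusterId, vector)) loop from above goes here ...
reader.close();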
Check the parameter named "clusterClassificationThreshold".
clusterClassificationThreshold should be 0.
You can check this thread: http://mail-archives.apache.org/mod_mbox/mahout-user/201211.mbox/%3C50B62629.5020700#windwardsolutions.com%3E
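If you are driving the clustering from code, that threshold is one of the arguments to KMeansDriver.run. A rough sketch only: the paths are placeholders, and the exact overload differs between Mahout releases (this follows the 0.7-era API), so check the Javadoc for your version:
Configuration conf = new Configuration();
KMeansDriver.run(conf,
    new Path("points"),            // input vectors
    new Path("initial-clusters"),  // seed clusters
    new Path("kmeans-output"),     // output directory
    0.001,                         // convergence delta
    10,                            // max iterations
    true,                          // runClustering: also write clusteredPoints
    0.0,                           // clusterClassificationThreshold
    false);                        // runSequential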

Delaunay Triangulation - Removing Triangles

I made a Delaunay Triangulation using Matlab version 2013. I want to remove some of the triangles, meaning canceling their connectivity, for example triangle number 760. How can I make this change? When I tried to edit the connectivity list:
dt.ConnectivityList(760 , :) = [];
I got the message:
Cannot assign values to the triangulation.
I thought about maybe copying specific fields to a different structure, but:
a. I'm not familiar with structures so I don't know how to do it right.
b. After I copy the structure, how can I get my triangles?
dt contains 3 fields: Points, ConnectivityList and Constraints (empty field).
A brief note on MATLAB objects. When you access a field for reading, you are basically doing get(obj, fieldname);. When you try to set a field as you are doing, you are actually calling set(obj, fieldname, new_value). Objects do not necessarily allow you to do these operations.
The triangulation object is read-only, so you will have to make copies of all the fields. If, as you mentioned, you would like to make a structure with similar fields, you can do as follows:
dts = struct('Points', dt.Points, 'ConnectivityList', dt.ConnectivityList);
Now you can edit the fields.
dts.ConnectivityList(760,:) = [];
You may be able to plot the new structure, but the methods of the delaunayTriangulation class will not be available to you.
To plot the result, use triplot (or trisurf for a surface in 3-D):
triplot(dts.ConnectivityList, dts.Points(:,1), dts.Points(:,2));
I was facing the same problem and found another solution. Instead of creating a new struct, just create an object of its superclass, i.e. the triangulation class, with the edited connectivity list.
Here is my code:
P = ...;   % list of points
C = ...;   % constraints (optional)
dt = delaunayTriangulation(P, C);  % creates the triangulation, but dt won't let you change its connectivity list
list = dt.ConnectivityList;
% make your changes here, e.g. list(760,:) = [];
x = triangulation(list, dt.Points);
Now you can use x as a triangulation object:
triplot(x)

Modelica: use of der() in a custom class/model

I'm trying to learn Modelica and I have constructed a very simple model using the MultiBody library. The model consists of a world object and a body (mass) connected to two beams, which are then connected to 2 extended PartialOneFrame_a classes (see below) that I modified to create a constant force along one axis. Essentially, all this group of objects does is fall under gravity and spin around, due to the two forces acting at a longitudinal offset from the body centre and creating a couple about the CG.
model Constant_Force
  extends Modelica.Mechanics.MultiBody.Interfaces.PartialOneFrame_a;
  parameter Real force = 1.0;
equation
  frame_a.f = {0.0,0.0,force};
  frame_a.t = {0.0,0.0,0.0};
end Constant_Force;
I next wanted to see if I could create a very simple aerodynamic force component which I would connect to the end of one of the rotating 'arms'. My idea was to follow the example of the Constant_Force model above and, for a first simple cut, generate forces based on the local frame velocities. This is where my problem arose: I tried to compute the velocity using der(frame_a.r_0), which I was then going to transform to the local frame using the resolve2 function, but adding the der(...) line caused the model to stop working properly. It would 'successfully' simulate (using OpenModelica), but the v11b vector (see below) would be all zeros, as would der(frame_a.r_0) when plotted. Not only that, all the other component behaviours were now also constantly zero (frame_a.r_0, w_a, etc. of the body).
model Aerosurf
  extends Modelica.Mechanics.MultiBody.Interfaces.PartialOneFrame_a;
  import Modelica.Mechanics.MultiBody.Frames;
  import Modelica.SIunits;
  //Real[3] v11b;
  SIunits.Velocity v11b[3];
  //initial equation
  //  v11b = {0.0,0.0,0.0};
algorithm
  //v11b := der(frame_a.r_0);
equation
  v11b = der(frame_a.r_0);
  frame_a.f = {0.0,0.0,0.0};
  frame_a.t = {0.0,0.0,0.0};
end Aerosurf;
I tried a number of ways just to compute the velocities (you can see this from the commented lines) so I could plot them and check for correct behaviour, but to no avail. I used both the algorithm and equation approaches, and I did achieve some different (but also incorrect) behaviours with the different approaches.
Any tips? I must be missing something fundamental here; it seems the frame connector does not inherently carry the velocity vector, so I must have to compute it myself?
The simplest way is to use the Modelica.Mechanics.MultiBody.Sensors.AbsoluteVelocity block from the MSL and connect it to your MultiBody frame; then just use the variable of its output connector in your equation.
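For reference, a minimal sketch of what that could look like inside your component (the drag-style force law and the coefficient c are placeholder assumptions, not a real aerodynamic model; the sensor's resolveInFrame parameter and output v follow the MSL):
model AerosurfSketch
  extends Modelica.Mechanics.MultiBody.Interfaces.PartialOneFrame_a;
  parameter Real c = 0.1 "placeholder drag-like coefficient";
  Modelica.Mechanics.MultiBody.Sensors.AbsoluteVelocity velSensor(
    resolveInFrame=Modelica.Mechanics.MultiBody.Types.ResolveInFrameA.frame_a)
    "velocity of frame_a, resolved in frame_a";
equation
  connect(velSensor.frame_a, frame_a);
  // simple placeholder force opposing the local-frame velocity
  frame_a.f = -c*velSensor.v;
  frame_a.t = {0.0,0.0,0.0};
end AerosurfSketch;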