I need to implement an importer for 3ds Max that will load some custom mesh data.
I have implemented most of the loading code, but I still have one problem to resolve.
The data format I'm importing uses structures called 'Hard Edges' to describe surface smoothness, but 3ds Max uses 'Smoothing Groups'. While both approaches work well, I need some way to convert one to the other.
Basically I have some mesh vertices/faces loaded into 3ds Max; now I need to compute Smoothing Groups for those faces, based on the list of hard edges in my file.
Can you point me to any algorithm or just any clue that will help me implement the conversion?
I tried searching Google, etc.; there are many tutorials and articles about smoothing groups, but they are written from the point of view of a 3ds Max user (modeling). I can't find anything about doing the same in code (and I'm not asking about the API for this; I know the API, but I need an algorithm to compute the SGs).
OK, I've found a workaround...
It uses 3ds Max internal code instead of my own, but at least it works:
Let's assume I have a list or vector of edge structs:
struct Edge
{
    int nEd0;
    int nEd1;
};
And a function to check if an edge is in the list:
bool findHardEdge( int v1, int v2 );
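For illustration, a naive implementation could just scan that list (only a sketch; edges is assumed to be a std::vector<Edge> filled by the loading code):

#include <vector>

std::vector<Edge> edges;   // assumed: filled while parsing the file

bool findHardEdge( int v1, int v2 )
{
    // hard edges are undirected, so compare both orderings of the pair
    for( const Edge& e : edges )
    {
        if( ( e.nEd0 == v1 && e.nEd1 == v2 ) ||
            ( e.nEd0 == v2 && e.nEd1 == v1 ) )
            return true;
    }
    return false;
}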
Here is the code to compute Smoothing Groups from Hard Edges, using the MNMesh class:
MNMesh mm = *pMesh;   // pMesh contains vert/face data already and is copied to MNMesh
mm.FillInMesh();      // computes helper data in MNMesh

for( int i = 0; i < mm.nume; i++ )   // iterate over all edges
{
    int v1 = mm.E(i)->v1;
    int v2 = mm.E(i)->v2;

    bool found = findHardEdge( v1, v2 );   // check if the edge is a 'hard' one
    if( found )
        mm.E(i)->SetFlag( 32 );            // mark the edge with some flag
}

mm.SmoothByCreases( 32 );   // this method does the job
mm.OutToTri( *pMesh );      // copy data back to the original mesh instance
I realize this code is quite slow, especially for bigger meshes, but it's also the simplest thing I came up with. If you know a better way, let me know :)
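One idea that could speed this up (an untested sketch, not part of the working code above): index the hard edges once in a std::unordered_set keyed on the vertex pair, so each findHardEdge call becomes O(1) on average instead of a linear scan.

#include <cstdint>
#include <unordered_set>
#include <utility>

std::unordered_set<uint64_t> hardEdgeSet;   // hypothetical lookup index

uint64_t edgeKey( int v1, int v2 )
{
    if( v1 > v2 )
        std::swap( v1, v2 );   // make the key order-independent
    return ( uint64_t( uint32_t( v1 ) ) << 32 ) | uint32_t( v2 );
}

// fill once after loading:
//     for( const Edge& e : edges )
//         hardEdgeSet.insert( edgeKey( e.nEd0, e.nEd1 ) );

bool findHardEdge( int v1, int v2 )
{
    return hardEdgeSet.count( edgeKey( v1, v2 ) ) != 0;
}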
I'm currently working on a model where agents move to random points in the ocean on a GIS map. However, I want them to path in such a way that they do not collide with any islands on the map. I was thinking about creating GIS regions using the perimeter of the islands and was hoping there was some access restriction option for GIS regions. However, this does not seem to be a feature yet.
Does anyone have any tips on how to make agents avoid entering certain regions while moving towards a point on a GIS map? Thanks.
Create GISRoutes and move along those
One option is to generate the routes and then assess whether they overlap an island; if they do, you change that part of the route. See the code example below. Note that I did not do all the math to create a new route that goes around the island... I believe you can do that yourself. It is going to be a piece of work, so I did not test this or develop the complete solution.
GISRoute route = main.map.getRoute(currentLat, currentLong, toLat, toLong);
List<GISMarkupSegment> segmentsToKeep = new ArrayList<GISMarkupSegment>();

SegmentForLoop:
for (int i = 0; i < route.getSegmentCount(); i++) {
    GISMarkupSegment segment = route.getSegment(i);
    // Check if the segment is too close to an island; if it comes within
    // 100 m we adjust the segment and stop adding segments
    for (Pair<Double, Double> pair : main.latLongPointsOfIslands) {
        double distance = Math.sqrt(segment.getDistanceSq(new Point(pair.getFirst(), pair.getSecond())));
        if (distance < 100) {
            segment.setEnd(pair.getFirst() + 100, pair.getSecond() + 100);
            segmentsToKeep.add(segment); // keep the shortened segment
            break SegmentForLoop;
        }
    }
    // only keep the segment once all island points have been checked
    segmentsToKeep.add(segment);
}

GISRoute routeUpToIsland = new GISRoute(main, segmentsToKeep);
//TODO
// Create a new route around the island
// Add the segments of this new route to segmentsToKeep
latLongPointsOfIslands is a collection of Pair<Double, Double>, which is just the lat/long combinations of all the GIS points of the island perimeters.
I hope this helps or points you in the right direction.
Alternatively, you can investigate alternative routing providers.
Given a mesh in Unity & C# (that itself was created in realtime by merging simpler base meshes), how could we, during runtime*, turn it into a smooth, almost wrapped-in-cloth version of itself? Not quite a fully convex version, but more rounded, with sharp edges softened, deep gaps bridged and so on. Ideally the surface would also look like it does when the "smoothing angle" normals setting is applied to imported objects. Thanks!
Before & after sketch
*The mesh setup is made by people and its specifics are unknown beforehand. All its basic shape parts (before we merge them) are known, though. The base parts may also remain unmerged if that helps a solution, and it would be extra terrific if there were a runtime solution that could quickly apply the wrapper mesh even with base parts that change their transform over time, but a static one-time conversion would be great too.
(Some related keywords may be: marching cube algorithm & metaballs, skin above bones, meshfilter converting, smoothing shader, softening, vertices subdivision.)
There are many ways to get something similar so you can pick your preferred one:
Marching Cubes
This algorithm is easy to use, but the result always inherits its blocky 'style'. If that's the look you want, then use it. If you need something smoother and/or pixel-perfect, then look for other ways.
Ray Marching and Signed Distance Functions
This is quite an interesting technique that may give you a lot of control. You can represent your base parts with simple cube/cylinder/etc. equations and blend them together with simple math.
Here you can see some examples:
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm
The best thing here is that it's very simple to set up: you don't even need to merge your base parts, you just push your data to the renderer. The downside is that it may get computationally heavy on the rendering side.
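For illustration, here is a small untested C# sketch of the idea (Unity's Mathf/Vector3; the shapes and the k value are placeholders), using the polynomial smooth minimum described in the article linked above:

// requires: using UnityEngine;

// distance from point p to a sphere of the given radius at the origin
static float SdSphere( Vector3 p, float radius )
{
    return p.magnitude - radius;
}

// distance from point p to an axis-aligned box with the given half extents
static float SdBox( Vector3 p, Vector3 halfExtents )
{
    Vector3 q = new Vector3( Mathf.Abs(p.x), Mathf.Abs(p.y), Mathf.Abs(p.z) ) - halfExtents;
    Vector3 outside = Vector3.Max( q, Vector3.zero );
    return outside.magnitude + Mathf.Min( Mathf.Max( q.x, Mathf.Max( q.y, q.z ) ), 0f );
}

// polynomial smooth minimum: k controls how softly the two shapes blend
static float SmoothMin( float a, float b, float k )
{
    float h = Mathf.Clamp01( 0.5f + 0.5f * ( b - a ) / k );
    return Mathf.Lerp( b, a, h ) - k * h * ( 1f - h );
}

// the whole 'scene' is one distance function: a sphere and a box merged softly
static float SceneSdf( Vector3 p )
{
    return SmoothMin( SdSphere( p, 1f ),
                      SdBox( p, new Vector3( 0.5f, 0.5f, 0.5f ) ),
                      0.3f );
}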
Old school mesh modifications
Here you have the most options, but it's also the most complicated. You start with your base parts, which don't carry much data by themselves, so you should probably join them into one mesh using a CSG union operation.
Having this mesh, you can compute neighbor data for its primitives:
for each vertex find triangles containing it.
for each vertex find edges containing it.
for each edge find triangles containing it.
etc.
With such data you may be able to do things like:
Find and cut some sharp vertex.
Find and cut some sharp edge.
Move the vertex to minimize angle between triangles/edges it creates.
and so on...
There are really a lot of details that may or may not work for you; you just need to test some to see which gives the preferred results.
One simple thing I'd start with:
For each vertex find all vertices connected to it by any edge.
Compute average position of all those vertices.
Use some alpha parameter in the [0,1] range to blend between the initial vertex position and the averaged one.
Implement multiple iterations of this algorithm and add a parameter for the iteration count.
Experiment with alpha and number of iterations.
With this approach you also have two distinct phases, computation and rendering, so doing it with animation may become too slow, but just rendering the mesh will be faster than in the Ray Marching approach.
Hope this helps.
EDIT:
Unfortunately I've never had such a need, so I don't have any sample code, but here is some pseudo-code that may help you:
You have your mesh:
Mesh mesh;
Array of vertex neighbors:
For any vertex index N, triNeighbors[N] will store the indices of the other vertices connected to it by an edge.
List<HashSet<int>> triNeighbors = new List<HashSet<int>>();
int[] meshTriangles = mesh.triangles;
// iterate vert indices per triangle and store neighbors
for( int i = 0; i < meshTriangles.Length; i += 3 ) {
    // three indices making a triangle
    int v0 = meshTriangles[i];
    int v1 = meshTriangles[i+1];
    int v2 = meshTriangles[i+2];

    int maxV = Mathf.Max( Mathf.Max( v0, v1 ), v2 );
    while( triNeighbors.Count <= maxV )
        triNeighbors.Add( new HashSet<int>() );

    triNeighbors[v0].Add( v1 );
    triNeighbors[v0].Add( v2 );
    triNeighbors[v1].Add( v0 );
    triNeighbors[v1].Add( v2 );
    triNeighbors[v2].Add( v0 );
    triNeighbors[v2].Add( v1 );
}
Now, for any single vertex with index N, you can compute its new, averaged position like this:
int N = 0;                         // index of the vertex to average
int counter = 0;
Vector3 sum = Vector3.zero;
Vector3[] verts = mesh.vertices;   // cache this: the property copies the array on every access

if( triNeighbors.Count > N && triNeighbors[N] != null )
{
    foreach( int V in triNeighbors[N] ) {
        sum += verts[ V ];
        counter++;
    }
    if( counter > 0 )
        sum /= counter;            // guard against isolated vertices
}
There may be some bugs in this code, I've just made it up but you should get the point.
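In the same spirit (also made up and untested), the blending and iteration steps described earlier could look something like this, reusing the triNeighbors list from above:

Vector3[] vertices = mesh.vertices;   // cached copy; the property allocates on every access
float alpha = 0.5f;                   // 0 = keep original positions, 1 = full averaging
int iterations = 3;

for( int iter = 0; iter < iterations; iter++ )
{
    Vector3[] smoothed = new Vector3[ vertices.Length ];
    for( int n = 0; n < vertices.Length; n++ )
    {
        if( n < triNeighbors.Count && triNeighbors[n].Count > 0 )
        {
            Vector3 sum = Vector3.zero;
            foreach( int v in triNeighbors[n] )
                sum += vertices[ v ];
            Vector3 average = sum / triNeighbors[n].Count;
            smoothed[n] = Vector3.Lerp( vertices[n], average, alpha );
        }
        else
            smoothed[n] = vertices[n];   // isolated vertex, leave it alone
    }
    vertices = smoothed;
}

mesh.vertices = vertices;
mesh.RecalculateNormals();   // update shading after moving vertices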
I have to implement a basic tracking program in MATLAB that, given a set of frames from a videogame, analyzes each one of them and creates a bounding box around each object. I've used the function regionprops to obtain the coordinates of the bounding boxes for each object, and visualized them using the function rectangle, as follows:
for i = 1:size( frames,2 )
    CC{1,i} = findConnectedComponents( frames{1,i} );
    stats{1,i} = regionprops( 'struct',CC{1,i},'BoundingBox','Centroid' );
    imshow( frames{1,i} ), hold on
    for j = 1:size( stats{1,i},1 )
        r = rectangle( 'Position',stats{1,i}(j).BoundingBox );
        r.FaceColor = [0 0.5 0.5 0.45];
    end
end
This works just fine, but I'd like to go one step further and be able to differentiate static objects from moving objects. I thought of using the centroid to see, for each object, whether it is different in each frame (which would mean the object is moving), but in each image I have a different number of objects.
For example, if I try this on Space Invaders, when you kill an alien it disappears, so the number of objects is reduced. Also, each projectile is a separate object, and there can be a different number of projectiles at different moments of the game.
So my question is: how could I classify the objects based on whether they move or not, and paint them in two different colors?
If the background is consistent, using optical flow is ideal for you.
The basic idea is pretty simple: consider subtracting two consecutive frames and using the result to get the flow vectors of the objects that moved between frames.
You can look at the Lucas–Kanade method and the Horn–Schunck method.
Here is a link to a MATLAB implementation of the same.
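For example (an untested sketch with no bounds checks on the ROI; it assumes the Computer Vision Toolbox functions opticalFlowLK and estimateFlow are available, and the 0.1 motion threshold is arbitrary), you could fold it into your existing loop like this:

opticFlow = opticalFlowLK( 'NoiseThreshold',0.009 );
for i = 1:size( frames,2 )
    gray = frames{1,i};
    if size( gray,3 ) == 3
        gray = rgb2gray( gray );   % estimateFlow expects a grayscale frame
    end
    flow = estimateFlow( opticFlow, gray );
    mag = flow.Magnitude;
    imshow( frames{1,i} ), hold on
    for j = 1:size( stats{1,i},1 )
        bb = round( stats{1,i}(j).BoundingBox );   % [x y w h]
        roi = mag( bb(2):bb(2)+bb(4)-1, bb(1):bb(1)+bb(3)-1 );
        r = rectangle( 'Position',stats{1,i}(j).BoundingBox );
        if mean( roi(:) ) > 0.1
            r.FaceColor = [0.8 0.2 0.2 0.45];   % moving: red
        else
            r.FaceColor = [0 0.5 0.5 0.45];     % static: teal
        end
    end
end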
I am sampling data from the point cloud and trying to display the selected points using a mesh renderer.
I have the data but I can't visualize it. I am using the Augmented Reality application as a template.
I am doing the point saving and mesh population in a coroutine. There are no errors but I can't see any resulting mesh.
I am wondering if there is a conflict with an existing mesh component from the point cloud example that I use for creating the cloud.
I pick a point on screen (touch) and use the index to find the coordinates and populate a Vector3[]. The mesh receives the vertices (5,000 points out of the 500,000 in the point cloud).
This is where I set the mesh:
if (m_updateSubPointsMesh)
{
    int[] indices = new int[ctr];
    for (int i = 0; i < ctr; ++i)
    {
        indices[i] = i;
    }

    m_submesh.Clear();
    m_submesh.vertices = m_subpoints;
    int vertsInMesh = m_submesh.vertexCount;
    m_submesh.SetIndices(indices, MeshTopology.Points, 0);
}
m_subrenderer.material.SetColor("_SpecColor", Color.yellow);
I am using Unity pro 5.3.3 and VS 2015 on windows 10.
Comments and advice are very much appreciated even if they are not themselves a solution.
Jose
I sorted it out. The meshing was right; it turned out to be a bug in a transform (not Tango-defined). The mesh was rendered at another point. I had to walk around to find it.
Thanks
You must convert the Tango mesh data to mesh data for Unity; it's not structured in the same way. I believe it's the triangles that are different. You also need to set triangles and normals on the mesh.
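Roughly, the Unity side of that usually looks like this (a sketch only; tangoVertices and tangoTriangles are placeholder names for the converted arrays):

Mesh unityMesh = new Mesh();
unityMesh.vertices = tangoVertices;     // Vector3[] built from the Tango data
unityMesh.triangles = tangoTriangles;   // int[], three vertex indices per face
unityMesh.RecalculateNormals();         // or assign a normals[] array directly
unityMesh.RecalculateBounds();
GetComponent<MeshFilter>().mesh = unityMesh;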
I am making a script which can generate multiple objects in Pyglet. In this example (see link below) there are two pyramids in 3D space, but every triangle is recalculated in every frame. My aim is to make a swarm with a large number of pyramids flying around, but I can't seem to figure out how to implement vertex lists in a batch (assuming this is the fastest way to do it).
Do they need to be indexed for example? (batch.add_indexed(...) )
The standard pattern seems to be:
batch1 = pyglet.graphics.Batch()
then add vertices to batch1, and finally:
def on_draw():
    batch1.draw()
So how do I do the intermediate step, where pyramids are added to vertex lists? A final question: when would you suggest using multiple batches?
Thank you!
apfz
http://www.2shared.com/file/iXq7AOvg/pyramid_move.html
Just have a look at pyglet.sprite.Sprite._create_vertex_list for inspiration. There, the vertices for simple sprites (QUADS) are generated and added to a batch.
def _create_vertex_list(self):
    if self._subpixel:
        vertex_format = 'v2f/%s' % self._usage
    else:
        vertex_format = 'v2i/%s' % self._usage

    if self._batch is None:
        self._vertex_list = graphics.vertex_list(4,
            vertex_format,
            'c4B', ('t3f', self._texture.tex_coords))
    else:
        self._vertex_list = self._batch.add(4, GL_QUADS, self._group,
            vertex_format,
            'c4B', ('t3f', self._texture.tex_coords))

    self._update_position()
    self._update_color()
So the required function is Batch.add(...). Your vertices should only be recalculated when your pyramid changes its position, not at every draw call. Instead of v2f you need to use v3f for 3D coordinates, and of course you need GL_TRIANGLES instead of GL_QUADS. Here is an example of a torus rendered with pyglet.
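For example, something like this (an untested sketch against the pyglet 1.x API; the vertex and color data are placeholders) adds one pyramid to the batch once instead of recalculating it every frame:

import pyglet
from pyglet.gl import GL_TRIANGLES

batch1 = pyglet.graphics.Batch()

def add_pyramid(batch, vertices, colors):
    # 'vertices' is a flat list of x, y, z floats, three per vertex, for the
    # pyramid's triangles; 'colors' is a flat list of r, g, b bytes per vertex.
    # Indexing is not required for GL_TRIANGLES; use batch.add_indexed(...)
    # instead if you want to share vertices between triangles.
    n = len(vertices) // 3
    return batch.add(n, GL_TRIANGLES, None,
                     ('v3f/dynamic', vertices),
                     ('c3B', colors))

# To move a pyramid later, rewrite the returned vertex list's 'vertices'
# array instead of re-adding it to the batch every frame.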