How to identify Points Outside a 3D mesh in Matlab - matlab

I have two large data sets: one that represents the outside of an object, and one that represents the fluid flow inside of the object. I am worried that with the mesh I have, some of the data might be misrepresented, or not modeled well, and lies outside the first data set.
In Matlab, I used trisurf to create a mesh from the first data set and was curious if there was a way to check for points outside the mesh. I've seen the 2D version of inpolygon, and some threshold functions, but the surface is not very regular and those don't really account for meshes. Thanks for the help!

You didn't specify what kind of data/format your object is defined as. If, for example, you have a Delaunay tetrahedralization/mesh of your object (if not, you can use delaunay to create one from a point cloud), you can use the tsearchn function to determine whether points are inside or outside the object (mesh).
https://www.mathworks.com/help/matlab/ref/tsearchn.html
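A minimal sketch of that approach, assuming P is an N-by-3 point cloud of the object and Q is an M-by-3 array of query points (both variable names are hypothetical):

```matlab
% P: N-by-3 point cloud of the object (hypothetical variable)
% Q: M-by-3 query points to classify   (hypothetical variable)
T = delaunayn(P);           % Delaunay tetrahedralization of the cloud
idx = tsearchn(P, T, Q);    % index of enclosing simplex, or NaN if none
outside = isnan(idx);       % logical mask: true for points outside the mesh
```

Note that this tests against the convex hull of the tetrahedralization, so for strongly non-convex objects you may need a mesh that actually follows the surface.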

Related

Implementing multi-texture shading with the marching cube algorithm (voxels)

I am currently developing an asteroid mining/exploration game with fully deformable, smooth voxel terrain using marching cubes in Unity 3D. I want to implement an "element ID" system that is kind of similar to Minecraft's, as in each type of material has a unique integer ID. This is simple enough to generate, but right now I am trying to figure out a way to render it, so that each individual face represents the element its voxel is assigned to. I am currently using a triplanar shader with a texture array, and I have gotten it set up to work with pre-set texture IDs. However, I need to be able to pass in the element IDs into this shader for the entire asteroid, and this is where my limited shader knowledge runs out. So, I have two main questions:
How do I get data from a 3D array in an active script to my shader, or otherwise how can I sample points from this array?
Is there a better/more efficient way to do this? I thought about creating an array with only the surface vertices and their corresponding ID, but then I would have trouble sampling them correctly. I also thought about possibly bundling an extra variable in with the vertices themselves, but I don't know if this is even possible. I appreciate any ideas, thanks.

How can I make dynamically generated terrain segments fit together Unity

I'm creating my game with dynamically generated terrain. The idea is very simple. There are always three parts of terrain: the segment on which the player stands and the two next to it. When the player moves (always forward) to the next segment, a new one is generated and the last one is cut off. It works with flat planes, but I don't know how to do it with more complex terrain. Should I just make it have the same edge on both sides (for creating assets I'm using Blender)? Or is there any other option? Please note that I'm just starting to make games with Unity.
It depends on what you would like your terrain to look like. If you want to create the terrain pieces in something external, like Blender, then yes all those pieces will have to fit together seamlessly. But that is a lot of work as you will have to create a lot of pieces that fit together for the landscape to remain interesting.
I would suggest that you rather generate the terrain dynamically in Unity. You can create your own mesh using code. You start by creating an object (in code), and then generating vertex and triangle arrays to assign to the object, for it to have a visible and sensible mesh. You first create vertices at specific positions and then add triangles that consist of 3 vertices at a time. If you want a smooth look instead of a low poly look, you will reuse some vertices for the next triangle, which is a little trickier.
Once you have created your block's mesh, you can begin to change your code to specify how the height of the vertices could be changed, to give you interesting terrain. As long as the first vertices on your new block are at the same height (say y position) as the last vertices on your current block (assuming they have the same x and z positions), they will line up. That said, you could make it even simpler by not using separate blocks, but by rather updating your object mesh to add new vertices and triangles, so that you are creating a terrain that is just one part that changes, rather than have separate blocks.
There are many ways to create interesting terrain. One of the functions most often used to generate semi-random, interesting terrain is Perlin noise. Another is Ken Perlin's more recent Simplex noise. Like most random generator functions, these take a seed value, which you can keep track of so that you can create interesting terrain AND get your block edges to line up, should you still want to use separate blocks rather than a single mesh which dynamically expands.
There are many tutorials online about noise functions for procedural landscape generation. Amit Patel's tutorials are good visual and interactive explanations; here is one of his tutorials about noise-based landscapes. Take a look at his other great tutorials as well. There are many tutorials on dynamic mesh generation too -- a quick Google search tells me that CatLikeCoding's Procedural Grid tutorial will probably be all you need.

Identifying person using optical flow and clustering?

So I am using Matlab, and I've managed to modify one of their examples so that I can now plot the flow lines as people walk below (the camera is above a door).
I use Lucas-Kanade optical flow and the computer vision toolbox.
The lines are defined as follows; I also defined the tracked points. These tracked points include cases where the original points haven't changed, so real(tmp(:)) will be zero and those points will be the same as the originally identified feature points.
vel_Lines = [Y(:) X(:) Y(:)+real(tmp(:)) X(:)+imag(tmp(:))];
allTrackedPoints = [Y(:)+real(tmp(:)) X(:)+imag(tmp(:))];
My question is: how can I get JUST the points which have successfully been tracked a certain distance? I want to somehow retain only the values for which the change is large enough.
I'm not great with Matlab's syntax so was hoping this would be easy for someone.
I want to get the points that were successfully tracked pertaining to the motion, then cluster these points to determine how many people there are, and then track these sets of points using a multiple object tracker.
If your camera is not moving, then background subtraction may work better for you than optical flow. See this example.
You can also use the vision.PeopleDetector object to detect people. See this example.
If you insist on using optical flow, try the Farneback optical flow algorithm, available as of the R2015b release.
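As for the thresholding step asked about above, a minimal sketch using the question's variables (minDisp is a hypothetical threshold, in pixels):

```matlab
% tmp is the complex per-point displacement from the question;
% its magnitude is the tracked distance for each point.
dispMag = abs(tmp(:));                      % displacement magnitude per point
minDisp = 2;                                % hypothetical threshold in pixels
moved = dispMag > minDisp;                  % logical mask of moving points
movingPoints = allTrackedPoints(moved, :);  % keep only points that moved
```

The logical mask moved can then be reused to filter vel_Lines the same way before clustering.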

How to get data for Matlab cascade Object Detector?

I want to use the trainCascadeObjectDetector in Matlab. It requires an array with the regions of interest of the images. I found two apps where you can put boxes around the ROIs, and the array gets created automatically:
Cascade Trainer: Specify Ground Truth, Train a Detector
Training Image Labeler
Unfortunately, they both require Matlab R2014 and I only have R2013.
Is there another way to define the ROIs without manually creating the array?
Regards
Philip
I did not find another solution, so I wrote a custom Matlab script for the job. The imrect function in Matlab is well suited for this. After the image is shown, the user can drag a rectangle over the region of interest. The coordinates of the region then get stored in a structure together with the path to the image file. Additionally, the parts of the image that do not belong to the ROI are stored in the negative sample folder.
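A minimal sketch of such a labeling script (the 'images' folder name and file pattern are hypothetical; trainCascadeObjectDetector expects a struct array with imageFilename and objectBoundingBoxes fields):

```matlab
% Label one ROI per image by dragging a rectangle with imrect.
files = dir(fullfile('images', '*.jpg'));   % hypothetical image folder
positives = struct('imageFilename', {}, 'objectBoundingBoxes', {});
for k = 1:numel(files)
    imgPath = fullfile('images', files(k).name);
    imshow(imread(imgPath));
    h = imrect;                 % user drags a rectangle over the ROI
    bbox = round(wait(h));      % blocks until double-click; [x y w h]
    positives(end+1) = struct( ...
        'imageFilename', imgPath, ...
        'objectBoundingBoxes', bbox);       %#ok<SAGROW>
end
```

The resulting positives struct array can then be passed directly as the first argument to trainCascadeObjectDetector.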

Fit two binary images (panorama?)

I have several binary images which represent a partial map of an area (~4m radius) and were taken ~0.2m apart, for example:
(Sorry for the different axis limit).
If you look closely, you'll see that the first image is about 20cm to the right.
I want to be able to create a map of the area from several pictures like this.
I've tried several methods, such as Matlab's register but couldn't find any good algorithm for this purpose. Any ideas on how to approach this?
Thanks in advance!
Two possible routes:
Use imregister. This does registration based on image intensity. You will probably want a rigid transform.
However, this will require your data to be an image (matrix), which it doesn't look like it currently is.
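A minimal sketch of this route, assuming fixed and moving are 2-D matrices holding the two binary maps (hypothetical names):

```matlab
% Rigid intensity-based registration of two binary maps.
[optimizer, metric] = imregconfig('monomodal');   % same-sensor images
tform = imregtform(double(moving), double(fixed), 'rigid', ...
                   optimizer, metric);            % estimate the transform
registered = imwarp(moving, tform, ...
                    'OutputView', imref2d(size(fixed)));
```

Here imregtform returns the transform itself, which is useful if you later want to stitch many maps into one mosaic rather than just warp one image onto another.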
Alternatively, you can use control points. These are common (labelled) points in each image which provide a reference to determine the transform.
Matlab has a built-in function to select control points, cpselect. However, again this requires image data. You may be better off writing your own function to do this or just selecting control points manually.
Once you have control points, you can determine the transform between them using fitgeotrans.
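A minimal sketch of the control-point route (movingPoints and fixedPoints are hypothetical M-by-2 arrays of matched [x y] coordinates, e.g. exported from cpselect; fixed and moving are the two maps):

```matlab
% Fit a transform from matched point pairs and apply it.
tform = fitgeotrans(movingPoints, fixedPoints, ...
                    'nonreflectivesimilarity');   % translation+rotation+scale
registered = imwarp(moving, tform, ...
                    'OutputView', imref2d(size(fixed)));
```

Since your maps were taken about 0.2 m apart with no apparent rotation, a 'nonreflectivesimilarity' or even a pure translation model should be sufficient.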