Access all tetrahedrons incident on a vertex - triangulation

I would like to know how we can access all tetrahedra incident on a vertex in a CGAL 3D triangulation.
I have seen the cell() function for a vertex, but it seems to give access to only one (arbitrarily selected?) tetrahedron.

The function you are looking for is TDS::incident_cells.

I had trouble using the incident_cells() function because I was not sure how to use the OutputIterators. This CGAL forum discussion answers exactly my question (with a code example).
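For reference, incident_cells() takes an output iterator, and the usual pattern is to hand it a std::back_inserter over a std::vector of cell handles. The sketch below shows that output-iterator idiom in isolation; CellHandle and collect_incident are invented stand-ins so the example compiles without CGAL, and the comments indicate what the real CGAL call looks like.

```cpp
// Sketch of the output-iterator pattern used by CGAL's
// Triangulation_3::incident_cells(v, out).  CGAL is not required here:
// collect_incident stands in for the CGAL call so the idiom itself
// (std::back_inserter filling a std::vector) can be shown in isolation.
#include <cassert>
#include <iterator>
#include <vector>

// Stand-in for a CGAL cell handle; in real code this would be
// Triangulation_3::Cell_handle.
using CellHandle = int;

// Stand-in for triangulation.incident_cells(v, out): writes each cell
// incident to vertex `v` through the output iterator and returns the
// iterator, mirroring the CGAL signature.
template <class OutputIterator>
OutputIterator collect_incident(
        const std::vector<std::vector<CellHandle>>& incidence,
        int v, OutputIterator out) {
    for (CellHandle c : incidence[v]) *out++ = c;
    return out;
}

std::vector<CellHandle> incident_cells_of(
        const std::vector<std::vector<CellHandle>>& incidence, int v) {
    std::vector<CellHandle> cells;
    // The key idiom: std::back_inserter turns push_back into an
    // output iterator that the function can write through.
    collect_incident(incidence, v, std::back_inserter(cells));
    return cells;
}
```

With an actual triangulation the pattern is the same: declare `std::vector<Cell_handle> cells;` and call `tr.incident_cells(v, std::back_inserter(cells));`.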

Related

Extra edges are generated between vertices when I move camera in game or in editor

I'm trying to generate a custom procedural landscape in Unreal Engine 4.
To implement this I'm using this class: https://docs.unrealengine.com/en-US/API/Plugins/ProceduralMeshComponent/UProceduralMeshComponent/index.html
For noise generation on the Z axis I'm using this plugin: https://github.com/devdad/SimplexNoise. The only method I use from this library is USimplexNoiseBPLibrary::SimplexNoise2D.
The whole process is inspired by this video: https://www.youtube.com/watch?v=IKB1hWWedMk
I will try to describe the flow of the whole process:
define the vertex counts per row and column
iterate through the rows and columns and create vertex vectors at (x*scale, y*scale, FMath::Lerp(-maxFallOff, maxHeight, USimplexNoiseBPLibrary::SimplexNoise2D(x*perlinScale, y*perlinScale)))
generate triangles using this method: https://docs.unrealengine.com/en-US/API/Plugins/ProceduralMeshComponent/UKismetProceduralMeshLibrary/ConvertQuadToTri-/index.html
generate UVs
That is all. At this point I can say everything works fine, but there is a little issue: when I move the camera in the editor or in game, extra edges appear on the mesh. I also recorded a video to show what I'm talking about.
https://youtube.com/watch?v=_B9Fxg5oZcE (the edges I'm talking about appear at the 00:05 mark)
The code is written in C++. I could post it here, but I don't think the code is the problem; I think something happens at runtime while I move the camera, something that I don't know...
I can say in advance, in case you are wondering, that I'm not manipulating the mesh on the Tick event.
Problem solved...
It actually turned out to be a bug in my code: I was passing too many triangle indices to the CreateMeshSection_LinearColor method; that was the problem.
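The fix can be sanity-checked with a little arithmetic: a grid of rows x cols vertices has (rows-1) x (cols-1) quads, each split into two triangles, so the index buffer must hold exactly 6 indices per quad, and every index must name an existing vertex. A hypothetical stand-alone check (the function names are invented, not part of the UE4 API):

```cpp
// Hypothetical sanity check for a procedural grid mesh: verify that the
// triangle index buffer has exactly the expected length and that every
// index refers to a real vertex, before handing it to something like
// CreateMeshSection_LinearColor.
#include <cassert>
#include <vector>

// 2 triangles per quad, 3 indices per triangle.
int expected_index_count(int rows, int cols) {
    return 6 * (rows - 1) * (cols - 1);
}

bool triangles_valid(const std::vector<int>& triangles, int rows, int cols) {
    if ((int)triangles.size() != expected_index_count(rows, cols))
        return false;                       // too many or too few indices
    for (int i : triangles)
        if (i < 0 || i >= rows * cols)
            return false;                   // out-of-range vertex index
    return true;
}
```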
Anyway thanks for attention
Cheers

Substitute for estimateGeometricTransform

I went through the MATLAB help and found out that the function estimateGeometricTransform doesn't support BRISKPoints. How can I substitute it? I have to implement code to detect matching points (like in this example: http://it.mathworks.com/help/vision/examples/object-detection-in-a-cluttered-scene-using-point-feature-matching.html).
Thank you.
If points is a BRISKPoints object, you should be able to use points.Location to get the x-y coordinates, and pass them to estimateGeometricTransform.
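As a toy illustration of the idea (in C++ rather than MATLAB): once the feature points are reduced to plain x-y coordinates, the matched coordinate lists can be fed to any transform estimator. The sketch below fits only a least-squares translation, which is far weaker than the robust affine/projective fit that estimateGeometricTransform performs; it is only meant to show the shape of the data after extracting Location. All names are invented.

```cpp
// Toy transform estimator: given matched point sets a[i] <-> b[i],
// the least-squares pure translation is simply the mean displacement.
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

Pt estimate_translation(const std::vector<Pt>& a, const std::vector<Pt>& b) {
    Pt t{0.0, 0.0};
    for (size_t i = 0; i < a.size(); ++i) {
        t.x += b[i].x - a[i].x;
        t.y += b[i].y - a[i].y;
    }
    t.x /= a.size();  // mean over all matches
    t.y /= a.size();
    return t;
}
```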

slicing a 3D image data set at different angles [duplicate]

This question already has answers here:
Extract arbitrarily rotated plane of data from 3D array as 2D array
(2 answers)
Closed 8 years ago.
I am working with a 3D stack of CT data. I want to define a plane and slice this 3D image dataset with it, using MATLAB. I have attempted a few different approaches, including rotating the image dataset prior to slicing it; however, imrotate() only rotates the image in one direction (about the z-axis, I believe).
I have also tried defining the plane, intersecting it with each image slice, and obtaining the data points by interpolation. I thought, and still think, that this is a clean way of approaching the problem, but I have not succeeded in finding out why the approach is not working. I understand that my image is defined in coordinates, while when I try to define the plane, MATLAB does this through dimensions. As straightforward as it sounds, I have been struggling to figure out the solution for a while now.
I appreciate any help guiding me to a solution.
Thank you in advance!
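The interpolation approach described in the question can be sketched as follows. This is a minimal stand-alone C++ sketch (not MATLAB, and all names are invented): the plane is given as an origin point plus two in-plane step vectors in voxel coordinates, and each slice sample is obtained by trilinear interpolation of the volume. It assumes every sample point lies strictly inside the volume.

```cpp
// Sample a 3-D volume on an arbitrary plane by trilinear interpolation.
// The volume is a dense array indexed [z][y][x]; origin, u, v define the
// plane (point plus two in-plane direction vectors, in voxel units).
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Trilinear interpolation; assumes (x, y, z) is strictly inside the
// volume so that the +1 neighbours exist.
double trilinear(const std::vector<std::vector<std::vector<double>>>& vol,
                 double x, double y, double z) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y), z0 = (int)std::floor(z);
    double fx = x - x0, fy = y - y0, fz = z - z0;
    auto at = [&](int xi, int yi, int zi) { return vol[zi][yi][xi]; };
    // Interpolate along x, then y, then z.
    double c00 = at(x0, y0,     z0    ) * (1 - fx) + at(x0 + 1, y0,     z0    ) * fx;
    double c10 = at(x0, y0 + 1, z0    ) * (1 - fx) + at(x0 + 1, y0 + 1, z0    ) * fx;
    double c01 = at(x0, y0,     z0 + 1) * (1 - fx) + at(x0 + 1, y0,     z0 + 1) * fx;
    double c11 = at(x0, y0 + 1, z0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1, z0 + 1) * fx;
    double c0 = c00 * (1 - fy) + c10 * fy;
    double c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}

// Build a rows x cols 2-D slice: sample (i, j) maps to origin + i*u + j*v.
std::vector<std::vector<double>> slice_plane(
        const std::vector<std::vector<std::vector<double>>>& vol,
        Vec3 origin, Vec3 u, Vec3 v, int rows, int cols) {
    std::vector<std::vector<double>> out(rows, std::vector<double>(cols));
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            double x = origin.x + i * u.x + j * v.x;
            double y = origin.y + i * u.y + j * v.y;
            double z = origin.z + i * u.z + j * v.z;
            out[i][j] = trilinear(vol, x, y, z);
        }
    return out;
}
```

This mirrors what interp3 plus a grid of plane points would do in MATLAB; the coordinate-versus-dimension confusion mentioned above usually comes down to keeping the [z][y][x] indexing order consistent between the volume and the plane points.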
I would strongly recommend using ITK (http://www.itk.org/Doxygen41/html/annotated.html) for working on medical images. MATLAB is not very helpful when working with large medical images. There are various filters in ITK which can serve your purpose, e.g., ExtractImageFilter... Maybe a simple cropping is what you want... It's a bit of a pain to learn ITK initially, but totally worth it. Refer to the ITK documentation and examples; any doubts you have about using a function can be cleared up by looking at the solved examples given.
http://www.mathworks.com/products/demos/image/3d_mri/tform3.html
I hope this helps.
I would also go with magarwal's suggestion; the MATLAB people usually take ITK filters and implement them in MATLAB.
So if you have C++, Java, Python, C#, or any similar skill, you can use ITK.
And trust me, you will be ahead of MATLAB rather than waiting for them to implement filters that already exist in ITK.

fruit ninja Blade effect

I want to make a Fruit Ninja-style blade. I am using cocos2d, and MotionStreak is really ugly for this. Is there another approach, or better settings for MotionStreak? Maybe a particle system? Any free tools similar to ParticleDesigner?
I have my own implementation with OpenGL triangle strips mapped with a texture. The blade is very smooth if the distances between adjacent points are small enough. I use linear interpolation to insert more points between two points whose distance is greater than a predefined constant. I'm thinking of using order-2 interpolation, but the implementation is more difficult and performance may suffer.
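The linear-interpolation step described above can be sketched like this (a stand-alone C++ sketch with invented names, not the actual CCBlade code): whenever two consecutive swipe points are farther apart than a chosen gap, evenly spaced points are inserted between them, so the triangle strip stays smooth.

```cpp
// Densify a polyline of swipe points: between any two adjacent points
// farther apart than maxGap, insert evenly spaced intermediate points
// by linear interpolation.
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

std::vector<Pt> densify(const std::vector<Pt>& pts, double maxGap) {
    std::vector<Pt> out;
    if (pts.empty()) return out;
    out.push_back(pts[0]);
    for (size_t i = 1; i < pts.size(); ++i) {
        double dx = pts[i].x - pts[i - 1].x;
        double dy = pts[i].y - pts[i - 1].y;
        double dist = std::sqrt(dx * dx + dy * dy);
        // Number of segments needed so each is at most maxGap long.
        int n = (int)std::ceil(dist / maxGap);
        for (int k = 1; k <= n; ++k) {
            double t = (double)k / n;  // interpolation parameter in (0, 1]
            out.push_back({pts[i - 1].x + t * dx, pts[i - 1].y + t * dy});
        }
    }
    return out;
}
```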
Source code is available here https://github.com/hiepnd/CCBlade
I don't know how much effort it will take, but you can create and change the shape of a filter and just apply a white-to-gray gradient as its texture; it will give very good-looking results. I myself am working with cocos2d-x (a C++ port of cocos2d), and it has samples for dynamic filters (you create and manipulate a mesh and everything is done automatically). It uses the CCActionGrid class, but I haven't used this class yet. If you can't solve your problem with that, ask me to search deeper.
http://pixlatedstudios.com/2012/02/fruit-ninja-like-blade-effect/
Worth checking out! It is based on hiepnd's CCBlade tutorial.

texture minification filter in raytracing?

Can someone point me to a paper/algorithm/resource that explains how to implement a texture minification filter (which applies when texels are smaller than pixels) in a ray tracer?
Thanks!
Since you are using ray tracing, I suspect you are looking for high-quality filtering that changes sampling dynamically based on the amount of "error". Based on this assumption, I would say take a look at "ray differentials". There's a nice paper on this here: http://graphics.stanford.edu/papers/trd/ and it takes effects like refraction and reflection into account.
Your answer to yourself sounds like the right approach, but since others may stumble across the page I'll add a resource link as requested. In addition to discussing mipmapping (ripmapping is basically more advanced mipmapping), they discuss the effects of reflection and refraction on derivatives and mip-level selection.
Homan Igehy. "Tracing Ray Differentials." 1999. Proceedings of SIGGRAPH. http://graphics.stanford.edu/papers/trd/
Upon closer reading I see that Rehno Lindeque mentioned this paper. At first I didn't realize it was the right reference, because he says that the method samples dynamically based on the error of the sampling, which is incorrect. Filtering is done based on the size of the pixel's footprint and uses only one ray, just as you described.
Edit:
Another reference that might be useful ( http://www.cs.unc.edu/~awilson/class/238/#challenges ). Scroll to the section "Derivatives of Texture Coordinates." He suggests backward mapping of texture derivatives from the surface to the screen. I think this would be incorrect for reflected and refracted rays, but is possibly easier to implement and should be okay for primary rays.
I think you mean mipmapping.
Here is an article talking about using them.
But neither says how to choose which mipmap to use; in practice the bigger and the smaller mipmap are often blended.
Here's one more article, about how Google Earth works, which talks about how they mipmap the earth.
Thank you guys for your answers, but since I didn't find an appropriate technique, I created something myself which turned out to work very well:
I assume my ray to be a cone with a cone radius of half a pixel on the image plane. When the ray hits a surface, I calculate the ellipse that is projected onto the surface (the ellipse from the plane-cone intersection). Then, using the texture-coordinate derivatives at the intersection point, I project this ellipse into texture space. Now I know which part of the texture lies under my pixel and can subsample this area.
I also use RipMaps to improve the quality, and I choose the RipMap level based on the size of the ellipse in texture space.
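The level selection described above can be sketched as follows (invented names, assuming the ellipse's texture-space extents are measured in texels and level 0 is full resolution): a RipMap stores the texture downscaled independently along u and v, so the two axes of the ellipse pick their level indices separately, while a plain mipmap must use the larger extent for its single level.

```cpp
// Choose RipMap / mipmap levels from the texture-space footprint of the
// cone-surface intersection ellipse.  Each level halves the resolution,
// so an extent of E texels maps to roughly level log2(E).
#include <cassert>
#include <algorithm>
#include <cmath>

int level_from_extent(double texels) {
    // An extent of <= 1 texel needs no minification.
    if (texels <= 1.0) return 0;
    return (int)std::floor(std::log2(texels));
}

// RipMap: independent levels for the u and v extents of the ellipse.
void ripmap_levels(double extentU, double extentV, int& lu, int& lv) {
    lu = level_from_extent(extentU);
    lv = level_from_extent(extentV);
}

// Mipmap fallback: one level, chosen from the larger axis so the
// footprint is never under-filtered.
int mipmap_level(double extentU, double extentV) {
    return level_from_extent(std::max(extentU, extentV));
}
```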