I am new to CGAL. I generated a triangulated surface from a set of points; how can I visualize it to make sense of it?
The CGAL documentation has nothing clear about visualization.
From searching the Internet, I found this piece of code (from https://www-sop.inria.fr/geometrica/courses/slides/triangulations-2D.pdf):
void gl_draw_voronoi_edges() {
    ::glBegin(GL_LINES);
    for (Edge_iterator hEdge = edges_begin(); hEdge != edges_end(); ++hEdge)
    {
        CGAL::Object object = dual(hEdge);
        Segment segment;
        Ray ray;
        Point source, target;
        if (CGAL::assign(segment, object))
        {
            source = segment.source();
            target = segment.target();
        }
        else if (CGAL::assign(ray, object))
        {
            source = ray.source();
            target = ray.point(1);
        }
        else
        {
            continue; // the dual of an edge can also be a line; skip that case here
        }
        ::glVertex2f(source.x(), source.y());
        ::glVertex2f(target.x(), target.y());
    }
    ::glEnd();
}
but I don't know how to use it.
For everything about visualization, see http://doc.cgal.org/latest/GraphicsView/index.html#Chapter_CGAL_and_the_Qt_Graphics_View_Framework
Yes, demos are shipped with CGAL, as already answered above: "you can use the demo located in the directory demo/Triangulation_2 of a CGAL release."
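For context, the snippet above is written as a member function of a class deriving from a CGAL Delaunay triangulation; that is why edges_begin(), edges_end(), and dual() are called unqualified. A minimal sketch of such a wrapper, assuming a standard kernel (the class name and GL setup here are illustrative, not from the slides):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <GL/gl.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_2   Point;
typedef K::Segment_2 Segment;
typedef K::Ray_2     Ray;

// Deriving from the triangulation lets edges_begin()/edges_end()/dual()
// resolve exactly as in the snippet above.
class DrawableTriangulation : public CGAL::Delaunay_triangulation_2<K>
{
public:
    void gl_draw_voronoi_edges(); // body as in the snippet above
};

// Insert your points into a DrawableTriangulation, then call
// gl_draw_voronoi_edges() from your OpenGL display/paint callback (GLUT, Qt, ...).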
I have a plane game object in a 3D scene and I want to plot a 2D graph, for example z = f(x) = sin(kx), on it. I am very new to Unity; could you tell me what I should do?
There are three ways to show a plot:
1. Create a bunch of small GameObjects and piece lines together.
2. Create a Texture2D and draw into it.
3. Leave Unity a little: call Texture.GetNativeTexturePtr() and use D3D calls on it.
I think option 2 is what will serve you best. Option 3 steps outside Unity and will not port across target platforms. Option 2 leaves it up to you how to draw on the texture, and SetPixel alone is not much of a graphics API.
Here's an example of how to create a texture with graphics drawn at runtime.
To use it, create an object (don't forget to assign a material) and attach this script.
using UnityEngine;

public class DrawTex : MonoBehaviour
{
    Material mat;
    Texture2D tx;

    void Start()
    {
        MeshRenderer rend = GetComponent<MeshRenderer>();
        UnityEngine.Assertions.Assert.IsNotNull(rend);
        mat = rend.material;
        UnityEngine.Assertions.Assert.IsNotNull(mat);
        tx = new Texture2D(128, 128, TextureFormat.ARGB32, true);
        // Draw stuff: an opaque light border around a translucent colour gradient.
        for (int y = 0; y < 128; y++)
        {
            for (int x = 0; x < 128; x++)
            {
                float a, r, g, b;
                r = g = b = a = 0f;
                if (x < 20 || y < 20 || x > 108 || y > 108)
                {
                    a = 1.0f; r = g = b = 0.75f;
                }
                else
                {
                    a = 0.5f;
                    r = b = 0.25f + (x / 256.0f);
                    g = 0.25f + (y / 256.0f);
                }
                tx.SetPixel(x, y, new Color(r, g, b, a));
            }
        }
        tx.Apply(true); // now really upload all those pixels, once, after both loops.
        mat.mainTexture = tx;
    }
}
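If the goal is specifically the z = sin(kx) graph from the question, you could fill the texture the same way but light up the pixel nearest the curve in each column. A minimal sketch under the same 128x128 assumption (the frequency k and the colour are arbitrary choices):

// Hypothetical variation of the loop above: plot y = sin(k * x).
const int size = 128;
const float k = 0.1f; // assumed frequency
for (int x = 0; x < size; x++)
{
    // Map sin(kx) from [-1, 1] into the texture's [0, size - 1] range.
    int y = Mathf.RoundToInt((Mathf.Sin(k * x) * 0.5f + 0.5f) * (size - 1));
    tx.SetPixel(x, y, Color.black);
}
tx.Apply(true);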
Hope this helps.
I recently tried to develop a Flutter plugin with CameraX, but I found that there was no way to simply bind the Preview to Flutter's Texture.
In the past, I only needed to use camera.setPreviewTexture(surfaceTexture.surfaceTexture()) to bind the camera and the texture; now I can't find that API.
val previewConfig = PreviewConfig.Builder().apply {
    setTargetAspectRatio(Rational(1, 1))
    setTargetResolution(Size(640, 640))
}.build()

// Build the viewfinder use case
val preview = Preview(previewConfig)

preview.setOnPreviewOutputUpdateListener {
    // it.surfaceTexture = this.surfaceTexture.surfaceTexture()
}

// How to bind the CameraX Preview surfaceTexture to the Flutter surfaceTexture?
I think you can bind the texture via Preview.SurfaceProvider.
final CameraSelector cameraSelector = new CameraSelector.Builder()
        .requireLensFacing(CameraSelector.LENS_FACING_BACK)
        .build();
final ListenableFuture<ProcessCameraProvider> listenableFuture =
        ProcessCameraProvider.getInstance(appCompatActivity.getBaseContext());
listenableFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = listenableFuture.get();
        Preview preview = new Preview.Builder()
                .setTargetResolution(new Size(720, 1280))
                .build();
        cameraProvider.unbindAll();
        Camera camera = cameraProvider.bindToLifecycle(appCompatActivity, cameraSelector, preview);
        Preview.SurfaceProvider surfaceProvider = request -> {
            Size resolution = request.getResolution();
            surfaceTexture.setDefaultBufferSize(resolution.getWidth(), resolution.getHeight());
            Surface surface = new Surface(surfaceTexture);
            request.provideSurface(surface, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()), result -> {
            });
        };
        preview.setSurfaceProvider(surfaceProvider);
    } catch (Exception e) {
        e.printStackTrace();
    }
}, ContextCompat.getMainExecutor(appCompatActivity.getBaseContext()));
Update: since this answer was written, CameraX has added functionality that now allows this, but what follows might still be useful to someone. See this answer for details.
It seems as though using CameraX is difficult to impossible here, because it abstracts the more complicated things away and so does not expose things you need, like the ability to pass in your own SurfaceTexture (which is normally created by Flutter).
So the simple answer is that you can't use CameraX.
That being said, with some work you may be able to get this working, though I have no idea whether it will actually hold up. It's ugly and hacky, so I wouldn't recommend it. YMMV.
If we're going to do this, let's first look at how the Flutter view creates a texture:
@Override
public TextureRegistry.SurfaceTextureEntry createSurfaceTexture() {
    final SurfaceTexture surfaceTexture = new SurfaceTexture(0);
    surfaceTexture.detachFromGLContext();
    final SurfaceTextureRegistryEntry entry = new SurfaceTextureRegistryEntry(
            nextTextureId.getAndIncrement(), surfaceTexture);
    mNativeView.getFlutterJNI().registerTexture(entry.id(), surfaceTexture);
    return entry;
}
Most of that is replicable, so we may be able to do it with the surface texture the camera gives us.
You can get ahold of the texture the camera creates this way:
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    val texture = previewOutput.surfaceTexture
}
What you're going to have to do now is pass a reference to your FlutterView into your plugin (I'll leave that for you to figure out). Then call flutterView.getFlutterNativeView() to get hold of the FlutterNativeView.
Unfortunately, FlutterNativeView's getFlutterJNI is package-private. So this is where it gets really hacky: you can create a class in that same package that calls the package-private method from a publicly accessible method. It's super ugly, and you may have to fiddle around with Gradle to get the compilation security settings to allow it, but it should be possible.
After that, it should be simple enough to create a SurfaceTextureRegistryEntry and register the texture with the Flutter JNI. I don't think you want to detach from the OpenGL context, and I really have no idea whether this will actually work. But if you want to try it out and report back what you find, I would be interested in hearing the result!
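As a sketch of that package-private trick (the package and class names match where FlutterNativeView and FlutterJNI lived in the embedding at the time of writing; both are assumptions and may have changed since):

// Must be compiled into the same package as FlutterNativeView.
package io.flutter.view;

import io.flutter.embedding.engine.FlutterJNI; // assumed location of FlutterJNI

public class FlutterJniExposer {
    // Re-expose the package-private getter through a public method.
    public static FlutterJNI getFlutterJNI(FlutterNativeView view) {
        return view.getFlutterJNI();
    }
}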
I've created an arm with a custom pivot in Unity which is essentially supposed to point wherever the mouse is pointing, regardless of the orientation of the player. Now, this arm looks weird when pointed to the side opposite the one it was drawn at, so I use SpriteRenderer.flipY = true to flip the sprite and make it look normal. I also have a weapon at the end of the arm, which is mostly fine as well.

The problem is that I have a "FirePoint" at the end of the barrel of the weapon, and when the sprite gets flipped its position doesn't change, which affects particles and shooting position. Essentially, all that has to happen is that the Y position of the FirePoint needs to become negative, but Unity treats the position change as global, whereas I want it to be local so that it works with whatever rotation the arm is at. I've attempted this:
if (rotZ > 40 || rotZ < -40) {
    rend.flipY = true;
    firePoint.position = new Vector3(firePoint.position.x, firePoint.position.y * -1, firePoint.position.z);
} else {
    rend.flipY = false;
    firePoint.position = new Vector3(firePoint.position.x, firePoint.position.y * -1, firePoint.position.z);
}
But this works on a global basis rather than the local one that I need. Any help would be much appreciated, and I hope that I've provided enough information for this to reach a conclusive result. Please notify me should you need anything more. Thank you in advance, and have a nice day!
You can use RotateAround() to get the desired behaviour instead of flipping things around. Here is sample code:
using UnityEngine;

public class ExampleClass : MonoBehaviour
{
    public Transform pivotTransform; // needs to be assigned in the Inspector

    void Update()
    {
        transform.RotateAround(pivotTransform.position, Vector3.up, 20 * Time.deltaTime);
    }
}
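Alternatively, if you do want to move the FirePoint itself when the sprite flips, operate on localPosition rather than position so the change stays in the arm's local space. A sketch reusing the names from the question (note it sets the sign absolutely instead of negating every frame):

bool flipped = rotZ > 40 || rotZ < -40;
rend.flipY = flipped;
Vector3 lp = firePoint.localPosition;
// Force the local Y sign to match the flip state.
lp.y = Mathf.Abs(lp.y) * (flipped ? -1f : 1f);
firePoint.localPosition = lp;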
I have a KML file which defines a region (or polygon). I would like to make a function that checks if a given coordinate is inside or outside that polygon.
This is the KML if you want to take a look: http://pastebin.com/LGfn3L8H
I don't want to show any map, I just want to return a boolean.
Point-in-polygon (PiP) is a very well-studied computational geometry problem, so there are lots of algorithms and implementations out there that you can use. Searching SO will probably find several you can copy-paste, even.
There's a catch, though: you're dealing with polygons on the surface of the Earth, which is a sphere, not the infinite Euclidean plane that most PiP algorithms expect to work with. (You can, for example, have triangles whose internal angles add up to more than π radians.) So naively deploying a PiP algorithm will give you incorrect answers for edge cases.
It's probably easiest to use a library that can account for the differences between Euclidean and spherical (or, more precisely, Earth-shaped) geometry; that is, a mapping library like MapKit. There are tricks like the one in this SO answer that let you convert an MKPolygon to a CGPath through map projection, after which you can use the CGPathContainsPoint function to test against the flat 2D polygon corresponding to your Earth-surface polygon.
Of course, to do that you'll also need to get your KML file imported to MapKit. Apple has a sample code project illustrating how to do this.
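A condensed sketch of that trick, using the modern Swift equivalents of CGPathContainsPoint (assuming you already have an MKPolygon from the imported KML):

import MapKit

// Test a coordinate against an MKPolygon via its renderer's CGPath.
func polygon(_ polygon: MKPolygon, contains coordinate: CLLocationCoordinate2D) -> Bool {
    let renderer = MKPolygonRenderer(polygon: polygon)
    // Project the coordinate into the renderer's flat 2D space, then test the path.
    let point = renderer.point(for: MKMapPoint(coordinate))
    return renderer.path?.contains(point) ?? false
}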
This can be done with GMSGeometryContainsLocation.
I wrote a method that makes use of the GoogleMapsUtils library's GMUKMLParser.
func findPolygonName(_ location: CLLocationCoordinate2D) {
    var name: String?
    outerLoop: for placemark in kmlParser.placemarks {
        if let polygon = (placemark as? GMUPlacemark)?.geometry as? GMUPolygon {
            for path in polygon.paths {
                if GMSGeometryContainsLocation(location, path, true) {
                    name = (placemark as? GMUPlacemark)?.title
                    break outerLoop
                }
            }
        }
    }
    if let n = name, !n.isEmpty {
        locationLabel.text = n
    } else {
        locationLabel.text = "We do not deliver here"
    }
}
This function iterates over the polygons and their paths to determine whether the given coordinate lies within one of them.
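For completeness, the kmlParser used above could be set up along these lines (the file name "region" is an assumption):

// Load and parse the KML file with GoogleMapsUtils' GMUKMLParser.
guard let url = Bundle.main.url(forResource: "region", withExtension: "kml") else { return }
let kmlParser = GMUKMLParser(url: url)
kmlParser.parse()
// kmlParser.placemarks now holds the placemarks iterated above.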
I've upgraded to PhysX 3.2 and have been struggling for days to get my test box moved by gravity, but it simply won't do it.
I've followed the PhysX documentation but implemented it my own way. It's pretty much a default setup:
physx::PxSceneDesc sceneDesc = physx::PxSceneDesc(physx::PxTolerancesScale());
sceneDesc.gravity = physx::PxVec3(0.0f, -9.8f, 0.0f);
if (!sceneDesc.cpuDispatcher)
{
    physx::PxDefaultCpuDispatcher* mCpuDispatcher = physx::PxDefaultCpuDispatcherCreate(4);
    if (!mCpuDispatcher)
        LOG("PxDefaultCpuDispatcherCreate failed!");
    sceneDesc.cpuDispatcher = mCpuDispatcher;
}
if (!sceneDesc.filterShader)
    sceneDesc.filterShader = &physx::PxDefaultSimulationFilterShader;
physxScene = physMgr->getSDK()->createScene(sceneDesc);
Creating the dynamic actor:
PxRigidDynamic* body = mPxSDK->createRigidDynamic(Convert::toPxTransform(transform));
PxRigidBodyExt::updateMassAndInertia(*body, 1.0f);
mPxScene->addActor(*body);
Add the box shape:
PxBoxGeometry geometry = PxBoxGeometry(Convert::toPxVector3(size));
if (geometry.isValid())
{
    PxMaterial* material = api->createMaterial(0.5f, 0.5f, 0.1f);
    PxShape* shape = createShape(actor, geometry, material);
    PxRigidBodyExt::updateMassAndInertia(*body, 33.0f);
}
Simulating the scene as:
float elapsedTime = float((float)mTime.getElapsedTime() / 1000.0f);
mAccumulator += elapsedTime;
if(mAccumulator < mStepSize)
{
return;
}
else
{
mAccumulator -= mStepSize;
mPxScene->simulate(mStepSize);
mDynamicBodySys->updateGameObjectPositions();
mPxScene->fetchResults(true);
mTime.restart();
}
When I look in the Visual Debugger I can see the box and the frame count increasing, but it's not moving. The actor and the box shape seem to have the correct properties: linear velocity is increasing along the negative Y axis, its mass is 33, etc. But the pose is still zero/identity. What am I missing?
Solved. The error was in my own logic. There was a sync problem where PhysX was trying to update my graphics while at the same time my positioning logic was telling PhysX to update with its previous position, so it got stuck and appeared never to be simulated.
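For reference, the fix amounts to reading poses back only after fetchResults() has completed. A sketch of a typical fixed-step loop, reusing the member names from the question with the order rearranged:

mAccumulator += elapsedTime;
while (mAccumulator >= mStepSize)
{
    mAccumulator -= mStepSize;
    mPxScene->simulate(mStepSize);
    mPxScene->fetchResults(true);                 // block until the step completes
    mDynamicBodySys->updateGameObjectPositions(); // poses are now up to date
}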