I am getting the surface shown in the picture attached below. I want the triangles to have the same color and texture. Could you please guide me on this?
One more thing: this happens because the triangles are not oriented consistently, which would only matter if the output needed to be a true oriented surface without artifacts. Can anyone suggest a visualization tool for WPF that renders triangles without considering their orientation? I have used the Helix Toolkit in WPF, but it gives the same result. This is the 3D model:
You can either try to fix the orientation of the triangles or define the BackMaterial of your GeometryModel3D to be the same as the Material.
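For example, a minimal sketch in C# (`mesh` stands in for your own MeshGeometry3D):

```csharp
using System.Windows.Media;
using System.Windows.Media.Media3D;

// Give back faces the same appearance as front faces, so inconsistent
// triangle winding no longer shows up as missing or black triangles.
Material material = new DiffuseMaterial(Brushes.Red);

var model = new GeometryModel3D
{
    Geometry = mesh,          // your MeshGeometry3D
    Material = material,
    BackMaterial = material   // back-facing triangles render identically
};
```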
I'm looking for either a 3D model or an image file over which I can apply my own custom graphical elements, such as eyeliner or lipstick.
In the ARCore docs, the solution to this issue is very well described. You can get either an FBX file or a PSD template, over which you place your own elements.
From what I can tell, the principles of ARCore and ARKit are much the same: there's a standard face mesh which gets contorted to the shape of a detected face. However, I'm unable to find any such materials using Google.
Just use the same face model and slightly larger copies of it for the makeup. No one is going to get close enough to see how thickly it's caked on, because the polys would start disappearing anyway...
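If a uniform scale makes the makeup layer drift at the nose and chin, a common variant of the "slightly larger copy" idea is to push each vertex of the duplicated mesh out along its normal instead. A hedged C# sketch (the offset value is an assumption):

```csharp
using System.Numerics;

// Push each vertex of the duplicated face mesh outward along its normal
// so the makeup layer sits just above the skin. offset is an assumed
// small value (e.g. a millimetre or two in your mesh's units).
static void InflateMesh(Vector3[] vertices, Vector3[] normals, float offset)
{
    for (int i = 0; i < vertices.Length; i++)
        vertices[i] += Vector3.Normalize(normals[i]) * offset;
}
```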
This question relates to using shaders (probably in the Unity3D milieu, but Metal or OpenGL is fine), to achieve rounded edges on a mesh-minimal cube.
I wish to use only 12-triangle minimalist mesh cubes, and then, via the shader, achieve the edges (and corners) of each block being slightly bevelled.
In fact, can this be done with a shader?
I recently finished creating such a shader. The only way it can work is by providing four normal vectors per vertex instead of one (a smooth one, a sharp one, and one for each of the triangle's edges that meet at the given vertex). You will also need one float3 to detect edges.
To add such data to a mesh, I made a custom mesh editor; it comes with the Playtime Painter asset from the Unity Asset Store. I will post the shader with the next update, and will also post it to a public GitHub repository.
You can see some dark lines; that's because the shader starts interpolating toward a normal vector that faces away from the light source, but since there are no additional triangles, the result is visible on a triangle that faces the camera.
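For illustration, extra per-vertex vectors like these can ride along in spare UV channels; this is a hedged sketch of one possible layout, not necessarily how Playtime Painter actually stores them:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical layout: keep the smooth normals in mesh.normals and pack
// the sharp normals plus the float3 edge-detection data into TEXCOORD1
// and TEXCOORD2, where the shader can read and interpolate them.
static void PackEdgeData(Mesh mesh,
                         List<Vector3> sharpNormals,
                         List<Vector3> edgeData)
{
    mesh.SetUVs(1, sharpNormals); // TEXCOORD1
    mesh.SetUVs(2, edgeData);     // TEXCOORD2
}
```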
Update (2/12/2018)
I realised that by clipping pixels that end up with a normal facing away from the camera, it is possible to smooth the outline shape. It hasn't been tested for all possible scenarios, but it works great for simple shapes:
As per request, I added a comparison cube:
Currently, Playtime Painter has a simplified version of that shader, which interpolates between two normal vectors and gives OK results on some edges.
Wrote an article.
In general, relief mapping can modify the object silhouette, as in this picture. You'd need to prepare a heightmap that lowers at the borders, and that's it. However, I think such a shader might be overkill for this simple effect, so maybe it's better to just build it into your geometry.
I'm currently experimenting with OpenGL ES 1.1 on the iPhone and trying to get my head around some of the basics. So far I've managed to draw a grid of objects which are lit with one GL_LIGHT. Here is a screenshot of the current output (question to follow)...
So you can see that my test consists of a grid of about 140 cubes, some slightly elevated so I can see how the shaded areas work. Each cube consists of this model (from Blender) and has normals / texture coordinates...
What's puzzling me is why I don't get 'uniform' lighting across the entire surface. Each cube seems to be lit individually, and I can kind of understand why that would be... but is it not possible to have the light transition 'normally', like it would if you arranged this model out of blocks and shone a light across it? I'd expect not to see a dark edge on each individual cube, but rather a smooth transition across the whole area.
(I'm still inwardly chuffed that I managed to get this far!)
Any help or explanations would be awesome.
Thanks,
Simon
The reason you don't get 'uniform' lighting is, I presume, that you are using per-vertex lighting. That is, the lighting is calculated per vertex and then interpolated across each triangle making up the model. Since your cube has a pretty low polygon count, the transition of light across the model won't look smooth.
Using OpenGL ES 1.1, there are two solutions to this: you can use higher-polygon-count models, or implement per-pixel (DOT3) lighting. I've not implemented the latter myself, but I have come across this problem before (my solution was to switch to OpenGL ES 2.0 and use shaders to perform per-pixel lighting).
Here is a link, which may be of use: What is DOT3 lighting?
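For intuition, both approaches evaluate the same Lambertian term; per-vertex lighting computes it at each vertex and interpolates the resulting colours across the triangle, while per-pixel (DOT3) lighting evaluates it for every fragment. A C# illustration of just the math (not GL code):

```csharp
using System;
using System.Numerics;

// The Lambertian diffuse term: per-vertex lighting evaluates this once
// per vertex and interpolates the result; per-pixel lighting evaluates
// it at every fragment, which is why it stays smooth on low-poly meshes.
static float Lambert(Vector3 normal, Vector3 toLight)
{
    return MathF.Max(0f,
        Vector3.Dot(Vector3.Normalize(normal), Vector3.Normalize(toLight)));
}
```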
All the best!
We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that seems 3D. As the user moves the device, the individual PNG files would warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, five walls, and a chair in the middle = six PNG files, layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted or moved, the walls skew/scale/translate. However, because we are using six PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
Instead of applying skew/scale transformations, we are thinking that if we had the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
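For a feel of what the m34 trick does, here's a self-contained C# sketch using System.Numerics (the actual code would set m34 on a CATransform3D, but the math is the same):

```csharp
using System.Numerics;

// The m34 perspective trick: an identity matrix with M34 = -1/d makes
// the homogeneous w coordinate depend on z, so the perspective divide
// shrinks points that are farther away. d is the assumed eye distance.
static Vector3 ApplyPerspective(Vector3 p, float d)
{
    var m = Matrix4x4.Identity;
    m.M34 = -1f / d;

    Vector4 v = Vector4.Transform(new Vector4(p, 1f), m); // w = 1 - z/d
    return new Vector3(v.X / v.W, v.Y / v.W, v.Z / v.W);  // perspective divide
}
```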
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations --- the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.
I'm not looking for a library or even open source code. I want to learn how to do this on my own.
Where do I start to find an online tutorial, a book chapter, or other educational material for generating a polygonal model of a 3D sphere suitable for feeding to OpenGL ES on an iPhone, and then mapping the polygons to some sort of 2D map data so I can texture-map the sphere? Is there a software tool (Blender? Maya?) with a tutorial on how to generate this data? Where is the best place to start?
How about these articles?
Procedural Spheres in OpenGL ES
OpenGL ES From the Ground Up, Part 6: Textures and Texture Mapping
I've heard good stuff about "iPhone 3D Programming". Jeff LaMarche also recommends it here.
Hope this helps!
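To give a flavor of what those articles cover, here's a minimal latitude/longitude sphere generator, sketched in C# for illustration (the articles themselves target C/Objective-C). On a unit sphere the position doubles as the normal, and (u, v) maps directly onto an equirectangular 2D map image:

```csharp
using System;
using System.Collections.Generic;

// Build an interleaved vertex list and an index list for a unit sphere.
static void BuildSphere(int stacks, int slices,
                        out List<float> vertices,  // x,y,z, nx,ny,nz, u,v
                        out List<ushort> indices)
{
    vertices = new List<float>();
    indices = new List<ushort>();

    for (int i = 0; i <= stacks; i++)
    {
        double phi = Math.PI * i / stacks;             // pole to pole
        for (int j = 0; j <= slices; j++)
        {
            double theta = 2.0 * Math.PI * j / slices; // around the equator
            float x = (float)(Math.Sin(phi) * Math.Cos(theta));
            float y = (float)Math.Cos(phi);
            float z = (float)(Math.Sin(phi) * Math.Sin(theta));

            vertices.AddRange(new[] { x, y, z,            // position
                                      x, y, z,            // normal
                                      (float)j / slices,  // u: longitude
                                      (float)i / stacks });// v: latitude
        }
    }

    // Two triangles per quad, ready for glDrawElements.
    for (int i = 0; i < stacks; i++)
        for (int j = 0; j < slices; j++)
        {
            ushort a = (ushort)(i * (slices + 1) + j);
            ushort b = (ushort)(a + slices + 1);
            indices.AddRange(new[] { a, b, (ushort)(a + 1),
                                     (ushort)(a + 1), b, (ushort)(b + 1) });
        }
}
```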
While not OpenGL ES, I once tried porting across the examples from this chapter in the Red Book where they show how to create an icosahedron and subdivide it to produce smooth spheres. I only got as far as using a simple icosahedron to crudely represent a sphere in the code for my Molecules application. Perhaps you could extend that.
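The core of that Red Book approach is a subdivision step like the following hedged C# sketch: split every triangle into four at the edge midpoints, then normalize the new vertices back onto the unit sphere:

```csharp
using System.Collections.Generic;
using System.Numerics;

// One subdivision pass: each triangle becomes four, and every midpoint
// is pushed back onto the unit sphere. Repeat for a smoother sphere.
static List<(Vector3, Vector3, Vector3)> Subdivide(
    List<(Vector3 a, Vector3 b, Vector3 c)> triangles)
{
    var result = new List<(Vector3, Vector3, Vector3)>();
    foreach (var (a, b, c) in triangles)
    {
        Vector3 ab = Vector3.Normalize((a + b) * 0.5f);
        Vector3 bc = Vector3.Normalize((b + c) * 0.5f);
        Vector3 ca = Vector3.Normalize((c + a) * 0.5f);

        result.Add((a, ab, ca));
        result.Add((ab, b, bc));
        result.Add((ca, bc, c));
        result.Add((ab, bc, ca)); // the centre triangle
    }
    return result;
}
```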
Apple has a Mac sample application, GLSLShowpiece, that textures a sphere in a couple of places, but they use gluSphere() to generate the sphere vertices, which is unavailable in OpenGL ES.
To be honest, I'm in the process of replacing the sphere rendering code in Molecules with a 2-D billboarding approach that uses shaders to generate the sphere coloring. This should allow for far smoother spheres without having to resort to massive amounts of geometry. See this paper for the kind of results you can produce this way.