A recent question here made me think of SceneKit again, and I remembered a problem I never solved.
My app displays antenna designs using SceneKit. Most antennas use metal rods and mesh reflectors, so I used SCNCylinder for the rods, SCNPlane for the reflector, and SCNFloor for the ground. The whole thing took a couple of hours, and I'm an utter noob at 3D.
But some antennas use wires bent into arcs or helixes, and I punted here and made crappy segmented objects using several cylinders end-to-end. It looks ass-tastic.
Ideally I would like a single object that renders the arc or helix with a circular cross section. Basically SCNTorus, but with a start and end angle. This post talks about using a UIBezierPath in SceneKit, but it extrudes the path to produce a ribbon-like shape. Is there a way to do something similar but with a circular cross section (like a partial SCNTorus)?
I know I can make a custom shape by creating the vertices (and normals and such), but I'm hoping I missed a simpler solution.
An arc you can do with SCNShape. Start with the technique from my other answer to get an extruded, ribbon-like arc. You'll want to make sure that the part where your path traces back on itself is offset by a distance the same as your extrusion depth, so you end up with a shape that's square in cross section.
To make it circular in cross section, use the chamferProfile property — give it a path that's a quarter circle, and set the chamfer radius equal to half the extrusion depth, and the four quarter-circle chamfers will meet, forming a circular cross section.
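In code, that looks roughly like this (a minimal sketch; the quarter-circle profile below is approximated with a quadratic curve, and it may need to bow the other way depending on how the chamfer comes out):

import SceneKit
import UIKit

// arcPath is the ribbon-like arc path from the linked answer, built so the
// outgoing and returning strokes are exactly `thickness` apart.
func wireNode(along arcPath: UIBezierPath, thickness: CGFloat) -> SCNNode {
    // Chamfer profile in the unit square, running from (0, 1) to (1, 0).
    // If the edge looks pinched rather than rounded, try controlPoint (0, 0).
    let profile = UIBezierPath()
    profile.move(to: CGPoint(x: 0, y: 1))
    profile.addQuadCurve(to: CGPoint(x: 1, y: 0), controlPoint: CGPoint(x: 1, y: 1))

    let shape = SCNShape(path: arcPath, extrusionDepth: thickness)
    shape.chamferMode = .both
    shape.chamferRadius = thickness / 2   // the four quarter-round chamfers meet
    shape.chamferProfile = profile
    return SCNNode(geometry: shape)
}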
A helix is another story. SCNShape takes a planar path — one that varies in only two dimensions — and extrudes it to make a three-dimensional solid. A helix is a path that varies in three dimensions to start with. SceneKit doesn't have anything that describes a shape in such terms, so there's no super simple answer here.
The shader modifier solution @HalMueller alludes to is interesting, but problematic. It's simple to use a modifier at the geometry entry point to make a simple bend — say, offset every y coordinate by some amount, even by an amount that's a function of y. But that's a one-dimensional transform, so you can't use it to wrap a wire around on itself. (It also changes the cross section.) And on top of that, shader modifiers happen on the GPU at render time, so their effects are an illusion: the "real" geometry in SceneKit's model is still a cylinder, so features like hit testing apply to that and not to the transformed geometry.
The best solution to making something like a helix is probably custom geometry — generating your own vertex data (SCNGeometrySource). The math for finding the set of points on a helix is pretty simple if you follow that shape's definition. To wrap a cross section around it, follow the Frenet formulas to create a local coordinate frame at each point on the helix. Then make an index buffer (SCNGeometryElement) to stitch all those points into a surface with triangles or tristrips. (Okay, that's a lot of hand-waving around a deep topic, but a full tutorial is too big for an SO answer. This should be enough of a breadcrumb to get started, though...)
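For what it's worth, here is a rough sketch of that approach for a helix with a tube cross section. All the parameter names and tessellation counts are made up, and the winding order and normals may need adjusting:

import SceneKit
import simd

func helixTube(radius: Float = 1, pitch: Float = 0.5, turns: Float = 3,
               tubeRadius: Float = 0.05, pathSegments: Int = 200,
               ringSegments: Int = 12) -> SCNGeometry {
    var vertices: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var indices: [Int32] = []

    for i in 0...pathSegments {
        let t = Float(i) / Float(pathSegments) * turns * 2 * .pi
        // Helix centerline and its Frenet frame (tangent, normal, binormal).
        let center = SIMD3<Float>(radius * cos(t), radius * sin(t), pitch * t / (2 * .pi))
        let tangent = simd_normalize(SIMD3<Float>(-radius * sin(t), radius * cos(t), pitch / (2 * .pi)))
        let normal = SIMD3<Float>(-cos(t), -sin(t), 0)   // points toward the helix axis
        let binormal = simd_cross(tangent, normal)

        // One ring of the tube's circular cross section around this point.
        for j in 0...ringSegments {
            let a = Float(j) / Float(ringSegments) * 2 * .pi
            let offset = cos(a) * normal + sin(a) * binormal
            vertices.append(center + tubeRadius * offset)
            normals.append(offset)
        }
    }

    // Stitch neighboring rings together with triangles.
    let ringCount = Int32(ringSegments + 1)
    for i in 0..<pathSegments {
        for j in 0..<ringSegments {
            let a = Int32(i) * ringCount + Int32(j)
            let b = a + ringCount
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    let vertexSource = SCNGeometrySource(vertices: vertices.map { SCNVector3($0.x, $0.y, $0.z) })
    let normalSource = SCNGeometrySource(normals: normals.map { SCNVector3($0.x, $0.y, $0.z) })
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [vertexSource, normalSource], elements: [element])
}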
Here are some starting points that might help.
One approach would be to use more cylinders and make them shorter. That's the same idea behind the various segmentCount properties on the SCNGeometry primitives. Can we see a screenshot of the current linked cylinders version?
If you increase the heightSegmentCount, you could use the approach outlined here: scenekit, how to bend an object.
I just took a look at SCNShape. I was thinking you could use a shader modifier to warp the extruded shape into a circular cross section. But SCNShape doesn't seem to expose a segment count property, which I think you'd need to create enough extrusion segments for a good look. The chamferRadius and chamferProfile properties look interesting. I wonder if you could use those to create an extrusion that looks good.
I'm creating my game with dynamically generated terrain. It's a very simple idea. There are always three parts of terrain: the segment the player stands on and the two next to it. When the player moves (always forward) to the next segment, a new one is generated and the last one is cut off. It works with flat planes, but I don't know how to do it with more complex terrain. Should I just make it have the same edge on both sides (for creating assets I'm using Blender)? Or is there any other option? Please note that I'm just starting to make games with Unity.
It depends on what you would like your terrain to look like. If you want to create the terrain pieces in something external, like Blender, then yes all those pieces will have to fit together seamlessly. But that is a lot of work as you will have to create a lot of pieces that fit together for the landscape to remain interesting.
I would suggest that you rather generate the terrain dynamically in Unity. You can create your own mesh using code. You start by creating an object (in code), and then generating vertex and triangle arrays to assign to the object, for it to have a visible and sensible mesh. You first create vertices at specific positions and then add triangles that consist of 3 vertices at a time. If you want a smooth look instead of a low poly look, you will reuse some vertices for the next triangle, which is a little trickier.
Once you have created your block's mesh, you can begin to change your code to specify how the height of the vertices could be changed, to give you interesting terrain. As long as the first vertices on your new block are at the same height (say y position) as the last vertices on your current block (assuming they have the same x and z positions), they will line up. That said, you could make it even simpler by not using separate blocks, but by rather updating your object mesh to add new vertices and triangles, so that you are creating a terrain that is just one part that changes, rather than have separate blocks.
There are many ways to create interesting terrain. One of the most often used functions for generating semi-random, interesting terrain is Perlin noise. Another is Ken Perlin's more recent simplex noise. Like most random generator functions, it has a seed value, which you can keep track of so that you can create interesting terrain AND get your block edges to line up, should you still want to use separate blocks rather than a single mesh which dynamically expands.
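To make the edge-matching idea concrete, here is a small illustrative sketch (written in Swift rather than Unity C#, but the vertex and triangle arrays map directly onto Unity's Mesh.vertices and Mesh.triangles; the height function below is a stand-in for a real Perlin or simplex noise call). Because the height depends only on world position and a fixed seed, two adjacent blocks automatically agree along their shared edge:

import Foundation
import simd

// Stand-in for Perlin/simplex noise: deterministic, depends only on (x, z, seed).
func height(_ x: Float, _ z: Float, seed: Float) -> Float {
    return sin(0.3 * x + seed) * cos(0.2 * z + seed) * 2.0
}

func buildBlock(originX: Float, originZ: Float, size: Int, spacing: Float,
                seed: Float) -> (vertices: [SIMD3<Float>], triangles: [Int]) {
    var vertices: [SIMD3<Float>] = []
    var triangles: [Int] = []

    // One vertex per grid point, heights taken from the shared noise function.
    for row in 0...size {
        for col in 0...size {
            let x = originX + Float(col) * spacing
            let z = originZ + Float(row) * spacing
            vertices.append(SIMD3(x, height(x, z, seed: seed), z))
        }
    }

    // Two triangles per grid cell, indexing into the vertex array.
    let rowStride = size + 1
    for row in 0..<size {
        for col in 0..<size {
            let i = row * rowStride + col
            triangles += [i, i + rowStride, i + 1,
                          i + 1, i + rowStride, i + rowStride + 1]
        }
    }
    return (vertices, triangles)
}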
I am sure there are many tutorials online about noise functions for procedural landscape generation. Amit Patel's tutorials are good visual and interactive explanations, here is one of his tutorials about noise-based landscapes. Take a look at his other great tutorials as well. There will be many tutorials on dynamic mesh generation as well, just do a google search -- a quick look tells me that CatLikeCoding's Procedural Grid tutorial will probably be all you need.
In my web application I only need to add static objects to my scene. It ran slowly, so I started searching, and I found that merging geometries and merging vertices were the solution. When I implemented it, it indeed worked a lot better. All the articles said that the reason for this improvement is the decrease in the number of WebGL calls. As I am not very familiar with things like OpenGL and WebGL (I use Three.js to avoid their complexity), I would like to know why exactly it reduces the WebGL calls.
Because you send one large object instead of many little ones, the overhead is reduced. So I understand that loading one big mesh into the scene is faster than loading many small meshes.
BUT I do not understand why merging geometries also has a positive influence on the rendering calculation. I would also like to know the difference between merging geometries and merging vertices.
Thanks in advance!
three.js is a framework that helps you work with the WebGL API.
What a "mesh" is to three.js, to webgl, it's a series of low level calls that set up state and issue calls to the GPU.
Let's take a sphere for example. With three.js you would create it with a few lines:
var sphereGeometry = new THREE.SphereGeometry(10);
var sphereMaterial = new THREE.MeshBasicMaterial({color:'red'});
var sphereMesh = new THREE.Mesh( sphereGeometry, sphereMaterial);
myScene.add( sphereMesh );
You have your renderer.render() call, and poof, a sphere appears on screen.
A lot of stuff happens under the hood though.
The first line creates the sphere "geometry": the CPU does a bunch of math and logic describing a sphere with points and triangles. Points are vectors, three floats grouped together; triangles are structures that group these points by indices (groups of integers).
Somewhere there is a loop that calculates the vectors based on trigonometry (sin, cos), and another that weaves the resulting array of vectors into triangles (take every N, N + M, N + 2M, create a triangle, etc.).
Now these numbers exist in javascript land, it's just a bunch of floats and ints, grouped together in a specific way to describe shapes such as cubes, spheres and aliens.
You need a way to draw this construct on a screen - a two dimensional array of pixels.
WebGL does not actually know much about 3D. It knows how to manage memory on the gpu, how to compute things in parallel (or gives you the tools), it does know how to do mathematical operations that are crucial for 3d graphics, but the same math can be used to mine bitcoins, without even drawing anything.
In order for WebGL to draw something on screen, it first needs the data put into appropriate buffers and it needs the shader programs. It needs to be set up for that specific call (is there going to be blending - transparency in three.js land - depth testing, stencil testing, etc.). Then it needs to know what it's actually drawing (so you need to provide strides, sizes of attributes, etc. to let it know where a 'mesh' actually is in memory), how it's drawing it (triangle strips, fans, points...), and what to draw it with - which shaders it will apply to the data you provided.
So, you need a way to 'teach' WebGL to do 3d.
I think the best way to get familiar with this concept is to look at this tutorial, re-reading it if necessary, because it explains what happens to pretty much every single 3d object drawn in perspective, ever.
To sum up the tutorial:
A perspective camera is basically two 4x4 matrices: a perspective matrix, which puts things into perspective, and a view matrix, which moves the entire world into camera space. Every camera you make consists of these two matrices.
Every object exists in its own object space. A TRS matrix (the world matrix in three.js terms) is used to transform this object into world space.
So this stuff - a concept such as the "projection matrix" - is what teaches WebGL how to draw perspective.
Three.js abstracts this further and gives you things like "field of view" and "aspect ratio" instead of left, right, top, bottom.
Three.js also abstracts the transformation matrices (the view matrix on the camera, and world matrices on every object): it lets you set "position" and "rotation", and computes the matrices from those under the hood.
Since every mesh has to be processed by the vertex shader and the pixel shader in order to appear on the screen, every mesh needs to have all this information available.
When a draw call is issued for a specific mesh, that mesh will have the same perspective matrix and view matrix as any other object being rendered with the same camera. They will each have their own world matrices - numbers that move them around your scene.
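Putting that together, the position each vertex ends up at is essentially

$$ p_{clip} = P_{projection} \cdot V_{view} \cdot M_{world} \cdot p_{object} $$

where only the world matrix differs from mesh to mesh.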
This is transformation alone, happening in the vertex shader. These results are then rasterized, and go to the pixel shader for processing.
Let's consider two materials: black plastic and red plastic. They will have the same shader, perhaps one you wrote using THREE.ShaderMaterial, or maybe one from three's library. It's the same shader, but it has one uniform value exposed - color. This allows you to have many instances of a plastic material (green, blue, pink), but it means that each of these requires a separate draw call.
WebGL has to issue specific calls to change that uniform from red to black, and then it's ready to draw stuff using that 'material'.
So now imagine a particle system, displaying a thousand cubes each with a unique color. You have to issue a thousand draw calls to draw them all, if you treat them as separate meshes and change colors via a uniform.
If on the other hand, you assign vertex colors to each cube, you don't rely on the uniform any more, but on an attribute. Now if you merge all the cubes together, you can issue a single draw call, processing all the cubes with the same shader.
You can see why this is more efficient simply by taking a glance at WebGLRenderer from three.js, and all the stuff it has to do in order to translate your 3d calls to WebGL. Better done once than a thousand times.
Back to those few lines: sphereMaterial can take a color argument which, if you look at the source, translates to a uniform vec3 in the shader. However, you can also achieve the same thing by using vertex colors and assigning the color you want beforehand.
sphereMesh wraps that computed geometry into an object that three's WebGLRenderer understands, which in turn sets up WebGL accordingly.
I'm currently trying to implement a silhouette algorithm in my project (using Open GLES, it's for mobile devices, primarily iPhone at the moment). One of the requirements is that a set of 3D lines be drawn. The issue with the default OpenGL lines is that they don't connect at an angle nicely when they are thick (gaps appear). Other subtle artifacts are also evident, which detract from the visual appeal of the lines.
Now, I have looked into using some sort of quad strip as an alternative to this. However, drawing a quad strip in screen space requires some sort of visibility detection - lines obscured in the actual 3D world should not be visible.
There are numerous approaches to this problem - i.e. quantitative invisibility. But such an approach, particularly on a mobile device with limited processing power, is difficult to implement efficiently, considering raycasting needs to be employed. Looking around some more I found this paper, which describes a couple of methods for using z-buffer sampling to achieve such an effect. However, I'm not an expert in this area, and while I understand the theory behind the techniques to an extent, I'm not sure how to go about the practical implementation. I was wondering if someone could guide me here at a more technical level - on the OpenGLES side of things. I'm also open to any suggestions regarding 3D line visibility in general.
The z-buffer technique will be too complex for iOS devices: it needs a heavy pixel shader and (IMHO) it will introduce some visual artifacts.
If your models are not complex you can find the geometric silhouette at runtime, for example by comparing the normals of the polygons that share a common edge: if the z components of the two normals in view space have different signs (one normal faces toward the camera and the other faces away from it), then that edge should be used for the silhouette.
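A rough sketch of that test (illustrative names only; it also assumes the model has no non-uniform scaling, so the upper-left 3x3 of the model-view matrix can be used to transform normals, otherwise use its inverse transpose):

import simd

struct Edge {
    let v0, v1: SIMD3<Float>   // edge endpoints (model space)
    let n0, n1: SIMD3<Float>   // normals of the two faces sharing this edge
}

func silhouetteEdges(_ edges: [Edge], modelView: simd_float4x4) -> [Edge] {
    // Rotation part of the model-view matrix, used to bring normals into view space.
    let m = modelView
    let normalMatrix = simd_float3x3(columns: (
        SIMD3(m.columns.0.x, m.columns.0.y, m.columns.0.z),
        SIMD3(m.columns.1.x, m.columns.1.y, m.columns.1.z),
        SIMD3(m.columns.2.x, m.columns.2.y, m.columns.2.z)))

    return edges.filter { edge in
        let z0 = (normalMatrix * edge.n0).z
        let z1 = (normalMatrix * edge.n1).z
        // One face looks toward the camera, the other away: silhouette edge.
        return z0 * z1 < 0
    }
}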
Another approach is more "FPS friendly": keep an extruded version of your model, render the extruded model first in the silhouette color (without textures and lighting), and then render the normal model over it. You will need more memory for vertices, but no real-time computations.
PS: In all the games I have looked at, the silhouettes were geometric.
I have worked out a solution that works nicely on an iPhone 4S (not tested on any other devices). It builds on the idea of rendering world-space quads, and does the silhouette detection all on the GPU. It works along these lines (pun not intended):
We generate edge information. This consists of a list of edges/"lines" in the mesh, and for each we associate two normals which represent the tris on either side of the edge.
This is processed into a set of quads that are uploaded to the GPU - each quad represents an edge. Each vertex of each quad is accompanied by three attributes (vec3s), namely the edge direction vector and the two neighbor tri normals. All quads are passed w/o "thickness" - i.e. the vertices on either end are in the same position. However, the edge direction vector is opposite for each vertex in the same position. This means they will extrude in opposite directions to form a quad when required.
We determine whether a vertex is part of a visible edge in the vertex shader by performing two dot products between each tri norm and the view vector and checking if they have opposite signs. (see standard silhouette algorithms around the net for details)
For vertices that are part of visible edges, we take the cross product of the edge direction vector with the view vector to get a screen-oriented "extrusion" vector. We add this vector to the vertex, but divided by the w value of the projected vertex in order to create a constant thickness quad.
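Written out in plain Swift for clarity (the real version lives in the vertex shader, and all names here are illustrative), steps 3 and 4 boil down to roughly this:

import simd

func extrudedPosition(vertex: SIMD3<Float>,       // quad vertex position (world space)
                      edgeDir: SIMD3<Float>,       // signed edge-direction attribute
                      n0: SIMD3<Float>,            // neighbor tri normal 1
                      n1: SIMD3<Float>,            // neighbor tri normal 2
                      eye: SIMD3<Float>,           // camera position (world space)
                      viewProjection: simd_float4x4,
                      thickness: Float) -> SIMD4<Float> {
    let view = simd_normalize(vertex - eye)
    let projected = viewProjection * SIMD4(vertex, 1)

    // Step 3: the vertex is on a visible (silhouette) edge iff the two
    // neighboring faces face opposite ways relative to the view vector.
    guard simd_dot(n0, view) * simd_dot(n1, view) < 0 else {
        return projected          // not a silhouette edge: leave the quad collapsed
    }

    // Step 4: extrude sideways in a screen-oriented direction; dividing by the
    // projected w keeps the quad thickness roughly constant on screen.
    let extrude = simd_normalize(simd_cross(edgeDir, view)) * (thickness / projected.w)
    return viewProjection * SIMD4(vertex + extrude, 1)
}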
This does not directly resolve the gaps that can appear between neighbor edges but is far more flexible when it comes to combating this. One solution may involve bridging the vertices between large angled lines with another quad, which I am exploring at the moment.
I am able to draw shapes using the UIBezierPath object. Now I want to identify the different shapes drawn with it, e.g. rectangle, square, triangle, circle, etc. The next thing I want is for the user to be able to select a particular shape and move the whole shape to a different location on the screen. The actual requirement is even more complex, but if I can get this much working then I can work out the rest.
Any suggestions, links, or pointers on how to start with this are welcome. I am thinking of writing a separate view to handle each shape, but I don't understand how to do that.
Thank you all in advance!
I recommend David Gelphman’s Programming with Quartz.
In his chapter “Drawing with Paths” he has a section on “Path Construction Primitives” which provides a crossroads:
If you use CGContextAddLineToPoint your user could make straight lines defined by known Cartesian points. You would use basic math to deduce the geometric shapes defined by those points.
If you use CGContextAddCurveToPoint your user could make curved lines defined by known points, and I’m pretty sure that those lines would run through the points, so you could still use basic math to determine at least an approximation of the types of shapes formed.
But if you use CGContextAddQuadCurveToPoint, the points define a framework outside of the drawn curve. You’d need more advanced math to determine the shapes formed by curves along tangents.
Gelphman also discusses “Path Utility Functions,” like getting a bounding box and checking whether a given point is inside the path.
As for moving the completed paths, I think you would use CGContextTranslateCTM.
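For reference, here is a small sketch of those utilities using UIBezierPath's Swift wrappers around the Quartz calls (shapePath stands in for whatever path the user drew):

import UIKit

// A path standing in for one the user drew.
let shapePath = UIBezierPath(rect: CGRect(x: 20, y: 20, width: 100, height: 60))

// "Path Utility Functions": bounding box and point-inside test.
let box = shapePath.bounds
let wasTapped = shapePath.contains(CGPoint(x: 50, y: 40))

// Moving the finished shape: translate the path itself (the UIBezierPath
// equivalent of translating the CTM before redrawing).
shapePath.apply(CGAffineTransform(translationX: 80, y: 120))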
Can someone point me to a paper/algorithm/resource/whatever that tells me how to implement a texture minification filter (which applies when texels are smaller than pixels) in a raytracer?
Thanks!
Since you are using ray tracing I suspect you are looking for a high quality filtering that changes sampling dynamically based on the amount of "error". Based on this assumption I would say take a look at "ray differentials". There's a nice paper on this here: http://graphics.stanford.edu/papers/trd/ and it takes effects like refraction and reflection into account.
Your answer to yourself sounds like the right approach, but since others may stumble across the page I'll add a resource link as requested. In addition to discussing mipmapping (ripmapping is basically more advanced mipmapping), they discuss the effects of reflection and refraction on derivatives and mip-level selection.
Homan Igehy. "Tracing Ray Differentials." 1999. Proceedings of SIGGRAPH. http://graphics.stanford.edu/papers/trd/
Upon closer reading I see that Rehno Lindeque mentioned this paper. At first I didn't realize that it was the right reference, because he says that the method samples dynamically based on the error of the sampling, which is incorrect. Filtering is done based on the size of the pixel's footprint and uses only one ray, just as you described.
Edit:
Another reference that might be useful ( http://www.cs.unc.edu/~awilson/class/238/#challenges ). Scroll to the section "Derivatives of Texture Coordinates." He suggests backward mapping of texture derivatives from the surface to the screen. I think this would be incorrect for reflected and refracted rays, but is possibly easier to implement and should be okay for primary rays.
I think you mean mipmapping.
Here is an article talking about using them.
But neither says how to choose which mipmap to use; the two nearest levels (the bigger and the smaller mipmap) are often blended.
Here's one more article, about how Google Earth works; it talks about how they mipmap the Earth.
Thank you guys for your answers, but since I didn't find any appropriate technique, I created something myself which turned out to work very well:
I assume my ray to be a cone with a cone radius of half a pixel on the image plane. When the ray hits a surface, I calculate the ellipse which is projected onto the surface (the ellipse from the plane-cone intersection). Then, using the texture coordinate derivatives at the intersection point, I project this ellipse into texture space. Now I know which part of the texture lies under my pixel and can subsample that area.
I also use ripmaps to improve the quality, and I choose the ripmap level based on the size of the ellipse in texture space.
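For anyone landing here later, one simple way to turn that ellipse size into ripmap levels (a sketch only; it assumes the ellipse half-axes have already been expressed in texel units along u and v, and it ignores the fact that the footprint is generally not axis-aligned):

import Foundation

// Pick one reduction level per axis; a ripmap stores independent reductions in
// u and v, so the two levels need not match. For a plain mipmap you would take
// the larger of the two (or blend the two nearest levels).
func ripmapLevels(halfAxisU: Float, halfAxisV: Float) -> (levelU: Int, levelV: Int) {
    let levelU = max(0, Int(floor(log2(max(1, 2 * halfAxisU)))))
    let levelV = max(0, Int(floor(log2(max(1, 2 * halfAxisV)))))
    return (levelU, levelV)
}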