So, I'm attempting to create a simple dynamic endless terrain using simplex noise.
So far I've got the noise working just fine; however, I'm having trouble with discontinuities at the terrain edges. At first I thought this was because I was not calling SetNeighbors on the Terrain objects, but adding this did not seem to yield any improvement.
terrain.GetComponent<Terrain>().SetNeighbors(left, top, right, bottom);
This problem seems to be caused by the slight differences in height between each terrain position, but forcing these to be the same will affect the terrain quality (it will reduce how jagged the terrain can be in certain cases) and generally seems inelegant. I've been going through the Unity docs trying to find a way to address this, but have yet to find anything.
Is there something I'm missing? Or is my only option to fiddle the heights on one of the sides to match the other?
Thanks for reading, appreciated as always.
Terrain image for reference
A couple of things:
First, make sure you're calling SetNeighbors() on ALL the terrain objects, not just one.
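For example, something like this, assuming your tiles live in a 2D array (TerrainStitcher and terrainGrid are hypothetical names; adapt to however you track your tiles):

using UnityEngine;

// Sketch: re-link every tile whenever the grid changes, not just the new one.
public class TerrainStitcher : MonoBehaviour
{
    public Terrain[,] terrainGrid; // [x, z], filled by your streaming code

    public void ConnectAllNeighbors()
    {
        int w = terrainGrid.GetLength(0);
        int h = terrainGrid.GetLength(1);
        for (int x = 0; x < w; x++)
        {
            for (int z = 0; z < h; z++)
            {
                Terrain left   = x > 0     ? terrainGrid[x - 1, z] : null;
                Terrain right  = x < w - 1 ? terrainGrid[x + 1, z] : null;
                Terrain bottom = z > 0     ? terrainGrid[x, z - 1] : null;
                Terrain top    = z < h - 1 ? terrainGrid[x, z + 1] : null;
                terrainGrid[x, z].SetNeighbors(left, top, right, bottom);
            }
        }
    }
}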
Secondly, if the terrains don't match up exactly, it either means that the terrains aren't calculating their data quite correctly, or there's some floating-point error going on. However, I have a suspicion that it's the first one, given that manually changing the points affects the quality. Keep in mind that terrain heightmaps have 2^n + 1 points per side, and also make sure that the point you query your simplex function with is calculated in world space.
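To make the edges line up exactly, the coordinate you feed the noise must be the same world-space position for both tiles that share an edge. A minimal sketch of what I mean, assuming a square terrain and a SimplexNoise.Sample(x, z) function returning values in [-1, 1] (both stand-ins for your own setup):

// Fill a terrain's heightmap by sampling noise in world space, so two
// tiles that share an edge sample identical world positions along it.
// SimplexNoise.Sample is a stand-in for your own noise function.
void FillHeights(Terrain terrain, float noiseScale)
{
    TerrainData data = terrain.terrainData;
    int res = data.heightmapResolution;          // 2^n + 1 points per side
    Vector3 origin = terrain.transform.position; // tile's world-space corner
    Vector3 size = data.size;

    float[,] heights = new float[res, res];
    for (int y = 0; y < res; y++)
    {
        for (int x = 0; x < res; x++)
        {
            // Heightmap indices -> world-space coordinates.
            float wx = origin.x + (float)x / (res - 1) * size.x;
            float wz = origin.z + (float)y / (res - 1) * size.z;
            // Remap noise from [-1, 1] to the [0, 1] range SetHeights expects.
            heights[y, x] = 0.5f + 0.5f * SimplexNoise.Sample(wx * noiseScale, wz * noiseScale);
        }
    }
    data.SetHeights(0, 0, heights);
}

Note that Unity indexes the heights array as [y, x]. If adjacent tiles sit exactly size.x apart, the last column of one tile and the first column of the next evaluate the noise at identical world coordinates, so the edges match exactly.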
If you can't figure it out, post your code and I'll take a look.
Also, your terrain might look better if you used octaved (a.k.a. fractal) noise on top of your simplex noise function, depending on what you're looking for.
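In case it helps, octaved noise is just a weighted sum of the base noise at increasing frequencies; a minimal sketch (again with the hypothetical SimplexNoise.Sample):

// Fractal (octaved) noise: sum several octaves of the base noise,
// raising the frequency and lowering the amplitude each octave.
float FractalNoise(float x, float z, int octaves = 5,
                   float lacunarity = 2f, float gain = 0.5f)
{
    float sum = 0f, amplitude = 1f, frequency = 1f, norm = 0f;
    for (int i = 0; i < octaves; i++)
    {
        sum += amplitude * SimplexNoise.Sample(x * frequency, z * frequency);
        norm += amplitude;
        frequency *= lacunarity; // each octave adds finer detail...
        amplitude *= gain;       // ...with less influence on the result
    }
    return sum / norm; // normalize back into the base noise's range
}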
Cheers!
I'm creating a puzzle game that generates random-sized pieces with 2D meshes. The images contain transparent portions, and sometimes a piece is completely transparent. I need to detect what percentage of a piece is transparent. One way I found to do this is to go pixel by pixel; I posted my solution HERE. However, this process adds a few seconds during loading, which I'd like to avoid, so I'm looking for other ideas.
I've considered using the selection outline of a MeshCollider to somehow get a surface area I can compare to the surface area of the mesh, but everything I find is about rendering outlines with specialized shaders. Does anyone have any ideas on how to solve this?
1) I guess you could add a PolygonCollider2D to your sprite and use its path for the outline and the calculation of the surface area. I'm not sure, however, whether this will be faster.
PolygonCollider2D.GetPath:
A path is a cyclic sequence of line segments between points that define the outline of the Collider
Checking PolygonCollider2D.GetTotalPointCount or path length may be good enough to determine if the sprite is 'empty'.
Sprite.vertices, Sprite.triangles may also be helpful.
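For the surface-area part, the shoelace formula over the collider's path points would do it; a minimal sketch (not tested against your setup):

// Approximate the opaque surface area of a sprite from its
// PolygonCollider2D outline using the shoelace formula. Holes come back
// as paths with opposite winding, so their signed areas subtract
// themselves automatically.
float ColliderArea(PolygonCollider2D collider)
{
    float total = 0f;
    for (int p = 0; p < collider.pathCount; p++)
    {
        Vector2[] path = collider.GetPath(p);
        float signedArea = 0f;
        for (int i = 0; i < path.Length; i++)
        {
            Vector2 a = path[i];
            Vector2 b = path[(i + 1) % path.Length];
            signedArea += a.x * b.y - b.x * a.y;
        }
        total += signedArea * 0.5f;
    }
    return Mathf.Abs(total); // in the sprite's local units
}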
2) You could also improve the performance of your first approach (a sketch follows the list below):
- Instead of calling GetPixel as you do now, use GetPixels or GetPixels32 and loop through the returned array in a single for loop. From the Unity docs: "Using GetPixels can be faster than calling GetPixel repeatedly, especially for large textures. In addition, GetPixels can access individual mipmap levels. For most textures, even faster is to use GetPixels32 which returns low precision color data without costly integer-to-float conversions."
- Check only every 2nd or nth pixel, as that should be good enough for an approximation.
- Limit the number of type casts.
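A minimal sketch combining those three points (the threshold and stride values are just examples):

// Estimate the transparent fraction of a texture with a single
// GetPixels32 call and a sampling stride, instead of per-pixel GetPixel.
// step trades accuracy for speed (step = 1 checks every pixel).
float TransparentFraction(Texture2D texture, int step = 2, byte alphaThreshold = 10)
{
    Color32[] pixels = texture.GetPixels32(); // one call, low-precision data, no float casts
    int transparent = 0, sampled = 0;
    for (int i = 0; i < pixels.Length; i += step)
    {
        sampled++;
        if (pixels[i].a <= alphaThreshold)
            transparent++;
    }
    return (float)transparent / sampled;
}

(The texture must be marked readable for GetPixels32 to work.)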
A recent question here made me think of SceneKit again, and I remembered a problem I never solved.
My app displays antenna designs using SK. Most antennas use metal rods and mesh reflectors so I used SCNCylinder for the rods, SCNPlane for the reflector and SCNFloor for the ground. The whole thing took a couple of hours, and I'm utterly noob at 3D.
But some antennas use wires bent into arcs or helixes, and I punted here and made crappy segmented objects using several cylinders end-to-end. It looks ass-tastic.
Ideally I would like a single object that renders the arc or helix with a cylindrical cross section. Basically SCNTorus, but with a start and end angle. This post talks about using a UIBezierPath in SK, but it uses extrude to produce a ribbon-like shape. Is there a way to do something similar but with a cylinder cross section (like a partial SCNTorus)?
I know I can make a custom shape by creating the vertexes (and normals and such) but I'm hoping I missed a simpler solution.
An arc you can do with SCNShape. Start with the technique from my other answer to get an extruded, ribbon-like arc. You'll want to make sure that the part where your path traces back on itself is offset by a distance the same as your extrusion depth, so you end up with a shape that's square in cross section.
To make it circular in cross section, use the chamferProfile property — give it a path that's a quarter circle, and set the chamfer radius equal to half the extrusion depth, and the four quarter-circle chamfers will meet, forming a circular cross section.
A helix is another story. SCNShape takes a planar path — one that varies in only two dimensions — and extrudes it to make a three-dimensional solid. A helix is a path that varies in three dimensions to start with. SceneKit doesn't have anything that describes a shape in such terms, so there's no super simple answer here.
The shader modifier solution @HalMueller alludes to is interesting, but problematic. It's simple to use a modifier at the geometry entry point to make a simple bend (say, offset every y coordinate by some amount, even by an amount that's a function of y). But that's a one-dimensional transform, so you can't use it to wrap a wire around on itself. (It also changes the cross section.) And on top of that, shader modifiers happen on the GPU at render time, so their effects are an illusion: the "real" geometry in SceneKit's model is still a cylinder, so features like hit testing apply to that and not to the transformed geometry.
The best solution to making something like a helix is probably custom geometry — generating your own vertex data (SCNGeometrySource). The math for finding the set of points on a helix is pretty simple if you follow that shape's definition. To wrap a cross section around it, follow the Frenet formulas to create a local coordinate frame at each point on the helix. Then make an index buffer (SCNGeometryElement) to stitch all those points into a surface with triangles or tristrips. (Okay, that's a lot of hand-waving around a deep topic, but a full tutorial is too big for an SO answer. This should be enough of a breadcrumb to get started, though...)
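To make the hand-waving a bit more concrete, here's the point-generation half only, written as plain vector math (C# with System.Numerics, for consistency with the other snippets in this thread; the math ports directly to Swift). Turning these points into an SCNGeometrySource plus an SCNGeometryElement of triangles or tristrips is the part left out:

using System;
using System.Collections.Generic;
using System.Numerics;

// Sketch: vertex positions for a tube swept along a helix.
// radius = helix radius, pitch = vertical rise per turn, tube = wire radius.
static List<Vector3> HelixTubePoints(
    float radius, float pitch, float tube,
    float turns, int pathSegments, int ringSegments)
{
    var points = new List<Vector3>();
    float c = pitch / (2f * MathF.PI); // vertical rate of the helix
    for (int i = 0; i <= pathSegments; i++)
    {
        float t = turns * 2f * MathF.PI * i / pathSegments;
        var center = new Vector3(radius * MathF.Cos(t), radius * MathF.Sin(t), c * t);

        // Frenet frame at this point: tangent (first derivative),
        // principal normal (unit second derivative), binormal (their cross).
        var tangent  = Vector3.Normalize(new Vector3(-radius * MathF.Sin(t), radius * MathF.Cos(t), c));
        var normal   = new Vector3(-MathF.Cos(t), -MathF.Sin(t), 0f);
        var binormal = Vector3.Cross(tangent, normal);

        // Sweep the circular cross section around the local frame.
        for (int j = 0; j < ringSegments; j++)
        {
            float u = 2f * MathF.PI * j / ringSegments;
            points.Add(center + tube * (MathF.Cos(u) * normal + MathF.Sin(u) * binormal));
        }
    }
    return points;
}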
Here are some starting points that might help.
One approach would be to use more cylinders and make them shorter. That's the same idea behind the various segmentCount properties on the SCNGeometry primitives. Can we see a screenshot of the current linked cylinders version?
If you increase the heightSegmentCount, you could use the approach outlined here: scenekit, how to bend an object.
I just took a look at SCNShape. I was thinking you could use a shader modifier to warp the extruded shape into a circular cross section. But SCNShape doesn't seem to expose a segment count property, which I think you'd need to create enough extrusion segments for a good look. The chamferRadius and chamferProfile properties look interesting. I wonder if you could use those to create an extrusion that looks good.
Given a set of non-rotated AABB bounds, I'm hoping to create a simpler set of bounds from the original set, that allows for a specified amount of inaccuracy.
Some examples:
I'm working with this in Unity with Bounds, but it's just basic AABB comparison stuff, nothing Unity-specific. I figure someone must have worked out a system for this at some point in the past, but I had no luck searching around. Encapsulating bounds are easy but this is harder, since you can't just iterate through each bounds one by one. Sometimes a simpler solution can only be seen by looking at the whole thing.
Fast performance isn't critical but would be nice. Inaccuracy is OK in both directions (i.e. the bounds may cover a little less than the actual size or a little more). If it helps, I can expect all bounds in the original set to be connected somewhere - no free-floating pieces in a separate group.
I don't expect anyone to write up a whole system to solve this, I'm more hoping that it's already been solved or that maybe there's an obvious process to achieve it that I haven't thought of yet.
This sounds like something that could be handled with the Surface Area Heuristic (SAH). SAH is commonly used in ray tracing to build better tree-like structures in which the triangles are stored. There are multiple sources discussing it in more depth; a good one is chapter 7.3 of Wald's thesis.
The basic idea in an SAH build is to start with the whole space and divide it recursively. The division position is decided by sweeping through all reasonable positions and calculating the surface area of both child nodes. The reasonable positions are the positions where any triangle has its upper or lower bound. After sweeping through all the candidates, the division with the smallest total surface area in the children is used.
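In sketch form, a single sweep along one axis might look like this (using Unity's Bounds since that's what you're working with; SurfaceArea is a local helper, not a Unity API, and a full SAH build would also weigh in the primitive counts):

using System.Collections.Generic;
using System.Linq;
using UnityEngine;

static float SurfaceArea(Bounds b)
{
    Vector3 s = b.size;
    return 2f * (s.x * s.y + s.y * s.z + s.z * s.x);
}

// Sweep candidate split positions along x and return the split whose two
// children have the smallest summed surface area (the SAH criterion,
// simplified). Candidates are the min/max bounds of the items themselves.
static float BestSplitX(List<Bounds> items)
{
    var candidates = items.SelectMany(b => new[] { b.min.x, b.max.x }).Distinct();
    float bestCost = float.MaxValue, bestSplit = 0f;
    foreach (float split in candidates)
    {
        var left  = items.Where(b => b.center.x <  split).ToList();
        var right = items.Where(b => b.center.x >= split).ToList();
        if (left.Count == 0 || right.Count == 0) continue;

        Bounds lb = left[0], rb = right[0];
        foreach (var b in left)  lb.Encapsulate(b);
        foreach (var b in right) rb.Encapsulate(b);

        float cost = SurfaceArea(lb) + SurfaceArea(rb);
        if (cost < bestCost) { bestCost = cost; bestSplit = split; }
    }
    return bestSplit;
}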
If SAH is not a good fit for your application, you could use a similar sweep through all candidates, but calculate, for example, the extra space inside the AABBs instead.
I'm currently in the process of coding a procedural terrain generator for a game. For that purpose, I divide my world into chunks of equal size and generate them one by one as the player strolls along. So far, nothing special.
Now, I specifically don't want the world to be persistent, i.e. if a chunk gets unloaded (maybe because the player moved too far away) and later loaded again, it should not be the same as before.
From my understanding, implicit approaches like treating 3D Simplex Noise as a density function input for Marching Cubes don't suit my problem. That is because I would need to reseed the generator to obtain different return values for the same point in space, leading to discontinuities along chunk borders.
I also looked into Midpoint Displacement / Diamond-Square. By seeding each chunk's heightmap with values from the borders of adjacent chunks and randomizing the chunk corners that don't have any other chunks nearby, I was able to generate a tileable terrain that exhibits the desired behavior. Still, the results look rather dull. Specifically, since this method relies on heightmaps, it lacks overhangs and the like. Moreover, even with the corner randomization, terrain features tend to be confined to small areas, i.e. there are no multiple-chunk hills or similar landmarks.
Now I was wondering if there are other approaches to this that I haven't heard of/thought about yet. Any help is highly appreciated! :)
Cheers!
Post process!
After you do the heightmaps, run back through adding features.
This is how Minecraft does it to get the various caverns and cliff overhangs.
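In sketch form, that could look something like this, with the non-persistence coming from a per-session RNG in the feature pass rather than from reseeding the noise (BaseHeightmap and AddFeature are placeholders for your own generation code; on voxel data, the feature pass is where caves and overhangs would be carved):

using System;

class ChunkGenerator
{
    // Fresh seed every run, so a reloaded chunk never comes out the same.
    readonly Random sessionRng = new Random();

    public float[,] Generate(int chunkX, int chunkZ, int size)
    {
        // Base pass: deterministic, seamless world-space noise, so chunk
        // borders always match their neighbors.
        float[,] heights = BaseHeightmap(chunkX, chunkZ, size);

        // Feature pass: non-deterministic, kept away from the borders so
        // it can't introduce seams.
        int featureCount = sessionRng.Next(0, 4);
        for (int i = 0; i < featureCount; i++)
        {
            int fx = sessionRng.Next(size / 4, 3 * size / 4);
            int fz = sessionRng.Next(size / 4, 3 * size / 4);
            AddFeature(heights, fx, fz);
        }
        return heights;
    }

    float[,] BaseHeightmap(int chunkX, int chunkZ, int size)
    {
        return new float[size, size]; // stand-in for your noise sampling
    }

    void AddFeature(float[,] heights, int x, int z)
    {
        // stand-in for carving craters, stacking hills, etc.
    }
}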
Can someone point me to a paper/algorithm/resource/whatever that tells me how to implement a texture minification filter (it applies when texels are smaller than pixels) in a raytracer?
thanks!
Since you are using ray tracing, I suspect you are looking for high-quality filtering that changes sampling dynamically based on the amount of "error". Based on this assumption, I would say take a look at "ray differentials". There's a nice paper on this here: http://graphics.stanford.edu/papers/trd/ and it takes effects like refraction and reflection into account.
Your answer to yourself sounds like the right approach, but since others may stumble across the page I'll add a resource link as requested. In addition to discussing mipmapping (ripmapping is basically more advanced mipmapping), they discuss the effects of reflection and refraction on derivatives and mip-level selection.
Homan Igehy. "Tracing Ray Differentials." 1999. Proceedings of SIGGRAPH. http://graphics.stanford.edu/papers/trd/
Upon closer reading I see that Rehno Lindeque mentioned this paper. At first I didn't realize that it was the right reference, because he says that the method samples dynamically based on the error of the sampling, which is incorrect: filtering is done based on the size of the pixel's footprint and uses only one ray, just as you described.
Edit:
Another reference that might be useful ( http://www.cs.unc.edu/~awilson/class/238/#challenges ). Scroll to the section "Derivatives of Texture Coordinates." He suggests backward mapping of texture derivatives from the surface to the screen. I think this would be incorrect for reflected and refracted rays, but is possibly easier to implement and should be okay for primary rays.
I think you mean mipmapping.
Here is an article talking about using them.
Neither says how to choose which mipmap level to use, though; often the two nearest levels (the bigger and the smaller) are blended.
Here's one more article, about how Google Earth works, and it talks about how they mipmap the earth.
Thank you guys for your answers, but since I didn't find any appropriate technique, I created something myself, which turned out to work very well:
I assume my ray to be a cone with a cone radius of half a pixel on the image plane. When the ray hits a surface, I calculate the ellipse which is projected onto the surface (the ellipse from the plane-cone intersection). Then, using the texture-coordinate derivatives at the intersection point, I project this ellipse into texture space. Now I know which part of the texture lies under my pixel and can subsample this area.
I also use ripmaps to improve the quality, and I choose the ripmap level based on the size of the ellipse in texture space.
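For later readers, the level-selection step of this approach could look roughly like this; it's a sketch of the idea rather than the poster's actual code (extentU/extentV are the texture-space extents of the projected ellipse, measured in texels):

using System;

// A ripmap stores one level per axis pair: level (i, j) is the texture
// downsampled 2^i times along u and 2^j times along v, so the right
// level is just the log2 of the footprint's extent along each axis.
static (int levelU, int levelV) RipmapLevel(double extentU, double extentV)
{
    int levelU = Math.Max(0, (int)Math.Round(Math.Log(Math.Max(extentU, 1.0), 2)));
    int levelV = Math.Max(0, (int)Math.Round(Math.Log(Math.Max(extentV, 1.0), 2)));
    return (levelU, levelV);
}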