Alternatives to diamond-square for incremental procedural terrain generation? - fractals

I'm currently in the process of coding a procedural terrain generator for a game. For that purpose, I divide my world into chunks of equal size and generate them one by one as the player strolls along. So far, nothing special.
Now, I specifically don't want the world to be persistent, i.e. if a chunk gets unloaded (maybe because the player moved too far away) and later loaded again, it should not be the same as before.
From my understanding, implicit approaches like treating 3D Simplex Noise as a density function input for Marching Cubes don't suit my problem. That is because I would need to reseed the generator to obtain different return values for the same point in space, leading to discontinuities along chunk borders.
I also looked into Midpoint Displacement / Diamond-Square. By seeding each chunk's heightmap with values from the borders of adjacent chunks and randomizing the chunk corners that don't have any other chunks nearby, I was able to generate a tileable terrain that exhibits the desired behavior. Still, the results look rather dull. Specifically, since this method relies on heightmaps, it lacks overhangs and the like. Moreover, even with the corner randomization, terrain features tend to be confined to small areas, i.e. there are no multiple-chunk hills or similar landmarks.
Now I was wondering if there are other approaches to this that I haven't heard of/thought about yet. Any help is highly appreciated! :)
Cheers!

Post process!
After you generate the heightmaps, make a second pass that adds features.
This is how Minecraft does it to get the various caverns and cliff overhangs.
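To make the idea concrete, here is a minimal sketch (in C#, not taken from Minecraft's actual generator) of such a second pass: the heightmap pass fills a chunk's voxel grid, and the post-process walks it again and carves out cells where a 3D noise value crosses a threshold. Noise3D is a placeholder for whatever 3D noise function you already have (e.g. simplex), and the scale and threshold are arbitrary example values.
// Second pass over an already-generated chunk: carve caves/overhangs where
// 3D noise exceeds a threshold. Noise3D(x, y, z) is a placeholder, not defined here.
static void CarveFeatures(bool[,,] solid, int originX, int originY, int originZ, double threshold)
{
    for (int x = 0; x < solid.GetLength(0); x++)
    for (int y = 0; y < solid.GetLength(1); y++)
    for (int z = 0; z < solid.GetLength(2); z++)
    {
        if (!solid[x, y, z]) continue;
        // Sample in world coordinates so the carving pattern stays continuous
        // across chunk borders, even though each chunk is processed separately.
        double n = Noise3D((originX + x) * 0.05, (originY + y) * 0.05, (originZ + z) * 0.05);
        if (n > threshold)
            solid[x, y, z] = false; // carved: becomes a cave / undercut
    }
}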

Related

The right way to handle procedural LOD, distant terrain chunks

I would really appreciate your thoughts on compute-generated terrain, LOD, etc. and what the 'right way' to do it is.
Here's my current plan:
I'm procedurally generating a large finite world, where at any point most or all of the map is visible.
Texturing/Colouring is done in vert/frag shader.
I'm about 50% through implementing:
Generate the closest chunks (e.g. a 5x5 grid of heightmap chunks around the player) on the CPU using a noise function (see the sketch after this list).
In the middle distance, use instances of a compute shader to generate the vertices of heightmap chunks and pass the buffer to the vert/frag shader.
In the far distance, generate one huge combined chunk with a much sparser vertex grid and pass it to the vert/frag shader.
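For reference, a rough sketch of the near-chunk CPU path in Unity C# (the chunk size, resolution and noise scale are made-up example values): sampling the noise at world-space vertex positions keeps each chunk deterministic and makes adjacent chunks line up along their shared edges.
using UnityEngine;
// Fill one chunk's heightmap by sampling noise at world-space positions.
static float[,] BuildChunkHeights(int chunkX, int chunkZ, int resolution, float chunkSize)
{
    var heights = new float[resolution + 1, resolution + 1]; // +1 so edges are shared with neighbours
    float step = chunkSize / resolution;
    for (int z = 0; z <= resolution; z++)
    for (int x = 0; x <= resolution; x++)
    {
        // World-space sample position for this vertex.
        float wx = chunkX * chunkSize + x * step;
        float wz = chunkZ * chunkSize + z * step;
        heights[z, x] = Mathf.PerlinNoise(wx * 0.01f, wz * 0.01f);
    }
    return heights;
}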
My questions are:
Is this the (or A) right way to handle LOD/Chunks/Distant terrain?
Should I instead generate everything on the GPU and pass a mesh back to the CPU for collision, rather than using the CPU for the near chunks?
What function should I be using to generate and draw the map? I'm currently using DrawProceduralNow in OnRenderObject().
I'm just starting experimenting with using MaterialPropertyBlocks and DrawProcedural in Update().
One idea is to have semi-autonomous chunks that change their settings depending on player location.
Or relative chunks that are based on the space around the player, so a middle distance chunk will always be a middle LOD (distance between vertices), but the heightmap is updated as the player walks towards it.
I'm trying to avoid spending too much time going down the wrong rabbit hole.
If I can establish the right concepts first, that will save me a lot of time.
Edit: I'm also considering pre-generating heightmap textures to save on procedural calculation time. But that's a moot point, my questions are around what to do after I already have the vertices.

How do I make a Maze Generator on Scratch?

I am currently in high school, in an APCSP (AP Computer Science Principles) class, which in my case means learning Scratch programming. I am confused and have practically no idea what I'm doing. Scratch is very confusing and I feel like it's pointless to learn.
My question is this: can anyone help me figure out how to make a maze generator in Scratch? This is my project and it's giving me trouble.
Thank you.
It's actually possible to build this in Scratch, but it depends on what you are looking for. I assume you want to generate a simple maze like those in old-fashioned 8-bit games such as Boulder Dash.
First decide on the size of your maze: for example 5 x 5 blocks.
If you want to create a maze, imagine drawing it on a grid on paper. Blocks are either "empty" or filled in. Our maze can be represented by numbers. The empty blocks are represented by a 0 and the filled blocks with a 1.
You could visualize that matrix like this if all blocks are empty:
0,0,0,0,0,
0,0,0,0,0,
0,0,0,0,0,
0,0,0,0,0,
0,0,0,0,0
Adding a border wall while keeping the inside empty would look like:
1,1,1,1,1,
1,0,0,0,1,
1,0,0,0,1,
1,0,0,0,1,
1,1,1,1,1
Using a "list" variable to store this information would fit best within the possibilities of MIT Scratch.
In this case, you need to understand that each block in our maze is represented by a position in the matrix above. You could draw numbers on a piece of paper in the shape and size of your grid/matrix as a reference, to help you remember the position of each block.
We also need to look at how our maze relates to the Stage size. The stage of a default Scratch project is 480x360 pixels.
A 5 x 5 maze is divided into blocks 480 / 5 = 96 pixels wide and 360 / 5 = 72 pixels high. In other words, a block needs to be 96x72 pixels for a full-screen maze.
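These are not Scratch blocks, but the same arithmetic you would build with Scratch operator blocks, written out in C# just to make it concrete: converting a (row, column) block position into a 1-based list index (Scratch lists start at 1) and into stage coordinates for a clone, assuming the 5 x 5 grid of 96 x 72 pixel blocks from above.
const int Columns = 5, Rows = 5;
const int BlockWidth = 480 / Columns;   // 96
const int BlockHeight = 360 / Rows;     // 72
static int ListIndex(int row, int col)          // row, col counted from 1
{
    return (row - 1) * Columns + col;           // 1..25 for a 5 x 5 maze
}
static (int x, int y) StagePosition(int row, int col)
{
    // Scratch's stage runs from x = -240..240 and y = -180..180 with (0,0) in the centre,
    // so place each block's centre inside its grid cell.
    int x = -240 + (col - 1) * BlockWidth + BlockWidth / 2;
    int y = 180 - (row - 1) * BlockHeight - BlockHeight / 2;
    return (x, y);
}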
The next step is creating a sprite that visualizes the blocks of the maze. I would keep the first "costume" of the block sprite empty, and create a fully filled costume to represent the walls of the maze.
After that, we need to programmatically create our maze. I made an example you can explore that randomly draws the blocks of a maze:
https://scratch.mit.edu/projects/278731659/
(You can change the rows & columns values to see it scale up, but remember that the limit on the number of clones the block sprite can have is 300.)
This is just to get you started and by no means a complete solution. I just hope this helps you think in the right direction.
You can make this more advanced by adding a function that explores and corrects the randomly drawn grid to guarantee a walkable path from position x to position y. A rule you can program is, for example: every empty position in the grid should have at least two other empty positions in the spaces above, below, left and right of it.
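Here is a small sketch of that rule, again written in C# only to make the logic explicit (in Scratch you would do the same checks with list lookups): an empty cell passes the rule if at least two of its orthogonal neighbours are also empty; otherwise it is a dead pocket and you could open one of its walls.
// grid uses 0 = empty, 1 = wall.
static bool HasTwoOpenNeighbours(int[,] grid, int row, int col)
{
    int open = 0;
    int rows = grid.GetLength(0), cols = grid.GetLength(1);
    // Check up, down, left and right, staying inside the grid.
    if (row > 0        && grid[row - 1, col] == 0) open++;
    if (row < rows - 1 && grid[row + 1, col] == 0) open++;
    if (col > 0        && grid[row, col - 1] == 0) open++;
    if (col < cols - 1 && grid[row, col + 1] == 0) open++;
    return open >= 2;
}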
There are many different ways to do this, whether with sprites and stamp or with 2D lists and pen. Either way, the main component is the algorithm. This Wikipedia page gives details on how maze generation works and a few different algorithms. There is also a video series by The Coding Train here where he creates a maze generator with the 2D list method from above (this method is a bit harder in Scratch, however). Either way, the best thing to do is to look at examples others have made, figure out how they work, and try to recreate them or make them better. Here's a good place to get started.
Scratch IS truly pointless! A simple maze generator would have you use the pen to draw predefined shapes (such as a long hallway or intersection). You should also make (invisible) squares to separate everything and have the program draw in the squares.
I will put a link later that leads to a sample project that has the code.
Check out this video by griffpatch
https://www.youtube.com/watch?v=22Dpi5e9uz8
This was one of my projects, and the instructor provided this video for everyone to follow and expand from.

Unity Terrain Stitching Gaps

So, I'm attempting to create a simple dynamic endless terrain using simplex noise.
So far I've got the noise working just fine - however I am having issues with the terrain having discontinuities at the edges. At first I thought this was due to the fact that I was not calling SetNeighbors on the Terrain objects, but adding this did not seem to yield any improvement.
terrain.GetComponent<Terrain>().SetNeighbors(left, top, right, bottom);
This problem seems to be caused by slight differences in height between matching edge positions on adjacent terrains - but forcing these to be the same affects the terrain quality (it reduces how jagged the terrain can be in certain cases) and generally seems inelegant. I've been going through the Unity docs trying to find how to address this, but have yet to find anything.
Is there something I'm missing? Or is my only option to fiddle the heights on one of the sides to match the other?
Thanks for reading, appreciated as always.
Terrain image for reference
A couple of things:
First, make sure you're setting SetNeighbors() on ALL the terrain objects, not just one.
Secondly, if the terrains don't match up exactly, it either means that the terrains aren't calculating their data quite correctly, or there's some floating-point error going on. However, I suspect it's the first one, given that manually changing the points affects the quality. Keep in mind that Unity terrains have 2^n + 1 heightmap points per side, and also make sure that the point you query your simplex function with is calculated in world space.
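Here is a minimal sketch of that world-space sampling idea, using Mathf.PerlinNoise as a stand-in for your simplex function (the frequency is an arbitrary example value). Because each tile has heightmapResolution (2^n + 1) samples per side, the last row/column of one tile sits at the same world position as the first row/column of its neighbour, so sampling at world positions makes the shared edge values identical.
using UnityEngine;
static void FillTile(Terrain terrain, Vector2 tileWorldOrigin)
{
    TerrainData data = terrain.terrainData;
    int res = data.heightmapResolution;        // e.g. 513 samples per side
    float step = data.size.x / (res - 1);      // world metres between samples
    var heights = new float[res, res];         // indexed [z, x], values 0..1
    for (int z = 0; z < res; z++)
    for (int x = 0; x < res; x++)
    {
        float wx = tileWorldOrigin.x + x * step;
        float wz = tileWorldOrigin.y + z * step;
        heights[z, x] = Mathf.PerlinNoise(wx * 0.002f, wz * 0.002f);
    }
    data.SetHeights(0, 0, heights);
}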
If you can't figure it out, post your code and I'll take a look.
Also, your terrain might look better if you used octaved (a.k.a. fractal) noise on top of your simplex noise function, depending on what you're looking for.
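For reference, octaved/fractal noise is just the base noise summed at several frequencies; the octave count, gain and lacunarity below are typical example values, not requirements.
// Sum several octaves of the base noise, doubling frequency and halving amplitude each time.
static float FractalNoise(float x, float y, int octaves = 5)
{
    float sum = 0f, amplitude = 1f, frequency = 1f, total = 0f;
    for (int i = 0; i < octaves; i++)
    {
        sum += amplitude * Mathf.PerlinNoise(x * frequency, y * frequency); // UnityEngine.Mathf
        total += amplitude;
        amplitude *= 0.5f;   // gain
        frequency *= 2f;     // lacunarity
    }
    return sum / total;      // back into roughly the 0..1 range
}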
Cheers!

fractal microscope simulator

I've done work on software used for controlling imaging hardware, such as microscopes, that are sometimes hard to get time on. This means it is difficult to test out new/different algorithms which would require access to the instrument. I'd like to create a synthetic instrument that could be used for some of these testing purposes, and I was thinking of using some kind of fractal image generation to create the synthetic images. The key would be to be able to generate features at many different 'magnifications' and locations in some sort of deterministic manner. This is because some of the algorithms being tested may need to pan/zoom and relocate previously 'imaged' areas. Onto these base images I can then apply whatever instrument 'defects' are appropriate (focus, noise, saturation, etc.).
I'm at a bit of a loss on how to select/implement a good fractal algorithm for the base image. Any help would be appreciated. Preferably it would have the following qualities:
Be fast at rendering new image areas.
Fairly wide 'feature' coverage at as many locations and scales as possible.
Be deterministic (but initialized from random starting parameters).
Ability to tune to make images look more like 'real' images.
Item 2 is important; for example a Mandelbrot set, with its large smooth/empty regions, might not be good, since the software controlling the synthetic scope might fall into one of these areas.
So far I've thought of using something like a Mandelbrot set, but randomly shifting/rotating/scaling and merging two or more fractal sets to get more complete 'feature' coverage.
I've also seen images of the fractal flame algorithms and they seem to generate images that might be useful (and nice to look at).
Finally, I've thought of using some sort of paused particle simulation run to generate images that are more cell-like (my current imaging target), but I'm not sure if this approach can be made to work with the other requirements.
Edit:
#Jeffrey - So it sounds like some kind of terrain generation might be the way to go, as long as I have complete control over the PRNG. Perhaps I can use some stored initial seed + x position + y position to generate my random numbers? But then I am unsure how to consistently generate the terrains across scales, except, as you mentioned, to create the base terrain at the coarsest scale, and at certain pre-determined 'magnifications' add new deterministic pseudo-random variations to this base. I'd also have to be careful about when to generate the next level of terrain, since if I'm too aggressive I'd have to generate and integrate the results appropriately for display at the coarser level... This is why I initially was leaning toward a more 'traditional' fractal, since this integration from finer scales would be handled more implicitly (I think).
The idea behind a fractal terrain creation algorithm is to build the image at each scale separately. For a landscape it's easy: just make a small array of height values, and set them randomly. Then scale it up to a larger array, averaging the values so that the contour is smooth, and then add small random amounts to those values. Then scale it up, etc. The original small bumps have become mountains, and they are filled with complex terrain.
There are two particular difficulties with the problem posed here, though. First, you don't want to store any of these values, since it would be potentially huge. Secondly, the features at each scale are of a different kind than the features at other scales.
These problems are not insurmountable.
Basically, you would divide the image up into a grid, and use deterministic pseudorandom numbers to establish the key features of each square in the grid. For example, each square could have a certain density of cell types.
At the next level of magnification, subdivide each square into another grid, and apply a gradient of values across that grid based on the values of the containing square and its surrounding squares. Then apply pseudorandom variations to that, seeded with the containing square's grid coordinates. For the random seed, always use the coordinates of the square immediately containing the subdivision under consideration, regardless of where the image is cropped, in order to ensure that it is recreated correctly across multiple runs.
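One simple way to get those coordinate-seeded deterministic pseudorandom numbers (picking up the "seed + x position + y position" idea from the question's edit) is to hash the square's coordinates together with a global seed and map the result to [0, 1). This is only a sketch in C#; the mixing constants are an ad-hoc integer hash, not a specific published one. The same (seed, x, y) always reproduces the same value, so re-imaging a location gives back the same features.
static float Random01(int seed, int x, int y)
{
    unchecked
    {
        uint h = (uint)seed;
        h ^= (uint)x * 0x85EBCA6Bu;   // mix in x
        h ^= (uint)y * 0xC2B2AE35u;   // mix in y
        h ^= h >> 13; h *= 0x27D4EB2Fu; h ^= h >> 15;
        return (h & 0xFFFFFF) / (float)0x1000000;  // top 24 bits -> [0, 1)
    }
}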
At some level of magnification the random values go from being densities of particle types to particle locations. Then for each particle, there are particle features. Then features on those features.
Although arbitrary left/right and up/down scrolling will be desired, the image at all levels of magnification above the current scene will have to be calculated each time the frame is shifted, to ensure that all necessary features are included. This way the image can be scrolled from one cell to another without loss of consistency. Particle simulations can be used to ensure that cells or cell features don't overlap. This could be done in a repeatable, deterministic manner.
And don't forget to apply a smoothing gradient based on averages of surrounding squares at higher levels before adding in the random variations. Otherwise, the abrupt changes will make the squares themselves appear in the images!
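Putting those pieces together, a sketch of the scheme might look like this: the value of a square at some zoom level is a smoothed combination of its containing square (and that square's neighbours) at the coarser level, plus a deterministic variation keyed on the square's own coordinates and level. Random01 is the coordinate hash sketched above; the smoothing weights and amplitudes are arbitrary example values, and a real implementation would cache one coarse level at a time rather than recursing for every sample.
// Nothing is stored; everything is recomputed on demand, so revisiting a location reproduces it.
static float DensityAt(int level, int x, int y, int seed)
{
    if (level == 0)
        return Random01(seed, x, y);              // coarsest grid: pure hash
    // Containing square at the coarser level (each level subdivides 2x2).
    int px = x >> 1, py = y >> 1;
    // Smooth using the parent and its neighbours so square borders don't show.
    float parent = (DensityAt(level - 1, px,     py,     seed) * 4f +
                    DensityAt(level - 1, px + 1, py,     seed) +
                    DensityAt(level - 1, px - 1, py,     seed) +
                    DensityAt(level - 1, px,     py + 1, seed) +
                    DensityAt(level - 1, px,     py - 1, seed)) / 8f;
    // Per-level variation, shrinking with depth, keyed on this square's coordinates.
    float amplitude = 0.5f / (1 << level);
    return parent + (Random01(seed + level, x, y) - 0.5f) * amplitude;
}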
This answer is somewhat rambling and probably confusing, but that is best I can explain it right now. I hope it helps!

Jelly physics 3d

I want to ask about jelly physics ( http://www.youtube.com/watch?v=I74rJFB_W1k ). Where can I find a good place to start making things like that? I want to make a simulation of car crashes using this jelly physics, but I can't find much about it. I don't want to use an existing physics engine, I want to write my own :)
Something like what you see in the video you linked to could be accomplished with a mass-spring system. However, as you vary the number of masses and springs, keeping your spring constants the same, you will get wildly varying results. In short, mass-spring systems are not good approximations of a continuum of matter.
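For what it's worth, the core of such a mass-spring system is small. A minimal sketch (using Unity's Vector3 purely for brevity; the constants and mesh layout are up to you): each spring connects two point masses and pushes them back toward its rest length (Hooke's law), with a little damping along the spring so the jelly settles.
using UnityEngine;
struct Spring { public int A, B; public float RestLength, Stiffness, Damping; }
static void AccumulateSpringForces(Spring[] springs, Vector3[] pos, Vector3[] vel, Vector3[] force)
{
    foreach (var s in springs)
    {
        Vector3 d = pos[s.B] - pos[s.A];
        float len = d.magnitude;
        if (len < 1e-6f) continue;                                 // avoid dividing by zero
        Vector3 dir = d / len;
        float stretch = len - s.RestLength;                        // Hooke term
        float relVel = Vector3.Dot(vel[s.B] - vel[s.A], dir);      // damping along the spring
        Vector3 f = (s.Stiffness * stretch + s.Damping * relVel) * dir;
        force[s.A] += f;       // pulls A toward B when stretched
        force[s.B] -= f;
    }
}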
Typically, these sorts of animations are created using what is called the Finite Element Method (FEM). The FEM does converge to a continuum, which is nice. And although it does require a bit more know-how than a mass-spring system, it really isn't too bad. The basic idea, derived from the study of continuum mechanics, can be put this way:
Break the volume of your object up into many small pieces (elements), usually tetrahedra. Let's call the entire collection of these elements the mesh. You'll actually want to make two copies of this mesh. Label one the "rest" mesh, and the other the "world" mesh. I'll tell you why next.
For each tetrahedron in your world mesh, measure how deformed it is relative to its corresponding rest tetrahedron. This measure of deformation is called "strain". It is typically computed by first measuring what is known as the deformation gradient (often denoted F). There are several good papers that describe how to do this. Once you have F, one very typical way to define the strain e is:
e = 1/2 (F^T F - I). This is known as Green's strain. It is invariant to rotations, which makes it very convenient.
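As a tiny concrete illustration of that step (plain arrays, no particular math library assumed), Green's strain can be computed from a 3x3 deformation gradient F like this:
static float[,] GreenStrain(float[,] F)
{
    var E = new float[3, 3];
    for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
    {
        float c = 0f;
        for (int k = 0; k < 3; k++)
            c += F[k, i] * F[k, j];                 // (F^T F)[i, j]
        E[i, j] = 0.5f * (c - (i == j ? 1f : 0f));  // subtract identity, halve
    }
    return E;
}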
Using the properties of the material you are trying to simulate (gelatin, rubber, steel, etc.), and using the strain you measured in the step above, derive the "stress" of each tetrahedron.
For each tetrahedron, visit each node (vertex, corner, point (these all mean the same thing)) and average the area-weighted normal vectors (in the rest shape) of the three triangular faces that share that node. Multiply the tetrahedron's stress by that averaged vector, and there's the elastic force acting on that node due to the stress of that tetrahedron. Of course, each node could potentially belong to multiple tetrahedra, so you'll want to be able to sum up these forces.
Integrate! There are easy ways to do this, and hard ways. Either way, you'll want to loop over every node in your world mesh and divide its forces by its mass to determine its acceleration. The easy way to proceed from here is to:
Multiply its acceleration by some small time value dt. This gives you a change in velocity, dv.
Add dv to the node's current velocity to get a new total velocity.
Multiply that velocity by dt to get a change in position, dx.
Add dx to the node's current position to get a new position.
This approach is known as explicit forward Euler integration. You will have to use very small values of dt to get it to work without blowing up, but it is so easy to implement that it works well as a starting point (a short sketch of this update follows below).
Repeat steps 2 through 5 for as long as you want.
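A minimal sketch of that explicit forward Euler update, per node (again using Unity's Vector3 just for brevity; the force array is whatever you accumulated in the stress/force steps, plus gravity and so on):
using UnityEngine;
static void IntegrateExplicitEuler(Vector3[] pos, Vector3[] vel, Vector3[] force, float[] mass, float dt)
{
    for (int i = 0; i < pos.Length; i++)
    {
        Vector3 accel = force[i] / mass[i];   // a = F / m
        vel[i] += accel * dt;                 // dv = a * dt
        pos[i] += vel[i] * dt;                // dx = v * dt
        force[i] = Vector3.zero;              // clear accumulated forces for the next step
    }
}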
I've left out a lot of details and fancy extras, but hopefully you can infer a lot of what I've left out. Here is a link to some instructions I used the first time I did this. The webpage contains some useful pseudocode, as well as links to some relevant material.
http://sealab.cs.utah.edu/Courses/CS6967-F08/Project-2/
The following link is also very useful:
http://sealab.cs.utah.edu/Courses/CS6967-F08/FE-notes.pdf
This is a really fun topic, and I wish you the best of luck! If you get stuck, just drop me a comment.
That rolling jelly cube video was made with Blender, which uses the Bullet physics engine for soft body simulation. The Bullet documentation in general is very sparse, and for soft body dynamics it's almost nonexistent. Your best bet would be to read the source code.
Then write your own version ;)
Here is a page with some pretty good tutorials on it. The one you are looking for is probably in the (inverse) Kinematics and Mass & Spring Models sections.
Hint: A jelly can be seen as a 3 dimensional cloth ;-)
Also, try having a look at the search results for spring pressure soft body model - they might get you going in the right direction :-)
See Maciej Matyka's page on the topic of soft bodies.
Unfortunately they're 2D only, but JellyPhysics and JellyCar might be something to start with.