Traditional physics simulation in games and graphics was essentially discrete.
But modern engines such as Box2D or Bullet implement continuous physics simulation. I know the basic principles of discrete simulation, but I have no idea about continuous simulation. It seems like magic to me, and using magic is hard and dangerous, so I want to turn the magic into tools by understanding it.
So I want to know:
(1) What are the basic ideas and implementation principles of these continuous physics simulations? (2) Could the idea be generalized to other kinds of discrete simulation? Please help me understand this!
I know only what I've read in this document, which certainly has better information and better references than would be worth simply repeating here.
Nonetheless, it sounds like the collision detection is what's continuous. Consider a bullet (coincidence?). If you simulate it with ∆t = 1/30 s, there's a pretty high probability that it'll be 5m in front of you at one timestep and 5m behind you at the next. From what I understand, a continuous physics engine would treat the bullet as a ray which intersects me precisely at the time that I die. It sounds like this method solves directly for when and where collisions will occur. I suspect the algebra for rotating and translating bodies gets complex, but if you really want to explore that, there seem to be some PhD theses referenced.
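To make that concrete, here is a minimal sketch (my own, not taken from any engine) of the simplest continuous test: a point-like bullet against a stationary sphere, solving directly for the time of impact inside the timestep instead of sampling positions:

```python
import math

def time_of_impact(p0, v, center, radius, dt):
    """Earliest time t in [0, dt] at which a point moving as
    p(t) = p0 + v*t enters the sphere (center, radius), or None.
    Solves |p(t) - center|^2 = radius^2, a quadratic in t."""
    rel = [p - c for p, c in zip(p0, center)]     # bullet relative to sphere
    a = sum(vi * vi for vi in v)
    b = 2.0 * sum(vi * ri for vi, ri in zip(v, rel))
    c = sum(ri * ri for ri in rel) - radius * radius
    if a == 0.0:                                  # bullet not moving
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                                # the ray misses the sphere
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)        # earlier root = entry time
    return t if 0.0 <= t <= dt else None

# A bullet that would tunnel straight through a head-sized sphere in a
# single 1/30 s step is still caught at its exact impact time.
t = time_of_impact(p0=(-5.0, 0.0, 0.0), v=(300.0, 0.0, 0.0),
                   center=(0.0, 0.0, 0.0), radius=0.1, dt=1.0 / 30.0)
print(t)  # ~0.0163 s, well inside the timestep
```

Real engines do the same kind of solve for moving, rotating convex shapes, which is where the harder algebra (and those theses) come in.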
I hope that's not too obvious and condescending, but the document looks to have the relevant references. Good luck!
Is the 3D physics of Unity deterministic across different platforms, like Android and iOS?
I saw that the 2D physics is not (across different platforms):
https://support.unity3d.com/hc/en-us/articles/360015178512-Determinism-with-2D-Physics
"However, for your application, you might want strict determinism. As
such, you would need to know whether Box2D can produce identical
results on different binaries and different platforms. Generally,
Box2D cannot do this. The reason is that different compilers and
different processors implement floating point math differently, which
affects the results of the simulation."
But I saw that the 3D physics may be:
https://docs.unity3d.com/ScriptReference/Physics.Simulate.html
"To achieve deterministic physics results, you should pass a fixed
step value to Physics.Simulate every time you call it. Usually, step
should be a small positive number. Using step values greater than
0.03 is likely to produce inaccurate results."
https://blogs.unity3d.com/pt/2018/11/12/physics-changes-in-unity-2018-3-beta/
The article above says:
"Enhanced determinism: PhysX guarantees the same simulation result when all the inputs are exactly the same."
So in theory it is possible, but in practice (dealing with physics across different platforms, like Android, iOS and others) it is very complex.
I'd like to hear from someone who has tried to implement this in 2019: is this possible nowadays?
And if it is deterministic, what should I write in code to achieve it?
PhysX, used in Unity3D, is not deterministic across platforms. However, there is a new physics package in Unity 2018.3 that, in theory, does what you need:
https://docs.unity3d.com/Packages/com.unity.physics#0.0/manual/index.html
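Whichever engine you end up on, the "same inputs" requirement starts with stepping by the same fixed value every frame, never the variable frame time. Here is a minimal sketch of that accumulator pattern (Python for illustration; world.simulate is a hypothetical stand-in for a call like Physics.Simulate):

```python
FIXED_DT = 0.02  # always step by this exact value, never the frame time

def run_frame(frame_time, accumulator, world):
    """Advance the simulation by zero or more fixed-size steps.
    Given an identical starting state and identical inputs, the sequence
    of simulate() calls -- and hence the result -- repeats exactly
    (on the same binary and processor, per the Box2D caveat above)."""
    accumulator += frame_time
    while accumulator >= FIXED_DT:
        world.simulate(FIXED_DT)   # stand-in for Physics.Simulate(FIXED_DT)
        accumulator -= FIXED_DT
    return accumulator             # leftover time carried to the next frame
```

Note that this only guarantees the same sequence of calls; it does not by itself fix the cross-compiler floating point differences quoted above.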
I'm creating an evolution/artificial-life simulation game in 2D (purely for fun). It combines neural networks (for behaviour control) and a genetic algorithm (for breeding and mutations).
As input I give them the X,Y position of the nearest food (normalized) and the X,Y components of their "look at" vector.
Currently they fly around, and when they collide with food (let's call it "eating apples") their fitness index is increased by one and the apple's position is randomized; after 2000 turns the GA interrupts and does its magic.
After about 100 generations they learn that eating apples is good and try to fly to the nearest ones.
But my question, as a neural network newbie, is: if I created a room where apples spawn far more frequently than on the rest of the map, would they learn and understand that? Would they fly to that room more often? And is it possible to tell how many generations it would take them to learn?
What they can learn and how fast depends a lot on the information you give them access to. For instance, if they have no way of knowing that they are in the room where food generates more frequently, then there is no way for them to evolve to go there more frequently.
It's not entirely clear from your question what the "look at" vector is. If it, for instance, shows them what's directly in front of them, then it might be enough information for them to figure out that they're in the room of plenty, particularly if that room "looks" distinctive somehow. A more useful input to give them might be their current X and Y coordinates. If you did that, then I would definitely expect them to evolve to be in the good room more frequently (in proportion to how good it is, of course), because it would be possible for them to take action to go to and stay in that room.
As for how many generations it will take, that is incredibly hard to predict (especially without knowing more about your setup). If it takes them 100 generations to learn to eat food, then I would expect it to be on the order of hundreds. But the best way to find out is just to try it.
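To make the suggestion concrete, here is what adding the agents' own normalized position to the network input might look like (all names here are hypothetical, not from your project):

```python
def build_inputs(agent, nearest_food, world_w, world_h):
    """Input vector for the agent's network. The last two entries --
    the agent's own normalized position -- are what would let it
    learn that one region of the map is unusually food-rich."""
    return [
        nearest_food.x / world_w, nearest_food.y / world_h,  # nearest apple
        agent.look_at.x,          agent.look_at.y,           # facing direction
        agent.x / world_w,        agent.y / world_h,         # own position (new)
    ]
```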
If it's all about location, they may keep a state of the map in their mind, and simple statistics will let them learn where the food is likely to be located. Neural nets are overkill there.
If locations have other features (for example color, smell, height, etc.), then mapping those features to the label (food exists or not) is a good fit for neural nets, especially if some of the features are randomly unavailable or unreliable at any given moment.
If they need many decisions to reach the goal, you will need reinforcement learning. For example, they may move in a direction that is good for a while, but which takes them away from resources they will need later.
I believe that a recurrent neural network could learn to expect apples to spawn in a certain region.
OpenAL makes use of HRTF algorithms to fake surround sound with stereo headphones. However, there is an important dependency between the HRTF and the shape of the user's head and ears.
Simplified, this means: if your head or ears differ too much from the standard HRTF function they have implemented, the surround sound effect fades towards boring stereo.
I haven't yet found a way to adjust the various factors contributing to the HRTF algorithm, such as head diameter, pinna / external ear size, ear-to-ear distance, nose length and other important properties influencing the HRTF.
Is there any known way of setting these parameters for best surround sound experience?
I don't believe you can alter the HRTF in OpenAL. You certainly couldn't do it by putting in parametric values such as nose or pinna size. The only way to find out your HRTF is to put some very tiny, very accurate microphones in your ears, go into an anechoic chamber and take frequency response measurements at every angle around your head. Obviously this is time consuming, expensive and impractical. It would be fantastic to be able to work out your HRTF from measuring your head, but unfortunately acoustics isn't that deterministic and your ear is very sensitive to inaccuracies as you pointed out. I think the OpenAL HRTF is based on some KEMAR dummy head measurements (these perhaps?).
So, I think the short answer is that you can't alter the HRTF for OpenAL. Because HRTF is such a complex function that your ear is so sensitive to, there's no accurate way to approximate it with parametric values.
You might be able to make a "configuration game" out of optimizing the HRTF. I've been looking for an answer to the question of whether any of the virtual surround headsets or sound cards allow you to adjust them to fit your personal HRTF.
Idea: you vary the different HRTF variables and play a sound. The user has to close his eyes and move the mouse in the direction he thinks the sound came from. You measure how close he was.
You could use something like a thin plate spline or statistical curve fitting to plot the accuracy results and sample different regions of the multidimensional HRTF space to optimize the solution. This would be a kind of "brute force" method to find a solution that is not necessarily accurate, but as good as the user has the patience to make it.
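A rough sketch of that loop (Python; play_sound_with_hrtf and read_user_pointing_direction are hypothetical placeholders, since no public API exposes this), using a crude hill climb where the spline fit would go:

```python
import random

def localization_error(params, trials=10):
    """Play `trials` sounds rendered with this HRTF parameter set and
    return the mean angular error between the true direction and the
    direction the user points at."""
    errors = []
    for _ in range(trials):
        true_dir = random.uniform(0.0, 360.0)
        play_sound_with_hrtf(params, true_dir)   # hypothetical renderer
        guess = read_user_pointing_direction()   # hypothetical mouse input
        diff = abs(true_dir - guess) % 360.0
        errors.append(min(diff, 360.0 - diff))   # wrap-around angular error
    return sum(errors) / len(errors)

def tune_hrtf(initial, steps=50, scale=0.1):
    """Random-perturbation hill climb over the HRTF parameter vector;
    only keeps a candidate if the user localizes sounds better with it."""
    best, best_err = initial, localization_error(initial)
    for _ in range(steps):
        candidate = [p + random.gauss(0.0, scale) for p in best]
        err = localization_error(candidate)
        if err < best_err:
            best, best_err = candidate, err
    return best
```

In practice you would fit a smoother model over all sampled points rather than greedily hill-climbing, since each sample costs the user's patience.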
According to a readme in the OpenAL Soft source code, it uses a 32-sample convolution filter, and you can create the filter using custom HRTF samples.
It looks like it is now possible. I stumbled upon this comment which describes how to use hrtf_tables for approximations of your own ears. Google is showing me results for something called hrtf-paths as well but I'm not sure what that is.
I need to program an algorithm to navigate a robot through a "maze" (a rectangular grid with a starting point, a goal, empty spaces and uncrossable spaces or "walls"). It can move in any cardinal direction (N, NW, W, SW, S, SE, E, NE) with constant cost per move.
The problem is that the robot doesn't "know" the layout of the map. It can only view its 8 surrounding spaces and store them (it memorizes the surrounding tiles of every space it visits). The only other input is the cardinal direction in which the goal lies on every move.
Is there any researched algorithm that I could implement to solve this problem? Typical ones like Dijkstra's or A* aren't trivially adapted to the task, as I can't revisit previous nodes in the graph without cost (retracing the robot's steps to switch to a better path costs those moves again), and I can't think of a way to make a reasonable heuristic for A*.
I probably could come up with something reasonable, but I just wanted to know if this was an already solved problem, and I need not reinvent the wheel :P
Thanks for any tips!
The problem isn't solved, but like with many planning problems, there is a large amount of research already available.
Most of the work in this area is based on the original work of R. E. Korf in the paper "Real-time heuristic search". That paper seems to be paywalled, but the preliminary results from the paper, along with a discussion of the Real-Time A* algorithm, are still available.
The best recent publications on discrete planning with hidden state (path-finding with partial knowledge of the graph) are by Sven Koenig. This includes the significant work on the Learning Real-Time A* algorithm.
Koenig's work also includes demonstrations of a range of algorithms on theoretical experiments that are far more challenging than anything likely to occur in a simulation. See in particular "Easy and Hard Testbeds for Real-Time Search Algorithms" by Koenig and Simmons.
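To give a flavour of the approach, here is a minimal sketch of a single LRTA* move (my own condensation, not Korf's pseudocode). Since the robot only knows the goal's direction, the base heuristic has to be derived from that signal, e.g. counting steps along the reported bearing:

```python
def lrta_star_step(pos, h, visible_neighbors, cost, h0):
    """One move of Learning Real-Time A* (Korf 1990).

    pos:  the robot's current cell
    h:    dict of learned heuristic values, persisted across moves
    visible_neighbors(pos): the free cells among the 8 adjacent ones
    cost(a, b): move cost (constant in this problem)
    h0(s): base estimate of remaining cost, built from the reported
           goal direction, since the goal's coordinates are unknown
    """
    def est(s):
        return h.get(s, h0(s))
    # Pick the visible neighbor that looks cheapest overall.
    best = min(visible_neighbors(pos), key=lambda n: cost(pos, n) + est(n))
    # Learn: raise the estimate for the cell we are leaving, so the robot
    # won't keep bouncing back into a dead end at the same price.
    h[pos] = max(est(pos), cost(pos, best) + est(best))
    return best  # the robot physically moves here and pays the cost
```

The learned h table persists across moves, which is exactly what stops the agent from oscillating forever and is why retraced steps, while still paid for, become progressively less attractive.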
I find genetic algorithm simulations like this to be incredibly entrancing and I think it'd be fun to make my own. But the problem with most simulations like this is that they're usually just hill climbing to a predictable ideal result that could have been crafted with human guidance pretty easily. An interesting simulation would have countless different solutions that would be significantly different from each other and surprising to the human observing them.
So how would I go about trying to create something like that? Is it even reasonable to expect to achieve what I'm describing? Are there any "standard" simulations (in the sense that the game of life is sort of standardized) that I could draw inspiration from?
Depends on what you mean by interesting; that's a pretty subjective term. I once programmed a graph analyzer for fun. The program would first let you plot any f(x) of your choice and set the bounds. The second step was creating a tree holding the most common binary operators (+ - * /) to represent a randomly generated function of x. The program would create a pool of such random functions, test how well they fit the original curve in question, then crossbreed and mutate some of the functions in the pool.
The results were quite cool. A totally weird function would often be a pretty good approximation to the query function. Perhaps not the most useful program, but fun nonetheless.
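For anyone wanting to try the same experiment, here is a minimal sketch of the setup (my reconstruction, not the original program): random operator trees scored by how well they fit a target curve; selection, subtree crossover, and mutation would then act on the pool:

```python
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a / b if abs(b) > 1e-9 else 1.0}  # guard div-by-zero

def random_tree(depth=3):
    """Random expression over x: leaves are 'x' or a constant,
    internal nodes are one of the four binary operators."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-5.0, 5.0)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, target, xs):
    """Lower is better: mean squared error against the target curve."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in xs) / len(xs)

# Score a random pool against f(x) = x^2 + 1 on [-2, 2]; a full GA would
# now select the fittest, crossbreed (swap subtrees) and mutate them.
xs = [i / 10.0 for i in range(-20, 21)]
pool = [random_tree() for _ in range(100)]
best = min(pool, key=lambda t: fitness(t, lambda x: x * x + 1.0, xs))
print(fitness(best, lambda x: x * x + 1.0, xs))
```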
Well, for starters, that genetic algorithm is not doing hill-climbing; otherwise it would get stuck at the first local maximum or minimum.
Also, how can you say it doesn't produce surprising results? Look at this vehicle here, for example, produced around generation 7 in one of the runs I tried. It's a very old model of a bicycle. How can you say that's not a surprising result, when it took humans millennia to come up with the same design?
To get interesting emergent behavior (that is unpredictable yet useful) it is probably necessary to give the genetic algorithm an interesting task to learn and not just a simple optimisation problem.
For instance, the Car Builder that you referred to (although quite nice in itself) just uses a fixed road as the fitness function. This makes it easy for the genetic algorithm to find an optimal solution; however, if the road changed slightly, that optimal solution might not work anymore, because the fitness of a solution may have grown dependent on trivially small details of the landscape and not be robust to changes in it. In reality, cars did not evolve on one fixed test road either, but on many different roads and terrains. Using an ever-changing road as a (dynamic) fitness function, generated by random factors but within certain realistic boundaries for slopes etc., would be a more realistic and useful fitness function.
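Sketched in code, the change is only to the evaluation step (Python; make_random_road and simulate_on are hypothetical stand-ins for the terrain generator and physics run):

```python
def robust_fitness(car, simulate_on, n_roads=5):
    """Average a candidate's score over several freshly generated roads
    instead of one fixed track, so a solution can't overfit the trivial
    details of a single terrain."""
    scores = []
    for _ in range(n_roads):
        road = make_random_road(max_slope=0.4, length=200)  # hypothetical
        scores.append(simulate_on(car, road))
    return sum(scores) / n_roads
```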
I think EvoLisa is a GA that produces interesting results. In one sense, the output is predictable, as you are trying to match a known image. On the other hand, the details of the output are pretty cool.