I've been playing with a solar system simulation lately, using the Barnes-Hut algorithm to speed things up.
The simulation works fine when fed with our solar system's data, but I'd like to test it on something bigger.
I tried generating 500+ random bodies and even adding initial orbital motion around the center of gravity, but every time, after a short while, most of the bodies end up ejected far away into space.
Are there any methods for generating random sets of planets/stars for simulations like this that will remain relatively stable?
You should probably ask this question on the Physics or Mathematics Stack Exchange.
I think this is a very difficult question, to the point that great mathematicians have studied the stability of the solar system. Things are "easy" for the two-body problem, but the three-body problem is notorious for its chaotic behavior (Poincaré studied it carefully and in the process laid the foundations of the qualitative theory of dynamical systems). If I am not mistaken (feel free to check this online), instability is the overwhelmingly likely outcome for the orbital dynamics of a large number of bodies (large meaning three or more), while stumbling upon a stable configuration has a very low probability.
Now, for so-called integrable systems ("exactly solvable"), like n copies of decoupled sun-one-planet models of a solar/star system, small perturbations are more likely to yield stable dynamics, thanks to the Kolmogorov-Arnold-Moser (KAM) theorem. So I can say that you are more likely to come across stability if you first set up the bodies in your simulation as comparatively small gravity sources orbiting one significantly larger gravitational source. Each body then feels one dominating force from the large source and many much smaller perturbations from the rest of the bodies (or the averaged sources of your Barnes-Hut algorithm). If you consider only the dominating force and turn off the perturbations, you have a solar system of n decoupled two-body systems (each body following an elliptical orbit around a common gravitational center). If you turn on the perturbations, the dynamics changes, but it tends to deviate from the unperturbed one very slowly and is more likely to be stable. So start with highly ordered dynamics and then change the bodies' masses, positions, and velocities slightly, following how the dynamics changes as you alter the parameters and the initial conditions.
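To make this concrete, here is a minimal Python sketch (the function name, masses, radii, and perturbation scale are my own illustration values, not anything from the answer above) that places one dominant central mass plus many light bodies on near-circular orbits and then perturbs them slightly:

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (assumed)

def make_near_integrable_system(n_planets=500, m_star=1.0e6, m_planet=1.0,
                                r_min=50.0, r_max=500.0,
                                perturbation=0.01, seed=None):
    """Central star plus light bodies on near-circular orbits, slightly perturbed."""
    rng = np.random.default_rng(seed)

    masses = np.full(n_planets + 1, m_planet)
    masses[0] = m_star                      # dominant central mass at the origin
    pos = np.zeros((n_planets + 1, 2))
    vel = np.zeros((n_planets + 1, 2))

    r = rng.uniform(r_min, r_max, n_planets)        # orbital radii
    phi = rng.uniform(0.0, 2.0 * np.pi, n_planets)  # orbital phases

    pos[1:, 0] = r * np.cos(phi)
    pos[1:, 1] = r * np.sin(phi)

    # Circular-orbit speed around the dominant mass: v = sqrt(G * M / r),
    # directed tangentially to the orbit.
    v_circ = np.sqrt(G * m_star / r)
    vel[1:, 0] = -v_circ * np.sin(phi)
    vel[1:, 1] = v_circ * np.cos(phi)

    # Small random perturbations of positions and velocities ("small" in the KAM sense).
    pos[1:] *= 1.0 + perturbation * rng.standard_normal((n_planets, 2))
    vel[1:] *= 1.0 + perturbation * rng.standard_normal((n_planets, 2))
    return masses, pos, vel
```

Starting from such a configuration, you can gradually increase the perturbation or the planet masses and watch at which point the dynamics stops looking stable.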
One more thing: it is always a good idea to place the inertial coordinate system, with respect to which the positions and velocities of the bodies are represented, at the center of mass of the group of bodies. This is more or less guaranteed when the initial momenta sum to the zero vector. With this setup the center of mass of the system stays fixed at some point in space, so a simple translation moves it to the origin of the coordinate system.
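A matching sketch of that shift into the center-of-mass frame (assuming the `masses`, `pos`, `vel` arrays from the sketch above):

```python
def to_center_of_mass_frame(masses, pos, vel):
    """Shift positions and velocities so the center of mass sits at the origin
    and the total momentum is the zero vector."""
    m = masses[:, None]
    com_pos = (m * pos).sum(axis=0) / masses.sum()
    com_vel = (m * vel).sum(axis=0) / masses.sum()  # total momentum / total mass
    return pos - com_pos, vel - com_vel
```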
LBM focuses on fluid clusters: it uses the macroscopic fluid density and velocity to calculate the equilibrium distribution function, and then uses the evolution equation to iterate the system. But if we keep adding the same fluid at lattice grid points in LBM, or keep removing existing fluid, how should we recalculate the macroscopic fluid density and velocity? Or how should the distribution function at those lattice grid points be recalculated? Can LBM simulate a scenario where fluid is continuously added to or removed from the system, for example water flowing continuously from a tap?
The traditional lattice-Boltzmann method (e.g. the D2Q9 lattice in 2D) can only be applied to incompressible flows. Put simply, this means that there can't be more mass entering the domain than exiting it: the mass inside the domain stays roughly the same throughout the simulation. This simplification of the generally compressible Navier-Stokes equations can be applied not only to incompressible fluids (such as water) but also to low-Mach-number flows like the flow around a car (for more details see here). But the traditional lattice-Boltzmann method can't describe multi-phase and free-surface flows, nor flows with sinks and sources (all of which result in a change of the density of the system).
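To show how density and velocity enter the method at all, here is a minimal sketch of the standard D2Q9 moments and BGK equilibrium (the velocity numbering is one common convention and is my choice here, not something from the post):

```python
import numpy as np

# D2Q9 lattice: 0 = rest, 1-4 = +x, +y, -x, -y, 5-8 = +x+y, -x+y, -x-y, +x-y
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def macroscopic(f):
    """Density and velocity as zeroth and first moments of f (shape 9 x ny x nx)."""
    rho = f.sum(axis=0)
    u = np.einsum('iyx,id->dyx', f, c) / rho
    return rho, u

def equilibrium(rho, u):
    """BGK equilibrium: f_i^eq = w_i rho (1 + 3 c_i.u + 4.5 (c_i.u)^2 - 1.5 u.u)."""
    cu = np.einsum('id,dyx->iyx', c, u)
    usq = (u ** 2).sum(axis=0)
    return w[:, None, None] * rho * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * usq)
```

The density is nothing more than the sum of the populations at a node, which is why continuously adding or removing fluid directly changes rho and conflicts with the incompressibility assumption described above.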
Any inlet or outlet condition in the incompressible lattice-Boltzmann method falls into one of the following categories:
Periodic boundaries (the populations that exit the domain on one side enter it again on the other side)
Pressure-drop-periodic boundaries (such as Zhang/Kwok) for periodic flow, but with an additional term compensating for the pressure drop inside the domain due to friction
Velocity and pressure boundaries (generally a velocity inlet and a pressure outlet): There exist various formulations of these to make sure that the moments of the distribution are actually conserved, and they differ in their numerical stability. Most of them enforce some sort of symmetry and extrapolation of higher moments. The simplest ones are the ones by Zou/He (a sketch follows after this list), but others, like Guo's extrapolation method, are significantly more stable for under-resolved and turbulent (high Reynolds number) flows. This review discusses the different formulations in more detail.
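As an illustration of the Zou/He idea from the last bullet, here is a sketch of a velocity inlet on the west (left) boundary, assuming the D2Q9 numbering from the sketch above and an array `f` of shape (9, ny, nx); other codes number the populations differently, so treat this as a template rather than drop-in code:

```python
def zou_he_west_inlet(f, ux, uy=0.0):
    """Zou/He velocity inlet applied to the west column f[:, :, 0].

    After streaming, the populations pointing into the domain (1, 5, 8) are
    unknown; they are reconstructed from the prescribed velocity (ux, uy)
    and the known populations, bouncing back the non-equilibrium part of
    the normal population.
    """
    fb = f[:, :, 0]                     # view of the boundary column
    rho = (fb[0] + fb[2] + fb[4] + 2.0 * (fb[3] + fb[6] + fb[7])) / (1.0 - ux)
    fb[1] = fb[3] + (2.0 / 3.0) * rho * ux
    fb[5] = fb[7] - 0.5 * (fb[2] - fb[4]) + rho * (ux / 6.0 + uy / 2.0)
    fb[8] = fb[6] + 0.5 * (fb[2] - fb[4]) + rho * (ux / 6.0 - uy / 2.0)
    return f
```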
You can have a look at this small code I have written in C++ for 2D and 3D simulations if you are interested in more details on how this actually works.
That being said, there exist several variations of the lattice-Boltzmann method in research that allow for multi-component or multi-phase flows (e.g. by introducing additional distributions) or compressible flows (with lattices with more discrete velocities and potentially a second lattice), but they are still exotic and you won't find many implementations around.
I implemented a rather simple SPH simulation using a cubic-spline kernel and a simple non-iterative pressure solver as described in this PDF in equation 9. I followed algorithm 1 of that paper (including gravity).
The resulting particle behaviour is certainly fluid-like (with quite some compressibility, as is expected from such a simple pressure solver). However, as you can see in this screenshot, the particles are not evenly spread when in equilibrium, but instead arrange into small clusters of about 3 particles.
Is this normal behaviour? It seems strange to me, so I wanted to make sure this is either correct, or that someone has an idea of what could be wrong here.
The screenshot shows the so-called pairing instability, which is one of the most frequent instability problems in SPH computations.
Pairing instability is a consequence of applying bell-shaped kernel functions with too large smoothing radii. Since polynomial kernel functions of at least third order have an inflection point, particles that get too close to each other experience lower and lower repulsive forces and gradually stick together. This can be overcome by choosing a suitable smoothing radius that leads to a near-optimal number of neighbors, which depends on the applied kernel function but is usually around 25 in 2D.
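To see this quantitatively, the following sketch evaluates the derivative of the 2D cubic-spline kernel (assuming the common formulation with support radius 2h, which may differ from the exact kernel used in the question): its magnitude peaks at the inflection point q = 2/3 and decays to zero as q approaches 0, so two particles that slip inside that distance repel each other less and less.

```python
import numpy as np

def cubic_spline_dWdq(q, h):
    """Derivative of the 2D cubic-spline kernel with respect to q = r / h."""
    sigma = 10.0 / (7.0 * np.pi * h**2)   # 2D normalisation constant
    dw = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    dw[inner] = sigma * (-3.0 * q[inner] + 2.25 * q[inner]**2)
    dw[outer] = -sigma * 0.75 * (2.0 - q[outer])**2
    return dw

q = np.linspace(0.0, 2.0, 9)
print(np.round(np.abs(cubic_spline_dWdq(q, h=1.0)), 3))
# The magnitude is largest near q = 2/3 and drops towards 0 as q -> 0:
# the closer two particles get, the weaker the pressure force pushing
# them apart, which is exactly the pairing mechanism described above.
```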
You can read about the pairing instability and other issues of SPH simulations here. Pairing instability is briefly discussed on page 9.
I'm trying to understand the Unity physics engine (PhysX). Can somebody explain what exactly Default Solver Iterations and Default Solver Velocity Iterations are?
This is from the Unity documentation:
Default Solver Iterations: Define how many solver processes Unity runs
on every physics frame. Solvers are small physics engine tasks which
determine a number of physics interactions, such as the movements of
joints or managing contact between overlapping Rigidbody components.
This affects the quality of the solver output and it’s advisable to
change the property in case non-default Time.fixedDeltaTime is used,
or the configuration is extra demanding. Typically, it’s used to
reduce the jitter resulting from joints or contacts.
Please provide an example of how it works and how increasing or decreasing it affects the final result.
I asked this question on Unity Forum and Hyblademin answered it:
In mathematics, an iterative solution method is any algorithm which
approximately solves a system of unknown values like [x1, x2, x3 ...
xn] by repeating a set of steps (iterating). Often, the system of
interest is a set of linear equations exactly like those seen in
algebra class but with a prohibitively high number of unknowns.
Starting with a guess for the solution to each unknown, which could be
based on a similar, known system or could be from a common starting
point like [1, 1, 1 ... 1], a procedure is carried out which gives an
approximate solution which will be closer to the exact values. After
only one iteration, the approximation won't be a very good one unless
the initial guess was already close. But the procedure can be repeated
with the first approximation as the new input, which will give a
closer approximation.
After repeating a few more times, we can expect a reliable
approximation. It still isn't exact, which we could confirm by
plugging our answers into our original system and seeing that it
isn't quite right (after simplifying, we would end up with things like
10=10.001 or something to that effect). That said, if the
approximation is close enough for our application, we stop iterating
and use it.
These lecture notes courtesy of a Notre Dame course give a nice
example of this in action using the well-known Jacobi method. Carrying
out an iteration of an iterative method outputs an approximation that
is better than the input because the methods are defined in a way that
causes this to happen, and this is a property called convergence. When
looking at why any given method converges, things get abstract pretty
quickly. I think this is outside the scope of your question,
especially since I don't know what method(s) Unity uses anyway.
When physics is calculated in Unity, we end up with a lot of systems
of equations. We could draw a free-body diagram to show forces and
torques during a collision for a given FixedUpdate in a Unity runtime
to show this. We could try to solve them "directly", which means to
use logical relationships to determine the exact results of the values
(like solving for x in algebra class), but even if the systems are on
the simple side, doing a lot of them will slow the execution to a
crawl. Luckily, iterative, "indirect" methods can be used to get a
pretty good approximation at a fraction of the computing cost.
Increasing the number of iterations will lead to more precise
approximate solutions. There is a point where increasing the number of
iterations gives an increase in precision that is not at all worth the
processing overhead of doing another iteration. But the number of
iterations for this point depends on what you need your project to do.
Sometimes a given arrangement of physics objects will result in jitter
with the default settings that might be improved with more solver
iterations, which is mentioned in the manual entry. There isn't a
great way to determine if changing solver iteration counts will
improve behavior or performance in the way that you need, except for
just trial and error (use the Profiler for a more-objective indication
of performance impact).
https://forum.unity.com/threads/what-does-default-solver-iteration-means.673912/#post-4512004
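As a toy illustration of the iterative idea described in the quoted answer, here is a small Jacobi solve of a made-up 3x3 system (this is just the textbook Jacobi method, not what PhysX actually runs):

```python
import numpy as np

# A small, diagonally dominant system A x = b, invented for illustration.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([15.0, 10.0, 10.0])

x = np.zeros(3)             # common starting guess [0, 0, 0]
D = np.diag(A)              # diagonal entries
R = A - np.diagflat(D)      # off-diagonal remainder

for k in range(1, 6):
    x = (b - R @ x) / D     # one Jacobi iteration
    residual = np.linalg.norm(A @ x - b)
    print(f"iteration {k}: x = {np.round(x, 4)}, residual = {residual:.4f}")

# Each extra pass shrinks the residual; stopping early leaves the kind of
# "10 = 10.001" mismatch described above, which may already be good enough.
```

More iterations buy a smaller residual at a roughly linear cost in compute time, which is the trade-off behind the Default Solver Iterations setting.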
I am reading about soft computing algorithms, currently "Particle Swarm Optimization". I understand the technique in general, but I got stuck at the mathematical/physical part that I can't picture or understand: how it works and how it affects the flight of the particles. That part is the first term of the velocity-update equation, the one called the "inertia factor".
The complete velocity-update equation is:
v_i(t+1) = w * v_i(t) + c1 * r1 * (pbest_i - x_i(t)) + c2 * r2 * (gbest - x_i(t))
I read in one article, in section 2.3 "Inertia Factor", that:
"This variation of the algorithm aims to balance two possible PSO tendencies (de-
pendent on parameterization) of either exploiting areas around known solutions
or explore new areas of the search space. To do so this variation focuses on the
momentum component of the particles' velocity equation 2. Notice that if you
remove this component the movement of the particle has no memory of the pre-
vious direction of movement and it will always explore close to a found solution.
On the other hand if the velocity component is used, or even multiplied by a w
(inertial weight, balances the importance of the momentum component) factor
the particle will tend to explore new areas of the search space since it cannot
easily change its velocity towards the best solutions. It must rst \counteract"
the momentum previously gained, in doing so it enables the exploration of new
areas with the time \spend counteracting" the previous momentum. This vari-
ation is achieved by multiplying the previous velocity component with a weight
value, w."
The full PDF is at: https://www.google.com.eg/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CDIQFjAA&url=http%3A%2F%2Fweb.ist.utl.pt%2F~gdgp%2FVA%2Fdata%2Fpso.pdf&ei=0HwrUaHBOYrItQbwwIDoDw&usg=AFQjCNH8vChXHXWz_ydHxJKAY0cUa94n-g
But I still can't picture, physically or numerically, how this happens and how this factor moves the search from exploration to exploitation, so I need a numerical example to see how it works and to get a feel for it.
Also, in genetic algorithms there is the schema theorem, which is a proof of the GA's ability to find an optimal solution; is there such a theorem for PSO?
It's not easy to explain PSO using mathematics (see the Wikipedia article, for example).
But you can think like this: the equation has 3 parts:
particle speed = inertia + local memory + global memory
So you control the 'importance' of these components by varying the coefficients of each part.
There's no analytical way to see this, unless you make the stochastic part constant and ignore things like particle-particle interaction.
Exploit: take advantage of the best known solutions (local and global).
Explore: search in new directions, but don't ignore the best known solutions.
In a nutshell, you can control how much importance to give to the particle's current speed (inertia), the particle's memory of its own best known solution, and the particle's memory of the swarm's best known solution.
I hope it can help you!
Br's
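Since the question asks for a numerical example, here is a small sketch of a single velocity update, v_new = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), with made-up numbers and the random factors r1, r2 pinned to 0.5 so the different inertia weights can be compared directly:

```python
def pso_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    """One PSO velocity update: inertia + cognitive (local) + social (global) term.
    r1 and r2 are normally uniform random numbers; fixed here for comparability."""
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Made-up 1D state: the particle is currently moving away from the best solutions.
v, x, pbest, gbest = 4.0, 10.0, 8.0, 5.0

for w in (0.0, 0.4, 0.9, 1.2):
    print(f"w = {w:.1f}: new velocity = {pso_velocity(v, x, pbest, gbest, w):+.2f}")
# Output: -7.00, -5.40, -3.40, -2.20
```

With w = 0 the particle turns straight back towards pbest/gbest (pure exploitation). As w grows, more of the old velocity survives, so the particle keeps drifting into new territory before the pull of the best known solutions counteracts its momentum (exploration).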
Inertia was not part of the original PSO algorithm introduced by Kennedy and Eberhart in 1995. It took three years until Shi and Eberhart published this extension and showed (to some extent) that it works better.
One can set the value to a constant (supposedly the range [0.8, 1.2] works best).
However, the point of the parameter is to balance exploitation and exploration of the search space, and
the authors got the best results when they defined the parameter as a linear function that decreases over time from 1.4 towards 0.
Their rationale was that one should first explore the space to find a good seed and later exploit the area around that seed.
My feeling about it is that the closer w gets to 0, the more chaotic the turns the particles make.
For a detailed answer refer to Shi & Eberhart (1998), "A Modified Particle Swarm Optimizer".
Inertia controls the influence of the previous velocity.
When it is high, the cognitive and social components matter less (the particle keeps going its own way, exploring new portions of the space).
When it is low, the particle searches more thoroughly around the region where the best-so-far optimum has been found.
Inertia can change over time: start high, decrease later.
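A sketch of such a schedule, using the linearly decreasing range quoted in the previous answer (1.4 down towards 0); the function name and the iteration count are just illustrative:

```python
def inertia(t, t_max, w_start=1.4, w_end=0.0):
    """Linearly decreasing inertia weight: high w early (exploration),
    low w late (exploitation around the best seed found so far)."""
    return w_start + (w_end - w_start) * t / t_max

# Over, say, 100 iterations:
# inertia(0, 100) -> 1.4, inertia(50, 100) -> 0.7, inertia(100, 100) -> 0.0
```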
I am working on optical flow, and based on the lecture notes here and some samples on the Internet, I wrote this Python code.
All code and sample images are there as well. For small displacements of around 4-5 pixels, the direction of the calculated vector seems fine, but its magnitude is too small (that's why I had to multiply u, v by 3 before plotting them).
Is this a limitation of the algorithm, or an error in the code? The lecture notes shared above also say that the motion needs to be small ("u, v are less than 1 pixel"), so maybe that's why. What is the reason for this limitation?
@belisarius says: "LK uses a first order approximation, and so (u,v) should be ideally << 1, if not, higher order terms dominate the behavior and you are toast."
A standard conclusion from the optical flow constraint equation (OFCE, slide 5 of your reference) is that "your motion should be less than a pixel, lest higher order terms kill you". While technically true, you can overcome this in practice by using larger averaging windows. This requires that you do sane statistics, i.e. not the pure least-squares mean suggested in the slides. Equally fast computations, and far superior results, can be achieved with Tikhonov regularization. This requires setting a tuning value (the Tikhonov constant), which can be a global constant or can be adjusted to local information in the image (such as the Shi-Tomasi confidence, a.k.a. the structure-tensor determinant).
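To make that concrete, here is a per-window sketch (plain NumPy, with names of my own choosing): ordinary Lucas-Kanade solves (A^T A) v = A^T b from the spatial and temporal derivatives in the window, and the Tikhonov variant simply adds lambda * I to the normal matrix. The structure-tensor determinant is returned as a Shi-Tomasi-style confidence you could use to adapt lambda locally.

```python
import numpy as np

def lk_window_flow(Ix, Iy, It, lam=1e-2):
    """Estimate the flow (u, v) for one window from image derivatives Ix, Iy, It.

    lam = 0  -> plain least-squares Lucas-Kanade
    lam > 0  -> Tikhonov-regularised solution of (A^T A + lam * I) v = A^T b
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 matrix of gradients
    b = -It.ravel()                                  # N temporal derivatives

    AtA = A.T @ A
    Atb = A.T @ b

    # Shi-Tomasi-style confidence: a tiny determinant (or smallest eigenvalue)
    # flags a poorly conditioned window, where regularisation matters most.
    confidence = np.linalg.det(AtA)

    flow = np.linalg.solve(AtA + lam * np.eye(2), Atb)
    return flow, confidence
```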
Note that this does not replace the need for multi-scale approaches in order to deal with larger motions; it may just extend a bit the range that any single scale can deal with.
Implementations, visualizations and code are available in tutorial format here, albeit in Matlab rather than Python.