I am reading about soft computing algorithms, currently Particle Swarm Optimization. I understand the technique in general, but I am stuck on the mathematical/physical part that I can't picture or understand: how it works and how it affects the flight of the particles. It is the first term of the velocity update equation, the one called the "inertia factor".
The complete velocity update equation is:

v_i(t+1) = w * v_i(t) + c1 * r1 * (pbest_i - x_i(t)) + c2 * r2 * (gbest - x_i(t))
x_i(t+1) = x_i(t) + v_i(t+1)

where w is the inertia weight, c1 and c2 are the acceleration coefficients, and r1, r2 are uniform random numbers in [0, 1].
I read in one article, in section 2.3 "Inertia Factor", that:
"This variation of the algorithm aims to balance two possible PSO tendencies (de-
pendent on parameterization) of either exploiting areas around known solutions
or explore new areas of the search space. To do so this variation focuses on the
momentum component of the particles' velocity equation 2. Notice that if you
remove this component the movement of the particle has no memory of the pre-
vious direction of movement and it will always explore close to a found solution.
On the other hand if the velocity component is used, or even multiplied by a w
(inertial weight, balances the importance of the momentum component) factor
the particle will tend to explore new areas of the search space since it cannot
easily change its velocity towards the best solutions. It must rst \counteract"
the momentum previously gained, in doing so it enables the exploration of new
areas with the time \spend counteracting" the previous momentum. This vari-
ation is achieved by multiplying the previous velocity component with a weight
value, w."
The full PDF is at: http://web.ist.utl.pt/~gdgp/VA/data/pso.pdf
But I can't picture how this happens physically or numerically, or how this factor moves the search from exploration to exploitation, so I need a numerical example to see and imagine how it works.
Also, in Genetic Algorithms there is the schema theorem, which is a proof of the GA's ability to find an optimal solution. Is there such a theorem for PSO?
It's not easy to explain PSO using mathematics (see Wikipedia article for example).
But you can think like this: the equation has 3 parts:
particle speed = inertia + local memory + global memory
So you control the 'importance' of these components by varying the coefficients in each part.
There's no analytical way to see this, unless you make the stochastic part constant and ignore things like particle-particle interaction.
Exploit: take advantage of the best known solutions (local and global).
Explore: search in new directions, but don't ignore the best known solutions.
In a nutshell, you can control how much importance to give to the particle's current speed (inertia), the particle's memory of its own best known solution, and the particle's memory of the swarm's best known solution.
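Since you asked for a numerical example, here is a minimal 1D sketch (the starting point, the fixed pbest/gbest attractors and the coefficients are made up purely for illustration) that isolates the effect of w: with a large w the particle keeps overshooting the attractors (exploration), with a small w it is pulled in quickly (exploitation).

    import random

    def run_particle(w, steps=8, seed=0):
        """Track one 1D particle with fixed pbest/gbest to isolate the effect of w."""
        random.seed(seed)
        x, v = 10.0, 4.0          # start away from the attractors, already moving
        pbest, gbest = 2.0, 0.0   # pretend these best-known positions never change
        c1 = c2 = 1.0
        for t in range(steps):
            r1, r2 = random.random(), random.random()
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            print(f"w={w:.1f}  step {t}:  v={v:7.2f}  x={x:7.2f}")

    run_particle(w=0.9)   # high inertia: wide swings past pbest/gbest -> exploration
    run_particle(w=0.2)   # low inertia: settles near pbest/gbest quickly -> exploitation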
I hope it can help you!
Br's
Inertia was not part of the original PSO algorithm introduced by Kennedy and Eberhart in 1995. It took three years until Shi and Eberhart published this extension and showed (to some extent) that it works better.
One can set that value to a constant (supposedly [0.8 to 1.2] is best).
However, the point of the parameter is to balance exploitation and exploration of the search space, and
the authors got their best results when they defined the parameter as a linear function that decreases over time from 1.4 to 0.
Their rationale was that one should first explore the space to find a good seed and later exploit the area around that seed.
My feeling about it is that the closer you are to 0, the more chaotic turns particles make.
For a detailed answer refer to Shi, Eberhart 1998 - "A modified Particle Swarm Optimizer".
Inertia controls the influence of the previous velocity.
When it is high, the cognitive and social components are less relevant (the particle keeps going its own way, exploring new portions of the space).
When it is low, the particle searches more thoroughly around where the best-so-far optimum has been found.
Inertia can change over time: start high, then decrease.
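A tiny sketch of such a schedule (the 0.9 and 0.4 endpoints here are just commonly used illustrative values; the earlier answer reports 1.4 down to 0):

    def inertia(t, t_max, w_start=0.9, w_end=0.4):
        """Linearly decreasing inertia weight: explore early (large w), exploit late (small w)."""
        return w_start - (w_start - w_end) * (t / t_max)

    # inertia(0, 100)   -> 0.9  (momentum-driven, wide search)
    # inertia(100, 100) -> 0.4  (tight search around the best known solutions)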
I am working on driving industrial robots with neural nets, and so far it is working well. I am using the PPO algorithm from the OpenAI Baselines, and so far I can easily drive from point to point by using the following reward strategy:
I calculate the normalized distance between the target and the current position. Then I calculate the distance reward with:
rd = 1-(d/dmax)^a
For each time step, I give the agent a penalty calculated by:
yt = 1-(t/tmax)*b
a and b are hyperparameters to tune.
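For reference, a direct transcription of those two formulas into Python (the function names are my own; how the two terms are combined per step is not specified above):

    def distance_reward(d, d_max, a):
        """rd = 1 - (d / dmax)^a -- closer to the target gives a reward nearer 1."""
        return 1.0 - (d / d_max) ** a

    def time_penalty(t, t_max, b):
        """yt = 1 - (t / tmax) * b -- shrinks as the episode drags on."""
        return 1.0 - (t / t_max) * b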
As I said, this works really well if I want to drive from point to point. But what if I want to drive around something? For my work I need to avoid collisions, so the agent needs to drive around objects. If the object is not directly on the shortest path, it works OK: the robot can adapt and drive around it. But it becomes more and more difficult, up to impossible, to drive around objects that are directly in the way.
See this image :
I already read a paper which combines PPO with NES to create some Gaussian noise for the parameters of the neural network but I can't implement it by myself.
Does anyone have some experience with adding more exploration to the PPO algorithm? Or does anyone have some general ideas on how I can improve my rewarding strategy?
What you describe is actually one of the most important research areas of Deep RL: the exploration problem.
The PPO algorithm (like many other "standard" RL algorithms) tries to maximise a return, which is a (usually discounted) sum of the rewards provided by your environment: R = sum_t gamma^t * r_t, with discount factor gamma in [0, 1].
In your case you have a deceptive gradient problem: the gradient of your return points directly at your objective point (because your reward is based on the distance to your objective), which discourages your agent from exploring other areas.
Here is an illustration of the deceptive gradient problem from this paper; the reward is computed like yours, and as you can see, the gradient of the return function points directly at the objective (the little square in this example). If your agent starts in the bottom-right part of the maze, it is very likely to get stuck in a local optimum.
There are many ways to deal with the exploration problem in RL. In PPO, for example, you can add some noise to your actions; other approaches such as SAC try to maximise both the reward and the entropy of the policy over the action space. But in the end you have no guarantee that adding exploration noise in your action space will result in efficient exploration of your state space (which is what you actually want to explore, i.e. the (x, y) positions of your environment).
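As a rough illustration of the first option, action-space noise (sigma and the clipping bounds here are placeholders, not values taken from PPO or Baselines):

    import numpy as np

    def noisy_action(action, sigma=0.1, low=-1.0, high=1.0):
        """Perturb the policy's action with zero-mean Gaussian noise, then clip to the action bounds."""
        noise = np.random.normal(0.0, sigma, size=np.shape(action))
        return np.clip(action + noise, low, high)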
I recommend you to read the Quality Diversity (QD) literature, which is a very promising field aiming to solve the exploration problem in RL.
Here are two great resources:
A website gathering all the information about QD
A talk from ICML 2019
Finally, I want to add that the problem is not your reward function: you should not try to engineer a complex reward function so that your agent behaves the way you want. The goal is to have an agent that is able to solve your environment despite pitfalls like the deceptive gradient problem.
I implemented a rather simple SPH simulation using a cubic spline kernel and a simple non-iterative pressure solver, as described in equation 9 of this PDF. I followed Algorithm 1 of that paper (including gravity).
The resulting particle behaviour is certainly fluid-like (with quite some compressibility, as is expected from such a simple pressure solver). However, as you can see in this screenshot, the particles are not evenly spread when in equilibrium, but instead arrange into small clusters of about 3 particles.
Is this normal behaviour? It appears strange to me, so I wanted to make sure this is either correct or someone has an idea of what could be wrong here.
The screenshot shows the so-called pairing instability, which is one of the most frequent instability problems in SPH computations.
Pairing instability is the consequence of applying bell-shaped kernel functions with too large smoothing radii. Since polynomial kernel functions of at least third order have an inflection point, particles that get too close to each other experience lower and lower repulsive forces and gradually stick together. This can be overcome by choosing a suitable smoothing radius that leads to a roughly optimal number of neighbours, which depends on the applied kernel function but is usually around 25 in 2D.
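You can see this numerically with the standard 2D cubic spline kernel (a rough sketch with h = 1 and the usual Monaghan normalisation, support radius 2h): the magnitude of dW/dq peaks at q = 2/3 and then fades towards zero as the pair distance shrinks, so two particles closer than that push each other apart less and less.

    import math

    def cubic_spline_dWdq(q, h=1.0):
        """Derivative of the 2D cubic spline kernel (support 2h) with respect to q = r/h."""
        sigma = 10.0 / (7.0 * math.pi * h**2)
        if q < 1.0:
            return sigma * (-3.0 * q + 2.25 * q**2)
        if q < 2.0:
            return sigma * (-0.75 * (2.0 - q)**2)
        return 0.0

    # |dW/dq| sets the strength of the pressure "push" between a particle pair:
    for q in [1.0, 2.0/3.0, 0.4, 0.2, 0.05]:
        print(f"q = {q:.2f}   |dW/dq| = {abs(cubic_spline_dWdq(q)):.3f}")
    # the push peaks at q = 2/3 and fades as the pair gets closer -> particles can clump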
You can read about the pairing instability and other issues of SPH simulations here. Pairing instability is briefly discussed on page 9.
I'm trying to understand the Unity physics engine (PhysX). Can somebody explain what exactly Default Solver Iterations and Default Solver Velocity Iterations are?
This is from the Unity documentation:
Default Solver Iterations: Define how many solver processes Unity runs
on every physics frame. Solvers are small physics engine tasks which
determine a number of physics interactions, such as the movements of
joints or managing contact between overlapping Rigidbody components.
This affects the quality of the solver output and it’s advisable to
change the property in case non-default Time.fixedDeltaTime is used,
or the configuration is extra demanding. Typically, it’s used to
reduce the jitter resulting from joints or contacts.
Please provide an example of how it works and of how increasing or decreasing it affects the final result.
I asked this question on Unity Forum and Hyblademin answered it:
In mathematics, an iterative solution method is any algorithm which
approximately solves a system of unknown values like [x1, x2, x3 ...
xn] by repeating a set of steps (iterating). Often, the system of
interest is a set of linear equations exactly like those seen in
algebra class but with a prohibitively high number of unknowns.
Starting with a guess for the solution to each unknown, which could be
based on a similar, known system or could be from a common starting
point like [1, 1, 1 ... 1], a procedure is carried out which gives an
approximate solution which will be closer to the exact values. After
only one iteration, the approximation won't be a very good one unless
the initial guess was already close. But the procedure can be repeated
with the first approximation as the new input, which will give a
closer approximation.
After repeating a few more times, we can expect a reliable
approximation. It still isn't exact, which we could confirm by just
plugging in our answers into our original system and seeing that it
isn't quite right (after simplifying, we would end up with things like
10=10.001 or something to that effect). That said, if the
approximation is close enough for our application, we stop iterating
and use it.
These lecture notes courtesy of a Notre Dame course give a nice
example of this in action using the well-known Jacobi method. Carrying
out an iteration of an iterative method outputs an approximation that
is better than the input because the methods are defined in a way that
causes this to happen, and this is a property called convergence. When
looking at why any given method converges, things get abstract pretty
quickly. I think this is outside the scope of your question,
especially since I don't know what method(s) Unity uses anyway.
When physics is calculated in Unity, we end up with a lot of systems
of equations. We could draw a free-body diagram to show forces and
torques during a collision for a given FixedUpdate in a Unity runtime
to show this. We could try to solve them "directly", which means to
use logical relationships to determine the exact results of the values
(like solving for x in algebra class), but even if the systems are on
the simple side, doing a lot of them will slow the execution to a
crawl. Luckily, iterative, "indirect" methods can be used to get a
pretty good approximation at a fraction of the computing cost.
Increasing the number of iterations will lead to more precise
approximate solutions. There is a point where increasing the number of
iterations gives an increase in precision that is not at all worth the
processing overhead of doing another iteration. But the number of
iterations for this point depends on what you need your project to do.
Sometimes a given arrangement of physics objects will result in jitter
with the default settings that might be improved with more solver
iterations, which is mentioned in the manual entry. There isn't a
great way to determine if changing solver iteration counts will
improve behavior or performance in the way that you need, except for
just trial and error (use the Profiler for a more-objective indication
of performance impact).
https://forum.unity.com/threads/what-does-default-solver-iteration-means.673912/#post-4512004
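To make the iterative-solver idea concrete, here is a tiny Python sketch of the Jacobi method mentioned in that answer (the 2x2 system is made up for illustration, and this is not what PhysX actually runs internally):

    def jacobi(A, b, x0, iterations):
        """Each sweep updates every unknown from the previous guess:
        x_i <- (b_i - sum_{j != i} A[i][j] * x[j]) / A[i][i]."""
        n = len(b)
        x = list(x0)
        for _ in range(iterations):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

    # 4x + y = 9 and x + 3y = 7, exact solution x = 20/11, y = 19/11
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [9.0, 7.0]
    print(jacobi(A, b, [1.0, 1.0], 1))    # rough after a single iteration
    print(jacobi(A, b, [1.0, 1.0], 20))   # much closer after twenty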
I am working on optical flow, and based on the lecture notes here and some samples on the Internet, I wrote this Python code.
All code and sample images are there as well. For small displacements of around 4-5 pixels, the direction of vector calculated seems to be fine, but the magnitude of the vector is too small (that's why I had to multiply u,v by 3 before plotting them).
Is this because of a limitation of the algorithm, or an error in the code? The lecture notes shared above also say that the motion needs to be small ("u, v are less than 1 pixel"), so maybe that's why. What is the reason for this limitation?
#belisarius says "LK uses a first order approximation, and so (u,v) should be ideally << 1, if not, higher order terms dominate the behavior and you are toast. ".
A standard conclusion from the optical flow constraint equation (OFCE, slide 5 of your reference) is that "your motion should be less than a pixel, lest higher-order terms kill you". While technically true, you can overcome this in practice by using larger averaging windows. This requires that you do sane statistics, i.e. not pure least-squares means, as suggested in the slides. Equally fast computations, and far superior results, can be achieved by Tikhonov regularization. This necessitates setting a tuning value (the Tikhonov constant). It can be set as a global constant, or adjusted to local information in the image (such as the Shi-Tomasi confidence, a.k.a. the structure tensor determinant).
Note that this does not replace the need for multi-scale approaches in order to deal with larger motions. It may extend the range a bit for what any single scale can deal with.
Implementations, visualizations and code are available in tutorial format here, albeit in Matlab rather than Python.
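For reference, a rough Python/NumPy sketch of the Tikhonov-regularised Lucas-Kanade step described above (lam is the tuning constant you would have to choose; the window derivatives Ix, Iy, It are assumed to be precomputed):

    import numpy as np

    def lk_flow_window(Ix, Iy, It, lam=1e-2):
        """Solve (A^T A + lam*I) [u, v]^T = -A^T b for one window,
        where the rows of A are [Ix, Iy] and b = It inside the window."""
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2
        b = It.ravel()                                   # N
        ATA = A.T @ A + lam * np.eye(2)                  # regularised normal matrix
        u, v = np.linalg.solve(ATA, -A.T @ b)
        return u, v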
I need to program an algorithm to navigate a robot through a "maze" (a rectangular grid with a starting point, a goal, empty spaces and uncrossable spaces or "walls"). It can move in any cardinal direction (N, NW, W, SW, S, SE, E, NE) with constant cost per move.
The problem is that the robot doesn't "know" the layout of the map. It can only view its 8 surrounding spaces and store them (it memorizes the surrounding tiles of every space it visits). The only other input is the cardinal direction in which the goal lies on every move.
Is there any researched algorithm that I could implement to solve this problem? Typical ones like Dijkstra's or A* aren't trivially adapted to the task, as I can't go back and revisit previous nodes in the graph without cost (retracing the robot's steps to reach a better path would cost those moves again), and I can't think of a way to make a reasonable heuristic for A*.
I probably could come up with something reasonable, but I just wanted to know if this was an already solved problem, and I need not reinvent the wheel :P
Thanks for any tips!
The problem isn't solved, but as with many planning problems, there is a large amount of research already available.
Most of the work in this area is based on the original work of R. E. Korf in the paper "Real-time heuristic search". That paper seems to be paywalled, but the preliminary results from the paper, along with a discussion of the Real-Time A* algorithm are still available.
The best recent publications on discrete planning with hidden state (path-finding with partial knowledge of the graph) are by Sven Koenig. This includes the significant work on the Learning Real-Time A* algorithm.
Koenig's work also includes demonstrations of a range of algorithms on theoretical experiments that are far more challenging than anything likely to occur in a simulation. See in particular "Easy and Hard Testbeds for Real-Time Search Algorithms" by Koenig and Simmons.
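To give a feel for the LRTA* family mentioned above, here is a rough grid sketch (the 8-way neighbour generation, the unit move cost and the straight-line initial heuristic are my own simplifications, not Korf's or Koenig's exact formulation):

    import math

    def lrta_star_step(pos, goal, h, blocked):
        """One LRTA* move: raise h(pos) from its best neighbour, then step to that neighbour."""
        moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        neighbours = [(pos[0] + dx, pos[1] + dy) for dx, dy in moves
                      if (pos[0] + dx, pos[1] + dy) not in blocked]

        def estimate(n):
            if n not in h:                    # first time this cell is seen
                h[n] = math.hypot(goal[0] - n[0], goal[1] - n[1])
            return 1.0 + h[n]                 # constant cost per move plus learned heuristic

        best = min(neighbours, key=estimate)
        h[pos] = max(h.get(pos, 0.0), estimate(best))   # the "learning" update
        return best

    # usage: call repeatedly, adding walls to `blocked` as the robot's sensors reveal them
    pos, goal, h, blocked = (0, 0), (5, 5), {}, set()
    while pos != goal:
        pos = lrta_star_step(pos, goal, h, blocked)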