Relative sizes of pymunk forces - simulation

In Pymunk, is the magnitude of gravity relatively the same as the magnitude of a force passed to apply_force_at_local_point or apply_force_at_world_point? In other words, is the magnitude of gravity=(20,40) equal to the magnitude of apply_force_at_world_point((20,40), object's position)?
I used the equation of motion, final position = initial position + initial velocity * time + 1/2 * acceleration * t^2, to test this. It turns out that these magnitudes are not equal. For instance, it took a force of (0,-7888) to equal a gravity of (0,-1750).
I am trying to determine the apply_force_at_world_point force that would equal/cancel out gravity. I know I can just set the body's gravity to zero to achieve that effect, but my goal is to determine the magnetic force that would be enough to levitate a magnet of a given weight and magnetic strength.
How can I find the magnitude of the force (without testing a bunch of random values) that would equal gravity?
I hope the information given is enough to understand the issue.

You can see how the velocity of a body is updated in the Chipmunk source code: https://github.com/viblo/Chipmunk2D/blob/master/src/cpBody.c#L501
body->v = cpvadd(cpvmult(body->v, damping), cpvmult(cpvadd(gravity, cpvmult(body->f, body->m_inv)), dt));
Translated to Python/Pymunk this would be something like this:
body.velocity = body.velocity * damping + (gravity + body.force / body.mass) * dt
From this you can see that gravity is applied directly as an acceleration, while an applied force is first divided by the body's mass. So this should work to make an opposing force matching gravity:
body.apply_force_at_local_point(-space.gravity * body.mass)
(I tested this in a simple simulation with some gravity and a ball shape/body, and it seems to work as expected: instead of falling, the ball stayed put.)
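For reference, here is a minimal sketch of such a test (the gravity value, mass, moment and time step are arbitrary assumptions):
import pymunk

space = pymunk.Space()
space.gravity = (0, -1750)

body = pymunk.Body(mass=1, moment=10)
body.position = (0, 100)
shape = pymunk.Circle(body, radius=10)
space.add(body, shape)

for _ in range(100):
    # The force is reset after each step, so the anti-gravity force is re-applied every step.
    body.apply_force_at_local_point(-space.gravity * body.mass)
    space.step(1 / 60)

print(body.position)  # stays at roughly (0, 100)
This also suggests an explanation for the numbers in the question: a force of (0,-7888) behaving like a gravity of (0,-1750) means the body's mass was about 4.5.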

Related

Projectile Motion in Unity

So my friend and I are making a 2D game. We are using a custom character controller, so we are not using Rigidbody2D. Now we have a sort of catapult which needs to eject the player in a projectile-motion style.
We've done it for the catapult that shoots the player straight up.
In the Inspector you can decide how many units you want the player to jump and how long it should take to reach max height.
So here is the code for the catapult that shoots the player up.
float ejectInicialY = (jumpHeight - ( player.physics.gravity * Mathf.Pow(timeToReachMaxHeight, 2) / 2)) / timeToReachMaxHeight;
float ejectVelocityY = ejectInicialY + player.physics.gravity * Time.deltaTime;
player.physics.playerVelocity = new Vector2(ejectVelocityY, 0f);
I tried to apply the same formulas for the X coordinate, but it doesn't work well.
Any help would be greatly appreciated.
This is ultimately a physics problem.
You are calculating current velocities by determining the acceleration of the object. Acceleration of an object can be determined from the net force acting on the object (F) and the mass of the object (m) through the formula a = F / m. I highly recommend reading some explanations of projectile motion and understanding the meaning of the motion equations you are using.
Vertical Direction
For the vertical direction, the net vertical force during the jump (assuming no air drag, etc.) is player.physics.gravity. So you apply your motion formulas assuming a constant acceleration of player.physics.gravity, which you seem to have accomplished already.
Horizontal Direction
Because gravity does not commonly act in the horizontal direction, the net horizontal force during the jump (assuming no air drag, etc.) is 0. So again you can apply your motion formulas, but this time using 0 as your acceleration. By doing this, you should realize that velocityX does not change (in the absence of a net horizontal force). Therefore the X coordinate can be determined through (in pseudo-code) newPositionX = startPositionX + Time.deltaTime * velocityX
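As a rough illustration (a language-agnostic sketch rather than Unity code; gravity is assumed to be a negative scalar such as -9.81), the per-frame update described above could look like this:
def step_projectile(position, velocity, gravity, dt):
    # Vertical: constant acceleration equal to gravity. Horizontal: zero acceleration.
    x, y = position
    vx, vy = velocity
    vy += gravity * dt   # only the vertical velocity changes
    x += vx * dt         # horizontal velocity stays constant
    y += vy * dt
    return (x, y), (vx, vy)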

How does SpriteKit physics move bodies?

How does SpriteKit's physics engine (Box2d) move bodies and apply gravity to them?
is it just the standard:
velocity = velocity + gravity
position = position + velocity * deltaTime
or is there a more complex equation?
I ask this because I am trying to calculate the trajectory of the body and plot it.
Simplified, this is correct. However, there can be other forces acting on a body (collisions, joints) and thresholds (e.g. stop moving if the velocity drops below some value), and floating-point rounding errors can add up.
So if you're looking for a forward calculation, it depends on how precise it needs to be.
The most precise option would be to actually run the simulation ahead to see where bodies will end up. However, since SpriteKit doesn't give you the Box2D sources, this can't be done; i.e. you can't copy the world state and advance the copy manually.
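If an approximate forward calculation is good enough, a minimal sketch of the integration described above (ignoring collisions, joints, damping and thresholds; the 1/60 time step is an assumption) might be:
def predict_trajectory(position, velocity, gravity, steps, dt=1.0 / 60.0):
    # Semi-implicit Euler: update the velocity first, then the position.
    px, py = position
    vx, vy = velocity
    gx, gy = gravity
    points = []
    for _ in range(steps):
        vx += gx * dt
        vy += gy * dt
        px += vx * dt
        py += vy * dt
        points.append((px, py))
    return points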

How many pixels is an impulse in box2d

I have a ball that you blow on with air. I want the ball to be blown more if it is close to the blower and less if it is farther away. I am using Box2D and the impulse function body->ApplyLinearImpulse(force, body->GetPosition()). I can't seem to find a formula or a way to accomplish this. If I want the ball to be blown a total distance of 300 pixels to the right, how could I accomplish this? Please help.
If you want to calculate the distance before simulating, you have to take a look at the Box2D sources. During simulation the velocity of the body is modified according to gravity, extra applied forces, linear damping, angular damping and possibly more. The result also depends on the number of velocity iterations.
But I think that if you want a really smooth motion (like from a blow) you'd better use the ApplyForce function instead of an impulse. Just be sure you are applying the force on each simulation step.
EDIT:
Also, you can simulate air resistance as Fa = -k * |V| * V (a drag force opposing the velocity). I've simulated movement in a pipe this way and it worked great.
So each step you can do something like this:
BlowForce = (k1 / distance) * blowDirection;   // k1 - coefficient; blowDirection - unit vector from the blower toward the ball
Resistance = -k2 * V.Length() * V;             // k2 - another coefficient; V - the body's linear velocity
TotalForce = BlowForce + Resistance;
body->ApplyForce(TotalForce, body->GetWorldCenter());
I am not a Box2D expert, but what I would do is create a small invisible box and let the ball hit the box; if the blower is blowing harder, I would give the box more speed in the opposite direction. As far as the 300 pixel distance is concerned, you have to adjust the forces and velocity so that the ball travels
300/<your_rendering_window_to_physics_world_ratio>
in the physics world.
Force = mass * acceleration, so take the mass you set your body to, calculate the acceleration you want (remember to divide 300px by PTM_RATIO) and then multiply the two together.
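As a minimal sketch of that calculation (the PTM_RATIO value and the helper name are assumptions; use your project's actual pixels-to-meters ratio):
PTM_RATIO = 32.0  # assumed pixels-to-meters ratio

def force_for_pixel_acceleration(mass, accel_in_pixels):
    # F = m * a, with the acceleration converted from pixels/s^2 to meters/s^2 first.
    return mass * (accel_in_pixels / PTM_RATIO)

# e.g. to accelerate a 0.5 kg ball at 300 px/s^2:
# force = force_for_pixel_acceleration(0.5, 300.0)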

How to detect iPhone movement in space using accelerometer?

I am trying to make an application that would detect what kind of shape you made with your iPhone using accelerometer.
As an example, if you draw a circle with your hand holding the iPhone, the app would be able to redraw it on the screen.
This could also work with squares, or even more complicated shapes.
The only example of an application I've seen doing such a thing is AirPaint (http://vimeo.com/2276713), but it doesn't seem to do it in real time.
My first try is to apply a low-pass filter on the X and Y parameters from the accelerometer, and to make a pointer move toward these values, proportionally to the size of the screen.
But this is clearly not enough: I have very low accuracy, and if I shake the device it also makes the pointer move...
Any ideas about that?
Do you think accelerometer data is enough to do it? Or should I consider using other data, such as the compass?
Thanks in advance!
OK I have found something that seems to work, but I still have some problems.
Here is how I proceed (assuming the device is held vertically):
1 - I have my default x, y, and z values.
2 - I extract the gravity vector from this data using a low pass filter.
3 - I subtract the normalized gravity vector from each of x, y, and z, and get the movement acceleration.
4 - Then, I integrate this acceleration value with respect to time, so I get the velocity.
5 - I integrate this velocity again with respect to time, and find a position.
All of the code below is in the accelerometer:didAccelerate: delegate method of my controller.
I am trying to make a ball move according to the position I found.
Here is my code:
NSTimeInterval interval = 0;
NSDate *now = [NSDate date];
if (previousDate != nil)
{
interval = [now timeIntervalSinceDate:previousDate];
}
previousDate = now;
//Isolating gravity vector
gravity.x = currentAcceleration.x * kFileringFactor + gravity.x * (1.0 - kFileringFactor);
gravity.y = currentAcceleration.y * kFileringFactor + gravity.y * (1.0 - kFileringFactor);
gravity.z = currentAcceleration.z * kFileringFactor + gravity.z * (1.0 - kFileringFactor);
float gravityNorm = sqrt(gravity.x * gravity.x + gravity.y * gravity.y + gravity.z * gravity.z);
//Removing gravity vector from initial acceleration
filteredAcceleration.x = acceleration.x - gravity.x / gravityNorm;
filteredAcceleration.y = acceleration.y - gravity.y / gravityNorm;
filteredAcceleration.z = acceleration.z - gravity.z / gravityNorm;
//Calculating velocity related to time interval
velocity.x = velocity.x + filteredAcceleration.x * interval;
velocity.y = velocity.y + filteredAcceleration.y * interval;
velocity.z = velocity.z + filteredAcceleration.z * interval;
//Finding position
position.x = position.x + velocity.x * interval * 160;
position.y = position.y + velocity.y * interval * 230;
If I execute this, I get quite good values; I can see the acceleration going positive or negative according to the movements I make.
But when I try to apply that position to my ball view, I can see it is moving, but with a propensity to go more in one direction than the other. For example, if I draw circles with my device, I will see the ball describing curves towards the top-left corner of the screen.
Something like that : http://img685.imageshack.us/i/capturedcran20100422133.png/
Do you have any ideas about what is happening?
Thanks in advance!
The problem is that you can't integrate acceleration twice to get position, not without knowing the initial position and velocity. Remember the +C term that you added in school when learning about integration? Well, by the time you get to position it is a ct+k term, and it is significant. That's before you consider that the acceleration data you're getting back is quantised and averaged, so you're not actually integrating the actual acceleration of the device. Those errors will end up being large when integrated twice.
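To see how quickly this blows up, here is a toy sketch (the bias value and time step are made-up numbers) of a small constant sensor error being integrated twice:
bias = 0.05               # assumed constant accelerometer error, in m/s^2
dt, t, v, x = 0.01, 0.0, 0.0, 0.0
while t < 10.0:
    v += bias * dt        # velocity error grows linearly with time
    x += v * dt           # position error grows roughly as 0.5 * bias * t^2
    t += dt
print(x)                  # roughly 2.5 m of drift after 10 seconds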
Watch the AirPaint demo closely and you'll see exactly this happening, the shapes rendered are significantly different to the shapes moved.
Even devices that have some position and velocity sensing (a Wiimote, for example) have trouble doing gesture recognition. It is a tricky problem that folks pay good money (to companies like AILive, for example) to solve for them.
Having said that, you can probably quite easily distinguish between certain types of gesture, if their large-scale characteristics are different. A circle can be detected if the device has received accelerations in each of six angle ranges, for example (see the sketch below). You could distinguish between swiping the iPhone through the air and shaking it.
To tell the difference between a circle and a square is going to be much more difficult.
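For illustration, here is a rough sketch of that six-angle-ranges heuristic (the sector count, the near-zero threshold and the accel_samples list of (x, y) readings are all assumptions):
import math

def visited_sectors(accel_samples, sectors=6):
    # Record which angular sector each horizontal acceleration sample points into.
    visited = set()
    for ax, ay in accel_samples:
        if math.hypot(ax, ay) < 0.1:  # skip near-zero readings (assumed threshold)
            continue
        angle = math.atan2(ay, ax) % (2 * math.pi)
        visited.add(int(angle / (2 * math.pi / sectors)))
    return visited

def looks_like_a_circle(accel_samples):
    # Heuristic from above: a circular motion should sweep through every sector.
    return len(visited_sectors(accel_samples, 6)) == 6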
You need to look up how acceleration relates to velocity and velocity to position. My mind is having a wee fart at the moment, but I am sure it is the integral... you want to integrate acceleration with respect to time. Wikipedia should help you with the maths, and I am sure there is a good library somewhere that can help you out.
Just remember though that the accelerometers are not perfect nor polled fast enough. Really sudden movements may not be picked up that well. But for gently drawing in the air, it should work fine.
It seems like you are normalizing your gravity vector before subtracting it from the instantaneous acceleration. This keeps the relative orientation but removes any relative scale. The latest device I tested (admittedly not an iDevice) returned gravity at roughly -9.8, which is probably calibrated to m/s^2. Assuming no other acceleration, if you were to normalize this and then subtract it from the filtered pass, you would end up with a current accel of -8.8 instead of 0.0.
2 options:
- You can just subtract out the un-normalized gravity vector after the filter pass (a rough sketch of this is below).
- Capture the initial accel vector length, normalize the accel and gravity vectors, and scale the accel vector by the dot of the accel and gravity normals.
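A minimal sketch of the first option (Python pseudocode rather than the Objective-C above; the filter factor plays the same role as the constant in the question's code):
K_FILTER = 0.1  # assumed low-pass filter factor

def remove_gravity(accel, gravity):
    # Low-pass the raw reading to track gravity, then subtract the full (un-normalized)
    # gravity estimate so a resting device yields roughly zero linear acceleration.
    gravity = tuple(a * K_FILTER + g * (1.0 - K_FILTER) for a, g in zip(accel, gravity))
    linear = tuple(a - g for a, g in zip(accel, gravity))
    return linear, gravity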
Also worth remembering to take the orientation of the device into account.

Smoothing out didAccelerate messages

I'm working on something similar to an inclinometer. I rotate an image based on the angle calculated in didAccelerate (from UIAccelerometerDelegate) using the passed-in UIAcceleration variable. The image is jittery or twitchy, even when the phone is lying on its back and not moving. There are some inclinometers in the App Store that are super smooth. I've tried to smooth it out by checking the last reading and, if the new one is within a range of it, doing nothing. That stops some of the twitching, but then you get something that looks like stop-motion animation. How can I get a smoother effect?
I would suggest smoothing using a 'critically damped spring'.
Essentially what you do is calculate your target rotation (which is what you have now).
Instead of applying this directly to the image rotation, you attach the image rotation to the target rotation with a spring, or piece of elastic. The spring is damped such that you get no oscillations.
You will need one constant to tune how quickly the system reacts, we'll call this the SpringConstant.
Basically we apply two forces to the rotation, then integrate this over time.
The first force is that applied by the spring, Fs = SpringConstant * DistanceToTarget.
The second is the damping force, Fd = -CurrentVelocity * 2 * sqrt(SpringConstant)
The CurrentVelocity forms part of the state of the system, and can be initialised to zero.
In each step, you multiply the sum of these two forces by the time step.
This gives you the change in the value of the CurrentVelocity.
Multiply this by the time step again and it will give you the displacement.
We add this to the actual rotation of the image.
In c++ code:
float CriticallyDampedSpring( float a_Target,
float a_Current,
float & a_Velocity,
float a_TimeStep )
{
float currentToTarget = a_Target - a_Current;
float springForce = currentToTarget * SPRING_CONSTANT;
float dampingForce = -a_Velocity * 2 * sqrt( SPRING_CONSTANT );
float force = springForce + dampingForce;
a_Velocity += force * a_TimeStep;
float displacement = a_Velocity * a_TimeStep;
return a_Current + displacement;
}
I imagine you have two dimensions of rotation. You just need to use this once for each dimension, and keep track of the velocity of each dimension separately.
In systems I have worked with, 5 was a good value for the spring constant, but you will have to experiment for your situation. Setting it too high will not get rid of the jerkiness; setting it too low will make it react too slowly.
Also, you might be best to make a class that keeps the velocity state rather than have to pass it into the function over and over.
I hope this is helpful :)