Head movement detection using 1 accelerometer

I've been assigned a small project: designing a turning jacket for cyclists, which uses LEDs attached to the back of the jacket to indicate when the cyclist turns right or left. I plan to use one accelerometer to detect head movement. There are only two movements to distinguish: left and right. Is it possible to identify the direction of head movement with a single accelerometer?
Thanks!
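For reference: gravity gives even a single accelerometer an absolute tilt reference, so a sideways head tilt shows up as a shift on one axis. A minimal Arduino-style sketch of the idea (the pin numbers, resting value, and thresholds are assumptions, not from the question, and need calibration for real hardware):

// Illustrative: classify left/right head tilt from one accelerometer axis.
// Assumes an analog accelerometer whose X axis points toward the rider's ear,
// wired to A0, and two LEDs on pins 2 and 3.
const int X_PIN = A0;
const int LEFT_LED = 2;
const int RIGHT_LED = 3;

void setup() {
  pinMode(LEFT_LED, OUTPUT);
  pinMode(RIGHT_LED, OUTPUT);
}

void loop() {
  int x = analogRead(X_PIN);                     // ~512 when the head is level
  digitalWrite(LEFT_LED,  x < 450 ? HIGH : LOW); // tilted left
  digitalWrite(RIGHT_LED, x > 570 ? HIGH : LOW); // tilted right
  delay(50);                                     // crude smoothing/debounce
}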

Related

Unreal Engine 4 - Add offset to character movement

I just started (yesterday) using Unreal Engine and I need to simulate a drunk character using Blueprints.
I'm using two camera shakes (one for standing still and one for walking), but I want to add some "displacement" to the character while he's walking.
Basically, I want to add a random float to the X-axis location to make the character wobble smoothly.
It would be acceptable even if there's just a way to make the character move along with the camera while it's shaking.
What I've tried so far is using AddActorLocalOffset and a timeline to lerp between the actor's location and the actor's location plus an offset, but both look very choppy to me.
Maybe it's a noob question, but as I said, I'm very new to this and need it for a quick job.
Any suggestions?
Thanks
If you are targeting a physically correct model, you should use AddForce (UE Docs). But this approach would require implementing a "drunk animation" in which your character modifies its movement animation to "compensate" for the force by stepping aside, etc.
Another (much simpler) approach is to use AddMovementInput. An example can be seen here: UE Answers. In this case, you basically simulate the player's input by adding a small amount of sideways input here and there.
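For illustration, here is a sketch of what the AddMovementInput approach might look like in UE4 C++ (the question uses Blueprints, where the same node exists; the class name and wobble range are made up):

// Hypothetical drunk character: add a small random sideways input every frame.
void ADrunkCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // A fresh random value per frame can read as jitter; interpolating toward
    // a slowly changing random target usually looks more like staggering.
    const float Wobble = FMath::FRandRange(-0.3f, 0.3f);
    AddMovementInput(GetActorRightVector(), Wobble);
}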

Fluent movement with NetworkTransform & NetworkAnimator

My player character moves around the world using the Animator with root motion activated. Basically, the AI system sets the velocities on the Animator, which in turn controls the animation clips that drive character motion. As this is a standard feature that ensures very realistic animation without noticeable sliding, I thought this was a good idea ...until I added network synchronization.
Synching the characters over the Network using NetworkTransform and NetworkAnimation causes those two components to conflict:
NetworkTransform moves the character to whichever position the host commands.
NetworkAnimator syncs the animation vars and plays the Animation clips as host instructs it to, while those Animation clips also apply root motion.
The result is precise (meaning the character reaches the exact target destination), but the movement stutters badly (noticeable jumps).
With NetworkTransform removed, the host and client instances of the characters desynchronize very quickly, meaning they end up at different positions in the world when controlled solely by the timing-dependent Animator.
With NetworkAnimator removed, client instances won't play the same animations as the host, if any at all.
I tried keeping both Components while disabling root motion for the Animator (on client only). In that case however, NetworkTransform does not seem to interpolate at all. The character just jumps from synched position to synched position in steps of about 0.02 units. Same with rotation.
NetworkTransform is configured to "Sync Transform", as the character neither has a RigidBody nor a CharacterController. All other values are the defaults: sync rate of 9 (also tried higher values there), movement threshold of 0.001, snap threshold of 5, interpolate movement = 1.
How do I get fluent root motion based movement on the Network? I expected that to be a standard scenario...
What you need is to disable the Root Motion flag on non-local instances, but also to interpolate the rotation, not just the movement.
Moreover, an interpolation value of 1 seems high, as does the snap threshold of 5: those look unrealistic unless you are not using the Unity standard where 1 unit = 1 meter. I would use 25 cm (0.25) interpolation for the movement and 3 degrees for the rotation. The sync rate of 9 could be enough, but in my experience it has to be retuned based on packet loss.

Unity 3d Player moves out of play area

I have a player, shown in the image, on a bridge. I want his movement to be restricted to the bridge (at present he can run off the bridge into the air). How should I achieve this?
One method I have thought of is to use continuous collision detection between the bridge and the player to check that he stays within the area. Is this the right approach, and are there any other ways?
Continuous collision detection isn't strictly necessary unless the player is moving really fast.
I see two ways:
Use three colliders: one for the player and the other two to wall off the sides of the bridge. This way the player can't pass through the colliders and won't fall down.
Manually enforce the limits of player movement inside the input handling function (since the bridge has a simple shape, this shouldn't be difficult); see the sketch below.
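The manual check in the second option boils down to clamping the candidate position to the bridge's extents before applying the move. A sketch in C++ (the bounds and the axis are assumptions; in Unity the same few lines go in your C# input handler):

#include <algorithm>

// Hypothetical bridge bounds in world units.
const float BRIDGE_MIN_X = -2.0f;
const float BRIDGE_MAX_X =  2.0f;

// Clamp the sideways coordinate so the player stays on the bridge.
float ClampToBridge(float candidateX)
{
    return std::max(BRIDGE_MIN_X, std::min(candidateX, BRIDGE_MAX_X));
}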

iPhone cocos2d box2d body collision detection without applying force

I am writing a Cocos2D/Box2D game for iPhone.
I have two dynamic bodies. I want them to respond to forces applied from outside, but not to apply forces to each other, while still detecting collisions between them.
How can I achieve this?
I would also like them to move together at the same position after a collision.
How can I do that?
they don't apply forces to each other, while still detecting collisions
Sounds like you might want to look at collision filtering. This answer has a bit of code that changes the collision filtering index of a body dynamically: https://stackoverflow.com/a/11283206/735204
they move together at the same position after collision
Probably some kind of joint (e.g. a weld joint?)
From the manual: http://www.box2d.org/manual.html
Joints are used to constrain bodies to the world or to each other. Typical examples in games include ragdolls, teeters, and pulleys. Joints can be combined in many different ways to create interesting motions.
Some joints provide limits so you can control the range of motion. Some joints provide motors which can be used to drive the joint at a prescribed speed until a prescribed force/torque is exceeded.
Joint motors can be used in many ways. You can use motors to control position by specifying a joint velocity that is proportional to the difference between the actual and desired position. You can also use motors to simulate joint friction: set the joint velocity to zero and provide a small, but significant maximum motor force/torque. Then the motor will attempt to keep the joint from moving until the load becomes too strong.
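As a sketch of the weld-joint idea (the body and world pointers are assumed to come from your game code): note that Box2D locks the world during contact callbacks, so record the pair in BeginContact and create the joint after the world step.

// Weld two bodies together so they move as one after touching.
// Create this AFTER world->Step(), never inside a contact callback.
b2WeldJointDef weldDef;
weldDef.Initialize(bodyA, bodyB, bodyA->GetWorldCenter()); // shared anchor
world->CreateJoint(&weldDef);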
What about this?
fixtureDef.isSensor = true; // isSensor lives on the fixture def, not the body def
and use a ContactListener to detect the collision (see: Box2d for collision detection).
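A minimal b2ContactListener sketch for the sensor approach (the class and the reaction are illustrative):

// Sensors generate begin/end contact events but no collision response.
class MyContactListener : public b2ContactListener {
public:
    void BeginContact(b2Contact* contact) {
        b2Fixture* fixtureA = contact->GetFixtureA();
        b2Fixture* fixtureB = contact->GetFixtureB();
        // The two bodies overlap here -- react in your game code.
    }
};
// Register it once: world->SetContactListener(&myListener);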
You can also use Box2D filters. For example:
REMEMBER: if two fixtures share the same negative groupIndex, they never collide with each other. That is what you need.
b2Filter bodyFilter;
bodyFilter.groupIndex = -1;       // same negative group on both fixtures => no collision between them
bodyFilter.categoryBits = 0x0002;
fixtureDef.filter = bodyFilter;

iPhone 3D compass

I am trying to build an app for the iPhone 4 which enables the user to "point" at a hardcoded destination and a dot appears where the destination is located.
First, I use the compass to make a horizontal compass (this will cover the left/right rotation):
// Heading
nowHeading = heading.trueHeading;
// Shift image (horizontal compass)
float shift = bearing - nowHeading;
destinationImage.center = CGPointMake(shift+160, destinationImage.center.y);
I shift the dot by 160 pixels because the screen is 320 pixels wide. My question now is, how can I extend this code to handle up and down? Meaning that if I point the phone down at the table, the dot won't show; I have to point at the destination (as if taking a picture) for it to be drawn on the screen. I've already implemented the accelerometer, but I don't know how to combine these components to solve my problem.
The shift should depend on the field of view of the camera. For the iPhone 4 the horizontal angular view is 47.5°, so 320 points / 47.5° ≈ 6.7 points per degree; use that to shift horizontally. You also have to add an adaptive filter to the accelerometer readings; you can get one from the AccelerometerGraph sample project from Apple.
You have the rotation about one axis (the bearing); you should get the rotation about the other two from the accelerometers. The atan2 of two axes gives you the rotation about the third. Look at UIAcceleration and imagine an axis physically piercing the device if that helps, and do double xAngle = atan2(acceleration.y, acceleration.z); Once you have the up/down rotation, you can repeat what you did for the horizontal shift using the vertical field of view, e.g. 60° for the iPhone.
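Putting that together, the vertical shift mirrors the horizontal one. A sketch in C (the 480-point height is the iPhone 4 portrait screen; the 60° field of view and the target's elevation angle are assumptions):

#include <math.h>

// Vertical dot shift from accelerometer pitch.
float verticalShift(double accelY, double accelZ, double targetElevationDeg) {
    double pitchDeg = atan2(accelY, accelZ) * 180.0 / M_PI; // device pitch
    double pointsPerDegree = 480.0 / 60.0;                  // ~8 points/degree
    return (float)((targetElevationDeg - pitchDeg) * pointsPerDegree);
}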
That is going to be one rough implementation :) since achieving smooth movement is difficult. One thing you can do is use the gyros to get a faster response and correct their signal periodically with the accelerometers. See this talk for the troubles ahead: Sensor Fusion on Android Devices. There is also a website dedicated to the Kalman Filter. If you dare to use quaternions, I recommend "Visualizing Quaternions" by Andrew J. Hanson.
It sounds like you are trying to do a style of augmented reality. If that is the case, there are several libraries and sample code suggested here:
Augmented Reality