Improving iPhone AR (Tool)Kit by using the Gyroscope

I'm using iPhone AR Kit and its fork, iPhone AR Toolkit, but I'm trying to improve the user experience by using the gyroscope when it's available.
For those of you who have used the kits, do you have any idea how to do this? My first thought was to use the gyroscope yaw to get a more precise azimuth value.
So I have two questions:
Has anyone used the AR Kit linked above, and do you have thoughts on including the gyroscope in it?
Is it a good idea to mix gyroscope and compass data to get a more precise azimuth value?

Gyroscopes measure rotational velocity, so the gyro output will be a rate of change of yaw (e.g. rad/s) rather than an absolute yaw. There are various methods for using gyros for "dead reckoning" of orientation, but in practice, while they're very accurate over the short term, integrating gyro readouts to determine orientation drifts significantly, so you have to keep recalibrating against some absolute measure.
It would be fairly straightforward to use the gyro to interpolate between compass readings, or to calculate the bearing from the gyro alone during short, fast motions while the compass catches up, but properly fusing the compass and gyro isn't trivial. There's a talk here on integrating sensors on Android that might be a good start. The standard method of fusing sensors is a Kalman filter; there's an introduction here. Kalman filters are fairly involved tools: you need a good model of your sensor errors, for example.
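To make the interpolation idea concrete, below is a rough sketch of a complementary filter in Swift, a much simpler alternative to the Kalman filter mentioned above: the gyro's yaw rate carries the heading smoothly between compass readings, while the compass slowly corrects the gyro's accumulated drift. The class name, the blend factor, and the calling convention are assumptions for illustration, not part of the AR kits themselves.

import CoreMotion

// Complementary filter: blend the gyro-integrated heading with the compass.
// alpha close to 1 trusts the gyro in the short term; the (1 - alpha) share
// of the compass reading slowly pulls the estimate back and cancels drift.
final class HeadingFuser {
    private var fusedHeading: Double = 0   // radians
    private let alpha = 0.98               // assumed blend factor, tune per device

    // Call on every device-motion update: yawRate from
    // CMDeviceMotion.rotationRate.z (rad/s), dt since the last update,
    // compassHeading from CLLocationManager converted to radians.
    func update(yawRate: Double, dt: TimeInterval, compassHeading: Double) -> Double {
        let gyroPrediction = fusedHeading + yawRate * dt   // integrate the gyro
        fusedHeading = alpha * gyroPrediction + (1 - alpha) * compassHeading
        // Note: a real implementation must also handle the 0/2π wrap-around
        // before blending the two angles.
        return fusedHeading
    }
}

The result is smoother than the compass alone and doesn't drift the way a gyro-only heading does, but it is still a shortcut; a Kalman filter with a proper error model remains the principled approach.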

Related

How to make gyroscope sensor readings uniform across different devices?

I'm developing an app based on AR and the orientation of the device. The problem is that different devices give me different data from the gyroscope, probably because of differences in hardware. Is there a way to make the results uniform?
I also have the constraint of working in Unity rather than Android Studio or Xcode, so the methods I can use are limited.
I'm using Input.gyro.attitude to get the data, and the results are not the same across devices. Any suggestions?
// Unity: the device's orientation in space as a quaternion
Quaternion direction = Input.gyro.attitude;

Making a trackable human body - Oculus Rift

I'm very new to this. During my research for my PhD thesis I found a way to solve a problem, and for that I need to move my lab testing into a virtual environment. I have an Oculus Rift and an OPTOTRAK system that (in theory) allows me to motion-capture a full body for VR. My question is: can someone point me in the right direction as to what materials I need to check out to start working on such a project? I have a background in programming, so I just need a nudge in the right direction (or a pointer to a similar project).
https://www.researchgate.net/publication/301721674_Insert_Your_Own_Body_in_the_Oculus_Rift_to_Improve_Proprioception - I want to make something like this :)
Thanks a lot
Nice challenge. How accurate and how real-time does the image of your body in the Oculus Rift world need to be? My two (or three) cents:
A selfie-based approach would be the most comfortable for the user: an external camera somewhere, and software that transforms your image to reflect the correct perspective, i.e. how you would see your body through the Oculus at any moment. This is not trivial and needs quite expensive vision software. To make it work through 360 degrees there would have to be more than one camera, watching every individual Oculus user in the room.
An indirect approach could be easier: model your body and only show its dynamics. There are Wii-style electronics in bracelets and on/in special user clothing, involving multiple tilt and acceleration sensors. Together they form a cluster of "body state" sensor information to be accessed by the modeller in the software. No camera is needed, and the software is not that complicated if you use a skeleton model.
Or combine the two: use the camera for the rendering texture and drive the skeleton model with the dynamics from the clothing sensors. Deep learning could perhaps be applied here: with a large number of tilt sensors in the clothing, a variety of body-movement patterns could be trained and connected to the rendering in the Oculus. This would need the same hardware as the previous solution, but the software could be simpler, and your body would look properly textured and move less "mechanistically". Some research would be needed to find the right deep-learning strategy.

iOS 3D indoor navigation application

What are the steps needed to create an indoor 3D navigation application? I have some AutoCAD files for a building, and it would not be a problem to create a 3D model using 3ds Max. Inertial sensors will be used for localization, but after getting the model, how can I integrate it in iOS and create the visualization?
Depending on what your complete requirements are, I believe you do require OpenGL programming in order to create that 3D environment. For navigation, I would suggest using GPS to determine where you are located rather than inertial sensors, or maybe a mix of both to reduce your errors. I am guessing you want to be able to locate yourself in a building where GPS, Wi-Fi, or 3G signals are not available; relying on inertial sensors alone would definitely be error prone.
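If it helps, here is a minimal sketch of the iOS visualization side using SceneKit, a higher-level alternative to raw OpenGL on iOS. It assumes the 3ds Max model has been exported as a Collada file; the file name Building.dae and the view controller are hypothetical.

import SceneKit
import UIKit

// Minimal viewer: load a building model bundled with the app and display it.
class BuildingViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let sceneView = SCNView(frame: view.bounds)
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(sceneView)

        // SceneKit can import .dae/.scn files placed in the app bundle.
        sceneView.scene = SCNScene(named: "Building.dae")
        sceneView.allowsCameraControl = true        // orbit/zoom while prototyping
        sceneView.autoenablesDefaultLighting = true
    }
}

Positioning the user inside that scene (from GPS, Wi-Fi, or inertial data) would then come down to moving a camera or marker node to the estimated coordinates.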

track small movements of iphone with no GPS

I have to write an application for the iPhone that tracks the movement of the iPhone itself, given its initial position, without ever using GPS. That is, I can only use data provided by the gyroscope and the accelerometer. The distances I need to measure are rather small, and the precision I'm looking for is 40-50 cm (~2 feet) at the very most.
Is this possible? If so, what's the best way to go about it? Also, do you know of any existing (and possibly open source) projects that have implemented this already?
Thanks a lot!
If you integrate the acceleration twice you get position, but the error is horrible; it is useless in practice.
Here is an explanation of why (a Google Tech Talk), at 23:20. I highly recommend this video.
I answered a similar question here and here.
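For illustration, here is what the naive double integration looks like with Core Motion in Swift. This is a sketch of why the approach fails rather than a usable tracker: accelerometer bias and noise are integrated twice, so the position error grows roughly with the square of the elapsed time. The 100 Hz rate and the top-level variables are assumptions.

import CoreMotion

let motionManager = CMMotionManager()
var velocity = (x: 0.0, y: 0.0, z: 0.0)
var position = (x: 0.0, y: 0.0, z: 0.0)

motionManager.deviceMotionUpdateInterval = 1.0 / 100.0   // assumed 100 Hz
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let motion = motion else { return }
    let dt = motionManager.deviceMotionUpdateInterval
    let g = 9.81                       // userAcceleration is in g's
    let a = motion.userAcceleration    // gravity already removed by Core Motion

    // First integration: acceleration -> velocity (bias accumulates linearly).
    velocity.x += a.x * g * dt
    velocity.y += a.y * g * dt
    velocity.z += a.z * g * dt

    // Second integration: velocity -> position (error now grows quadratically).
    position.x += velocity.x * dt
    position.y += velocity.y * dt
    position.z += velocity.z * dt

    // A real attempt would also rotate the acceleration into a fixed reference
    // frame using motion.attitude, but that does not fix the drift problem.
}

Even with a perfect orientation estimate, a small constant accelerometer bias b produces a position error of about 0.5 * b * t^2, which quickly exceeds a 40-50 cm budget.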

Writing test cases for iOS 4 accelerometer/gyroscope data collection

I'm developing an app for iPhone (iOS 4.2) which needs to be able to collect large amounts of data from the accelerometer and gyroscope. I'm currently looking at using the Core Motion framework to get the data into an acceptor class (from which I write it to a database).
However, for code quality I want to write some test cases for my acceptor class. From my research there doesn't seem to be any clear way to do this: Core Motion just outputs data as floats, but I don't want to simply feed a load of floats into the acceptor class, because that won't replicate how Core Motion behaves, only how a feed of floats behaves.
Is it fair to assume that, since Core Motion is an Apple-provided framework, when the documentation says it will produce data at x hertz and within the range y to z, this can be taken as a given?
Any ideas/hints relating to writing and developing test cases, and additionally relating to my overall design would be greatly appreciated.
Download the Core Motion teapot sample from the developer website. You'll be able to set the rate (in hertz) of the returned data. You can also request timestamps from Core Motion to get the exact time of each accelerometer/gyroscope reading.
The accelerometer isn't accurate. You can use it to get a general idea of the current acceleration, which is useful for detecting the direction of movement but not for getting distances or velocity.
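One way to make the acceptor class testable is to put a thin protocol seam in front of CMMotionManager, so tests can replay a canned feed with realistic rates and timestamps instead of live sensor data. This is just a sketch; the protocol and type names (MotionSource, CoreMotionSource, FakeMotionSource) are made up for illustration and written in Swift for brevity.

import CoreMotion

// Seam the acceptor depends on, instead of CMMotionManager directly.
protocol MotionSource {
    func startUpdates(interval: TimeInterval,
                      handler: @escaping (CMAcceleration, TimeInterval) -> Void)
}

// Production implementation backed by Core Motion.
final class CoreMotionSource: MotionSource {
    private let manager = CMMotionManager()

    func startUpdates(interval: TimeInterval,
                      handler: @escaping (CMAcceleration, TimeInterval) -> Void) {
        manager.accelerometerUpdateInterval = interval
        manager.startAccelerometerUpdates(to: .main) { data, _ in
            guard let data = data else { return }
            handler(data.acceleration, data.timestamp)
        }
    }
}

// Test double: replays recorded or synthetic samples with their timestamps,
// so the acceptor sees value ranges and timing similar to the real feed.
final class FakeMotionSource: MotionSource {
    private let samples: [(CMAcceleration, TimeInterval)]
    init(samples: [(CMAcceleration, TimeInterval)]) { self.samples = samples }

    func startUpdates(interval: TimeInterval,
                      handler: @escaping (CMAcceleration, TimeInterval) -> Void) {
        for (acceleration, timestamp) in samples {
            handler(acceleration, timestamp)
        }
    }
}

In a test you would construct the acceptor with a FakeMotionSource built from a few seconds of data recorded once on a device and stored as a fixture, then assert on what ends up in the database.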