Writing test cases for iOS 4 accelerometer/gyroscope data collection (iPhone)

I'm developing an app for iPhone (iOS 4.2) which needs to be able to collect large amounts of data from the accelerometer and gyroscope. I'm currently looking at using the CoreMotion framework to get the data into an acceptor class (from which I'm writing it to a database).
However, for code quality I want to write some test cases for my acceptor class. From my research there doesn't seem to be any clear way to do this: CoreMotion just outputs data as floats, but I don't want to simply feed a load of floats into the acceptor class, because that won't replicate how CoreMotion behaves, only how a feed of floats behaves.
Is it fair to assume that, since CoreMotion is an Apple-produced framework, when Apple says it will produce data at x hertz and within the range y to z, this can be taken as a given?
Any ideas/hints relating to writing and developing test cases, and to my overall design, would be greatly appreciated.

Download the Core Motion Teapot sample from the developer website. It shows how to set the rate (in hertz) at which data is returned. You can also request the timestamp from Core Motion to get the exact time of each accelerometer/gyroscope reading.
The accelerometer isn't accurate. You can use it to get a general idea of current acceleration, which is useful for detecting the direction of movement, but not for deriving distances or velocity.
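For reference, here is a minimal sketch (in present-day Swift rather than iOS 4.2-era code) of how the rate and timestamps mentioned above surface through CMMotionManager. The MotionAcceptor protocol is a hypothetical stand-in for the asker's acceptor class; making it a protocol also gives you a seam for feeding recorded or synthetic samples in unit tests.

    import CoreMotion

    // Hypothetical stand-in for the acceptor class that writes samples to the database.
    protocol MotionAcceptor {
        func accept(timestamp: TimeInterval,
                    userAcceleration: CMAcceleration,
                    rotationRate: CMRotationRate)
    }

    final class MotionCollector {
        private let motionManager = CMMotionManager()
        private let acceptor: MotionAcceptor

        init(acceptor: MotionAcceptor) {
            self.acceptor = acceptor
        }

        func start(hertz: Double) {
            guard motionManager.isDeviceMotionAvailable else { return }
            // The requested rate is a hint: the hardware may deliver at a nearby rate,
            // so record each sample's timestamp rather than assuming a fixed step.
            motionManager.deviceMotionUpdateInterval = 1.0 / hertz
            motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
                guard let self = self, let motion = motion else { return }
                self.acceptor.accept(timestamp: motion.timestamp,
                                     userAcceleration: motion.userAcceleration,
                                     rotationRate: motion.rotationRate)
            }
        }

        func stop() {
            motionManager.stopDeviceMotionUpdates()
        }
    }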

Related

How Do I Get Reliable Timing for my Audio App?

I have an audio app in which all of the sound-generating work is done by Pure Data (using libpd).
I've coded a special sequencer in Swift which controls the start/stop playback of multiple sequences, played by the synth engines in Pure Data.
Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and they both seem to require C or Objective-C coding, which I know nearly nothing about either.
However, I've been told in a previous Q&A on here that I need to use Core Audio or AVFoundation to get accurate timing. Without them, I've tried everything else, and the timing is totally messed up (laggy, jittery).
All of the tutorials and books on Core Audio seem overwhelmingly broad and deep to me. If all I need from one of these frameworks is accurate timing for my sequencer, how do you suggest I achieve this as someone who is a total novice to Core Audio and Objective-C, but otherwise has a 95% finished audio app?
If your sequencer is Swift code that depends on being called just in time to push audio, it won't work with good timing accuracy; you simply can't get the timing you need that way.
Core Audio uses a real-time pull-model (which excludes Swift code of any interesting complexity). AVFoundation likely requires you to create your audio ahead of time, and schedule buffers. An iOS app needs to be designed nearly from the ground up for one of these two solutions.
Added: If your existing code can generate audio samples a bit ahead of time, enough to statistically cover the jitter of an OS timer, you can schedule this pre-generated output to be played a few milliseconds later (i.e. when pulled at the correct sample time).
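As a rough illustration of that "generate ahead, then schedule" approach using AVFoundation (assuming your existing synth code can hand you its rendered samples in an AVAudioPCMBuffer, which is an assumption on my part), an AVAudioPlayerNode can be told to start a buffer at an explicit sample time rather than "now":

    import AVFoundation

    /// Rough sketch: plays buffers that were rendered ahead of time,
    /// each scheduled at an explicit sample time instead of "now".
    final class ScheduledPlayer {
        private let engine = AVAudioEngine()
        private let player = AVAudioPlayerNode()
        private let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!

        func start() throws {
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: format)
            try engine.start()
            player.play()
        }

        /// `buffer` is assumed to hold output your existing code generated a few
        /// milliseconds before it is needed (e.g. copied out of libpd).
        func schedule(_ buffer: AVAudioPCMBuffer, atSampleTime sampleTime: AVAudioFramePosition) {
            // Anchoring playback to a sample time is what gives sample-accurate starts;
            // the jittery OS timer only has to queue the buffer before that time arrives.
            let when = AVAudioTime(sampleTime: sampleTime, atRate: format.sampleRate)
            player.scheduleBuffer(buffer, at: when, options: [], completionHandler: nil)
        }
    }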
AudioKit is an open source audio framework that provides Swift access to Core Audio services. It includes a Core Audio based sequencer, and there is plenty of sample code available in the form of Swift Playgrounds.
The AudioKit AKSequencer class has the transport controls you need. You can add MIDI events to your sequencer instance programmatically, or read them from a file. You could then connect your sequencer to an AKCallbackInstrument which can execute code upon receiving MIDI noteOn and noteOff commands, which might be one way to trigger your generated audio.

Changing the pitch of a sound in realtime (Swift)

I have been trying, for the past few weeks, to find a simple way to control audio frequency (i.e. change the pitch of an audio file) from within Swift in real time.
I have tried with AVAudioPlayer.rate but that only changes the speed.
I even tried connecting an AVAudioUnitTimePitch to an audio engine with no success.
AVAudioUnitTimePitch just gives me an error, and rate only changes the playback speed, which is not what I need.
What I would like to do is make a sound higher or lower in pitch, say from -1.0 to 2.0 (audio source.duration/2, so it would play twice as fast).
Do you guys know of any way to do this even if I have to use external libraries or classes?
Thank you, I am stumped as to how to proceed.
If you use the Audio Unit API, the iOS built-in kAudioUnitSubType_NewTimePitch Audio Unit component can be used to change pitch while playing audio at the same speed; that is, the kNewTimePitchParam_Rate and kNewTimePitchParam_Pitch parameters are independent.
The Audio Unit framework has a (currently non-deprecated) C API, but one can call C API functions from Swift code.
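For instance, a rough Swift sketch (the AVAudioEngine wiring is just for illustration): AVAudioUnitTimePitch is effectively AVFoundation's wrapper around kAudioUnitSubType_NewTimePitch, and the same parameters can also be set through the C API on its underlying AudioUnit.

    import AVFoundation
    import AudioToolbox

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()   // wraps the NewTimePitch component

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: nil)
    engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

    // Pitch is in cents (+1200 = one octave up); rate is a separate, independent control.
    timePitch.pitch = 700    // shift up a fifth
    timePitch.rate = 1.0     // playback speed unchanged

    // The equivalent C-API call on the underlying unit, callable directly from Swift.
    AudioUnitSetParameter(timePitch.audioUnit,
                          kNewTimePitchParam_Pitch,
                          kAudioUnitScope_Global,
                          0,      // element
                          700,    // pitch shift in cents
                          0)

    // ...then schedule a file or buffer on `player`, start the engine, and call player.play().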

How to record accelerometer data from Nike+?

I'm not sure whether this is possible or not.
I'd like to receive accelerometer data from Nike+.
Actually, some data related to running history is recorded in XML format on the iPhone.
But the XML file doesn't include accelerometer data.
Is it possible for the iPhone to receive accelerometer data from Nike+, and then write the x-y-z signals to some file, such as an XML file, on the iPhone?
Nike+ may use an iPhone, but it doesn't record the data using the accelerometer; it uses either data transmitted from the pedometer or the built-in GPS.
Judging by the data retrievable from the Nike+ website, they have only recently started collecting/exposing more data, and accelerometer information isn't among it.
It is possible that the 7th-generation Nano (and maybe previous ones too) returns some sort of accelerometer data, as it doesn't rely on external sensors to collect running data but acts as a pedometer itself. I doubt it, but there's a very small possibility.

Getting the voltage applied to an iPhone's microphone port

I am looking at a project where we send information from a peripheral device to an iPhone through the microphone input.
In a nutshell, the iPhone would act as a voltmeter. (In reality, the controller we developed will send data encoded as voltages to the iPhone for further processing).
As there are several voltmeter apps on the App Store that receive their input through the microphone port, this seems to be technically possible.
However, scanning the AudioQueue and AudioFile APIs, there doesn't seem to be a method for directly accessing the voltage.
Does anyone know of APIs, sample code or literature that would allow me to access the voltage information?
The A/D converter on the line-in is a voltmeter, for all practical purposes. The sample values map to the voltage applied at the time the sample was taken. You'll need to do your own testing to figure out what voltages correspond to the various sample values.
As far as I know, it won't be possible to get the voltages directly; you'll have to figure out how to convert them to equivalent 'sounds' such that the iOS APIs will pick them up as sounds, which you can interpret as voltages in your app.
If I were attempting this sort of app, I would hook up some test voltages to the input (very small ones!), capture the sound and then see what it looks like. A bit of reverse engineering should get you to the point where you can interpret the inputs correctly.
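For what it's worth, here is a small sketch of that capture step in Swift, tapping the microphone with AVAudioEngine and inspecting raw sample values. The sample-value-to-volts mapping is the part you'd calibrate yourself; voltsPerFullScale below is a hypothetical constant standing in for that calibration.

    import AVFoundation

    // Note: on iOS you also need a record-capable AVAudioSession category and
    // microphone permission before the input node will deliver samples.
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        var peak: Float = 0
        for i in 0..<Int(buffer.frameLength) {
            peak = max(peak, abs(samples[i]))
        }
        // Samples are normalized floats (roughly -1.0...1.0); mapping full scale to an
        // actual voltage requires calibrating against known test inputs.
        let voltsPerFullScale: Float = 1.0   // hypothetical, measured during calibration
        print("peak sample \(peak) ~= \(peak * voltsPerFullScale) V")
    }

    do {
        try engine.start()
    } catch {
        print("Could not start audio engine: \(error)")
    }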

Improving iPhone AR (Tool)Kit by using the Gyroscope

I'm using iPhone AR Kit and its fork, iPhone AR Toolkit, but I'm trying to improve the user experience by using the gyroscope when it's available.
For those of you who used the kits, do you have any idea on how to do this ? My first thought was to get the gyroscope yaw to get a more precise azimuth value.
So I have two questions:
Has anyone used the AR Kit linked above, and do you have any thoughts on including the gyroscope in it?
Is it a good idea to mix gyroscope and compass data to get a more precise value for the azimuth?
Gyroscopes measure rotational velocity, so the gyro output will be a change in yaw per second (e.g. rad/s) rather than an absolute yaw. There are various methods for trying to use gyros for "dead reckoning" of orientation, but in practice, while they're very accurate over the short term, integrating gyro read-outs to determine orientation "drifts" significantly, so you have to keep recalibrating against some absolute measure.
It would be fairly trivial to use the gyro to interpolate between compass readings, or to calculate the bearing from the gyro alone for short, fast motions while the compass catches up, but properly fusing the compass and gyro isn't trivial. There's a talk here on integrating sensors for Android that might be a good start. The standard method of fusing sensors is to use a Kalman filter; there's an introduction here. They're fairly involved tools; you need a good model of your sensor errors, for example. A simple complementary-filter sketch of the interpolation idea follows below.
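To make the "gyro between compass readings" idea concrete, here is a simple complementary filter sketch in Swift (not a Kalman filter); the blend constant and the sign of the yaw term are assumptions that depend on your device-orientation conventions.

    import Foundation

    /// Complementary filter: integrate the gyro's yaw rate for smooth, responsive
    /// updates, and pull the estimate gently toward the compass heading so the
    /// integration drift gets corrected over time.
    final class HeadingFilter {
        /// Weight given to the integrated gyro estimate each update (assumed value).
        private let alpha = 0.98
        private var fusedHeadingDegrees: Double?

        /// - Parameters:
        ///   - yawRate: gyro yaw rate in rad/s (e.g. CMDeviceMotion.rotationRate.z)
        ///   - dt: seconds since the previous update
        ///   - compassHeading: latest compass heading in degrees (e.g. from CLHeading)
        func update(yawRate: Double, dt: Double, compassHeading: Double) -> Double {
            guard let previous = fusedHeadingDegrees else {
                fusedHeadingDegrees = compassHeading
                return compassHeading
            }
            // Integrate rotational velocity to predict the new heading.
            // (The sign depends on axis/orientation conventions.)
            let gyroEstimate = previous - yawRate * dt * 180.0 / .pi

            // Blend toward the compass along the shortest angular path so the
            // 0/360 wrap-around doesn't produce a bogus average.
            var diff = (compassHeading - gyroEstimate).truncatingRemainder(dividingBy: 360)
            if diff > 180 { diff -= 360 }
            if diff < -180 { diff += 360 }

            var fused = (gyroEstimate + (1 - alpha) * diff).truncatingRemainder(dividingBy: 360)
            if fused < 0 { fused += 360 }
            fusedHeadingDegrees = fused
            return fused
        }
    }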