How to record accelerometer data from Nike+? - iPhone

I'm not sure whether this is possible or not.
I'd like to receive accelerometer data from Nike+.
Actually, some data related to running history is recorded in XML format on the iPhone.
But the XML file doesn't include accelerometer data.
Is it possible for the iPhone to receive accelerometer data from Nike+ and then write the x-y-z signals to a file, such as an XML file, on the iPhone?

Nike+ may use an iPhone, but it doesn't record data using the accelerometer; it uses either data transmitted from the pedometer or the built-in GPS.
I have retrieved data from the Nike+ website, and they have only recently started collecting/exposing more data; accelerometer information isn't part of it.
It is possible that the 7th-generation Nano (and maybe earlier ones too) returns some sort of accelerometer data, since it doesn't rely on an external sensor to collect running data but acts as a pedometer itself. I doubt it, but there's a very small possibility.
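That said, if the goal is just to get x-y-z samples into a file on the iPhone, the device's own accelerometer (not the Nike+ sensor) can be logged with Core Motion. A rough sketch, where the CSV file name and the 50 Hz update rate are arbitrary choices:

```swift
import CoreMotion
import Foundation

// Rough sketch: log the iPhone's own accelerometer (not Nike+ data) to a CSV file.
// The file name and the 50 Hz update rate are arbitrary choices for illustration.
final class AccelerometerLogger {
    private let motionManager = CMMotionManager()
    private let fileHandle: FileHandle

    init?(fileName: String = "accel_log.csv") {
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent(fileName)
        FileManager.default.createFile(atPath: url.path, contents: nil)
        guard let handle = try? FileHandle(forWritingTo: url) else { return nil }
        fileHandle = handle
        fileHandle.write("timestamp,x,y,z\n".data(using: .utf8)!)
    }

    func start() {
        guard motionManager.isAccelerometerAvailable else { return }
        motionManager.accelerometerUpdateInterval = 1.0 / 50.0   // 50 Hz
        motionManager.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let data = data else { return }
            let line = "\(data.timestamp),\(data.acceleration.x),\(data.acceleration.y),\(data.acceleration.z)\n"
            self?.fileHandle.write(line.data(using: .utf8)!)
        }
    }

    func stop() {
        motionManager.stopAccelerometerUpdates()
        fileHandle.closeFile()
    }
}
```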

Related

How to offline debug augmented reality in Unity?

I was wondering if there was a way to record the sensor and video data from my iPhone, save it in some way, and then feed it into Unity to test an AR app.
I'd like to see how different algorithms behave on identical input, and that's hard to do when the only way to test is to pick up my phone and wave it around.
What you can do is capture the image buffer. I've done something similar using ARCore; I'm not sure whether ARKit has a similar implementation. I found this in a brief search: https://forum.unity.com/threads/how-to-access-arframe-image-in-unity-arkit.496372/
In ARCore, you can take this image buffer and, using ImageConversion.EncodeToPNG, create PNG files named with the timestamp. You can pull your sensor data in parallel. Depending on what you want, you can write it to a file using a similar approach: https://support.unity3d.com/hc/en-us/articles/115000341143-How-do-I-read-and-write-data-from-a-text-file-
After that, you can use FFmpeg to convert these PNGs into a video. If you want to try different algorithms, there's a good chance the PNGs alone will be enough; otherwise you can use a command like the one described here: http://freesoftwaremagazine.com/articles/assembling_video_png_stream_ffmpeg/
You should be able to pass these images and the corresponding sensor data to your algorithm to check.
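If you end up capturing natively on the iPhone instead of inside Unity, ARKit exposes the same raw ingredients on each frame. A rough Swift sketch of that route (not the ARCore/Unity approach above; the file naming and the pose-only "sensor data" log are just placeholders):

```swift
import ARKit
import UIKit

// Rough sketch (native ARKit, outside Unity): save each frame's camera image as PNG
// alongside its timestamp and camera pose, so identical input can be replayed later.
final class FrameRecorder: NSObject, ARSessionDelegate {
    private let outputDir = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
    private let context = CIContext()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is a CVPixelBuffer; convert via CoreImage.
        // (In practice, move the PNG encoding off the session queue to avoid dropping frames.)
        let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent),
              let png = UIImage(cgImage: cgImage).pngData() else { return }

        let stamp = String(format: "%.3f", frame.timestamp)
        try? png.write(to: outputDir.appendingPathComponent("frame_\(stamp).png"))

        // Log some "sensor data" (here just the camera position) with the same timestamp.
        let pose = frame.camera.transform
        let line = "\(stamp),\(pose.columns.3.x),\(pose.columns.3.y),\(pose.columns.3.z)\n"
        guard let data = line.data(using: .utf8) else { return }
        let logURL = outputDir.appendingPathComponent("poses.csv")
        if let handle = try? FileHandle(forWritingTo: logURL) {
            handle.seekToEndOfFile()
            handle.write(data)
            handle.closeFile()
        } else {
            try? data.write(to: logURL)
        }
    }
}
```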

Getting the voltage applied to an iPhone's microphone port

I am looking at a project where we send information from a peripheral device to an iPhone through the microphone input.
In a nutshell, the iPhone would act as a voltmeter. (In reality, the controller we developed will send data encoded as voltages to the iPhone for further processing).
As there are several voltmeter apps on the App Store that receive their input through the microphone port, this seems to be technically possible.
However, scanning the AudioQueue and AudioFile APIs, there doesn't seem to be a method for directly accessing the voltage.
Does anyone know of APIs, sample code or literature that would allow me to access the voltage information?
The A/D converter on the line-in is a voltmeter, for all practical purposes. The sample values map to the voltage applied at the time the sample was taken. You'll need to do your own testing to figure out what voltages correspond to the various sample values.
As far as I know, it won't be possible to get the voltages directly; you'll have to figure out how to convert them to equivalent 'sounds' such that the iOS APIs will pick them up as sounds, which you can interpret as voltages in your app.
If I were attempting this sort of app, I would hook up some test voltages to the input (very small ones!), capture the sound and then see what it looks like. A bit of reverse engineering should get you to the point where you can interpret the inputs correctly.
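As a starting point for that reverse engineering, here is a rough sketch that just inspects the raw float samples coming off the input, using AVAudioEngine rather than the AudioQueue API mentioned in the question; the volts-per-sample-unit scale is something you would have to calibrate against known test voltages:

```swift
import AVFoundation

// Rough sketch: inspect raw sample values from the mic/line-in.
// Requires microphone permission; an active AVAudioSession is assumed.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, time in
    guard let samples = buffer.floatChannelData?[0] else { return }
    var minSample: Float = 0
    var maxSample: Float = 0
    for i in 0..<Int(buffer.frameLength) {
        minSample = min(minSample, samples[i])
        maxSample = max(maxSample, samples[i])
    }
    // The floats are proportional to the input voltage; compare against known
    // test voltages applied to the port to work out the actual scale.
    print("sample range [\(minSample), \(maxSample)] at frame \(time.sampleTime)")
}

try? engine.start()
```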

Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone)

I'd like to get real-time video from the iPhone to another device (either desktop browser or another iPhone, e.g. point-to-point).
NOTE: It's not one-to-many, just one-to-one at the moment. Audio can be part of the stream or go via a telephone call on the iPhone.
There are four ways I can think of...
1. Capture frames on the iPhone, send the frames to a mediaserver, and have the mediaserver publish realtime video using the host webserver.
2. Capture frames on the iPhone, convert them to images, send them to an httpserver, and have javascript/AJAX in the browser reload the images from the server as fast as possible.
3. Run an httpServer on the iPhone, capture 1-second-duration movies on the iPhone, create M3U8 files on the iPhone, and have the other user connect directly to the httpServer on the iPhone for live streaming.
4. Capture 1-second-duration movies on the iPhone, create M3U8 files on the iPhone, send them to an httpServer, and have the other user connect to the httpServer for live streaming. This is a good answer; has anyone gotten it to work?
Is there a better, more efficient option?
What's the fastest way to get data off the iPhone? Is it ASIHTTPRequest?
Thanks, everyone.
Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.
Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
Write your own parser for the H.264/AAC output (very hard).
Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
"Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions)."
I have just written such code, and it is quite possible to eliminate that gap by overlapping two AVAssetWriters. Since this approach uses the hardware encoder, I strongly recommend it.
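A rough sketch of that chunking idea, starting the next AVAssetWriter before the previous one finishes so no frames fall into the gap. The class name, file names, 1-second chunk length, and output settings are placeholder assumptions; the sample buffers are assumed to arrive from an AVCaptureVideoDataOutput delegate:

```swift
import AVFoundation

// Rough sketch of chunked hardware-encoded recording with overlapping AVAssetWriters.
final class ChunkedRecorder {
    private var currentWriter: AVAssetWriter?
    private var currentInput: AVAssetWriterInput?
    private var chunkStart: CMTime = .invalid
    private let chunkDuration = CMTime(seconds: 1, preferredTimescale: 600)  // placeholder
    private var chunkIndex = 0

    // Call from the AVCaptureVideoDataOutput sample buffer delegate.
    func append(_ sampleBuffer: CMSampleBuffer) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        if currentWriter == nil {
            startNewChunk(at: pts)
        } else if CMTimeSubtract(pts, chunkStart) >= chunkDuration {
            // Start the next writer *before* finishing the current one, so no
            // frames are lost while the previous chunk is being closed.
            let finishing = (currentWriter, currentInput)
            startNewChunk(at: pts)
            finishing.1?.markAsFinished()
            finishing.0?.finishWriting { /* chunk file is now ready to upload */ }
        }

        if let input = currentInput, input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    private func startNewChunk(at time: CMTime) {
        chunkIndex += 1
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk_\(chunkIndex).mp4")
        try? FileManager.default.removeItem(at: url)

        guard let writer = try? AVAssetWriter(outputURL: url, fileType: .mp4) else { return }
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,   // hardware encoder
            AVVideoWidthKey: 640,                     // placeholder dimensions
            AVVideoHeightKey: 480
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: time)

        currentWriter = writer
        currentInput = input
        chunkStart = time
    }
}
```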
We have similar needs; to be more specific, we want to implement streaming video & audio between an iOS device and a web UI. The goal is to enable high-quality video discussions between participants using these platforms. We did some research on how to implement this:
We decided to use OpenTok and managed to pretty quickly implement a proof-of-concept style video chat between an iPad and a website using the OpenTok getting started guide. There's also a PhoneGap plugin for OpenTok, which is handy for us as we are not doing native iOS.
Liblinphone also seemed to be a potential solution, but we didn't investigate further.
iDoubs also came up, but again, we felt OpenTok was the most promising one for our needs and thus didn't look at iDoubs in more detail.

Writing test cases for iOS 4 accelerometer/gyroscope data collection

I'm developing an app for the iPhone (iOS 4.2) which needs to be able to collect large amounts of data from the accelerometer and gyroscope. I'm currently looking at using the CoreMotion framework to get the data into an acceptor class (from which I write it to a database).
However, for code quality I want to write some test cases to test my acceptor class. From my research there doesn't seem to be any clear way to do this: CoreMotion just outputs data as floats, but I don't want to simply feed a load of floats into the acceptor class, because that won't replicate how CoreMotion behaves, only how a feed of floats behaves.
Is it fair to assume that, since CoreMotion is an Apple-produced framework, when "they" say it will produce data at x Hz and within the range y to z, this can be taken as a given?
Any ideas/hints relating to writing and developing test cases, and additionally relating to my overall design would be greatly appreciated.
Download the Core Motion teapot sample from the developer website. You'll be able to set the rate (in Hz) at which data is returned. You can also request timestamps from Core Motion to get the exact time of each accelerometer/gyroscope sample.
The accelerometer isn't accurate. You can use it to get a general idea of the current acceleration, which is useful for detecting the direction of movement, but not for deriving distances or velocity.
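On the test-case and design question, one common approach is to hide CMMotionManager behind a small protocol so that tests can inject a scripted feed instead of live sensor data. A rough sketch; the type names and the 100 Hz rate are just for illustration:

```swift
import CoreMotion

// Rough sketch: abstract the motion source so the acceptor class can be tested
// with a scripted feed. Names (MotionSample, MotionSource, etc.) are hypothetical.
struct MotionSample {
    let timestamp: TimeInterval
    let x: Double, y: Double, z: Double
}

protocol MotionSource {
    func start(handler: @escaping (MotionSample) -> Void)
    func stop()
}

// Production implementation backed by CoreMotion.
final class CoreMotionSource: MotionSource {
    private let manager = CMMotionManager()

    func start(handler: @escaping (MotionSample) -> Void) {
        manager.accelerometerUpdateInterval = 1.0 / 100.0   // 100 Hz, for illustration
        manager.startAccelerometerUpdates(to: .main) { data, _ in
            guard let d = data else { return }
            handler(MotionSample(timestamp: d.timestamp,
                                 x: d.acceleration.x, y: d.acceleration.y, z: d.acceleration.z))
        }
    }

    func stop() { manager.stopAccelerometerUpdates() }
}

// Test implementation that replays a fixed script.
final class FakeMotionSource: MotionSource {
    private let samples: [MotionSample]
    init(samples: [MotionSample]) { self.samples = samples }

    func start(handler: @escaping (MotionSample) -> Void) {
        samples.forEach(handler)   // or drive with a timer to simulate a given rate
    }

    func stop() {}
}
```

The acceptor class then takes a MotionSource rather than touching CMMotionManager directly, so the tests control exactly what data arrives and when.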

Streaming Audio Clips from iPhone to server

I'm wondering if there are any atomic examples out there for streaming audio FROM the iPhone to a server. I'm not interested in telephony or SIP-style solutions, just a simple socket stream to send an audio clip, in .wav format, as it is being recorded. I haven't had much luck with Google or other obvious avenues, although there seem to be many examples of doing this the other way around.
Anyway, I'm not really interested in the audio format at present, just the streaming aspect. I want to take the microphone input and stream it from the iPhone to a server. I don't presently care about the transfer rate, as I'll initially just test over a WiFi connection, not 3G. The reason I can't cache it is that I'm interested in trying out some open source speech recognition tools for my undergraduate thesis. Caching and then sending the recording is possible, but then it takes considerably longer to get the voice data to the server. If I can start sending the data as soon as I start recording, the response time improves considerably, because most of the data will have already reached the server by the time I let go of the record button. Furthermore, if I can get this streaming functionality working from the iPhone, then on the server side I can also start the speech recognizer as soon as the first bit of audio comes through. Again, this should considerably speed up the total time the transaction takes from the user's perspective.
Colin Barrett mentions phones and phone networks, but these are actually a pretty suboptimal solution for ASR, mainly because they provide no good way to recover from errors; doing that over a VoIP dialogue is a horrible experience. However, the iPhone, and in particular its touch screen, provides a great way to do that, through use of an IME or n-best lists for the other recognition candidates.
If I can figure out the basic architecture for streaming the audio, then I can start thinking about FLAC encoding or something similar to reduce the required transfer rate, and maybe even feature extraction, although that limits the later ability to retrain the system with the recordings.
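For the streaming part itself, a rough sketch of one way to do it: tap the microphone with AVAudioEngine and push each captured buffer over a TCP connection as it arrives. The host, port, and raw-PCM payload are placeholder assumptions; a real setup would add framing and an agreed audio format.

```swift
import AVFoundation
import Network

// Rough sketch: push raw PCM from the microphone to a server over TCP as it is
// captured, rather than recording first and uploading afterwards.
final class MicStreamer {
    private let engine = AVAudioEngine()
    private let connection = NWConnection(host: "example.com", port: 9000, using: .tcp)  // placeholders

    func start() throws {
        connection.start(queue: .global())

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            // Send each captured buffer immediately, so the server can begin
            // recognition before the user has finished speaking.
            let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
            let data = Data(bytes: channel, count: byteCount)
            self?.connection.send(content: data, completion: .contentProcessed { _ in })
        }

        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
        connection.cancel()
    }
}
```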