I have to write an iPhone application that tracks the movement of the device itself, given its initial position, without ever using GPS. That is, I can only use data provided by the gyroscope and the accelerometer. The distances I need to measure are rather small, and the precision I'm looking for is 40-50 cm (~2 feet) at the very most.
Is this possible? If so, what's the best way to go about it? Also, do you know of any existing (and possibly open source) projects that have implemented this already?
Thanks a lot!
If you integrate the acceleration twice you get position, but the error is horrible. It is useless in practice.
Here is an explanation of why (a Google Tech Talk, at 23:20). I highly recommend this video.
I answered a similar question here and here.
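To see the scale of the problem, here is a minimal sketch of the naive double integration with Core Motion (one axis only; the class name and constants are illustrative, not a recommended design):

```swift
import CoreMotion

// A sketch of naive dead reckoning on one axis. Bias and noise in
// userAcceleration are integrated twice, so position error grows
// roughly with the square of elapsed time.
final class DeadReckoner {
    private let motion = CMMotionManager()
    private let dt = 0.01            // 100 Hz update interval
    private var velocity = 0.0       // m/s
    private(set) var position = 0.0  // m

    func start() {
        motion.deviceMotionUpdateInterval = dt
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.userAcceleration else { return }
            let g = 9.81                              // userAcceleration is in g's
            self.velocity += a.x * g * self.dt        // first integration
            self.position += self.velocity * self.dt // second integration
            // Within seconds, `position` drifts by metres.
        }
    }
}
```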
I want to film a batter swinging at a baseball, but the bat is blurry. The video is 30 fps.
Through research I have found that deconvolution seems to be the way to minimize motion blur, but I have no idea if or how I can implement it in my iOS app post processing.
I was hoping someone could point me in the right direction, such as how to apply a deconvolution algorithm in iOS or what I might need to do...or whether it is even possible. I imagine it takes some processing power.
Any suggestions at all are welcome...
Thanks, this is driving me crazy...
After a lot of research and talks with developers about deconvolution on iOS (thanks to Brad Larson for taking the time to give me detailed information), I am confident that it is not possible and/or not worth the time. If the hardware can handle the computations (no guarantee), it would be EXTREMELY slow and consume much of the device's battery. I have also been told it could take months to implement the algorithms...if it is possible at all.
Here is the response I received from Apple...
Deconvolution algorithms are generally difficult to implement and can be very computationally intensive. I suggest you start with a simple sharpening technique. Depending on the amount of motion blur in your video, it might just suffice.
The sharpen filters, including CISharpenLuminance and CIUnsharpMask, are now available in iOS 6, so it is moderately easy to test them out.
Core Image Filter Reference
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
Core Image sample code from this year's WWDC session 511, "Core Image Techniques". It's called "Attempt3". This sample demonstrates best practices for applying CIFilters to live video taken by the iPhone/iPad camera. You may download the session video from the following page: https://developer.apple.com/videos/wwdc/2012/.
Just wanted to pass this information along.
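For anyone trying the sharpening route Apple suggested, here is a minimal sketch using the CIUnsharpMask filter mentioned above; `frame` is assumed to be a CIImage built from one video frame, and the parameter values are just starting points:

```swift
import CoreImage

// Applies an unsharp mask to a single frame. The radius and intensity
// values are arbitrary starting points to tune by eye.
func sharpen(_ frame: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIUnsharpMask") else { return nil }
    filter.setValue(frame, forKey: kCIInputImageKey)
    filter.setValue(2.5, forKey: kCIInputRadiusKey)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)
    return filter.outputImage
}
```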
I'm very new to audio programming, but I know this must be possible. (This is an iOS/iPhone related question).
How would I go about changing the tempo of a loaded audio file without changing the pitch, and then playing it back?
I think I need to delve into the CoreAudio framework, but I'm not sure where to begin.
If anyone could let me know what classes I need to look at, or the general process involved, that would help me get started and I'd really appreciate it!
Cheers!
This question is closely related: it deals with pitch shifting rather than time stretching, but I'd check out the comments and links.
Real-time Pitch Shifting on the iPhone
What you are looking for is a time-pitch modification library. Core Audio on iOS currently does not include one, but there appear to be some third-party libraries available commercially. There are also time-pitch tutorials on the web, such as at DSP Dimension, which require a large amount of DSP development to get working.
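For what it's worth, later iOS releases did gain a built-in time-pitch unit: AVAudioUnitTimePitch in AVFoundation. A minimal sketch (the file name "loop.m4a" is hypothetical):

```swift
import AVFoundation

// AVAudioUnitTimePitch changes tempo via `rate` while preserving pitch.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.5   // 1.5x tempo, same pitch

engine.attach(player)
engine.attach(timePitch)
engine.connect(player, to: timePitch, format: nil)
engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

do {
    let url = Bundle.main.url(forResource: "loop", withExtension: "m4a")!
    let file = try AVAudioFile(forReading: url)
    try engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
} catch {
    print("Audio setup failed: \(error)")
}
```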
I have to develop a DJ app for the iPhone. It includes everything from playing a song to scratching the audio.
Is there a way I can scratch the audio? Are there any good frameworks you can suggest? What are the best possible options?
I have referred to the links below, but they didn't help my cause:
http://blog.glowinteractive.com/2011/01/vinyl-scratch-emulation-on-iphone/
Scratching Audio
I'd start by reading Kjetil Falkenberg Hansen's recent PhD thesis, The Acoustics and Performance of DJ Scratching, to get to grips with the nature of the problem. This should provide you with some effective parameters for your program.
I imagine you'll want to buffer a certain amount of the audio to be 'scratched' and simply advance through said buffer at varying speeds, both forwards and backwards.
Consider this link (and similar ones) for how to build a buffer.
If the iPhone API doesn't provide a useful way to advance through the buffer at different speeds, you might consider making your own temporary buffer, then using it to populate the buffer used by the iPhone based on some interpolating function.
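Here is a minimal sketch of that idea; the names are illustrative Swift, not a particular iOS audio API. A render callback would fill `out` each cycle, with `rate` driven by the touch velocity on the virtual platter (negative values play backwards):

```swift
// Variable-rate playback over a sample buffer with linear interpolation --
// the core of a scratch effect.
func renderScratch(samples: [Float],
                   playhead: inout Double,
                   rate: Double,
                   out: inout [Float]) {
    for i in 0..<out.count {
        let idx = Int(playhead)
        guard idx >= 0, idx + 1 < samples.count else {
            out[i] = 0            // ran off the buffer: output silence
            continue
        }
        let frac = Float(playhead - Double(idx))
        // Interpolate between neighbouring samples for non-integer positions.
        out[i] = samples[idx] * (1 - frac) + samples[idx + 1] * frac
        playhead += rate          // e.g. 1.0 = normal speed, -2.0 = fast reverse
    }
}
```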
BTW - the first link you posted looks very useful indeed! What's it missing?
Has anyone experimented with eyetracking for the iPhone or heard of projects related to eyetracking in iOS?
Is it technically feasible at all?
How problematic would recording that data be in the light of ongoing privacy discussions?
There is a head-tracking technique introduced by Johnny Lee. I found this project, which applies that technique:
Head tracking for iPad
Hope you find it useful.
I think this is feasible as long as the phone's camera is pointed at the user's head. It would probably require pretty good light for the image to be crisp enough to recognize the face and eyes properly, though.
When the user isn't looking directly at the camera, you would probably need to do some sort of "head recognition" and determine the orientation of the user's head. This would give you a rough direction on where they are looking.
I'm not aware of any face recognition related projects for iOS, but you could probably google and find some existing project in some other language and port it to iOS with a bit of effort.
As a side note, there's a DIY project for the head-orientation tracking part on PC. It uses infrared lights placed, for example, on your headphones, and a camera that determines the orientation of your head from them. Perhaps it'll give you some ideas.
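As a starting point for the face-and-eyes part, Core Image has shipped a face detector since iOS 5 that reports eye positions; a minimal sketch, assuming `image` is a frame from the front camera:

```swift
import CoreImage

// Returns detected eye positions -- a rough input for gaze or
// head-orientation estimation.
func eyePositions(in image: CIImage) -> [(left: CGPoint, right: CGPoint)] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let faces = detector?.features(in: image) as? [CIFaceFeature] ?? []
    return faces.compactMap { face in
        guard face.hasLeftEyePosition && face.hasRightEyePosition else { return nil }
        return (face.leftEyePosition, face.rightEyePosition)
    }
}
```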
I know it's nearly 3 years late, but I just came across this commercial cross-platform SDK, which among other things does eye tracking. I think this kind of technology will be coming in future iOS/Android versions.
I have a children's iPhone application that I am writing and I need to be able to shift the pitch of a sound sample using Core Audio. Does anyone have any example code I could look at where this is done. There are many music and game apps in the app store that do this so I know I am not the first one. However, I cannot find any examples of it being done.
You can use DIRAC2 from DSP Dimension for pitch shifting on the iPhone. To quote:
"DIRAC2 is available as both a commercial object library offering unlimited sample rates and phase locked multichannel support and as a free single channel, 44.1/48kHz LE version."
Use the SoundTouch open source project to change pitch.
Here is the link: http://www.surina.net/soundtouch/
Once you add SoundTouch to your project, you provide the input sound file path, the output sound file path, and the pitch change as inputs.
Since file-based processing takes extra time, it's better to modify SoundTouch so that while you record the voice you feed the data directly in for processing. It will make your application better.
I know it's too late for the person who asked, but this is a really valuable link (as I found) for anyone else looking for a solution to the same problem.
So here we have the latest DIRAC3, with its own audio player classes that take care of run-time pitch and speed shifting (explore it for who knows what more). Run the sample and give it a huge round of applause.
Try DIRAC; it's the best technology out there, and it's available on Windows, Linux, Mac OS X, and iOS. We're using it in all our products (and a couple of others do as well; search for "Capo" on the App Store). They're at version 3 now, which has seen a huge increase in performance over previous versions. Hope this helps.
See: Related question
How much control over pitch do you need... could you precalculate all the different sounds?
If the answer is yes, then you can just pick the right sounds and play them.
You could also use Audio Converter Services in conjunction with AVAudioPlayer, which will allow you to resample the audio (effectively repitching it, though the duration will change).
Alternatively, as the related question points out, you could use OpenAL and AL_PITCH.
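To illustrate the resampling trade-off mentioned above, here is a hedged sketch using AVAudioUnitVarispeed from later iOS releases (not Audio Converter Services itself); "chime.caf" is a hypothetical bundled sample:

```swift
import AVFoundation

// Raising `rate` on a varispeed unit raises the pitch but also shortens
// playback -- exactly the repitch-with-duration-change trade-off described.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let varispeed = AVAudioUnitVarispeed()
varispeed.rate = 2.0   // one octave up, half the duration

engine.attach(player)
engine.attach(varispeed)
engine.connect(player, to: varispeed, format: nil)
engine.connect(varispeed, to: engine.mainMixerNode, format: nil)

if let url = Bundle.main.url(forResource: "chime", withExtension: "caf"),
   let file = try? AVAudioFile(forReading: url) {
    try? engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
}
```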