I'm building an app that measures sound volume. I understand that the audio hardware in the iPhone is not as accurate as professional hardware, which is OK, but I need to know whether there are any differences between iPhone models. For example, is it possible that the volume measured on an iPhone 3G will differ from the volume measured on an iPhone 4? Unfortunately, I do not possess any models earlier than the 4, so I'm unable to test this myself.
The audio frameworks seem to be identical for identical iOS versions (except on the 2G). However, the physical microphones (and their acoustic environments) differ. People have published various test results, such as:
http://blog.faberacoustical.com/2009/iphone/iphone-microphone-frequency-response-comparison/
and
http://blog.faberacoustical.com/2010/iphone/iphone-4-audio-and-frequency-response-limitations/
But the mic response may vary between manufacturing batches as well. YMMV.
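For what it's worth, if you measure level with AVAudioRecorder's metering (an assumption on my part about how you're measuring volume), any per-model difference can at least be folded into a single calibration offset you determine empirically against a reference source. A minimal sketch:

#import <AVFoundation/AVFoundation.h>

// Hypothetical per-model offset, measured against a calibrated source.
static const float kCalibrationOffsetDb = 0.0f;

// Assumes 'recorder' is recording and has meteringEnabled set to YES.
static float CurrentLevelDb(AVAudioRecorder *recorder) {
    [recorder updateMeters];                          // refresh meter values
    float raw = [recorder averagePowerForChannel:0];  // dBFS, roughly -160..0
    return raw + kCalibrationOffsetDb;
}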
I'd just like to add that, no matter what I try, there seems to be no way of exporting audio through AVAssetExportSession on an iPhone 3G. It works on the 3GS, iPod touches, the 4, the iPad, and so on.
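If you need to handle that gracefully rather than fail mid-export, one option (a sketch, assuming a hypothetical sourceURL and the M4A preset; adapt to your asset and preset) is to ask AVFoundation up front which presets actually work on the current device:

#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];

// On hardware that can't export, the compatible-preset list comes back
// without the preset you need (or empty), so you can fall back cleanly.
NSArray *presets = [AVAssetExportSession exportPresetsCompatibleWithAsset:asset];
if ([presets containsObject:AVAssetExportPresetAppleM4A]) {
    AVAssetExportSession *exporter =
        [[AVAssetExportSession alloc] initWithAsset:asset
                                         presetName:AVAssetExportPresetAppleM4A];
    // ... set outputURL / outputFileType, then exportAsynchronouslyWithCompletionHandler: ...
} else {
    // Tell the user this device can't export audio, or take another path.
}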
I have a simple synth that plays a 100 Hz tone using an OscillatorNode. My synth is about a whole step flat in Safari on an iPad 4 running iOS 7.1.1, compared to all the other browsers I've tried (Chrome on the iPad 4, Safari on an iPad 2 with iOS 7.1.1, Safari on an iPhone 5, and Chrome and Safari on my Mac). I've verified that the sample rate reported by the out-of-tune browser, iPad 4 Safari, is 44100 Hz. The in-tune browsers report the same sample rate, 44100 Hz.
My code is pretty simple, and I don't see how this could be a programming error on my part, especially considering the iPad 2 and iPad 4 are running the same OS (and presumably the same version of Safari). It seems like something weird, low-level, and hardware-dependent is going on.
Is this a known issue? If so, is there any way to test for it or work around it?
===== edit ========
Here's an example (Safari only): a dead-simple oscillator test. It plays at one pitch on my iPhone 5s and a different pitch on my iPad 4. http://www.morganpackard.com/webaudio_test/OscillatorTest.html
var context = new webkitAudioContext(); // Safari still needs the webkit prefix
var osc = context.createOscillator();
osc.connect(context.destination); // route straight to the speakers
osc.frequency.value = 440; // concert A
console.log(context.sampleRate); // compare this value across devices
osc.start(0);
This is probably due to one device running its audio hardware at 44.1 kHz and the other at 48 kHz, together with a browser bug that fails to handle the change of sample rate and then misreports it. (A 44.1/48 mismatch shifts pitch by about 1.5 semitones, close to the whole step you're hearing.)
Chrome on Android has a similar issue, in which the recording and playback sample rates must be identical. Since they typically are not when recording from the on-board microphone, for a while it seemed that recorded audio was always silent.
I'm working on a project that records stereo sound using the built-in microphones and then does some signal processing. However, there seems to be no specific solution to this question.
There is a link showing that stereo recording using only the built-in microphones is quite feasible:
https://audioboo.fm/boos/1102187-recording-in-stereo-from-the-iphone-5#t=0m20s
However, I still do not know how to do it! Has anyone solved this problem?
There are some resources showing how to access the different built-in mics (see the sketch after the link below):
use rear microphone of iphone 5
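For reference, here is roughly what those resources describe, as I understand it: a sketch using iOS 7's AVAudioSession input-selection API. Note that this appears to select a single mic at a time; I haven't found a public API that records from two built-in mics simultaneously.

#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryRecord error:nil];
[session setActive:YES error:nil];

for (AVAudioSessionPortDescription *input in [session availableInputs]) {
    if (![input.portType isEqualToString:AVAudioSessionPortBuiltInMic]) continue;

    // Each physical mic shows up as a data source (front/back/bottom).
    for (AVAudioSessionDataSourceDescription *source in input.dataSources) {
        if ([source.orientation isEqualToString:AVAudioSessionOrientationBack]) {
            [input setPreferredDataSource:source error:nil];
        }
    }
    [session setPreferredInput:input error:nil];
}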
Also, it may be quite easy to implement this project on an Android phone:
How to access the second mic android such as Galaxy 3,
How can I capture audio input from 2 mics of my android phone real time and simultaneously
I am working on an app that analyzes incoming audio from the built-in microphone on the iPhone/iPad using the iOS 6.0 SDK.
I have been struggling for some time with very low levels in the lower frequencies (i.e., below 200 Hz), and I have found others on the web with the same problem, but no answers.
Various companies working with audio tools for iOS state that (prior to iOS 6.0) there was a built-in low-frequency rolloff filter causing these low levels in the lower frequencies, BUT those sources also state that starting with iOS 6.0 it should be possible to turn off this automatic low-frequency filtering of the input audio signal.
I have gone through the Audio Unit header files, the audio documentation in Xcode, and audio-related sample code without success. I have played with the various AudioUnit parameters and properties (which mention low-pass filters and such) without solving the problem.
Does anybody know how to turn off the automatic low-frequency rolloff filter for RemoteIO input in iOS 6.0?
Under iOS 6.0 you can set the mode of the current AVAudioSession to AVAudioSessionModeMeasurement like this:
[[AVAudioSession sharedInstance] setMode: AVAudioSessionModeMeasurement error:NULL];
This removes the low frequency filtering.
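For completeness, a slightly fuller sketch (the PlayAndRecord category and the error handling are my assumptions, not part of the original answer):

#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;

// Any record-capable category works; PlayAndRecord is just an example.
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];

// Measurement mode asks the system to minimize its input processing,
// which removes the low-frequency rolloff.
[session setMode:AVAudioSessionModeMeasurement error:&error];

if (![session setActive:YES error:&error]) {
    NSLog(@"Audio session setup failed: %@", error);
}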
Link:
http://developer.apple.com/library/ios/#documentation/AVFoundation/Reference/AVAudioSession_ClassReference/Reference/Reference.html
I hope this helps.
I'm not sure there is any way to accomplish this on these devices. Most microphones have difficulty with frequencies below 200 Hz (and above the 20 kHz range as well). In fact, a lot of speakers can barely play audio in that range either. To get a clean signal below 200 Hz, you would need good enough hardware, which I think is a bit beyond the capabilities of the built-in microphones of the iPhone/iPad. That's probably why Apple has filtered out these low-frequency sounds: they cannot guarantee a good enough recording, OR a good enough playback. Here's a link describing the situation better for the older devices (iPhone 4, iPhone 3GS, and iPad 1).
Apple is also very picky about what they will and won't let you play with. Even if you find out where this filtering takes place, interfering with that code will most likely get your app rejected from the App Store. And due to the hardware limitations, you probably wouldn't be able to achieve what you want anyway.
Hope that helps!
Is it possible to analyze camera images on the iPhone without taking a photo?
I want to analyze some matrix codes without taking any photo. I saw this on some Nokia models, and it's quite impressive: really fast!
On the iPhone, I've only seen codes analyzed after taking a snapshot (or photo), which is slower.
Also, the iPhone camera is not as good as some other mobile cameras.
I'm referring to the iPhone 3G/3GS.
thanks,
r.
Yes.
Under iOS 4.x, you can use the new AVFoundation framework to access camera pixel buffers on the 3G and 3GS, as well as the newer iPhone 4, without taking a photo/snapshot. The newer devices have higher-resolution cameras.
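A minimal sketch of that approach (the class name and queue label are mine; error handling is omitted for brevity):

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Hypothetical object acting as the per-frame delegate.
@interface CodeScanner : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation CodeScanner

- (void)start {
    self.session = [[AVCaptureSession alloc] init];

    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    [self.session addInput:input];

    // Deliver raw pixel buffers to us instead of capturing stills.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("frames", NULL)];
    [self.session addOutput:output];

    [self.session startRunning];
}

// Called for every camera frame; run the matrix-code detector here.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
  didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
         fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixels = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... analyze 'pixels' without ever taking a photo ...
}

@end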
I don't know if it's possible, but I think you should take a look at Apple's AVCamDemo code from WWDC 2010. I think it could help you, even though I haven't read much of the code (I just compiled the Xcode project and tried it).
(Sorry, but I can't find the link again.)
You should be able to find the code in your emails if you are registered as an iPhone Developer, or maybe on the developer.apple.com/iphone/ website.
By the way, don't count on doing it on the iPhone 3G (impossible, I think). The iPhone 3GS should be fast enough for this kind of app.
Good luck!
We are using HTTP Live Streaming for on-demand video within our iPhone app, and on 3GS models the videos play as they are meant to. However, on pre-3GS models it gives an error saying this movie format is not supported.
I have seen other threads on this, but no solutions or insights.
Does anyone know if this really is a hardware limitation of the pre-3GS phones, or does it have something to do with our code?
"iPhone 3G supports H.264 Baseline Profile Level 3.1. If your app runs on older iPhones, however, you should use H.264 Baseline Profile 3.0 for compatibility."
Info taken from this technote.
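Concretely, that means your variant playlist's lowest tier should be encoded as Baseline 3.0. A hypothetical variant playlist (the URIs and bitrates are made up; the CODECS strings are the standard identifiers for H.264 Baseline 3.0/3.1 video with AAC-LC audio):

#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000,CODECS="avc1.42001e, mp4a.40.2"
low/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=600000,CODECS="avc1.42001f, mp4a.40.2"
high/prog_index.m3u8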
HTTP Live Streaming is supported on all iPhone, iPod touch, and iPad hardware, provided you have sufficient network bandwidth for your lowest-bit-rate stream and a recent enough OS version. On an original iPhone 2G running iPhone OS 3.1.3, we routinely play HTTP Live Streams over WiFi. It also works in our tests over EDGE, but the bandwidth on EDGE is usually too low for the rates at which we are encoding. We have seen some issues with bandwidth adaptation on an iPod touch running 3.1, which we suspect are related to that particular device/OS combination, but we are not certain of that.