Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
We've been working with some digital audio signal processing here and we've recently become worried that the iPhone 4S might not behave like the iPhone 4 did.
For instance: we have an app that listens to a specific sound and works on it. However, the data generated by the sound, while pretty constant in the iPhone 4, varies a lot in the iPhone 4S. Even though it is the same sound every single time, the data pattern seems to be randomly different.
Another possibly important piece of information: from what I can see in my tests so far, the iPhone 4S doesn't seem to work well with frequencies above 20.5 kHz (the iPhone 4 works very well up to 21.5 kHz).
My question is: did anyone already go through something like this? Are the iPhone 4 and iPhone 4S recording systems that different? Is this a hardware situation and/or should the software be modified to support it?
I know these may not be well-formed questions, but I don't really know where else to turn right now to reach some kind of diagnosis.
Thank you in advance.
The specification of both those iPhone models only states a frequency response up to 20 kHz. Anything above that might be subject to change without notice, not only between models, but possibly also for a given model if Apple is sourcing mics from multiple vendors. Furthermore, the roll-off behavior in both phase and frequency response between the 20 kHz limit and half the sampling rate can vary by a huge amount depending on the type and order of the anti-aliasing filters.
The frequency response can also vary depending on the direction of the sound and the directionality of the mic, which can also vary between mics and enclosures.
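One way to quantify this per device is to play a fixed test tone and measure the captured energy at that exact frequency, rather than eyeballing raw waveforms. Below is a minimal sketch of the Goertzel algorithm in Python (the function name and test values are mine, not from any Apple API; the same arithmetic ports directly to an iOS audio callback):

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power of `samples` at `target_hz`, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A 1 kHz test tone sampled at 44.1 kHz: lots of energy at 1 kHz,
# essentially none up at 20.5 kHz.
sr = 44100
tone = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(4410)]
assert goertzel_power(tone, sr, 1000) > 100 * goertzel_power(tone, sr, 20500)
```

Running the same measurement on a 4 and a 4S with the same speaker and test tone would tell you how much of the variation is in the high-frequency roll-off versus elsewhere.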
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm building a voice recording app and I would like to show a voice frequency graph similar to the "Voice Memos" app on iPhone.
I'm not sure exactly where to start building this. Could anyone give me some areas to look into and suggest how to structure it? I'll then go learn those areas and build it!
Thank you
Great Example Project by Apple:
https://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html
The top chart measures Intensity vs. Time. This is the most intuitive representation of a sound because a louder voice would show up as a larger spike. Intensity is measured in Percentage of Full-Scale (%FS) units where 100% corresponds to the loudest recordable sound by the device.
When a person speaks into a microphone, a voltage fluctuates up and down over time. This is what this graph represents.
The bottom chart is a Power Spectral Density. It shows where there is most power in the signal. For example, a deep loud voice would appear as a maximum at the lower end of the x-axis, corresponding to the low frequencies a deep voice contains. Power is measured in dB (a logarithmic unit) at different frequencies.
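The math behind both charts can be sketched in a few lines: the top chart is just the absolute sample value as a percentage of full scale, and each point of the bottom chart is the power at one DFT bin converted to dB. This is illustrative Python, not Apple's code, and the function names are mine:

```python
import cmath
import math

def intensity_percent_fs(samples):
    """Top chart: per-sample intensity in %FS, for samples in [-1, 1]."""
    return [100.0 * abs(x) for x in samples]

def power_db(samples, sample_rate, freq_hz):
    """Bottom chart: power in dB (re: full scale) at one frequency."""
    n = len(samples)
    k = round(n * freq_hz / sample_rate)   # nearest DFT bin
    x = sum(s * cmath.exp(-2j * math.pi * k * i / n)
            for i, s in enumerate(samples))
    power = (abs(x) / (n / 2)) ** 2        # full-scale sine -> 1.0
    return 10.0 * math.log10(power + 1e-12)

# A "deep voice" 100 Hz tone: strong power at the low end of the axis,
# almost none at 1 kHz.
sr = 8000
samples = [math.sin(2 * math.pi * 100 * t / sr) for t in range(400)]
assert power_db(samples, sr, 100) > power_db(samples, sr, 1000) + 40
```

In a real app you would compute the whole spectrum with an FFT (e.g. vDSP/Accelerate) instead of one bin at a time, but the units and the dB conversion are the same.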
After a bit of Googling and testing, I don't think AVFoundation's high-level classes give you access to the audio data in real time; they're primarily useful for recording to a file and playing it back.
The lower-level Audio Queue Services API seems to be the way to go (although I'm sure there are libraries out there that simplify its complex API).
Audio Queue Services Programming Guide:
https://developer.apple.com/library/mac/documentation/MusicAudio/Conceptual/AudioQueueProgrammingGuide/AboutAudioQueues/AboutAudioQueues.html#//apple_ref/doc/uid/TP40005343-CH5-SW18
DSP in Swift:
https://www.objc.io/issues/24-audio/functional-signal-processing/
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
I would like to try recording sleep with an iPhone, connected to a power source overnight. However, I'm not sure if the default camera is capable of recording videos that long. A colleague suggested using a video codec with very few key frames, just saving the changes between frames.
My questions are twofold: are there any open source projects that already do long (3+ hour) video recording on iPhone? And how would I extract the file from the iPhone if it is 3 GB or so in size?
At first glance this doesn't really appear to be a programming question...but you hint that you may be looking for open-source code for a video codec.
However I thought I might shed a bit of light on the iPhone itself.
If you're recording at 320p, that averages out to about 15 MB per minute. Over 8 hours, that comes to about 7.2 GB.
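The arithmetic generalizes as size ≈ rate × duration. A quick sketch, using the rough 15 MB/minute figure above (that figure is this answer's estimate for 320p, not an Apple specification):

```python
def recording_size_gb(mb_per_minute, hours):
    """Estimated recording size in GB (decimal: 1 GB = 1000 MB)."""
    return mb_per_minute * 60 * hours / 1000.0

def hours_that_fit(free_gb, mb_per_minute):
    """How many hours of recording fit in the given free space."""
    return free_gb * 1000.0 / (mb_per_minute * 60)

assert recording_size_gb(15, 8) == 7.2   # the 8-hour estimate above
```

So before recording you could check the device's free space and cap the session length with `hours_that_fit`.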
To my knowledge there is no time limit, just the limit of your iPhone's free storage space. However, some people have reported that video recording sometimes stops, seemingly at random, during very long recordings.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I am making an app for iOS 6.
I have a Mac, but I'm locked into an Android phone for the next year, and really don't want the monthly expense of an iPhone from a regular carrier.
I can get a 3GS for around $200 and then use a pay-as-you-go service for my testing. Will this phone be a good 'test subject' for making iOS apps?
Yes.
However, there are several things to keep in mind:
The 3GS is several generations behind, so it is slower. This can be a good thing: any app that runs fast on a 3GS will run fast on later hardware.
Because of this, there are fewer iOS 6 features that it can use. Most of them are user-facing, though, and not API stuff.
The 3GS doesn't have a Retina display, so you will have to use the iOS simulator for Retina testing.
If these drawbacks don't bother you (the lack of a Retina display is the biggest one), you don't even need pay-as-you-go service if you have Wi-Fi. The phone's GPS still works, and it can still connect to Wi-Fi without a cell plan. Since you say you already have a phone, you don't need voice or data on a dev device.
Just buy one off of Craigslist for around $100-150, and you will be fine.
Yes, it is. In fact it's not a bad idea at all: the 3GS is the slowest hardware that can run your app, so if the app runs efficiently and well within its memory and CPU constraints, you should be in good shape on better hardware. You could also look at the iPod touch if you don't need GPS or phone functionality. One thing you won't get to test on a device, though, is Retina graphics (which does make a huge difference).
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have been developing some iOS stuff using the simulator and a real iPad. Now I think I really need an iPhone, but I am not really into carrying it around; my Nokia is just perfect for my needs.
I was wondering: isn't it better for a developer to get a second-hand 3G or 3GS instead of an expensive 4/4S, so that apps can be tried on the lowest-end device? What do you think? Am I losing anything with this choice besides the possibility of testing on a newer Retina screen?
Edit: I don't want to use it as a phone; it's just for development. I would like to know if, from a developer's point of view, all versions from the 3G up are the same besides screen resolution. In that case, since I'd be doing audio applications, I would have the possibility of testing on the slowest hardware, which could be a good choice for me. Thank you!
Are you developing phone-specific functionality? If not, you could use an iPod touch. The current generation has a camera and the Retina screen; the previous generation has the lower-resolution screen and might not have a camera.
I think it would be a mistake to target a 3GS at this point, given that by the end of the year the 4 or 4S will probably take the place of the 3GS as the low-end device.
Targeting the 3GS for testing makes sense for perhaps another year or two, if you don't mind upgrading past that point.
One other consideration: if you are really going after augmented reality, you'll probably want to use some of the hardware advances from the iPhone 4 onward, so from that standpoint too an iPhone 4 makes more sense as a test device than a 3GS.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I need to develop an iPhone/iPod application which runs in the background and can measure audio output. It is part of a research project to measure how loud people have their music. Unfortunately I am unfamiliar with the iOS 4 SDK.
Ideally, the application would have to know if headphones are plugged in, be able to measure the volume of the audio signal being outputted (and calculate some data) and then be able to, at some point, update data to a central database.
Taking into account the multitasking capabilities, is it possible to develop such an application for the iOS 4?
I am aware that multitasking on this platform is quite limited; however, I have also noticed that audio processing seems to be possible (perhaps only to an extent?).
Here's a question which shows how it can be done, but unfortunately you can only do it for your app (or, more correctly, only while your app is active); you can't add a global hook to be notified in the background when the volume is changed by another app.
iOS 4 doesn't have background services like Android's, which is what I believe you're looking for.
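Setting the backgrounding limits aside, the measurement step itself is simple once you have sample buffers: compute the RMS level of each buffer and convert it to dB relative to full scale. A minimal Python sketch of that calculation (the function name is mine; on iOS the same math would run over the samples delivered to your audio callback):

```python
import math

def rms_dbfs(samples):
    """RMS level in dB relative to full scale, for samples in [-1.0, 1.0]."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10(rms) if rms > 0 else float("-inf")

# A full-scale square wave measures 0 dBFS; halving the amplitude
# drops the level by about 6 dB.
assert abs(rms_dbfs([1.0, -1.0] * 100)) < 1e-9
assert abs(rms_dbfs([0.5, -0.5] * 100) + 6.02) < 0.01
```

Note that dBFS is relative to the device's digital full scale, not to an absolute sound pressure level; relating it to actual loudness at the ear would require a per-device/headphone calibration.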