We have a VOIP application that records and plays audio, so we are using the PlayAndRecord (kAudioSessionCategory_PlayAndRecord) audio session category.
So far we have used it successfully on iPhone 4/4S/5 with both iOS 6 and iOS 7, where call audio and tones played clearly and were audible.
However, with the iPhone 5s we observed that both the call audio and the tones sound robotic/garbled in speaker mode. When using the earpiece, Bluetooth or a headset, the sound is clear and audible.
iOS Version used with iPhone 5s: 7.0.4
We are using Audio Units for recording and playing the call audio.
When setting audio properties such as the session category, audio route and session mode, we tried both the older (deprecated) AudioSessionSetProperty() API and the AVAudioSession API.
For playing tones we are using AVAudioPlayer. Playing tones during the VOIP call, and also when pressing the keypad controls within the app, produces a robotic sound.
When instantiating the audio component using AudioComponentInstanceNew, we set componentSubType to kAudioUnitSubType_VoiceProcessingIO.
When replacing kAudioUnitSubType_VoiceProcessingIO with kAudioUnitSubType_RemoteIO, we noticed that the call audio and tones were no longer robotic; they were quite clear, but the volume level was very low in speaker mode.
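For reference, the instantiation looks roughly like this (only the componentSubType changes between the two cases; variable names here are illustrative, not our exact code):

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
desc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO; // or kAudioUnitSubType_RemoteIO
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit ioUnit = NULL;
OSStatus status = AudioComponentInstanceNew(comp, &ioUnit);
// status should be noErr; stream formats and the input/render callbacks are set on ioUnit afterwards.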
In summary, keeping all the other audio APIs the same:
kAudioUnitSubType_VoiceProcessingIO: volume is high (desirable), but the tones and call audio sound robotic in speaker mode.
kAudioUnitSubType_RemoteIO: the tones and call audio sound clear, but the volume is too low to be usable in speaker mode.
STEPS TO REPRODUCE
- Set the audio session category to PlayAndRecord.
- Set the audio route to speaker (a sketch of this setup follows these steps).
- Do the rest of the audio setup: instantiate the audio components, activate the audio session, start the audio unit.
- Set the input and render callbacks.
- Try both options:
1. Play tones using AVAudioPlayer.
2. Play call audio.
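The category/route setup from these steps looks roughly like this with the AVAudioSession API (error handling trimmed; the deprecated AudioSessionSetProperty() path we also tried is analogous):

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;

[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error]; // route to speaker
[session setActive:YES error:&error];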
Any suggestions on how to get around this issue? We have raised it as an issue with Apple but have had no response from them yet.
I have shared the code here: github link
The only difference between kAudioUnitSubType_VoiceProcessingIO and kAudioUnitSubType_RemoteIO is that the voice-processing unit includes code to cancel acoustic echo, i.e. it tunes out the sound coming from the speaker so the microphone doesn't pick it up. It's been a long time since I've played with the audio framework, but I remember that there could be any number of reasons for the sound being off:
Are you doing any work in the audio callbacks that could be taking a long time?
The callbacks run on real-time threads; if your processing takes too long, you can miss data. It would be helpful to track the data over a fixed period of time to see whether you are capturing all of it. Use something like Wireshark to sniff the network, record the number of packets, and check whether the phone captured the same amount.
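If you want a quick sanity check, something along these lines inside the render callback will tell you whether you are anywhere near the deadline (this is a made-up callback, just to show where the measurement would go):

#include <AudioUnit/AudioUnit.h>
#include <mach/mach_time.h>
#include <string.h>

// Hypothetical render callback; the point is only to show where to measure
// the time spent per callback.
static OSStatus MyRenderCallback(void                        *inRefCon,
                                 AudioUnitRenderActionFlags  *ioActionFlags,
                                 const AudioTimeStamp        *inTimeStamp,
                                 UInt32                       inBusNumber,
                                 UInt32                       inNumberFrames,
                                 AudioBufferList             *ioData)
{
    uint64_t start = mach_absolute_time();

    // Output silence by default; real code fills ioData from its own buffers.
    // Avoid locks, allocations and Objective-C messaging in here.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

    uint64_t elapsed = mach_absolute_time() - start;
    // Convert elapsed with mach_timebase_info() and compare it against
    // inNumberFrames / sampleRate to see how close you are to the deadline.
    (void)elapsed;
    return noErr;
}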
Are you modifying any of the audio?
Do you have a circular buffer that might be causing an issue?
I've had several issues doing this, and one was using a third-party circular buffer that was described as low-latency and efficient ... it wasn't. I answered my own question here and included my circular buffer implementation, which greatly improved my audio, as the issue was that I was skipping data (a minimal sketch of the general idea is included after this answer).
Give this a go and let me know:
iOS UI are causing a glitch in my audio stream
Please be aware that some of this code is specific to the ALaw audio format: 0xD5 is a byte of silence in ALaw; if you are using linear PCM or anything else, that byte will probably come out as noise of some kind.
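For illustration only, here is the general shape of such a circular buffer (a single-producer/single-consumer sketch, not the implementation from my linked answer, and not thread-safe without atomics or memory barriers):

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t *data;
    size_t   capacity;   // use a power of two so the monotonic indices stay consistent when they wrap
    size_t   readPos;    // monotonically increasing; masked with % capacity on access
    size_t   writePos;
} RingBuffer;

static size_t RingBufferAvailable(const RingBuffer *rb) {
    return rb->writePos - rb->readPos;
}

static size_t RingBufferWrite(RingBuffer *rb, const uint8_t *src, size_t len) {
    size_t space = rb->capacity - RingBufferAvailable(rb);
    if (len > space) len = space;                 // drop what doesn't fit
    for (size_t i = 0; i < len; i++)
        rb->data[(rb->writePos + i) % rb->capacity] = src[i];
    rb->writePos += len;
    return len;
}

static size_t RingBufferRead(RingBuffer *rb, uint8_t *dst, size_t len) {
    size_t avail = RingBufferAvailable(rb);
    if (len > avail) len = avail;                 // the caller pads the rest with silence
    for (size_t i = 0; i < len; i++)
        dst[i] = rb->data[(rb->readPos + i) % rb->capacity];
    rb->readPos += len;
    return len;
}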
I work on a VOIP app.
I use Core Audio Audio Units for playing and recording audio. I need to be able to manipulate sound volume and output devices. I am trying to use MPVolumeView to set sound volume and choose output devices.
My problem is: when I start using Audio Units (i.e. start playout and capture for the RemoteIO Audio Unit), it seems MPVolumeView no longer controls the volume of my session; instead it controls the system-wide sound preferences. At the same time, the hardware buttons do control the volume of the sounds played by the Audio Units. Also, once I start using Audio Units, MPVolumeView starts showing the button to change output devices, which it didn't show before.
It seems that MPVolumeView controls the sound volume for some system-wide audio session, but when I start using Audio Units another app-wide (or even Audio Unit-wide) audio session is created and used to play sound.
So the question is: how do I make MPVolumeView control the sound volume for my Core Audio audio session?
I would appreciate any hints on why this happens. I've spent almost all day googling, and I see that some people have related problems, but none of them got any hints :(. I can also post more details if needed.
Confirmed as a bug by Apple engineer.
In more detail: MPVolumeView should be tied to a specific audio route (in a broader sense, audio route + audio category + mode, etc.), and it is for a couple of the most common routes (e.g. headphones + playback category + default mode), but not for every custom route you can create.
So basically, when one creates some custom route, MPVolumeView still tries to control its last (workable) or default route.
I'm developing an interactive storybook type application for the iPhone and I've recently encountered a frustrating bug concerning audio mixing on the device.
Firstly, I set up an audio session. I set the category to AVAudioSessionCategoryAmbient and then init and play my AVAudioPlayer instance. Now, in the background, while the audio is playing, I'm pre-loading a video to play using an MPMoviePlayerController, followed by a call to prepareToPlay. The reason I pre-load the video this way is that I need it to play instantly later, on cue, with fairly strict timing.
In this configuration, the audio/movie works fine and they mix and do not interrupt each other. However, this particular audio session category does not permit audio to continue playing while the device is locked, a feature I really need. As a result I'm forced to consider a different category: AVAudioSessionCategoryPlayback.
By default this category does not permit mixing with other audio, according to the Apple docs. To enable mixing with other audio I am overriding the relevant category:
OSStatus propertySetError = 0;
UInt32 setProperty = 1;
propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(setProperty), &setProperty);
assert(propertySetError == 0);
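(For what it's worth, on iOS 6 and later the same override can apparently be expressed through AVAudioSession instead of the deprecated C call; roughly:)

NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback
                                 withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                       error:&error];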
Unfortunately, this solves my playing whilst locked issue but introduces another issue: the AVAudioPlayer audio is interrupted briefly as the video loads with a minor stutter. The stutter is small, perhaps less than a second but is enough to disrupt the user experience. I've read this related post which enabled me to pre-load the video with the AVAudioSessionCategoryAmbient, but unfortunately the same approach doesn't seem to work with the new category.
The audio session category is applied successfully, according to the return code. Does anyone know why enabling audio mixing with this category is not the same as the mixing facility provided by ambient category?
The best way I've found, working on a similar problem, is to use the newer AVPlayer (+1 @adam) and set your app to enable background audio and receive remote-control notifications. I was tipped off to this by @MarquelV following How can you play music from the iPod app while still receiving remote control events in your app?
If you can get backgrounding working properly, that should enable you to continue playing while the device is locked. Oh, and don't forget to add the keys to Info.plist; it's easy to forget that and then have no idea why it isn't working.
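For completeness: background audio needs the UIBackgroundModes key in Info.plist with the audio value, and remote-control events are typically enabled with something like this (exact placement depends on your responder chain; this is a sketch, not code from the question):

// Usually done in a view controller (or the app delegate) that should receive the events.
[[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
[self becomeFirstResponder];   // canBecomeFirstResponder must also return YES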
I'm doing research for an iPhone video chat app project.
I've tried this: to capture the image, I use AVCaptureVideoPreviewLayer to display the camera view, and I put an MPMoviePlayerController behind it to play the network-streamed video.
It all works until I add an audio input to the AVCaptureSession.
MPMoviePlayerController stops the AVCaptureSession if there is an audio input.
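The audio input is added roughly like this (variable names are illustrative; the video input and preview layer setup are omitted):

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];

AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice
                                                                          error:&error];
if (audioInput && [captureSession canAddInput:audioInput]) {
    [captureSession addInput:audioInput];   // once this input is present, MPMoviePlayerController stops the session
}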
I'm thinking of using AVAudioSession for both playing and recording audio, and some other way to play the video, but the documentation for AVAudioPlayer says:
"Apple recommends that you use this class for audio playback unless you are playing audio captured from a network stream or require very low I/O latency."
I also found the Multimedia Programming Guide, which says:
"To provide lowest latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit."
Is this the correct direction for implementing video chat on the iPhone?
I'm working on a project which is using Audio Toolbox for recording and playback of PCM data, and I'm having trouble with playback. In the simulator, I can record and play audio just fine, using a custom class to handle storing and sourcing PCM bytes for the recording and playback buffers as needed. On device (iPhone (3.0.1) and iPod 2G (3.1.2)) recording works fine, the audio files produced are correct, but in-app playback stutters, like it's only playing part of each playback buffer. My buffers are one second long, and I've got 3 buffers, which are preloaded before playback starts; stuttering occurs during those first 3 seconds as well, which I think rules out a latency problem.
I've written Audio Toolbox code before that worked, and I'm not doing anything strange here except that I'm using my own class to source the PCM data instead of AudioFileReadBytes().
I know the data that comes out of my source is good, because it plays correctly in the sim, and it writes to disk as a correct audio file.
I've played around with the sample rate a bit; I normally use 11025 Hz sampling to cut down on file size (it's all voice, so it sounds fine). At 44100 Hz, with the same buffer size, I get the same stuttering problem, but the audio segments come about 4 times faster. That's why I think it's only playing part of each buffer.
The only reason I can conceive that it would play only part of each buffer is a latency problem, as if the Audio Toolbox code were running out of full buffers while I'm still filling an empty one. But that would mean it plays the preloaded buffers correctly and then starts stuttering, and that's not what happens: it stutters the whole way through.
I've tried humongous buffers, like 10MB buffers, and I just get silence and a single stutter of audio at the end of playback. I've also tried preloading more buffers than normal, like 10 seconds worth of audio, and it behaves the same.
The audio session is being set with AVAudioSession, not the Audio Toolbox calls, and it's being set to the Playback category for playback.
I have no idea how to try and attack this problem, it makes no sense to me that it works fine on the simulator but not the device.
Code for the playing callback and the set up for the audio queue services: http://pastebin.com/mfaa546c
It turns out that the use of NSData's getBytes:length: was causing the problem. The buffer filled with that method played back incorrectly. However, doing a memcpy from that buffer to another buffer prevented the problem.
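A sketch of what the fix amounts to (names are illustrative, and in real code the temporary buffer would be allocated once rather than per call):

#import <Foundation/Foundation.h>
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>
#include <string.h>

static void FillQueueBuffer(NSData *pcmData, AudioQueueBufferRef buffer)
{
    NSUInteger length = MIN([pcmData length], buffer->mAudioDataBytesCapacity);

    // Filling buffer->mAudioData directly with getBytes:length: played back
    // incorrectly; copying through an intermediate buffer did not.
    void *temp = malloc(length);
    [pcmData getBytes:temp length:length];
    memcpy(buffer->mAudioData, temp, length);
    free(temp);

    buffer->mAudioDataByteSize = (UInt32)length;
}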
Is there an easy way to control the playback speed/tempo of a sound file loop played using Audio Queue Services? For example, if a game is playing background music, I want to make the BGM speed up as time runs out, but without changing the pitch of the music. Thx!
There's no trivial way to do this that I know of. On the Mac, you'd presumably use Audio Units to do this, but I think the support for those is limited on the iPhone SDK.
You can do some processing during your AudioQueue playback callback, between AudioFileReadPackets() and AudioQueueEnqueueBuffer(), but I think speed-shifting without changing the pitch will require rather a lot of CPU time.
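A rough skeleton of where that processing would sit (types and bookkeeping trimmed to the essentials; names are illustrative):

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioFileID                   audioFile;
    SInt64                        currentPacket;
    UInt32                        packetsPerBuffer;
    AudioStreamPacketDescription *packetDescs;   // NULL for constant-bit-rate PCM
} PlayerState;

static void OutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    PlayerState *player = (PlayerState *)inUserData;
    UInt32 numBytes   = inBuffer->mAudioDataBytesCapacity;
    UInt32 numPackets = player->packetsPerBuffer;

    AudioFileReadPackets(player->audioFile, false, &numBytes, player->packetDescs,
                         player->currentPacket, &numPackets, inBuffer->mAudioData);
    if (numPackets == 0) return;   // end of file; real code would stop the queue here

    // ... per-buffer processing goes here; a time-stretch pass over
    //     inBuffer->mAudioData is where the CPU cost would show up ...

    inBuffer->mAudioDataByteSize = numBytes;
    AudioQueueEnqueueBuffer(inAQ, inBuffer,
                            player->packetDescs ? numPackets : 0, player->packetDescs);
    player->currentPacket += numPackets;
}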
It'd probably be easier to have more than one recording of the music, and switch files during a silent passage of the music.