Simultaneous record and play from the same file - iPhone

I'm attempting to use Audio Queue Services for the first time. After reading all the documentation and playing with some sample code, I think I understand the API pretty well and have implemented my own playback and recording application without any problems.
I need to simultaneously record and play from the same buffer, but I'm having significant difficulty writing to a file and reading from that file at the same time. I can get the file to play back without a problem, but only up to the last buffer that was written before playback started. I'm hoping it's possible to keep playing back the file for as long as it's being written to. Is this possible?
Thanks in advance!

It might be worthwhile to save the captured data in your own buffers instead of writing it to a file. You can then supply those buffers directly to the playback queue.
Also make sure that you set the following AVFoundation audio session category to allow simultaneous capture and playback:
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error:nil];
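A slightly fuller sketch of that session setup, with activation and error checking added (the error handling shown is illustrative, not required by the API):

    #import <AVFoundation/AVFoundation.h>

    // Call this once before creating your record and playback queues.
    static void ConfigureAudioSession(void)
    {
        NSError *error = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];

        // Allow input and output to run at the same time.
        if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]) {
            NSLog(@"Could not set category: %@", error);
        }

        // Activate the session before starting the queues.
        if (![session setActive:YES error:&error]) {
            NSLog(@"Could not activate the audio session: %@", error);
        }
    }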
Hope this helps.

Related

How to loop an MP3 without gaps in MonoTouch?

I want to seamlessly loop MP3s in an app written in MonoTouch using AVAudioPlayer, and I cannot find any examples of how to achieve this. I have found a great library at http://gamua.com/blog/2012/05/gapless-mp3-audio-on-ios/, but it is written in Objective-C.
I also want the audio to keep playing even when the device is locked. I have played around with the AudioSession initializer but failed to get this working too.
I have not been able to find examples or libraries that would help me achieve this.
For queuing up songs without gaps in between, use AVQueuePlayer; it allows you to pre-load songs. If you just want to loop the same song, set NumberOfLoops on AVAudioPlayer to a negative value (e.g. -1) and it will loop until you stop it (a value of 0 plays the sound only once).
For playing audio in the background, you also need to declare UIBackgroundModes in your Info.plist. See this guy's article here. He's using Obj-C, but you should be able to port his code to MonoTouch.
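For reference, the background-audio declaration is just the standard plist entry; in a MonoTouch project it goes in the same Info.plist:

    <key>UIBackgroundModes</key>
    <array>
        <string>audio</string>
    </array>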

iPhone: detect sound and record it

I'm making an app where, when someone starts talking for example, I need to detect that there is a sound and then record it.
I found this tutorial http://mobileorchard.com/tutorial-detecting-when-a-user-blows-into-the-mic/ but it starts recording right away and then detects the sound based on the recording.
Is there another way to detect a sound without actually starting the recorder first? One option I thought of would be to have two recorders, one for detection and one for actually recording the sound. Another would be to edit (trim) the sound after it's recorded.
Are these approaches somehow standard or is there a better way to detect sound?
Thanks.
edit: if anyone ever reads this, I also found this http://bonkel.wordpress.com/2010/03/03/frequency-detection-using-fourier-transform/
If you don't mind getting a little dirty, you could go down to a lower level, to CoreAudio, and read data out of the input buffers until you see values exceeding your threshold, then start recording those input buffers or trigger a higher-level recording call. You can similarly stop recording after a period of silence.
If you use CoreAudio, you have a lot of control over what you record. You could, pretty easily, filter out background noise, or add beeps to signify when the recording stopped due to silence, and even add markers to use later to match time to the recording.
CoreAudio does require you to do more work. You will have to read the microphone buffers on a timely basis and either save or discard the data pretty quickly in order not to drop any sound data. This isn't that hard, as the devices have plenty of CPU power to do that and other tasks at the same time - you just have to have a good grasp of CoreAudio.
There are plenty of Apple CoreAudio samples that can guide you. The WWDC 2010 Core Audio sessions are also a must-see.
You could use either the Audio Queue or the Core Audio (RemoteIO Audio Unit) API. Unless your app requires low latency, the Audio Queue API may be simpler to use.
You need to start the recording API to detect any sound, but you don't need to save everything you get from the recording callback to a file.
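As a rough sketch of that idea (the state struct, threshold value, and file handling are assumptions; the queue and file setup are assumed to follow Apple's standard recording example), the input callback could measure each buffer's level and only start writing once the threshold is crossed:

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    typedef struct {
        AudioFileID                  file;          // destination file, opened elsewhere
        SInt64                       packetIndex;   // running packet offset for AudioFileWritePackets
        Boolean                      isRecording;   // flips to true once the threshold is crossed
        AudioStreamBasicDescription  format;        // 16-bit linear PCM is assumed below
    } RecorderState;

    static const Float32 kLevelThreshold = 0.05f;   // tune this experimentally

    static void HandleInputBuffer(void *inUserData,
                                  AudioQueueRef inQueue,
                                  AudioQueueBufferRef inBuffer,
                                  const AudioTimeStamp *inStartTime,
                                  UInt32 inNumPackets,
                                  const AudioStreamPacketDescription *inPacketDesc)
    {
        RecorderState *state = (RecorderState *)inUserData;

        // Compute the RMS level of the buffer (assumes 16-bit signed PCM samples).
        SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
        UInt32 count = inBuffer->mAudioDataByteSize / sizeof(SInt16);
        Float32 sumSquares = 0.0f;
        for (UInt32 i = 0; i < count; i++) {
            Float32 s = samples[i] / 32768.0f;
            sumSquares += s * s;
        }
        Float32 rms = (count > 0) ? sqrtf(sumSquares / count) : 0.0f;

        // Start keeping data the first time the level exceeds the threshold.
        if (!state->isRecording && rms > kLevelThreshold) {
            state->isRecording = true;
        }

        // For constant-bitrate formats such as PCM the packet count can arrive as 0.
        if (inNumPackets == 0 && state->format.mBytesPerPacket != 0) {
            inNumPackets = inBuffer->mAudioDataByteSize / state->format.mBytesPerPacket;
        }

        if (state->isRecording && inNumPackets > 0) {
            // Write only the buffers received after detection; earlier ones are discarded.
            AudioFileWritePackets(state->file, false, inBuffer->mAudioDataByteSize,
                                  inPacketDesc, state->packetIndex,
                                  &inNumPackets, inBuffer->mAudioData);
            state->packetIndex += inNumPackets;
        }

        // Return the buffer to the queue so capture keeps running.
        AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
    }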

AudioStreamer and AVAudioRecorder don't work together

I am currently using Matt Gallagher's AudioStreamer (which works great!), but when I try to stop playback and completely remove the streamer, my recording fails. I am unable to get anything to record after I start using the streamer for the first time.
With the streamer no longer existing, I have no idea what could be causing it to completely break recording. Is there any way I can get this working? Any input at all would be extremely valuable.
Thanks in advance!
Matthew
You may have to initialize and configure an Audio Session, or reconfigure the Audio Session category when changing modes (e.g. after ending the playback streamer).
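A sketch of what that reconfiguration might look like when you tear down the streamer and switch back to recording (the ordering of the calls is my assumption; the category constant is the standard AVFoundation one):

    #import <AVFoundation/AVFoundation.h>

    // Call after stopping and releasing the streamer, before recording again.
    static void SwitchSessionToRecording(void)
    {
        NSError *error = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];

        // Deactivate whatever state the streamer left behind.
        [session setActive:NO error:&error];

        // Switch to a category that allows microphone input.
        if (![session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error]) {
            NSLog(@"Could not set category: %@", error);
        }

        if (![session setActive:YES error:&error]) {
            NSLog(@"Could not reactivate the audio session: %@", error);
        }
    }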

How can I record the audio output of the iPhone? (like the sounds of my app)

I want to record the sound of my iPhone app. So, for example, someone plays something on an iPhone instrument and afterwards you can hear it back.
Is it possible without using the microphone?
Do you mean an app you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write the same AudioBufferList to a file that you render to the RemoteIO Audio Unit when playing audio in your instrument app.)
[Edit: removed comments on recording third-party app audio output ...]
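A minimal sketch of that Extended Audio File Services idea, assuming a linear-PCM RemoteIO setup (the function names and the global file reference are illustrative only):

    #include <AudioToolbox/AudioToolbox.h>

    static ExtAudioFileRef gCaptureFile = NULL;

    // Create a CAF file that stores exactly the format you render in.
    void OpenCaptureFile(CFURLRef url, const AudioStreamBasicDescription *format)
    {
        ExtAudioFileCreateWithURL(url, kAudioFileCAFType, format, NULL,
                                  kAudioFileFlags_EraseFile, &gCaptureFile);
        // Prime the async writer so later calls are safe from the render thread.
        ExtAudioFileWriteAsync(gCaptureFile, 0, NULL);
    }

    // Inside your RemoteIO render callback, after filling ioData with your
    // instrument's samples, also append them to the file:
    //     ExtAudioFileWriteAsync(gCaptureFile, inNumberFrames, ioData);

    void CloseCaptureFile(void)
    {
        if (gCaptureFile) {
            ExtAudioFileDispose(gCaptureFile);
            gCaptureFile = NULL;
        }
    }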
With AVFoundation, as you are currently using it, you're always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal your app generates while it is used. AVAudioPlayer also doesn't provide any means of getting at the final signal, and if you use multiple AVAudioPlayer instances to play several sounds at once, you can't get at the mixed signal either.
So you'll probably need to use CoreAudio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions (together with their timestamps) that led to the audio being played? Write this sequence of events to a file and read it back to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
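A toy sketch of that event-recording idea (the class, the note representation, and the playback block are made up for illustration; ARC is assumed):

    #import <Foundation/Foundation.h>
    #import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime

    @interface PerformanceRecorder : NSObject
    @property (nonatomic, strong) NSMutableArray *events;
    @property (nonatomic, assign) CFTimeInterval startTime;
    @end

    @implementation PerformanceRecorder

    - (void)start {
        self.events = [NSMutableArray array];
        self.startTime = CACurrentMediaTime();
    }

    // Call this from the UI whenever the user triggers a note.
    - (void)noteOn:(NSInteger)note {
        [self.events addObject:@{ @"t"    : @(CACurrentMediaTime() - self.startTime),
                                  @"note" : @(note) }];
    }

    // Replay the recorded events with their original timing.
    - (void)replayUsingBlock:(void (^)(NSInteger note))playNote {
        for (NSDictionary *event in self.events) {
            double t = [event[@"t"] doubleValue];
            NSInteger note = [event[@"note"] integerValue];
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(t * NSEC_PER_SEC)),
                           dispatch_get_main_queue(), ^{ playNote(note); });
        }
    }

    @end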

AudioQueueOfflineRender questions

I have a few questions about this after reading the iPhone documentation on it:
Does this take the audio being played and save it to a buffer so it can be written to a file?
If so, does the audio have to be played using a playback audio queue, or can it be played via a higher-level class such as AVAudioPlayer?
Can anyone point me in the direction of some sample code, or offer any help beyond the docs?
Thanks
Yes.
Pretty sure you need to use a playback audio queue rather than AVAudioPlayer.
This Apple QA points to a file called aqrender.cpp, which implements point 1.