Audio Input and Output in the Same iOS App - iPhone

Okay so I am creating simple objects to later be used for audio input and output. Both objects work independently just fine, but when I try to use them in the same application, they clash and the audio input object gets blocked out by the output object.
The output object is using AudioUnitSessions to pass samples into a buffer and play audio, while the input object is using AudioQueue to feed in samples from the microphone, which we can later process.
I think the solution is as simple as deactivating the audio session, but this does not seem to be working. I am doing it the following way:
AudioSessionSetActive(true) or AudioSessionSetActive(false)
depending on whether I am trying to activate or deactivate it.
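In AVAudioSession terms (the newer Swift wrapper around these C AudioSession calls), the toggle I mean is roughly this:

    import AVFoundation

    // Rough Swift/AVAudioSession equivalent of the AudioSessionSetActive() toggle above.
    func setAudioSessionActive(_ active: Bool) {
        do {
            try AVAudioSession.sharedInstance().setActive(active)
        } catch {
            print("Could not \(active ? "activate" : "deactivate") the audio session: \(error)")
        }
    }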
Apparently this does not work, because whenever I try to recreate the input object, it fails to initialize the recording with OSStatus error -50 (paramErr, i.e. an invalid parameter).
Does anyone know of a way around this, or a simple way of doing audio input and output in the same application?

Related

Understanding the role of time in a AVCaptureSession regarding CMSampleBuffers

I recently started programming in Swift as I am trying to work out an iOS camera app idea I've had. The main goal of the project is to save the prior 10 seconds of video before the record button is tapped. So the app is actually always capturing and storing frames, but also discarding the frames that are more than 10 seconds old if the app is not 'recording'.
My approach is to output video and audio data from the AVCaptureSession using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput respectively. In captureOutput() I receive a CMSampleBuffer for both video and audio, which I store in separate arrays. I would like those arrays to later serve as an input for the AVAssetWriter.
This is the point where I'm not sure about the role of time and timing in the sample buffers and the capture session in general, because in order to present the sample buffers to the AVAssetWriter as an input, I believe I need to make sure my video and audio data are the same length (duration-wise) and synchronized.
I currently need to figure out at what rate the capture session is running, or how I can set that rate. Ideally I would have one audioSampleBuffer for each videoSampleBuffer, both representing exactly the same duration. I don't know what realistic values are, but in the end my goal is to output 60 fps, so it would be perfect if each videoSampleBuffer contained one frame and each audioSampleBuffer represented 1/60th of a second. I could then easily append the newest sample buffers to the arrays and drop the oldest.
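To make this concrete, here is roughly how I picture inspecting each buffer's timing in the delegate and pruning anything older than 10 seconds. The class and the 10-second window are mine; CMSampleBufferGetPresentationTimeStamp and CMSampleBufferGetDuration are the Core Media calls that, as I understand it, give a buffer's start time and length:

    import AVFoundation
    import CoreMedia

    final class RollingBufferRecorder: NSObject,
        AVCaptureVideoDataOutputSampleBufferDelegate,
        AVCaptureAudioDataOutputSampleBufferDelegate {

        private var videoBuffers: [CMSampleBuffer] = []
        private var audioBuffers: [CMSampleBuffer] = []
        private let window = CMTime(seconds: 10, preferredTimescale: 600)

        // Assumes both data outputs deliver on the same serial queue.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Every buffer carries its own timing: when it starts and how long it lasts.
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            let duration = CMSampleBufferGetDuration(sampleBuffer) // e.g. 1/60 s for one frame at 60 fps
            print("buffer pts=\(pts.seconds)s duration=\(duration.seconds)s")

            if output is AVCaptureVideoDataOutput {
                videoBuffers.append(sampleBuffer)
            } else {
                audioBuffers.append(sampleBuffer)
            }

            // Keep only the last 10 seconds, measured against the newest timestamp.
            // (Note: holding on to many sample buffers can starve the capture pipeline's buffer pool.)
            let cutoff = CMTimeSubtract(pts, window)
            videoBuffers.removeAll { CMSampleBufferGetPresentationTimeStamp($0) < cutoff }
            audioBuffers.removeAll { CMSampleBufferGetPresentationTimeStamp($0) < cutoff }
        }
    }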
I've of course done some research regarding my problem, but wasn't able to find what I was looking for.
My initial thought was that I had to let the capture session run at some sort of set timescale, but I didn't see such an option in the AVFoundation documentation. I then looked into Core Media to see whether there was a way to set the clock the capture session uses, but I couldn't find a way to tell the session to use a different CMClock (with properties I know), so I gave up on that route. I still wasn't sure about the internal mechanics and timing of the capture session, so I tried to find more information about it, without much luck. I've also stumbled on the synchronizationClock property of AVCaptureSession, but I couldn't find out how to use it or find an example.
To this point my best guess is that with every step in time (represented by a timestamp) a new sample buffer is created for both video and audio, which would be a good thing. But I have a feeling this is just wishful thinking, and even then I still wouldn't know what duration the buffers represent.
Could anyone point me in the right direction and help me understand how time works in a capture session, and how to get or set the duration of sample buffers?

Sync tonejs with abcjs

I'm trying to sync abcjs with ToneJS. abcjs plays a melody (via notation) and ToneJS plays a loop. I would like to sync these two audio sources.
As I understand it, I need to create a shared AudioContext in which they can both play, but I'm not sure how to do this. From the Tone.js docs it looks like I can use Tone.setContext(ac). I also need an onClick (or similar) callback to run await Tone.start() before playback works, but when I set the audio context before starting Tone I get the error "AudioContext is suspended ...".
Can someone please point me in the right direction?

AVCaptureSession, multiple AVCaptureAudioDataOutputs

Environment
iPhone
armv7 / SDK 6.0
Xcode 4.5
Use-case
Based on the AVCam sample
Capture A/V into a file using AVCaptureMovieFileOutput
Add an additional AVCaptureAudioDataOutput to intercept the audio being written to the file while recording
How-to
Add Video input to the Capture session
Add Audio input to the Capture session
Add File Output to the Capture session
Add Audio Output to the Capture session
Configure
Start recording (see the sketch below)
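Expressed in today's Swift API (the structure is the same in Objective-C), the setup is roughly the following; audioDelegate, recordingDelegate, and outputURL are placeholders:

    import AVFoundation

    let session = AVCaptureSession()

    // 1-2. Video and audio inputs.
    if let camera = AVCaptureDevice.default(for: .video),
       let cameraInput = try? AVCaptureDeviceInput(device: camera),
       session.canAddInput(cameraInput) {
        session.addInput(cameraInput)
    }
    if let mic = AVCaptureDevice.default(for: .audio),
       let micInput = try? AVCaptureDeviceInput(device: mic),
       session.canAddInput(micInput) {
        session.addInput(micInput)
    }

    // 3. File output: writes the movie (audio + video) to disk.
    let movieOutput = AVCaptureMovieFileOutput()
    if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }

    // 4. Audio data output: should hand raw audio buffers to its delegate while recording.
    let audioDataOutput = AVCaptureAudioDataOutput()
    audioDataOutput.setSampleBufferDelegate(audioDelegate, queue: DispatchQueue(label: "audio.tap"))
    if session.canAddOutput(audioDataOutput) { session.addOutput(audioDataOutput) }

    // 5-6. Configure as needed, then start.
    session.startRunning()
    movieOutput.startRecording(to: outputURL, recordingDelegate: recordingDelegate)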
The problem
It seems the audio outputs are mutually exclusive: either I get data written to disk, or I get the AVCaptureAudioDataOutput capture delegate being called. When AVCaptureMovieFileOutput is added (order doesn't matter), the AVCaptureAudioDataOutput delegate is not called.
How can this be solved? How can I get AVCaptureAudioDataOutput to trigger its delegate/selector while, at the same time, AVCaptureMovieFileOutput is used to write data to disk?
Can this be done in any other way than using a lower-level API such as AVAssetWriter et al.?
Any help will be appreciated!
AVAssetWriter is to be used in conjunction with AVAssetWriterInputPixelBufferAdaptor; a good example of how this can be achieved can be found here.
Then, upon invocation of the AVCaptureAudioDataOutputSampleBufferDelegate, the raw audio buffer can be propagated out for further processing (in parallel to the data being written to disk).
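For illustration, the shape of that approach in today's Swift API is roughly the following. It appends CMSampleBuffers directly to the writer inputs rather than going through a pixel-buffer adaptor, and movieURL and processAudio are placeholders:

    import AVFoundation

    let writer = try AVAssetWriter(outputURL: movieURL, fileType: .mov)

    // nil output settings = append the capture session's sample buffers as-is.
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
    videoInput.expectsMediaDataInRealTime = true
    audioInput.expectsMediaDataInRealTime = true
    writer.add(videoInput)
    writer.add(audioInput)

    // Called from the AVCaptureVideoDataOutput / AVCaptureAudioDataOutput delegates.
    func handle(_ sampleBuffer: CMSampleBuffer, from output: AVCaptureOutput) {
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if output is AVCaptureAudioDataOutput {
            processAudio(sampleBuffer)              // raw audio available for processing...
            if audioInput.isReadyForMoreMediaData {
                audioInput.append(sampleBuffer)     // ...and still written to disk
            }
        } else if videoInput.isReadyForMoreMediaData {
            videoInput.append(sampleBuffer)
        }
    }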

How can I monitor an mp3 live stream to detect corruption?

Once a month the MP3 stream messes up, and the only way to tell is by listening to it as it streams. Is there a script, program, or tool I can use to monitor the live stream at a given URL and raise some kind of flag when it corrupts?
Normally it plays a song or some music, but once a month, at a random time, the stream corrupts and starts playing chipmunk-like garbage audio. Any ideas on this? I am just getting started with this and have no idea where to begin.
Typically, this will happen when you play a track of the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders (going straight from files) will encode to MP3 just fine, but assume a fixed sample rate of whatever they are configured for, typically 44.1 kHz. If you drop in a 48 kHz or 22.05 kHz track, it will play at the wrong speed and cause all sorts of random issues with the stream.
The problem is easy enough to verify: simply create a file with a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it automatically, since your stream isn't actually corrupt... it just sounds wrong. You will have to scan all of your files for sample rate; FFmpeg in a script should be able to help you with that.
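For example, something along these lines (a sketch; it assumes ffprobe from FFmpeg is installed, that your encoder expects 44.1 kHz, and that /path/to/music is your library) will flag any file with a different sample rate:

    import Foundation

    // Ask ffprobe for the sample rate of one file.
    func sampleRate(of file: URL) -> String? {
        let probe = Process()
        probe.executableURL = URL(fileURLWithPath: "/usr/bin/env")
        probe.arguments = ["ffprobe", "-v", "error",
                           "-show_entries", "stream=sample_rate",
                           "-of", "default=noprint_wrappers=1:nokey=1",
                           file.path]
        let pipe = Pipe()
        probe.standardOutput = pipe
        try? probe.run()
        probe.waitUntilExit()
        let output = pipe.fileHandleForReading.readDataToEndOfFile()
        return String(data: output, encoding: .utf8)?
            .trimmingCharacters(in: .whitespacesAndNewlines)
    }

    let library = URL(fileURLWithPath: "/path/to/music")   // placeholder
    let files = (try? FileManager.default.contentsOfDirectory(at: library,
                                                              includingPropertiesForKeys: nil)) ?? []
    for file in files where file.pathExtension.lowercased() == "mp3" {
        let rate = sampleRate(of: file) ?? "unknown"
        if rate != "44100" {
            print("Suspect sample rate \(rate): \(file.lastPathComponent)")
        }
    }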
Now, if the problem actually is a corrupt MP3 stream, then you have problems on your encoding side. I suspect simply swapping out whatever DLL or module you're using with a recent stable version of LAME will help.
To detect a corrupt MP3 stream, your encoder must be using CRC. If you enable it, you should be able to read through the headers of each frame to find the CRC, and then run it on the audio data. In the event you get an error (or several frames with errors), you can then trigger a warning.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
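As a starting point, here is a sketch of pulling apart a 4-byte frame header (per the reference above) to check whether CRC protection is enabled and where the stored CRC sits; the actual CRC-16 computation over the side information is left out:

    import Foundation

    // Inspect one MP3 frame header found at `offset` in `data`.
    // Layout: 11 sync bits, version, layer, then a protection bit --
    // 0 means a 16-bit CRC immediately follows the 4-byte header.
    func inspectFrameHeader(in data: Data, at offset: Int) {
        guard offset + 4 <= data.count,
              data[offset] == 0xFF, (data[offset + 1] & 0xE0) == 0xE0 else {
            print("No frame sync at offset \(offset)")
            return
        }
        let crcProtected = (data[offset + 1] & 0x01) == 0   // protection bit is inverted
        if crcProtected, offset + 6 <= data.count {
            let storedCRC = UInt16(data[offset + 4]) << 8 | UInt16(data[offset + 5])
            print("Frame at \(offset): CRC protected, stored CRC 0x\(String(storedCRC, radix: 16))")
            // TODO: recompute CRC-16 over the last two header bytes plus the side
            // information and compare with storedCRC; a mismatch means a bad frame.
        } else {
            print("Frame at \(offset): no CRC -- enable CRC on the encoder to detect corruption this way")
        }
    }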

iPhone Remote IO Issues

I've been playing around with the SDK recently, and I had an idea to just build a personal autotuner (because I am just as awesome as T-Pain).
Intro aside: I wanted to attach a high-quality microphone to the headphone jack, have my audio processed in a callback, and then have it copied to the output buffer. This has several implications:
When my audio-in is being routed through the built-in microphone, I need to be able to process this input, and send it once my input has stopped (this works).
When my audio-in is being routed through the microphone-in input from the headset jack, I want the output to be sent immediately.
Routing, however, doesn't seem to work properly when using AudioSession modes and overrides, which technically should allow you to reroute output to the iPhone speaker no matter where the input is coming from. This is documented to work, but in practice it doesn't really work.
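For reference, by modes and overrides I mean roughly the AVAudioSession equivalent of this:

    import AVFoundation

    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.overrideOutputAudioPort(.speaker)   // keep playback on the built-in speaker
        try session.setActive(true)
    } catch {
        print("Audio session routing failed: \(error)")
    }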
Remote IO, however, is not documented at all. Anyone with experience using Remote IO audio units: can you give me a reasonable high-level overview of how to do this properly? I have been using the aurioTouch example code, but I keep running into errors with codes like -50 and -10863, none of which are documented.
Thanks in advance.
The aurioTouch example implements RemoteIO play-through: it simply calls AudioUnitRender in the output render callback, and you could modify the samples before passing them on.
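A minimal Swift sketch of that callback, assuming the RemoteIO unit itself was handed in as the callback's inRefCon when the callback was registered:

    import AudioToolbox

    // Output render callback for a RemoteIO unit: when the output side asks for
    // samples, pull them straight from the input (microphone) bus.
    let playThroughCallback: AURenderCallback = { inRefCon, ioActionFlags, inTimeStamp,
                                                  _, inNumberFrames, ioData in
        // inRefCon is assumed to point at the RemoteIO AudioUnit, stored when the
        // callback was registered via kAudioUnitProperty_SetRenderCallback.
        let remoteIO = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee
        guard let buffers = ioData else { return noErr }
        // Bus 1 is RemoteIO's input (microphone) element; bus 0 is the output.
        // Rendering bus 1 directly into the output buffers gives play-through;
        // the samples in `buffers` could be modified here before returning.
        return AudioUnitRender(remoteIO, ioActionFlags, inTimeStamp, 1, inNumberFrames, buffers)
    }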
NB: this trick does not seem to work if you port the code to OS X-style Core Audio. There, 99% of the time, you need to create two AUHALs (RemoteIO look-alikes) and pass the samples between them.