I have a series of sounds that a user will play, rearrange, and edit, etc. while using my app. When the user is finished, I want them to be able to save their work and record it to an MP3.
I don't want to play it through the speakers and record it with the mic, since that would result in low sound quality and interference. I can't think of any way of doing this that doesn't require extra hardware and/or a computer.
How can I do this using just their device?
Well, I would say it can't be done with AVFoundation.
My suggestion is to use Audio Units, and turn all of your interactions into an audio graph. At some point you set a render notify on the RemoteIO unit, so every time it renders sound to the speakers you get a callback where you can write those frames/packets of data to a file.
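Here is a minimal sketch of that render-notify idea in Swift. It assumes the RemoteIO unit comes from your graph and that `outputFile` has been opened elsewhere with ExtAudioFileCreateWithURL (for example as an M4A/AAC file) with a client format matching the RemoteIO output; the names are mine, not from any sample.

```swift
import AudioToolbox

// Assumed to be opened elsewhere with ExtAudioFileCreateWithURL (e.g. AAC/M4A),
// with kExtAudioFileProperty_ClientDataFormat set to the RemoteIO output format.
var outputFile: ExtAudioFileRef?

// Render-notify callback: fires before and after every render cycle of the
// RemoteIO unit; we only write the post-render buffers, i.e. the mixed audio
// that is about to reach the speakers.
let renderNotify: AURenderCallback = { _, ioActionFlags, _, _, inNumberFrames, ioData in
    if ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
       let ioData = ioData, let file = outputFile {
        // Async write keeps file I/O off the real-time render thread.
        ExtAudioFileWriteAsync(file, inNumberFrames, ioData)
    }
    return noErr
}

// Attach the callback to the RemoteIO unit obtained from your graph.
func attachRecorder(to remoteIO: AudioUnit) {
    // Prime async writing once from a normal thread before rendering starts.
    if let file = outputFile { ExtAudioFileWriteAsync(file, 0, nil) }
    AudioUnitAddRenderNotify(remoteIO, renderNotify, nil)
}
```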
I would suggest AAC (M4A) over MP3. I am not very fond of MP3, and to be honest, as far as I know the SDK does not provide MP3 encoding, probably due to licensing issues. I could be wrong, though. Check the sample code below; it is probably the best sample code on Audio Units you will ever find on the web.
AudioGraph by Tom Zic
Related
I've been searching for a while and can't come to a good conclusion.
I am trying to create an app that can "record" beats that a user makes on a 4x4 button array. Each button has a sound tied to it and after they hit record, I want to mix the audio that gets played and save it to a file so they can listen to it and play over it later.
What makes this even trickier is that there will be a metronome playing and I do not want to mix the metronome sound into the audio that is getting saved.
From what I have found, the only way to get these features is Audio Units, but I am reluctant to go that route since it seems a little like overkill and somewhat complicated to learn. Can Audio Toolbox make this any easier?
Thanks!
Generally, this is fairly easy to implement with Audio Toolbox.
For more information, see the sample code below; it helps a lot.
MixerHost
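For orientation, here is a minimal sketch of the kind of graph MixerHost builds: a multichannel mixer unit feeding the RemoteIO output unit through an AUGraph. The function name is mine; each pad sound would feed one mixer input bus via a render callback, and keeping the metronome on its own bus (or its own player) makes it easier to leave it out of whatever you tap for recording.

```swift
import AudioToolbox

// Minimal MixerHost-style graph: multichannel mixer -> RemoteIO output.
func buildMixerGraph() -> AUGraph? {
    var graph: AUGraph?
    NewAUGraph(&graph)
    guard let graph = graph else { return nil }

    var mixerDesc = AudioComponentDescription(componentType: kAudioUnitType_Mixer,
                                              componentSubType: kAudioUnitSubType_MultiChannelMixer,
                                              componentManufacturer: kAudioUnitManufacturer_Apple,
                                              componentFlags: 0, componentFlagsMask: 0)
    var ioDesc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                           componentSubType: kAudioUnitSubType_RemoteIO,
                                           componentManufacturer: kAudioUnitManufacturer_Apple,
                                           componentFlags: 0, componentFlagsMask: 0)

    var mixerNode = AUNode()
    var ioNode = AUNode()
    AUGraphAddNode(graph, &mixerDesc, &mixerNode)
    AUGraphAddNode(graph, &ioDesc, &ioNode)
    AUGraphOpen(graph)

    // Mixer output 0 feeds RemoteIO input 0 (the speaker path). Each pad sound
    // would get its own mixer input bus via AUGraphSetNodeInputCallback.
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0)

    AUGraphInitialize(graph)
    AUGraphStart(graph)
    return graph
}
```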
I'm building a piece of hardware that sends data into the headphone jack, and I need a way to record short snippets and analyze them quickly (hopefully without having to save the file and reopen it for analysis). I have played around with FFT and the Accelerate framework, though I don't think that's exactly what I'm looking for.
I'm wondering mostly if something like this is feasible: record a ~30 ms snippet of audio, then grab an array of floats representing the voltage/(dB levels?) throughout the recording. I could then interpret the data depending on the levels at different points in the recording. Would something like AVAudioRecorder be able to record at a resolution that lets me examine every millisecond of the recording? Since this will be a repeating process, I'm hoping to keep CPU usage down as well.
This is totally doable. Use AudioSession with AudioUnits.
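As a hedged sketch of what that can look like in practice, here is one simple route using AVAudioEngine (which sits on top of the same audio units) to tap the mic and pull out raw Float32 samples. The buffer size is only a request, so the ~30 ms granularity is an assumption you should verify on a device.

```swift
import AVFoundation

// Configure the session for recording before touching the input node.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.record, mode: .measurement, options: [])
try? session.setActive(true)

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// bufferSize is advisory; the system may deliver larger chunks.
input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let channel = buffer.floatChannelData?[0] else { return }
    let samples = Array(UnsafeBufferPointer(start: channel, count: Int(buffer.frameLength)))
    // `samples` is linear PCM in -1.0...1.0; at 44.1 kHz, ~44 samples per millisecond.
    let peak = samples.map { abs($0) }.max() ?? 0
    print("peak level:", peak)
}

try? engine.start()
```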
Hi, unfortunately I've not been able to figure out audio on the iPhone. The closest I've come are the AVAudioRecorder/AVAudioPlayer classes, and I know they are no good for audio processing.
So I'm wondering if someone could explain to me how to "listen" to the iPhone's mic input in chunks of, say, 1024 samples, analyse the samples, and do stuff, and just keep going like that until my app terminates or tells it to stop. I'm not looking to save any data; all I want is to analyse the data and act on it in real time.
I've tried to understand Apple's "aurioTouch" example, but it's just way too complicated for me.
So can someone explain to me how I should go about this?
If you want to analyze audio input in real-time, it doesn't get a lot simpler than Apple's aurioTouch iOS sample app with source code (there is also a mirror site). You can google a bit more info on using the Audio Unit RemoteIO API for recording, but you'll still have to figure out the real-time analysis DSP portion.
The Audio Queue API is slightly simpler for getting input buffers of raw PCM audio data from the mic, but not by much, and it has higher latency.
Added later: There's also a version of aurioTouch converted to Swift here: https://github.com/ooper-shlab/aurioTouch2.0-Swift
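For reference, here is a rough sketch of the RemoteIO input path in Swift. It assumes the audio session is already configured for recording; in real code you would also set kAudioUnitProperty_StreamFormat on the input bus and pre-allocate the render buffer instead of calling malloc on the real-time thread (kept short here on purpose).

```swift
import AudioToolbox

var ioUnit: AudioUnit?   // global so the C-convention callback can reach it

func startRemoteIOInput() {
    var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                         componentSubType: kAudioUnitSubType_RemoteIO,
                                         componentManufacturer: kAudioUnitManufacturer_Apple,
                                         componentFlags: 0, componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &desc) else { return }
    AudioComponentInstanceNew(component, &ioUnit)
    guard let unit = ioUnit else { return }

    // Bus 1 is the microphone; enable input on it.
    var enable: UInt32 = 1
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable,
                         UInt32(MemoryLayout<UInt32>.size))

    // Called every time a fresh chunk of mic samples is ready.
    var callback = AURenderCallbackStruct(inputProc: { _, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, _ in
        guard let unit = ioUnit else { return noErr }
        let buffer = AudioBuffer(mNumberChannels: 1,
                                 mDataByteSize: inNumberFrames * 4,          // assumes 4-byte frames
                                 mData: malloc(Int(inNumberFrames) * 4))     // pre-allocate in real code
        var bufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: buffer)
        let status = AudioUnitRender(unit, ioActionFlags, inTimeStamp,
                                     inBusNumber, inNumberFrames, &bufferList)
        // ... run your analysis on bufferList.mBuffers.mData here ...
        free(buffer.mData)
        return status
    }, inputProcRefCon: nil)
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 0, &callback,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))

    AudioUnitInitialize(unit)
    AudioOutputUnitStart(unit)
}
```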
The AVAudioPlayer/AVAudioRecorder classes won't take you there if you want to do any real-time audio processing. The Audio Toolbox and Audio Unit frameworks are the way to go. Check here for Apple's audio programming guide to see which framework suits your needs. And believe me, this low-level stuff is not easy and it is poorly documented. CocoaDev has some tutorials where you can find sample code. Also, there is an audio DSP library, DIRAC, that I recently discovered for tempo and pitch manipulation. I haven't looked into it much, but you might find it useful.
If all you want is samples with a minimum amount of processing by the OS, you probably want the Audio Queue API; see Audio Queue Services Programming Guide.
AVAudioRecorder is designed for recording to a file, and Audio Units are more for "pluggable" audio processing (and on the Mac side of things, AU Lab is actually pretty cool).
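To make that concrete, here is a hedged sketch of getting raw PCM input buffers with Audio Queue Services. The format values (44.1 kHz, mono, 16-bit) are assumptions, and error handling is left out.

```swift
import AudioToolbox

// Assumed capture format: 44.1 kHz, mono, 16-bit signed linear PCM.
var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1,
    mBytesPerFrame: 2, mChannelsPerFrame: 1,
    mBitsPerChannel: 16, mReserved: 0)

// Called each time the queue has filled a buffer with mic data.
let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    let byteCount = Int(buffer.pointee.mAudioDataByteSize)
    let samples = buffer.pointee.mAudioData.bindMemory(to: Int16.self, capacity: byteCount / 2)
    _ = samples   // ... analyze or write the samples here ...
    // Hand the buffer back so the queue can refill it.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

if let queue = queue {
    // A few enqueued buffers keep the queue fed while you process.
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 2048, &buffer)
        if let buffer = buffer { AudioQueueEnqueueBuffer(queue, buffer, 0, nil) }
    }
    AudioQueueStart(queue, nil)
}
```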
Specifically, I just want to record something, reverse it, and play it back. I've looked through the Apple docs and couldn't find anything about editing audio. Is it possible?
Yes, it is definitely possible. Last I checked the Apple Core Audio docs were not very good, but it has been a few months since I've worked with it. Here are the steps that I would follow.
Record the audio sample.
Reverse the audio by looping through the first half of the array and swapping each value with the one equidistant from the end of the array (see the sketch after these steps).
Play the resulting audio clip.
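A minimal sketch of step 2, assuming the recording has already been pulled into memory as an array of mono Float samples; multichannel audio would need to swap whole frames rather than individual samples.

```swift
// Reverse a buffer of mono samples in place.
func reverse(_ samples: inout [Float]) {
    var i = 0
    var j = samples.count - 1
    while i < j {
        samples.swapAt(i, j)   // swap each value with its mirror from the end
        i += 1
        j -= 1
    }
}
```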
Quite frankly, the first step is probably the hardest. Here is a decent article about doing audio on the iPhone, including recording. Make sure you look at all of the different parts of the article. Here is another article about recording sound on the iPhone, but using a different framework. There are really several ways to go about recording on the iPhone, though last I checked, if you want to play audio while you are recording, you have to use RemoteIO.
Edit:
If you would like to use RemoteIO (which I preferred), then this site is pretty helpful for getting started with it. Also, the aurioTouch sample program that Apple provides is immensely helpful (though it does more than you want).
If you don't need RemoteIO (it can be a major pain, though it is lower-level and thus more flexible), then try the SpeakHere sample program. It is made just to record and play back. However, I just looked at it, and it writes the recording to a file rather than a buffer, which isn't what you want. I would recommend going with RemoteIO for that reason (unless you can find a way to have it write to a buffer instead).
I want to record the sound of my iPhone app. So, for example, someone plays something on an iPhone instrument, and afterwards you can hear it back.
Is it possible without the microphone?
Do you mean an app you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write the same AudioBufferList to a file that you would render to the RemoteIO Audio Unit when playing audio in your instrument app.)
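As a hedged sketch of that Extended Audio File Services path, assuming `fileURL`, the stream description, and the rendered AudioBufferList already exist in your render code (the function name is mine):

```swift
import AudioToolbox

// Write an already-rendered AudioBufferList to a file.
func writeRenderedAudio(fileURL: URL,
                        asbd: inout AudioStreamBasicDescription,
                        bufferList: UnsafePointer<AudioBufferList>,
                        frameCount: UInt32) {
    var file: ExtAudioFileRef?

    // Create a CAF file here; kAudioFileM4AType plus an AAC destination format
    // would give you a compressed file instead.
    ExtAudioFileCreateWithURL(fileURL as CFURL, kAudioFileCAFType, &asbd,
                              nil, AudioFileFlags.eraseFile.rawValue, &file)
    guard let file = file else { return }

    // Tell the file what format the buffers we hand it are in.
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            UInt32(MemoryLayout<AudioStreamBasicDescription>.size), &asbd)

    // From a render callback you would use ExtAudioFileWriteAsync instead.
    ExtAudioFileWrite(file, frameCount, bufferList)
    ExtAudioFileDispose(file)
}
```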
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation, which you are currently using, you are always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates while it is used. AVAudioPlayer also does not provide any means of getting at the final signal, and if you are using multiple instances of AVAudioPlayer to play several sounds at the same time, you wouldn't be able to get at the mixed signal either.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their timestamps, that led to the audio being played? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
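A minimal sketch of that "record the performance, not the audio" idea; the `PadEvent`/`Performance` names and the JSON format are made up for illustration.

```swift
import Foundation

// One triggered sound, stamped with its offset from the start of the take.
struct PadEvent: Codable {
    let time: TimeInterval   // seconds since recording started
    let soundID: Int         // which pad/sound was triggered
}

struct Performance: Codable {
    var events: [PadEvent] = []
    private var startTime = Date()

    mutating func record(soundID: Int) {
        events.append(PadEvent(time: Date().timeIntervalSince(startTime), soundID: soundID))
    }

    func save(to url: URL) throws {
        try JSONEncoder().encode(self).write(to: url)
    }
}

// Playback: re-trigger each sound at its recorded offset.
func replay(_ performance: Performance, trigger: @escaping (Int) -> Void) {
    for event in performance.events {
        DispatchQueue.main.asyncAfter(deadline: .now() + event.time) {
            trigger(event.soundID)
        }
    }
}
```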