For a few months now I have been experimenting with the possibilities of audio manipulation on the Mac via Xcode and Swift.
I use AVFoundation's AVAudioEngine to apply audio effects and play an audio file. It works fine, but I would like to go one step further and apply these effects to the audio being played on a specific audio device (by whatever application). I am using two audio devices (the first is 6-in/6-out, the second 2-out) and would like to be able to select which device the effects are applied to.
Is it possible to do that using AVAudioEngine? What about AVAudioMixerNode? Is it like a real mixer, with audio inputs, outputs, sends and returns, aux buses, and so on? What about AUGraph on the Mac? Is it possible to combine two different classes to do the job?
I'm looking for examples, but primarily for more information on how audio programming works in general under macOS.
Thank you.
I have an audio app in which all of the sound-generating work is done by Pure Data (using libpd).
I've coded a custom sequencer in Swift which controls the start/stop playback of multiple sequences, played by the synth engines in Pure Data.
Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and both seem to require C or Objective-C coding, which I know nearly nothing about.
However, I've been told in a previous Q&A here that I need to use Core Audio or AVFoundation to get accurate timing. Without them, I've tried everything else, and the timing is totally messed up (laggy, jittery).
All of the tutorials and books on Core Audio seem overwhelmingly broad and deep to me. If all I need from one of these frameworks is accurate timing for my sequencer, how do you suggest I achieve this as someone who is a total novice to Core Audio and Objective-C, but otherwise has a 95% finished audio app?
If your sequencer is Swift code that depends on being called just-in-time to push audio, it won't achieve good timing accuracy; in other words, you can't get the timing you need that way.
Core Audio uses a real-time pull-model (which excludes Swift code of any interesting complexity). AVFoundation likely requires you to create your audio ahead of time, and schedule buffers. An iOS app needs to be designed nearly from the ground up for one of these two solutions.
Added: If your existing code can generate audio samples a bit ahead of time, enough to statistically cover the jitter of an OS timer, you can schedule this pre-generated output to be played a few milliseconds later (i.e. when it is pulled at the correct sample time).
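A minimal sketch of that idea (my own illustration, not from the answer above), assuming an AVAudioEngine/AVAudioPlayerNode setup: pre-generate a PCM buffer, then schedule it at an explicit sample time so playback no longer depends on a jittery timer.

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
do { try engine.start() } catch { print("engine failed to start: \(error)") }
player.play()

// Pre-generate ~100 ms of audio (left empty here, as a stand-in for your libpd output).
let frameCount = AVAudioFrameCount(format.sampleRate * 0.1)
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
buffer.frameLength = frameCount

// Schedule the buffer at an exact sample time ~100 ms in the future,
// instead of pushing it "now" from a timer callback.
if let last = player.lastRenderTime, last.isSampleTimeValid {
    let startTime = AVAudioTime(sampleTime: last.sampleTime + AVAudioFramePosition(frameCount),
                                atRate: format.sampleRate)
    player.scheduleBuffer(buffer, at: startTime, options: [], completionHandler: nil)
}
```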
AudioKit is an open source audio framework that provides Swift access to Core Audio services. It includes a Core Audio based sequencer, and there is plenty of sample code available in the form of Swift Playgrounds.
The AudioKit AKSequencer class has the transport controls you need. You can add MIDI events to your sequencer instance programmatically, or read them from a file. You could then connect your sequencer to an AKCallbackInstrument which can execute code upon receiving MIDI noteOn and noteOff commands, which might be one way to trigger your generated audio.
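As a rough sketch only, this is the general shape of that setup against an AudioKit 4-era API (class names and the exact callback signature have changed between AudioKit releases, so check the version you are using):

```swift
import AudioKit

let callbackInst = AKCallbackInstrument()
let sequencer = AKSequencer()

// Route a sequencer track's MIDI output into the callback instrument.
let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInst.midiIn)

// Add MIDI events programmatically (or load a MIDI file instead).
track?.add(noteNumber: 60, velocity: 100,
           position: AKDuration(beats: 0), duration: AKDuration(beats: 0.5))
track?.add(noteNumber: 62, velocity: 100,
           position: AKDuration(beats: 1), duration: AKDuration(beats: 0.5))

// Run your own code when events fire - e.g. trigger your libpd synths here.
callbackInst.callback = { status, noteNumber, velocity in
    if status == .noteOn {
        // start the note in your own engine
    } else if status == .noteOff {
        // stop the note
    }
}

sequencer.setTempo(120)
AudioKit.output = callbackInst   // the callback instrument itself is silent
do { try AudioKit.start() } catch { print(error) }
sequencer.play()                 // transport control: play/stop/rewind
```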
I need to mix two simultaneous looping M4A sounds for my application, and the only 100% reliable loop method I have come across uses AudioQueue, as described here: http://developer.apple.com/mac/library/qa/qa2009/qa1636.html
However, when I initialize two instances of AudioQueue, I can only seem to get one instance playing. I know that the SDK used to support playing only one compressed audio file at a time, but that changed with 3.0, so I wonder if there is something I am missing?
On current devices, there seems to be hardware support for playing only one compressed audio file at a time, and I'm not sure whether M4A can be decompressed in software in real time (only specific formats are supported by the software decoder). You might be able to decompress the second sound before playing your mixed audio loops.
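For illustration, here is a small sketch of the "decompress it up front" idea using today's AVAudioFile API (not what was available on iPhone OS 3.0, where you would have used Audio File / Extended Audio File Services instead):

```swift
import AVFoundation

// Decode a compressed file (e.g. M4A/AAC) entirely into a PCM buffer ahead of time,
// so only one sound still needs the compressed-audio decoder at playback time.
func decodeToPCM(url: URL) throws -> AVAudioPCMBuffer {
    let file = try AVAudioFile(forReading: url)      // decoding happens on read
    let frameCount = AVAudioFrameCount(file.length)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: frameCount) else {
        throw NSError(domain: "decode", code: -1)
    }
    try file.read(into: buffer)                      // buffer now holds uncompressed PCM
    return buffer
}
```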
I've got a couple of WAV files and possibly an MP3 that I'd like to mix down to a single WAV or MP3 file. I'm using C/C++/Obj-C (iPhone). I really have no experience with this sort of thing. If anyone could give me some pointers, I would be very grateful.
Basically, I want to do the kind of thing that, for example, Audacity can do, but programmatically. Isn't there a sound library where you can easily open audio files and "paste" them into a new one at defined positions? Where mixing is something you don't have to worry about?
Thanks.
Mixing two buffers of linear PCM is only a matter of adding the corresponding sample values together, while making sure you don't overflow. Normally you would use floating-point values in the buffers, though, so the overflow question only arises when you convert back for the file. You also have Core Audio available on the iPhone; it has all the means to open, read, and write sound files in different formats. I think there is also a higher-level API available on the iPhone that isn't on the Mac; look it up in the Apple docs.
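A minimal sketch of that summing step (my own illustration; in practice you would work on the raw channel buffers rather than Swift arrays):

```swift
// Mix two float PCM signals by summing corresponding samples and clamping
// to [-1, 1] so nothing overflows when converted back to integer samples.
func mix(_ a: [Float], _ b: [Float]) -> [Float] {
    let length = max(a.count, b.count)
    var out = [Float](repeating: 0, count: length)
    for i in 0..<length {
        let sa = i < a.count ? a[i] : 0
        let sb = i < b.count ? b[i] : 0
        out[i] = min(max(sa + sb, -1), 1)   // hard clip; or scale the inputs instead
    }
    return out
}
```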
If you are specifically looking for the features of Audacity, it uses PortAudio under the hood (looks like an MIT license). Perhaps you can just try to use that?
Read Multimedia Support as a starting point; it contains a lot of info. Here is an extract:
There are three ways to mix audio on the iPhone:
Audio Unit framework
Multichannel Mixer - "lets you mix multiple audio streams to a single stream"
3D Mixer unit - "lets you mix multiple audio streams, specify stereo output panning, manipulate sample rate"
OpenAL, used in game development
Also check out the following sample, iPhoneMultichannelMixerTest:
Two input busses are created, each with input volume controls. An overall mixer output volume control is also provided, and each bus may be enabled or disabled.
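To make the multichannel-mixer idea concrete, here is a rough sketch using AVAudioEngine's built-in mixer node (a modern alternative; the iPhoneMultichannelMixerTest sample itself builds a C AUGraph around kAudioUnitSubType_MultiChannelMixer, and the URLs below are placeholders):

```swift
import AVFoundation

// Two player nodes feed the engine's mixer, one input bus each,
// with per-bus volume and an overall output volume.
func playTwoLoops(urlA: URL, urlB: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let playerA = AVAudioPlayerNode()
    let playerB = AVAudioPlayerNode()
    engine.attach(playerA)
    engine.attach(playerB)

    let fileA = try AVAudioFile(forReading: urlA)
    let fileB = try AVAudioFile(forReading: urlB)
    engine.connect(playerA, to: engine.mainMixerNode, format: fileA.processingFormat)
    engine.connect(playerB, to: engine.mainMixerNode, format: fileB.processingFormat)

    playerA.volume = 0.8                     // per-bus input volume
    playerB.volume = 0.5
    engine.mainMixerNode.outputVolume = 1.0  // overall mixer output volume

    try engine.start()
    playerA.scheduleFile(fileA, at: nil)
    playerB.scheduleFile(fileB, at: nil)
    playerA.play()
    playerB.play()
    return engine                            // keep a reference so the engine stays alive
}
```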
I want to record the sound of my iPhone app. So, for example, someone plays something on an iPhone instrument, and afterwards you can listen to it.
Is it possible without the microphone?
Do you mean an app you build yourself? If so, you could just save the rendered waveform (perhaps encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write to a file the same AudioBufferList that you would render to the RemoteIO audio unit when playing audio in your instrument app.)
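As a sketch of that "save what you render" idea, here is the modern AVAudioEngine equivalent using a tap instead of the Extended Audio File Services C API (assuming your instrument plays through an AVAudioEngine):

```swift
import AVFoundation

// Install a tap on the mixer the instrument plays through and append
// every rendered buffer to a file - no microphone involved.
func startRecording(engine: AVAudioEngine, to url: URL) throws -> AVAudioFile {
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let file = try AVAudioFile(forWriting: url, settings: format.settings)
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? file.write(from: buffer)    // append the rendered audio
    }
    return file
}

// To stop: engine.mainMixerNode.removeTap(onBus: 0)
```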
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation as you are currently using it, you're always working at the level of sound files. Your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates while it is being used. AVAudioPlayer also does not provide any means of getting at the final signal. And if you're using multiple instances of AVAudioPlayer to play several sounds at the same time, you wouldn't be able to get at the mixed signal either.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions that lead to the audio being played, together with their timestamps? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
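A tiny sketch of that event-recording idea (all names here are my own, purely illustrative):

```swift
import Foundation

// Record what was played and when; replaying means re-triggering the same
// actions in your instrument at the recorded times.
struct NoteEvent: Codable {
    let time: TimeInterval   // seconds since the performance started
    let note: Int            // which key/pad/string was hit
    let velocity: Int
}

final class PerformanceRecorder {
    private var events: [NoteEvent] = []
    private let startTime = Date()

    func record(note: Int, velocity: Int) {
        events.append(NoteEvent(time: Date().timeIntervalSince(startTime),
                                note: note, velocity: velocity))
    }

    func save(to url: URL) throws {
        try JSONEncoder().encode(events).write(to: url)
    }

    static func load(from url: URL) throws -> [NoteEvent] {
        try JSONDecoder().decode([NoteEvent].self, from: Data(contentsOf: url))
    }
}
```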
Can the iPhone mix two sound files or provide a custom equalizer?
I have studied this problem for weeks, and it seems impossible to use the iPhone SDK to mix two or more sound files or to build a custom equalizer.
Does anyone have experience doing this?
Yes, you can. AVAudioPlayer can play multiple sounds, and you can control the volume of each. Or you can use Audio Units and have more control over the audio data.
aurioTouch is a good sample app for what you are thinking of.
For simple playback of sound files you can use the AVAudioPlayer class introduced in the 2.2 SDK. It provides playback and volume controls for playing any audio file. As far as I am aware, there is no restriction on the number of sound files you can play on the iPhone. The only restriction is that you may only play one AAC or MP3 compressed file at a time; the rest of the files must be either uncompressed or in the IMA4 format.
If your needs are more low-level (if you need to do DSP), you might want to look at Audio Queue Services or Audio Units - two Mac OS X audio processing APIs that are also available on the iPhone.
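As a minimal illustration of the AVAudioPlayer route (file URLs are placeholders):

```swift
import AVFoundation

// Two AVAudioPlayer instances playing at once, each with its own volume -
// the simplest form of "mixing". Keep strong references so the players
// aren't deallocated while playing.
final class TwoSoundPlayer {
    private let playerA: AVAudioPlayer
    private let playerB: AVAudioPlayer

    init(urlA: URL, urlB: URL) throws {
        playerA = try AVAudioPlayer(contentsOf: urlA)
        playerB = try AVAudioPlayer(contentsOf: urlB)
        playerA.volume = 1.0         // per-sound volume control
        playerB.volume = 0.6
        playerA.numberOfLoops = -1   // loop indefinitely
        playerB.numberOfLoops = -1
    }

    func play() {
        playerA.play()
        playerB.play()
    }
}
```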