For mostly security reasons, I'm not allowed to store a WAV file on the server to be accessed by a browser. What I have instead is a byte array containing audio data (the data portion of a WAV file, I believe) on the server, and I want it to be played in a browser through JavaScript (or an applet, but JS is preferred). I can use JSON-RPC to send the whole byte[] over, or I can open a socket and stream it over, but in either case I don't know how to play the byte[] within the browser.
The following code plays a sine wave at 0.5 and 2.0 seconds. Call the function play_buffersource() from your button handler or wherever you want.
Tested in Chrome with the Web Audio flag enabled. For your case, all you need to do is copy your audio bytes into buf.
<script type="text/javascript">
const kSampleRate = 44100; // Other sample rates might not work depending on your browser's AudioContext
const kNumSamples = 16834;
const kFrequency = 440;
const kPI_2 = Math.PI * 2;

function play_buffersource() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser sucks because it does NOT support any AudioContext!");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    var ctx = new AudioContext();
    var buffer = ctx.createBuffer(1, kNumSamples, kSampleRate);
    var buf = buffer.getChannelData(0);
    for (var i = 0; i < kNumSamples; ++i) {
        buf[i] = Math.sin(kFrequency * kPI_2 * i / kSampleRate);
    }

    var node = ctx.createBufferSource();
    node.buffer = buffer;
    node.connect(ctx.destination);
    node.start(ctx.currentTime + 0.5); // noteOn() in older Web Audio implementations

    node = ctx.createBufferSource();
    node.buffer = buffer;
    node.connect(ctx.destination);
    node.start(ctx.currentTime + 2.0);
}
</script>
References:
http://epx.com.br/artigos/audioapi.php
https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html
If you need to resample the audio, you can use a JavaScript resampler: https://github.com/grantgalitz/XAudioJS
If you need to decode base64 data, there are plenty of JavaScript base64 decoders: https://github.com/carlo/jquery-base64
I accomplished this with the following code. I pass a byte array containing the data from the WAV file to the function playByteArray. My solution is similar to Peter Lee's, but I could not get his to work in my case (the output was garbled), whereas this one works well for me. I verified that it works in Firefox and Chrome.
window.onload = init;
var context;    // Audio context
var buf;        // Audio buffer

function init() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser does not support any AudioContext and cannot play back this audio.");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    context = new AudioContext();
}

function playByteArray(byteArray) {
    var arrayBuffer = new ArrayBuffer(byteArray.length);
    var bufferView = new Uint8Array(arrayBuffer);
    for (var i = 0; i < byteArray.length; i++) {
        bufferView[i] = byteArray[i];
    }
    context.decodeAudioData(arrayBuffer, function(buffer) {
        buf = buffer;
        play();
    });
}

// Play the loaded file
function play() {
    // Create a source node from the buffer
    var source = context.createBufferSource();
    source.buffer = buf;
    // Connect to the final output node (the speakers)
    source.connect(context.destination);
    // Play immediately
    source.start(0);
}
If you have the bytes on the server then I would suggest that you create some kind of handler on the server that will stream the bytes to the response as a wav file. This "file" would only be in memory on the server and not on disk. Then the browser can just handle it like a normal wav file.
More details on the server stack would be needed to give more information on how this could be done in your environment.
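Whatever the server stack turns out to be, the handler essentially just has to prepend a RIFF/WAVE header to the raw PCM bytes and write the result to the response, so the browser sees an ordinary WAV file. Here is a minimal, stack-agnostic sketch in C++; the function itself, the default sample rate, channel count and bit depth are assumptions, so adjust them to whatever your audio actually is:

#include <cstdint>
#include <ostream>

// Write a little-endian integer of the given byte width.
static void writeLE(std::ostream& out, uint32_t value, int bytes)
{
    for (int i = 0; i < bytes; ++i)
        out.put(static_cast<char>((value >> (8 * i)) & 0xFF));
}

// Wrap raw PCM bytes in a canonical 44-byte RIFF/WAVE header and stream them
// out (e.g. to the HTTP response body), so no file ever touches the disk.
void streamAsWav(std::ostream& out, const uint8_t* pcm, uint32_t pcmBytes,
                 uint32_t sampleRate = 44100, uint16_t channels = 1,
                 uint16_t bitsPerSample = 16)
{
    const uint16_t blockAlign = channels * bitsPerSample / 8;
    const uint32_t byteRate   = sampleRate * blockAlign;

    out.write("RIFF", 4); writeLE(out, 36 + pcmBytes, 4); out.write("WAVE", 4);
    out.write("fmt ", 4); writeLE(out, 16, 4);   // fmt chunk size
    writeLE(out, 1, 2);                          // audio format 1 = PCM
    writeLE(out, channels, 2);
    writeLE(out, sampleRate, 4);
    writeLE(out, byteRate, 4);
    writeLE(out, blockAlign, 2);
    writeLE(out, bitsPerSample, 2);
    out.write("data", 4); writeLE(out, pcmBytes, 4);
    out.write(reinterpret_cast<const char*>(pcm), pcmBytes);
}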
I suspect you can achieve this easily enough with the HTML5 Audio API:
https://developer.mozilla.org/en/Introducing_the_Audio_API_Extension
This library might come in handy too, though I'm not sure if it reflects the latest browser behaviours:
https://github.com/jussi-kalliokoski/audiolib.js
Does anyone have experience encoding/decoding the Speex audio format with AudioQueue?
I have tried to implement it by editing the SpeakHere sample, but without success.
From the Apple API documentation, AudioQueue can support codecs, but I can't find any sample. Could anyone give me some suggestions? I have already compiled the Speex codec successfully in my Xcode 4 project.
In the Apple sample code "SpeakHere" you can do something like this:
AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */,
NULL /* run loop mode */,
0 /* flags */, &mQueue)
Then, in the "MyInputBufferHandler" function, you can do something like:
[self encoder:(short *)buffer->mAudioData count:buffer->mAudioDataByteSize/sizeof(short)];
The encoder function looks like:
while ( count >= samplesPerFrame )
{
    speex_bits_reset( &bits );
    speex_encode_int( enc_state, samples, &bits );
    static const unsigned maxSize = 256;
    char data[maxSize];
    unsigned size = (unsigned)speex_bits_write( &bits, data, maxSize );
    /*
       do something... for example: send the 'size' bytes in 'data' to the server
    */
    samples += samplesPerFrame;
    count -= samplesPerFrame;
}
This is the general idea. Of course the details are harder, but you can look at some open-source VoIP projects; they may help you.
Good luck.
You can achieve all of that with FFmpeg and then play the result as PCM with AudioQueue.
Building the FFmpeg library is not exactly painless, but the whole decode/encode process is not that hard :)
FFMPEG official site
SPEEX official site
You will have to download the libraries and build them yourself, then include them in FFmpeg and build it.
Below is code for capturing audio with AudioQueue and encoding it (wide-band) using Speex.
(For better audio quality you can encode the data in a separate thread; change your sample size according to your capture format.)
Audio format
mSampleRate = 16000;
mFormatID = kAudioFormatLinearPCM;
mFramesPerPacket = 1;
mChannelsPerFrame = 1;
mBytesPerFrame = 2;
mBytesPerPacket = 2;
mBitsPerChannel = 16;
mReserved = 0;
mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
Capture callback
void CAudioCapturer::AudioInputCallback(void *inUserData,
                                        AudioQueueRef inAQ,
                                        AudioQueueBufferRef inBuffer,
                                        const AudioTimeStamp *inStartTime,
                                        UInt32 inNumberPacketDescriptions,
                                        const AudioStreamPacketDescription *inPacketDescs)
{
    CAudioCapturer *This = (CAudioCapturer *)inUserData;
    int len = 640;
    char data[640];
    char *pSrc = (char *)inBuffer->mAudioData;
    while (len <= inBuffer->mAudioDataByteSize)
    {
        memcpy(data, pSrc, 640);
        int enclen = encode(data, encBuffer); // encBuffer (declared elsewhere) receives the encoded frame, enclen bytes long
        len += 640;
        pSrc += 640;                          // 640 is the frame size for WB in speex (320 shorts)
    }
    AudioQueueEnqueueBuffer(This->m_audioQueue, inBuffer, 0, NULL);
}
speex encoder
int encode(char *buffer,char *pDest)
{
int nbBytes=0;
speex_bits_reset(&encbits);
speex_encode_int(encstate, (short*)(buffer) , &encbits);
nbBytes = speex_bits_write(&encbits, pDest ,640/(sizeof(short)));
return nbBytes;
}
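For the decode side (not shown above), the Speex API is symmetric. A minimal sketch, assuming decstate was created with speex_decoder_init(&speex_wb_mode) and decbits was initialised with speex_bits_init(&decbits), mirroring the encoder globals above:

int decode(char *pSrc, int nbBytes, short *pcmOut /* samplesPerFrame shorts */)
{
    // Load one encoded frame, then decode it back to 16-bit PCM
    speex_bits_read_from(&decbits, pSrc, nbBytes);
    return speex_decode_int(decstate, &decbits, pcmOut); // 0 on success
}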
I am currently working on an audio DSP app. The project requires direct access to and modification of audio data. Right now I can successfully access and modify the raw audio data using AudioQueue, but I encounter errors during playback: the output audio after any modification turns out to be noise.
In short, the code is something like this:
(Modified from Speakhere sample code. The rest remains unchanged.)
void AQPlayer::AQBufferCallback(void *inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inCompleteAQBuffer)
{
    AQPlayer *THIS = (AQPlayer *)inUserData;
    if (THIS->mIsDone) return;

    UInt32 numBytes;
    UInt32 nPackets = THIS->GetNumPacketsToRead();
    OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(),
                                           false,
                                           &numBytes,
                                           inCompleteAQBuffer->mPacketDescriptions,
                                           THIS->GetCurrentPacket(),
                                           &nPackets,
                                           inCompleteAQBuffer->mAudioData);
    if (result)
        printf("AudioFileReadPackets failed: %d", (int)result);
    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        inCompleteAQBuffer->mPacketDescriptionCount = nPackets;

        // My modification starts from here
        // Modifying audio data
        SInt16 *testBuffer = (SInt16 *)inCompleteAQBuffer->mAudioData;
        for (int i = 0; i < (inCompleteAQBuffer->mAudioDataByteSize) / sizeof(SInt16); i++)
        {
            //printf("before modification %d", (int)*testBuffer);
            *testBuffer = (SInt16)*testBuffer / 2; // Say, some simple modification
            //printf("after modification %d", (int)*testBuffer);
            testBuffer++;
        }
        AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    }
}
During debugging, the data in the buffer is displayed as expected, but the actual output is nothing but noise.
Here are some other strange behaviors of the code that are driving the whole team crazy:
If there is no change to the data (add/subtract 0, multiply by 1), or the whole buffer is assigned a constant (say 0, which mutes the audio), the playback behaves normally (of course!). But if I do anything more than that, the output is still noise.
When I hardcode a single tone as the test audio, the output noise also spreads into the other channel.
So where is the bug in this code? Or if I am on the wrong track, what is the correct approach to modify the audio data and perform playback CORRECTLY? Any insight will be sincerely appreciated.
Thank you very much :-)
Cheers,
Manca
Are you SURE the sample format is SInt16? And how many channels are there? You seem to treat the audio as a single-channel stream of shorts, but suppose the format is actually dual-channel Float32 or similar; if you do the modifications there, then the effect would be exactly as you describe, including the noise on the other channel.
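As a sketch of that check inside the callback (the mDataFormat member is an assumption; use whatever AudioStreamBasicDescription the queue was actually created with), branch on the real sample layout before touching the bytes:

const AudioStreamBasicDescription &fmt = THIS->mDataFormat; // assumed accessor for the queue's format
if (fmt.mFormatID == kAudioFormatLinearPCM &&
    fmt.mBitsPerChannel == 16 &&
    (fmt.mFormatFlags & kLinearPCMFormatFlagIsSignedInteger))
{
    // Interleaved SInt16: every element is one sample, regardless of channel count
    SInt16 *p = (SInt16 *)inCompleteAQBuffer->mAudioData;
    UInt32 n = inCompleteAQBuffer->mAudioDataByteSize / sizeof(SInt16);
    for (UInt32 i = 0; i < n; ++i)
        p[i] /= 2;
}
else if (fmt.mBitsPerChannel == 32 &&
         (fmt.mFormatFlags & kLinearPCMFormatFlagIsFloat))
{
    // Float32 samples: reinterpreting these as SInt16 is exactly what produces noise
    Float32 *p = (Float32 *)inCompleteAQBuffer->mAudioData;
    UInt32 n = inCompleteAQBuffer->mAudioDataByteSize / sizeof(Float32);
    for (UInt32 i = 0; i < n; ++i)
        p[i] *= 0.5f;
}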
I have a project using libavcodec (ffmpeg). I'm using it to encode MPEG-2 video at 4:2:2 Profile, Main Level. I have the pixel format PIX_FMT_YUV422P selected in the AVCodecContext, however the video output I'm getting has all the colours wrong, and looks to me like the encoder is incorrectly reading the buffers as though it thinks it is 4:2:0 chroma rather than 4:2:2. Here's my codec setup:
//
// AVFormatContext* _avFormatContext previously defined as mpeg2video
//
//
// Set up the video stream for output
//
AVStream* _avVideoStream = av_new_stream(_avFormatContext, 0);
if (!_avVideoStream)
{
err = ccErrWFFFmpegUnableToAllocateStream;
goto bail;
}
_avCodecContext = _avVideoStream->codec;
_avCodecContext->codec_id = CODEC_ID_MPEG2VIDEO;
_avCodecContext->codec_type = CODEC_TYPE_VIDEO;
//
// Set up required parameters
//
_avCodecContext->rc_max_rate = _avCodecContext->rc_min_rate = _avCodecContext->bit_rate = src->_avCodecContext->bit_rate;
_avCodecContext->flags = CODEC_FLAG_INTERLACED_DCT;
_avCodecContext->flags2 = CODEC_FLAG2_INTRA_VLC | CODEC_FLAG2_NON_LINEAR_QUANT;
_avCodecContext->qmin = 1;
_avCodecContext->qmax = 1;
_avCodecContext->rc_buffer_size = _avCodecContext->rc_initial_buffer_occupancy = 2000000;
_avCodecContext->rc_buffer_aggressivity = 0.25;
_avCodecContext->profile = 0;
_avCodecContext->level = 5;
_avCodecContext->width = f->GetWidth(); // f is a private Frame class with width, height properties etc.
_avCodecContext->height = f->GetHeight();
_avCodecContext->time_base.den = 25;
_avCodecContext->time_base.num = 1;
_avCodecContext->gop_size = 12;
_avCodecContext->max_b_frames = 2;
_avCodecContext->pix_fmt = PIX_FMT_YUV422P;
if (_avFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
{
_avCodecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
}
if (av_set_parameters(_avFormatContext, NULL) < 0)
{
err = ccErrWFFFmpegUnableToSetParameters;
goto bail;
}
//
// Set up video codec for encoding
//
AVCodec* _avCodec = avcodec_find_encoder(_avCodecContext->codec_id);
if (!_avCodec)
{
err = ccErrWFFFmpegUnableToFindCodecForOutput;
goto bail;
}
if (avcodec_open(_avCodecContext, _avCodec) < 0)
{
err = ccErrWFFFmpegUnableToOpenCodecForOutput;
goto bail;
}
A screengrab of the resulting video frame can be seen at http://ftp.limeboy.com/images/screen_grab.png (the input was standard colour bars).
I've checked by outputting debug frames to TGA format at various points in the process, and I can confirm that it is all fine and dandy up until the point that libavcodec encodes the frame.
Any assistance most appreciated!
Cheers,
Mike.
OK, this is embarrassing.
Actually, the way I had it set up is correct. Looking through the source code for ffmpeg, it appears that all you have to do to get it to encode 4:2:2 profile and 4:2:2 chroma is to set the incoming pixel format to PIX_FMT_YUV422P.
The cause of the problem? I was watching the video file back on VLC in a virtual machine, which at some stage had changed its video resolution from 32-bit to 16-bit.
That's right! IT changed it. I didn't change it - IT did it! BY ITSELF, YOU HEAR ME!!
Apologies if anyone wasted their time chasing down this non-issue.
I'd like to build a synthesizer for the iPhone. I understand that it's possible to use custom audio units for the iPhone. At first glance, this sounds promising, since there's lots and lots of Audio Unit programming resources available. However, using custom audio units on the iPhone seems a bit tricky ( see: http://lists.apple.com/archives/Coreaudio-api/2008/Nov/msg00262.html)
This seems like the sort of thing that loads of people must be doing, but a simple google search for "iphone audio synthesis" doesn't turn up anything along the lines of a nice and easy tutorial or recommended tool kit.
So, anyone here have experience synthesizing sound on the iPhone? Are custom audio units the way to go, or is there another, simpler approach I should consider?
I'm also investigating this. I think the AudioQueue API is probably the way to go.
Here's as far as I got; it seems to work okay.
File: BleepMachine.h
//
// BleepMachine.h
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include <AudioToolbox/AudioToolbox.h>
// Class to implement sound playback using the AudioQueue APIs.
// Currently just supports playing two sine wave tones, one per
// stereo channel. The sound data is little-endian signed 16-bit @ 44.1kHz
//
class BleepMachine
{
static void staticQueueCallback( void* userData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
BleepMachine* pThis = reinterpret_cast<BleepMachine*> ( userData );
pThis->queueCallback( outAQ, outBuffer );
}
void queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer );
AudioStreamBasicDescription m_outFormat;
AudioQueueRef m_outAQ;
enum
{
kBufferSizeInFrames = 512,
kNumBuffers = 4,
kSampleRate = 44100,
};
AudioQueueBufferRef m_buffers[kNumBuffers];
bool m_isInitialised;
struct Wave
{
Wave(): volume(1.f), phase(0.f), frequency(0.f), fStep(0.f) {}
float volume;
float phase;
float frequency;
float fStep;
};
enum
{
kLeftWave = 0,
kRightWave = 1,
kNumWaves,
};
Wave m_waves[kNumWaves];
public:
BleepMachine();
~BleepMachine();
bool Initialise();
void Shutdown();
bool Start();
bool Stop();
bool SetWave( int id, float frequency, float volume );
};
// Notes by name. Integer value is number of semitones above A.
enum Note
{
A = 0,
Asharp,
B,
C,
Csharp,
D,
Dsharp,
E,
F,
Fsharp,
G,
Gsharp,
Bflat = Asharp,
Dflat = Csharp,
Eflat = Dsharp,
Gflat = Fsharp,
Aflat = Gsharp,
};
// Helper function calculates fundamental frequency for a given note
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave=4 );
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber );
File: BleepMachine.mm
//
// BleepMachine.mm
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include "BleepMachine.h"
void BleepMachine::queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
// Render the wave
// AudioQueueBufferRef is considered "opaque", but it's a reference to
// an AudioQueueBuffer, which is not. All the samples manipulate it
// directly, so I'm not quite sure what they mean by "opaque".
SInt16* coreAudioBuffer = (SInt16*)outBuffer->mAudioData;
// Specify how many bytes we're providing
outBuffer->mAudioDataByteSize = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
// Generate the sine waves as signed 16-bit stereo interleaved (little endian)
float volumeL = m_waves[kLeftWave].volume;
float volumeR = m_waves[kRightWave].volume;
float phaseL = m_waves[kLeftWave].phase;
float phaseR = m_waves[kRightWave].phase;
float fStepL = m_waves[kLeftWave].fStep;
float fStepR = m_waves[kRightWave].fStep;
for( int s=0; s<kBufferSizeInFrames*2; s+=2 )
{
float sampleL = ( volumeL * sinf( phaseL ) );
float sampleR = ( volumeR * sinf( phaseR ) );
short sampleIL = (int)(sampleL * 32767.0);
short sampleIR = (int)(sampleR * 32767.0);
coreAudioBuffer[s] = sampleIL;
coreAudioBuffer[s+1] = sampleIR;
phaseL += fStepL;
phaseR += fStepR;
}
m_waves[kLeftWave].phase = fmodf( phaseL, 2 * M_PI ); // Take modulus to preserve precision
m_waves[kRightWave].phase = fmodf( phaseR, 2 * M_PI );
// Enqueue the buffer
AudioQueueEnqueueBuffer( m_outAQ, outBuffer, 0, NULL );
}
bool BleepMachine::SetWave( int id, float frequency, float volume )
{
if ( ( id < kLeftWave ) || ( id >= kNumWaves ) ) return false;
Wave& wave = m_waves[ id ];
wave.volume = volume;
wave.frequency = frequency;
wave.fStep = 2 * M_PI * frequency / kSampleRate;
return true;
}
bool BleepMachine::Initialise()
{
m_outFormat.mSampleRate = kSampleRate;
m_outFormat.mFormatID = kAudioFormatLinearPCM;
m_outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
m_outFormat.mFramesPerPacket = 1;
m_outFormat.mChannelsPerFrame = 2;
m_outFormat.mBytesPerPacket = m_outFormat.mBytesPerFrame = sizeof(UInt16) * 2;
m_outFormat.mBitsPerChannel = 16;
m_outFormat.mReserved = 0;
OSStatus result = AudioQueueNewOutput(
&m_outFormat,
BleepMachine::staticQueueCallback,
this,
NULL,
NULL,
0,
&m_outAQ
);
if ( result < 0 )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Allocate buffers for the audio
UInt32 bufferSizeBytes = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
for ( int buf=0; buf<kNumBuffers; buf++ )
{
OSStatus result = AudioQueueAllocateBuffer( m_outAQ, bufferSizeBytes, &m_buffers[ buf ] );
if ( result )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Prime the buffers
queueCallback( m_outAQ, m_buffers[ buf ] );
}
m_isInitialised = true;
return true;
}
void BleepMachine::Shutdown()
{
Stop();
if ( m_outAQ )
{
// AudioQueueDispose also chucks any audio buffers it has
AudioQueueDispose( m_outAQ, true );
}
m_isInitialised = false;
}
BleepMachine::BleepMachine()
: m_isInitialised(false), m_outAQ(0)
{
for ( int buf=0; buf<kNumBuffers; buf++ )
{
m_buffers[ buf ] = NULL;
}
}
BleepMachine::~BleepMachine()
{
Shutdown();
}
bool BleepMachine::Start()
{
OSStatus result = AudioQueueSetParameter( m_outAQ, kAudioQueueParam_Volume, 1.0 );
if ( result ) printf( "ERROR: %d\n", (int)result );
// Start the queue
result = AudioQueueStart( m_outAQ, NULL );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
bool BleepMachine::Stop()
{
OSStatus result = AudioQueueStop( m_outAQ, true );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
// A (A4=440)
// A# f(n)=2^(n/12) * r
// B where n = number of semitones
// C and r is the root frequency e.g. 440
// C#
// D frq -> MIDI note number
// D# p = 69 + 12 x log2(f/440)
// E
// F
// F#
// G
// G#
//
// MIDI Note ref: http://www.phys.unsw.edu.au/jw/notes.html
//
// MIDI Note numbers:
// A3 57
// A#3 58
// B3 59
// C4 60 <--
// C#4 61
// D4 62
// D#4 63
// E4 64
// F4 65
// F#4 66
// G4 67
// G#4 68
// A4 69 <--
// A#4 70
// B4 71
// C5 72
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave )
{
semiTones += ( 12 * (octave-4) );
float root = 440.f;
float fn = powf( 2.f, (float)semiTones/12.f ) * root;
return fn;
}
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber )
{
SInt32 semiTones = midiNoteNumber - 69;
return CalculateFrequencyFromNote( semiTones, 4 );
}
//for ( SInt32 midiNote=21; midiNote<=108; ++midiNote )
//{
// printf( "MIDI Note %d: %f Hz \n",(int)midiNote,CalculateFrequencyFromMIDINote( midiNote ) );
//}
Update: Basic usage info
Initialise it somewhere near the start; I'm doing this in initFromNib: in my code.
m_bleepMachine = new BleepMachine;
m_bleepMachine->Initialise();
m_bleepMachine->Start();
Now the sound playback is running, but generating silence.
In your code, call this when you want to change the tone generation
m_bleepMachine->SetWave( ch, frq, vol );
where ch is the channel (0 or 1), frq is the frequency to set in Hz, and vol is the volume (0 = -inf dB, 1 = 0 dB).
At program termination
delete m_bleepMachine;
Since my original post almost a year ago, I've come a long way. After a pretty exhaustive search, I came up with very few high-level synthesis tools suitable for iOS development. There are many which are GPL licensed, but the GPL license is too restrictive for me to feel comfortable using it. LibPD works great, and is what rjdj uses, but I found myself really frustrated by the graphical programming paradigm. JSyn's c-based engine, csyn, is an option, but it requires licensing, and I'm really used to programming with open-source tools. It does look worth a close look though.
In the end, I'm using STK as my basic framework. STK is a very low-level tool, and requires extensive buffer-level programming to get working. This is in contrast to something higher level like PD or SuperCollider, which allows you to simply plug unit generators together and not worry about handling the raw audio data.
Working this way with STK is certainly a bit slower than with a high level tool, but I'm becoming comfortable with it. Especially now that I'm becoming more comfortable with C/C++ programming in general.
There's a new project under way to create a patching-style add-on to Open Frameworks. It's called Cleo, I think, out of the University of Vancouver. It hasn't been released yet, but it looks like a very nice mix of patching-style connection of unit generators in C++ rather than requiring the use of another language. And it's tightly integrated with Open Frameworks, which may or may not be appealing, depending.
So, to answer my original question, first you need to learn how to write to the output buffer. Here's some good sample code for that:
http://atastypixel.com/blog/using-remoteio-audio-unit/
Then you need to do some synthesis to generate the audio data. If you like patching, I wouldn't hesitate to recommend libpd. It seems to work great, and you can work the way you're accustomed to. If you hate graphical patching (like me), your best starting place for now is probably STK. If STK and low-level audio programming seems a bit over your head (like it was for me), just roll up your sleeves, pack a tent, and set up on a bit of a long hike up the learning curve. You'll be a much better programmer for it in the end.
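For what it's worth, the heart of that approach is a render callback that fills the RemoteIO unit's output buffer each time it asks for more audio. A bare-bones sketch, assuming the unit was configured elsewhere for mono, non-interleaved 16-bit PCM at 44.1 kHz (the SineState struct and the callback name are illustrative, not taken from the linked tutorial):

#include <AudioUnit/AudioUnit.h>
#include <math.h>

typedef struct { double phase; double freq; } SineState; // hypothetical state passed via inRefCon

static OSStatus renderSine(void                       *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp       *inTimeStamp,
                           UInt32                      inBusNumber,
                           UInt32                      inNumberFrames,
                           AudioBufferList            *ioData)
{
    SineState *s = (SineState *)inRefCon;
    const double step = 2.0 * M_PI * s->freq / 44100.0;  // assumes a 44.1 kHz stream format
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;   // assumes mono, non-interleaved SInt16
    for (UInt32 i = 0; i < inNumberFrames; ++i)
    {
        out[i] = (SInt16)(sin(s->phase) * 32767.0);
        s->phase = fmod(s->phase + step, 2.0 * M_PI);
    }
    return noErr;
}

// Attached to the RemoteIO unit with kAudioUnitProperty_SetRenderCallback
// (scope Input, element 0), as the linked tutorial describes.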
Another bit of advice I wish I could have given myself a year ago: join Apple's Core Audio mailing list.
============== 2014 Edit ===========
I'm now using (and actively contributing to) the Tonic audio synthesis library. It's awesome, if I don't say so myself.
With the enormous caveat that I have yet to get through all the documentation or finish browsing some classes / sample code, it looks like the fine folks from CCRMA over at Stanford may have put some nice toolkits together for our audio-hacking pleasure. No guarantees these will do exactly what you want, but based on what I know about the original STK, they should do the trick. I'm about to embark on an audio synth app myself, and the more code I can reuse, the better.
Links / descriptions from their site...
MoMu: MoMu is a light-weight software toolkit for creating musical instruments and experiences on mobile devices, and currently supports the iPhone platform (iPhone, iPad, iPod Touch). MoMu provides APIs for real-time full-duplex audio, accelerometer, location, multi-touch, networking (via OpenSoundControl), graphics, and utilities. (yada yada)
• and •
MoMu STK : The MoMu release of the Synthesis Toolkit (STK, originally by Perry R. Cook and Gary P. Scavone) is a lightly modified version of STK 4.4.2, and currently supports the iPhone platform (iPhone, iPad, iPod Touches).
I'm just getting into Audio Unit programming for iPhone to build a synth-like app as well. The Apple guide "Audio Unit Hosting Guide for iOS" seems like a good reference:
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/AudioUnitHostingFundamentals/AudioUnitHostingFundamentals.html#//apple_ref/doc/uid/TP40009492-CH3-SW11
The guide includes links to a couple of sample projects, Audio Mixer (MixerHost) and aurioTouch:
http://developer.apple.com/library/ios/samplecode/MixerHost/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010210
http://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007770
I'm one of the other contributors to Tonic along with morgancodes. For wrangling CoreAudio in a higher-level framework, I can't give enough praise to The Amazing Audio Engine.
We've both used it in tandem with Tonic in a number of projects. It takes so much of the pain out of dealing with CoreAudio directly, letting you focus on the actual content and synthesis instead of the hardware abstraction layer.
Lately I've been using AudioKit.
It's a fresh and well-designed wrapper over CSound, which has been around for ages.
I was using Tonic with openFrameworks and found myself missing programming in Swift.
Although Tonic and openFrameworks are both powerful tools, I've chosen to get in bed with Swift.
PD has a version that runs on the iPhone, used by RjDj. If you are OK with using someone else's app rather than writing your own, you can do quite a bit in an RjDj scene, and there is a set of objects that let you patch it out and test it on a regular PD on your own computer.
I should mention: PD is a visual dataflow programming language; that is to say, it is Turing complete and can be used to develop graphical applications - but if you are going to do anything interesting I would definitely look into best practices for patching.
Last time I checked, you couldn't use custom AUs on iOS in a way that would allow all installed apps to use them (as on Mac OS X).
You could theoretically use a custom AU from inside your iOS app by loading it from the app's bundle and calling the AU's render function directly, but then you could as well add the code directly to your app. Also, I'm pretty sure that loading and calling code that sits in a dynamic library would go against the AppStore policies.
So you will either have to do the processing in your remote IO callback or use the Apple AUs that are preinstalled, within an AUGraph.
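For the AUGraph route, the setup boils down to adding the preinstalled RemoteIO unit as the output node and attaching your own render callback to it. A rough sketch with error handling omitted (MyRenderCallback and myState are hypothetical placeholders for your own code):

#include <AudioToolbox/AudioToolbox.h>

static AUGraph StartRemoteIOGraph(AURenderCallback MyRenderCallback, void *myState)
{
    AUGraph graph;
    AUNode  ioNode;

    // Describe the preinstalled RemoteIO output unit
    AudioComponentDescription ioDesc = {0};
    ioDesc.componentType         = kAudioUnitType_Output;
    ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
    ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphOpen(graph);

    // Feed the RemoteIO node from your own render callback
    AURenderCallbackStruct cb = { MyRenderCallback, myState };
    AUGraphSetNodeInputCallback(graph, ioNode, 0, &cb);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}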