I'm using libogg and libspeex. I've managed to add those libraries to my iPhone Xcode project and encode my voice with Speex. The problem is that I cannot figure out how to pack those audio packets into an Ogg container. Does someone know what a packet of that kind should look like, or have reference code I can use?
I know in Java it's pretty simple (you have a dedicated function for that) but not on iOS. Please help.
UPD 10.09.2013: Please see the demo project, which basically takes PCM audio data from a WAV container, encodes it with the Speex codec and packs everything into an Ogg container. Maybe later I'll create a full-fledged library/framework for all those Speex routines on iOS.
UPD 16.02.2015: The demo project is republished on GitHub.
I have also been experimenting with Speex on iOS recently, with varying success, but here is something I found. Basically, if you want to pack some Speex-encoded voice into an Ogg file, you need to follow three steps (assuming libogg and libspeex are already compiled and added to the project).
1) Add the first ogg page with Speex header; libspeex provides built-in tools for that (the code below is from my project, not optimal, just for the sake of example):
// create speex header
SpeexHeader spxHeader;
SpeexMode spxMode = speex_wb_mode;
int spxRate = 16000;
int spxNumberOfChannels = 1;
speex_init_header(&spxHeader, spxRate, spxNumberOfChannels, &spxMode);
// set audio and ogg packing parameters
spxHeader.vbr = 0;
spxHeader.bitrate = 16;
spxHeader.frame_size = 320;
spxHeader.frames_per_packet = 1;
// wrap speex header in ogg packet
int oggPacketSize;
_oggPacket.packet = (unsigned char *)speex_header_to_packet(&spxHeader, &oggPacketSize);
_oggPacket.bytes = oggPacketSize;
_oggPacket.b_o_s = 1;
_oggPacket.e_o_s = 0;
_oggPacket.granulepos = 0;
_oggPacket.packetno = 0;
// submit the packet to the ogg streaming layer
ogg_stream_packetin(&_oggStreamState, &_oggPacket);
free(_oggPacket.packet);
// form an ogg page
ogg_stream_flush(&_oggStreamState, &_oggPage);
// write the page to file
[_oggFile appendBytes:&_oggStreamState.header length:_oggStreamState.header_fill];
[_oggFile appendBytes:_oggStreamState.body_data length:_oggStreamState.body_fill];
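Note that the snippet above assumes the ogg stream state was already initialized somewhere; if not, it has to be done once before the first packet is submitted, roughly like this (the serial number just has to be unique within the file):
// initialize the ogg stream state once before submitting any packets
ogg_stream_init(&_oggStreamState, rand());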
2) Add the second ogg page with Vorbis comment:
// form any comment you like (I use a custom struct with all the fields)
vorbisCommentStruct *vorbisComment = calloc(sizeof(vorbisCommentStruct), sizeof(char));
...
// wrap Vorbis comment in ogg packet
_oggPacket.packet = (unsigned char *)vorbisComment;
_oggPacket.bytes = vorbisCommentLength;
_oggPacket.b_o_s = 0;
_oggPacket.e_o_s = 0;
_oggPacket.granulepos = 0;
_oggPacket.packetno = _oggStreamState.packetno;
// the rest should be same as in previous step
...
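If you'd rather not define your own struct, the comment packet can also be built by hand; it is just a little-endian vendor string length, the vendor string itself, and a user comment count (zero here). A rough sketch, with an arbitrary vendor string:
// minimal hand-built comment packet (sketch):
// [4-byte LE vendor length][vendor string][4-byte LE user comment count = 0]
const char *vendor = "Encoded with Speex on iOS"; // arbitrary vendor string
int vendorLength = (int)strlen(vendor);
int vorbisCommentLength = 4 + vendorLength + 4;
unsigned char *vorbisComment = calloc(vorbisCommentLength, sizeof(char));
memcpy(vorbisComment, &vendorLength, 4);          // vendor length (iOS is little-endian)
memcpy(vorbisComment + 4, vendor, vendorLength);  // vendor string
// the last 4 bytes stay zero: no user comments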
3) Add subsequent ogg pages with your speex-encoded audio in a similar manner.
First of all, decide how many frames of audio data you want to have on every ogg page (0-255; I chose 79 quite arbitrarily):
_framesPerOggPage = 79;
Then for each frame:
// calculate current granule position of audio data within ogg file
int curGranulePos = _spxSamplesPerFrame * _oggTotalFramesCount;
// wrap audio data in ogg packet
oggPacket.packet = (unsigned char *)spxFrame;
oggPacket.bytes = spxFrameLength;
oggPacket.granulepos = curGranulePos;
oggPacket.packetno = _oggStreamState.packetno;
oggPacket.b_o_s = 0;
oggPacket.e_o_s = 0;
// submit packets to streaming layer until their number reaches _framesPerOggPage
...
// if we've reached this limit, we're ready to create another ogg page
ogg_stream_flush(&_oggStreamState, &_oggPage);
[_oggFile appendBytes:&_oggStreamState.header length:_oggStreamState.header_fill];
[_oggFile appendBytes:_oggStreamState.body_data length:_oggStreamState.body_fill];
// finally, if this is the last frame, flush all remaining packets,
// which have been created but not packed into a page, to the last page
// (don't forget to set oggPacket.e_o_s to 1 for this frame)
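For reference, a sketch of what the body of that loop might look like (oggPacket is filled as above; _framesInCurrentPage is a hypothetical counter, and the page is written out via the documented ogg_page fields):
// submit the frame's packet and flush a page every _framesPerOggPage frames
ogg_stream_packetin(&_oggStreamState, &oggPacket);
_oggTotalFramesCount++;
if (++_framesInCurrentPage == _framesPerOggPage || oggPacket.e_o_s)
{
    ogg_stream_flush(&_oggStreamState, &_oggPage);
    [_oggFile appendBytes:_oggPage.header length:_oggPage.header_len];
    [_oggFile appendBytes:_oggPage.body length:_oggPage.body_len];
    _framesInCurrentPage = 0;
}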
That's it. Hope it helps. Any corrections or questions are welcome.
I decode AMR-NB to PCM, then enqueue the PCM buffer (I'm sure the PCM data is correct), but no sound is heard. While feeding the buffers, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong with how I'm using OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
StartAudioPlay();
while(1)
{
//decode AMR to PCM
/* Convert to little endian and write to wav */
//write buffer to buffer queue
AudioBufferWrite(littleendian, 320);
}
}
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
//do nothing
}
void AudioBufferWrite(const void* buffer, int size)
{
(*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size );
}
// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample )
{
SLresult result;
// configure audio source
SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
SLDataSource audioSrc = {&loc_bufq, &format_pcm};
// configure audio sink
SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};
// create audio player
const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
3, ids, req);
// realize the player
result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);
// get the play interface
result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);
// get the buffer queue interface
result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE,
&gBQBufferQueue);
// register callback on the buffer queue
result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL/*soundMix*/);
// get the effect send interface
result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND,
&gBQEffectSend);
// set the player's state to playing
result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING );
}
I'm not entirely sure, but I think you're correct in that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on any device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback to use JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
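If it helps, one way to detect the emulator from native code is to check the ro.kernel.qemu system property (a common heuristic rather than an official API); a rough sketch:
#include <sys/system_properties.h>

// returns non-zero when running in the Android emulator (heuristic)
static int runningInEmulator(void)
{
    char value[PROP_VALUE_MAX];
    // the emulator sets ro.kernel.qemu=1; real devices normally don't define it
    int len = __system_property_get("ro.kernel.qemu", value);
    return (len > 0 && value[0] == '1');
}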
I am trying to make an app that plays an audio stream using ffmpeg and libmms.
I can connect to the mms server, get the stream, and decode the audio frames to raw PCM using a suitable codec.
However, I don't know what to do next.
I think I have to use AudioToolbox/AudioToolbox.h and create an audio queue,
but when I hand the decoded buffer's memory to an audio queue buffer and play it, I only hear white noise.
Here is my code.
What am I missing?
Any comments or hints are very much appreciated.
Thanks very much.
while(av_read_frame(pFormatCtx, &pkt)>=0)
{
int pkt_decoded_len = 0;
int frame_decoded_len;
int decode_buff_remain=AVCODEC_MAX_AUDIO_FRAME_SIZE * 5;
if(pkt.stream_index==audiostream)
{
frame_decoded_len=decode_buff_remain;
int16_t *decode_buff_ptr = decode_buffer;
int decoded_tot_len=0;
pkt_decoded_len = avcodec_decode_audio2(pCodecCtx, decode_buff_ptr, &frame_decoded_len,
pkt.data, pkt.size);
if (pkt_decoded_len <0) break;
AudioQueueAllocateBuffer(audioQueue, kBufferSize, &buffers[i]);
AQOutputCallback(self, audioQueue, buffers[i], pkt_decoded_len);
if(i == 1){
AudioQueueSetParameter(audioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(audioQueue, NULL);
}
i++;
}
}
void AQOutputCallback(void *inData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, int copySize)
{
mmsDemoViewController *staticApp = (mmsDemoViewController *)inData;
[staticApp handleBufferCompleteForQueue:inAQ buffer:inBuffer size:copySize];
}
- (void)handleBufferCompleteForQueue:(AudioQueueRef)inAQ
buffer:(AudioQueueBufferRef)inBuffer
size:(int)copySize
{
inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
memcpy((char*)inBuffer->mAudioData, (const char*)decode_buffer, copySize);
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
You're calling AQOutputCallback incorrectly. You don't have to call that method yourself at all;
it will be called automatically when the audio queue has finished with a buffer.
Also, the prototype of your AQOutputCallback is wrong, so as your code stands I don't think it will ever be called automatically.
The callback has to match this signature:
typedef void (*AudioQueueOutputCallback) (
void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer
);
like this:
void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer);
You should also set up the audio session when your app starts. The important references are here.
By the way, what audio format are you trying to decode?
AudioStreamPacketDescription is important if the audio has a variable number of frames per packet; with one frame per packet it is not significant.
So what you do next is: set up the audio session, get the raw audio frames from the decoder, and put those frames into the audio queue buffers.
Then let the system call your callback to refill the empty buffers, instead of pushing them yourself.
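To make that concrete, here is a rough sketch of the pull-style shape this usually takes; the format values are placeholders and should match what your decoder actually outputs:
// the queue calls this whenever a buffer is free; refill and re-enqueue it here
static void AudioQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    // copy the next chunk of decoded PCM into inBuffer->mAudioData,
    // set inBuffer->mAudioDataByteSize accordingly, then:
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

AudioStreamBasicDescription fmt = {0};
fmt.mSampleRate       = 44100;                  // placeholder: your decoder's sample rate
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
fmt.mChannelsPerFrame = 2;                      // placeholder: your decoder's channel count
fmt.mBitsPerChannel   = 16;
fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(SInt16);
fmt.mFramesPerPacket  = 1;                      // 1 for uncompressed PCM
fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

AudioQueueRef queue;
AudioQueueNewOutput(&fmt, AudioQueueCallback, NULL /* user data */, NULL, NULL, 0, &queue);
// allocate a few buffers, prime them by calling the callback yourself once each,
// then AudioQueueStart(queue, NULL) and let the queue drive the callback from there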
I have trouble getting sound output on my iPhone experiment and I'm out of ideas.
Here is my callback to fill the Audio Queue buffer
void AudioOutputCallback(void *user, AudioQueueRef refQueue, AudioQueueBufferRef inBuffer)
{
NSLog(#"callback called");
inBuffer->mAudioDataByteSize = 1024;
gme_play((Music_Emu*)user, 1024, (short *)inBuffer->mAudioData);
AudioQueueEnqueueBuffer(refQueue, inBuffer, 0, NULL);
}
I set up the audio queue using the following snippet:
// Create stream description
AudioStreamBasicDescription streamDescription;
streamDescription.mSampleRate = 44100;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamDescription.mBytesPerPacket = 1024;
streamDescription.mFramesPerPacket = 1024 / 4;
streamDescription.mBytesPerFrame = 2 * sizeof(short);
streamDescription.mChannelsPerFrame = 2;
streamDescription.mBitsPerChannel = 16;
AudioQueueNewOutput(&streamDescription, AudioOutputCallback, theEmu, NULL, NULL, 0, &theAudioQueue);
OSStatus errorCode = AudioQueueAllocateBuffer(theAudioQueue, 1024, &someBuffer);
if( errorCode )
{
NSLog(#"Cannot allocate buffer");
}
AudioOutputCallback(theEmu, theAudioQueue, someBuffer);
AudioQueueSetParameter(theAudioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(theAudioQueue, NULL);
The library I'm using outputs linear PCM, 16-bit, 44.1 kHz.
I usually use 3 buffers. You need at least 2, because while one is being played, the other is being filled by your code. With only one buffer there isn't enough time to refill and re-enqueue the same buffer and keep playback seamless, so the queue probably just stops because it ran out of buffers.
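In other words, something like this at startup, reusing the names from the question:
// allocate and prime several buffers before starting the queue, so one can play
// while the others are being refilled
enum { kNumBuffers = 3 };
AudioQueueBufferRef buffers[kNumBuffers];
for (int i = 0; i < kNumBuffers; i++)
{
    AudioQueueAllocateBuffer(theAudioQueue, 1024, &buffers[i]);
    // fill each buffer once by calling the callback by hand
    AudioOutputCallback(theEmu, theAudioQueue, buffers[i]);
}
AudioQueueSetParameter(theAudioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(theAudioQueue, NULL);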
I'd like to build a synthesizer for the iPhone. I understand that it's possible to use custom audio units on the iPhone. At first glance, this sounds promising, since there are lots and lots of Audio Unit programming resources available. However, using custom audio units on the iPhone seems a bit tricky (see: http://lists.apple.com/archives/Coreaudio-api/2008/Nov/msg00262.html).
This seems like the sort of thing that loads of people must be doing, but a simple Google search for "iphone audio synthesis" doesn't turn up anything along the lines of a nice and easy tutorial or recommended toolkit.
So, anyone here have experience synthesizing sound on the iPhone? Are custom audio units the way to go, or is there another, simpler approach I should consider?
I'm also investigating this. I think the AudioQueue API is probably the way to go.
Here's as far as I got; it seems to work okay.
File: BleepMachine.h
//
// BleepMachine.h
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include <AudioToolbox/AudioToolbox.h>
// Class to implement sound playback using the AudioQueue APIs.
// Currently just supports playing two sine wave tones, one per
// stereo channel. The sound data is little-endian signed 16-bit @ 44.1KHz
//
class BleepMachine
{
static void staticQueueCallback( void* userData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
BleepMachine* pThis = reinterpret_cast<BleepMachine*> ( userData );
pThis->queueCallback( outAQ, outBuffer );
}
void queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer );
AudioStreamBasicDescription m_outFormat;
AudioQueueRef m_outAQ;
enum
{
kBufferSizeInFrames = 512,
kNumBuffers = 4,
kSampleRate = 44100,
};
AudioQueueBufferRef m_buffers[kNumBuffers];
bool m_isInitialised;
struct Wave
{
Wave(): volume(1.f), phase(0.f), frequency(0.f), fStep(0.f) {}
float volume;
float phase;
float frequency;
float fStep;
};
enum
{
kLeftWave = 0,
kRightWave = 1,
kNumWaves,
};
Wave m_waves[kNumWaves];
public:
BleepMachine();
~BleepMachine();
bool Initialise();
void Shutdown();
bool Start();
bool Stop();
bool SetWave( int id, float frequency, float volume );
};
// Notes by name. Integer value is number of semitones above A.
enum Note
{
A = 0,
Asharp,
B,
C,
Csharp,
D,
Dsharp,
E,
F,
Fsharp,
G,
Gsharp,
Bflat = Asharp,
Dflat = Csharp,
Eflat = Dsharp,
Gflat = Fsharp,
Aflat = Gsharp,
};
// Helper function calculates fundamental frequency for a given note
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave=4 );
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber );
File: BleepMachine.mm
//
// BleepMachine.mm
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include "BleepMachine.h"
void BleepMachine::queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
// Render the wave
// AudioQueueBufferRef is considered "opaque", but it's a reference to
// an AudioQueueBuffer which is not.
// All the sample code manipulates it directly, so I'm not quite sure
// what they mean by calling it opaque...
SInt16* coreAudioBuffer = (SInt16*)outBuffer->mAudioData;
// Specify how many bytes we're providing
outBuffer->mAudioDataByteSize = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
// Generate the sine waves to Signed 16-Bit Stereo interleaved ( Little Endian )
float volumeL = m_waves[kLeftWave].volume;
float volumeR = m_waves[kRightWave].volume;
float phaseL = m_waves[kLeftWave].phase;
float phaseR = m_waves[kRightWave].phase;
float fStepL = m_waves[kLeftWave].fStep;
float fStepR = m_waves[kRightWave].fStep;
for( int s=0; s<kBufferSizeInFrames*2; s+=2 )
{
float sampleL = ( volumeL * sinf( phaseL ) );
float sampleR = ( volumeR * sinf( phaseR ) );
short sampleIL = (int)(sampleL * 32767.0);
short sampleIR = (int)(sampleR * 32767.0);
coreAudioBuffer[s] = sampleIL;
coreAudioBuffer[s+1] = sampleIR;
phaseL += fStepL;
phaseR += fStepR;
}
m_waves[kLeftWave].phase = fmodf( phaseL, 2 * M_PI ); // Take modulus to preserve precision
m_waves[kRightWave].phase = fmodf( phaseR, 2 * M_PI );
// Enqueue the buffer
AudioQueueEnqueueBuffer( m_outAQ, outBuffer, 0, NULL );
}
bool BleepMachine::SetWave( int id, float frequency, float volume )
{
if ( ( id < kLeftWave ) || ( id >= kNumWaves ) ) return false;
Wave& wave = m_waves[ id ];
wave.volume = volume;
wave.frequency = frequency;
wave.fStep = 2 * M_PI * frequency / kSampleRate;
return true;
}
bool BleepMachine::Initialise()
{
m_outFormat.mSampleRate = kSampleRate;
m_outFormat.mFormatID = kAudioFormatLinearPCM;
m_outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
m_outFormat.mFramesPerPacket = 1;
m_outFormat.mChannelsPerFrame = 2;
m_outFormat.mBytesPerPacket = m_outFormat.mBytesPerFrame = sizeof(UInt16) * 2;
m_outFormat.mBitsPerChannel = 16;
m_outFormat.mReserved = 0;
OSStatus result = AudioQueueNewOutput(
&m_outFormat,
BleepMachine::staticQueueCallback,
this,
NULL,
NULL,
0,
&m_outAQ
);
if ( result < 0 )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Allocate buffers for the audio
UInt32 bufferSizeBytes = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
for ( int buf=0; buf<kNumBuffers; buf++ )
{
OSStatus result = AudioQueueAllocateBuffer( m_outAQ, bufferSizeBytes, &m_buffers[ buf ] );
if ( result )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Prime the buffers
queueCallback( m_outAQ, m_buffers[ buf ] );
}
m_isInitialised = true;
return true;
}
void BleepMachine::Shutdown()
{
Stop();
if ( m_outAQ )
{
// AudioQueueDispose also chucks any audio buffers it has
AudioQueueDispose( m_outAQ, true );
}
m_isInitialised = false;
}
BleepMachine::BleepMachine()
: m_isInitialised(false), m_outAQ(0)
{
for ( int buf=0; buf<kNumBuffers; buf++ )
{
m_buffers[ buf ] = NULL;
}
}
BleepMachine::~BleepMachine()
{
Shutdown();
}
bool BleepMachine::Start()
{
OSStatus result = AudioQueueSetParameter( m_outAQ, kAudioQueueParam_Volume, 1.0 );
if ( result ) printf( "ERROR: %d\n", (int)result );
// Start the queue
result = AudioQueueStart( m_outAQ, NULL );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
bool BleepMachine::Stop()
{
OSStatus result = AudioQueueStop( m_outAQ, true );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
// A (A4=440)
// A# f(n)=2^(n/12) * r
// B where n = number of semitones
// C and r is the root frequency e.g. 440
// C#
// D frq -> MIDI note number
// D# p = 69 + 12 x log2(f/440)
// E
// F
// F#
// G
// G#
//
// MIDI Note ref: http://www.phys.unsw.edu.au/jw/notes.html
//
// MIDI Note numbers:
// A3 57
// A#3 58
// B3 59
// C4 60 <--
// C#4 61
// D4 62
// D#4 63
// E4 64
// F4 65
// F#4 66
// G4 67
// G#4 68
// A4 69 <--
// A#4 70
// B4 71
// C5 72
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave )
{
semiTones += ( 12 * (octave-4) );
float root = 440.f;
float fn = powf( 2.f, (float)semiTones/12.f ) * root;
return fn;
}
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber )
{
SInt32 semiTones = midiNoteNumber - 69;
return CalculateFrequencyFromNote( semiTones, 4 );
}
//for ( SInt32 midiNote=21; midiNote<=108; ++midiNote )
//{
// printf( "MIDI Note %d: %f Hz \n",(int)midiNote,CalculateFrequencyFromMIDINote( midiNote ) );
//}
Update: Basic usage info
Initialise. Somewhere near the start; I'm doing this in initFromNib: in my code:
m_bleepMachine = new BleepMachine;
m_bleepMachine->Initialise();
m_bleepMachine->Start();
Now the sound playback is running, but generating silence.
In your code, call this when you want to change the tone generation
m_bleepMachine->SetWave( ch, frq, vol );
where ch is the channel ( 0 or 1 )
where frq is the frequency to set in Hz
where vol is the volume ( 0 = -inf dB, 1 = 0 dB )
At program termination
delete m_bleepMachine;
Since my original post almost a year ago, I've come a long way. After a pretty exhaustive search, I came up with very few high-level synthesis tools suitable for iOS development. There are many that are GPL-licensed, but the GPL is too restrictive for me to feel comfortable using it. libpd works great, and is what RjDj uses, but I found myself really frustrated by the graphical programming paradigm. JSyn's C-based engine, CSyn, is an option, but it requires licensing, and I'm really used to programming with open-source tools. It does look worth a close look, though.
In the end, I'm using STK as my basic framework. STK is a very low-level tool, and requires extensive buffer-level programming to get working. This is in contrast to something higher level like PD or SuperCollider, which allows you to simply plug unit generators together and not worry about handling the raw audio data.
Working this way with STK is certainly a bit slower than with a high level tool, but I'm becoming comfortable with it. Especially now that I'm becoming more comfortable with C/C++ programming in general.
There's a new project under way to create a patching-style add-on to openFrameworks. It's called Cleo, I think, out of the University of Vancouver. It hasn't been released yet, but it looks like a very nice mix of patching-style connection of unit generators in C++, rather than requiring the use of another language. And it's tightly integrated with openFrameworks, which may or may not be appealing, depending.
So, to answer my original question, first you need to learn how to write to the output buffer. Here's some good sample code for that:
http://atastypixel.com/blog/using-remoteio-audio-unit/
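The heart of that approach is a render callback that fills the buffers the system hands you. A minimal sketch of the shape, just writing a 440 Hz sine wave, assuming the RemoteIO unit was configured for 16-bit signed mono PCM at 44.1 kHz:
// RemoteIO render callback synthesizing a sine wave (assumes 16-bit signed mono PCM)
static OSStatus renderSine(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    static float phase = 0.f;
    const float phaseStep = 2.f * M_PI * 440.f / 44100.f; // 440 Hz at 44.1 kHz
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++)
    {
        out[i] = (SInt16)(sinf(phase) * 32767.f);
        phase = fmodf(phase + phaseStep, 2.f * M_PI);
    }
    return noErr;
}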
Then you need to do some synthesis to generate the audio data. If you like patching, I wouldn't hesitate to recommend libpd. It seems to work great, and you can work the way you're accustomed to. If you hate graphical patching (like me), your best starting place for now is probably STK. If STK and low-level audio programming seems a bit over your head (like it was for me), just roll up your sleeves, pack a tent, and set up on a bit of a long hike up the learning curve. You'll be a much better programmer for it in the end.
Another bit of advice I wish I could have given myself a year ago: join Apple's Core Audio mailing list.
============== 2014 Edit ===========
I'm now using (and actively contributing to) the Tonic audio synthesis library. It's awesome, if I don't say so myself.
With the enormous caveat that I have yet to get through all the documentation or finish browsing some of the classes / sample code, it looks like the fine folks from CCRMA over at Stanford may have put some nice toolkits together for our audio hacking pleasure. No guarantees these will do exactly what you want, but based on what I know about the original STK, they should do the trick. I'm about to embark on an audio synth app myself, and the more code I can reuse, the better.
Links / descriptions from their site...
MoMu: MoMu is a lightweight software toolkit for creating musical instruments and experiences on mobile devices, and currently supports the iPhone platform (iPhone, iPad, iPod Touch). MoMu provides APIs for real-time full-duplex audio, accelerometer, location, multi-touch, networking (via OpenSoundControl), graphics, and utilities. (yada yada)
• and •
MoMu STK : The MoMu release of the Synthesis Toolkit (STK, originally by Perry R. Cook and Gary P. Scavone) is a lightly modified version of STK 4.4.2, and currently supports the iPhone platform (iPhone, iPad, iPod Touches).
I'm just getting into Audio Unit programming for iPhone to build a synth-like app as well. The Apple guide "Audio Unit Hosting Guide for iOS" seems like a good reference:
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/AudioUnitHostingFundamentals/AudioUnitHostingFundamentals.html#//apple_ref/doc/uid/TP40009492-CH3-SW11
The guide includes links to a couple sample projects. Audio Mixer (MixerHost) and aurioTouch:
http://developer.apple.com/library/ios/samplecode/MixerHost/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010210
http://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007770
I'm one of the other contributors to Tonic along with morgancodes. For wrangling CoreAudio in a higher-level framework, I can't give enough praise to The Amazing Audio Engine.
We've both used it in tandem with Tonic in a number of projects. It takes so much of the pain out of dealing with CoreAudio directly, letting you focus on the actual content and synthesis instead of the hardware abstraction layer.
Lately I've been using AudioKit.
It's a fresh and well-designed wrapper over Csound, which has been around for ages.
I was using Tonic with openFrameworks, and I found myself missing programming in Swift.
Although Tonic and openFrameworks are both powerful tools, I've chosen to get in bed with Swift.
PD has a version that runs on the iPhone, used by RjDj. If you are OK with using someone else's app rather than writing your own, you can do quite a bit in an RjDj scene, and there is a set of objects that let you patch it out and test it on a regular PD install on your own computer.
I should mention: PD is a visual dataflow programming language, that is to say, it is Turing complete and can be used to develop graphical applications, but if you are going to do anything interesting I would definitely look into best practices for patching.
Last time I checked, you couldn't use custom AUs on iOS in a way that would allow all installed apps to use them (like on Mac OS X).
You could theoretically use a custom AU from inside your iOS app by loading it from the app's bundle and calling the AU's render function directly, but then you might as well add the code directly to your app. Also, I'm pretty sure that loading and calling code that sits in a dynamic library would go against the App Store policies.
So you will either have to do the processing in your Remote IO callback, or use the Apple AUs that are preinstalled, within an AUGraph.
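For the second option the skeleton is fairly small; here's a sketch (error checking omitted) of an AUGraph containing just the RemoteIO unit, with a render callback of your own (myRenderCallback stands for whatever render function you write) attached to its input:
// minimal AUGraph: one RemoteIO node with a render callback feeding its input
AUGraph graph;
AUNode ioNode;
AudioComponentDescription ioDesc = {0};
ioDesc.componentType         = kAudioUnitType_Output;
ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

NewAUGraph(&graph);
AUGraphAddNode(graph, &ioDesc, &ioNode);
AUGraphOpen(graph);

// attach your processing/synthesis callback to the RemoteIO input (bus 0)
AURenderCallbackStruct cb = { myRenderCallback, NULL /* user data */ };
AUGraphSetNodeInputCallback(graph, ioNode, 0, &cb);

AUGraphInitialize(graph);
AUGraphStart(graph);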