I'm building an iPhone app that generates random guitar music by playing back individual recorded guitar notes in "caf" format. These notes vary in duration from 3 to 11 seconds, depending on the amount of sustain.
I originally used AVAudioPlayer for playback, and in the simulator, at 120 bpm playing 16th notes, it sang beautifully. But on my handset, as soon as I upped the tempo a little over 60 bpm playing just quarter notes, it ran like a dog and wouldn't keep in time. My elation was very short-lived.
To reduce latency, I tried to implement playback via Audio Units using the Apple MixerHost project as a template for an audio engine, but kept getting a bad access error after I bolted it on and connected everything up.
After many hours of it doing my head in, I gave up on that avenue of thought and I bolted on the Novocaine audio engine instead.
I have now run into a brick wall trying to connect it up to my model.
On the most basic level, my model is a Neck object containing an NSDictionary of Note objects.
Each Note object knows what string and fret of the guitar neck it's on and contains its own AVAudioPlayer.
I build a chromatic guitar neck containing either 132 notes (6 strings by 22 frets) or 144 notes (6 strings by 24 frets) depending on the neck size selected in the user preferences.
I use these Notes as my single point of truth so all scalar Notes generated by the music engine are pointers to this chromatic note bucket.
@interface Note : NSObject <NSCopying>
{
    NSString *name;
    AVAudioPlayer *soundFilePlayer;
    int stringNumber;
    int fretNumber;
}
I always start off playback with the root Note or Chord of the selected scale and then generate the note to play next so I am always playing one note behind the generated note. This way, the next Note to play is always queued up ready to go.
Playback control of these Notes is achieved with the following code:
- (void)runMusicGenerator:(NSNumber *)counter
{
    if (self.isRunning) {
        Note *NoteToPlay;

        // pulseRate is the time interval between beats
        // staticNoteLength = 1/4 notes, 1/8th notes, 16th notes, etc.
        float delay = self.pulseRate / [self grabStaticNoteLength];

        // user setting to play single, double or triplet notes.
        if (self.beatCounter == CONST_BEAT_COUNTER_INIT_VAL) {
            NoteToPlay = [self.GuitarNeck generateNoteToPlayNext];
        } else {
            NoteToPlay = [self.GuitarNeck cloneNote:self.GuitarNeck.NoteToPlayNow];
        }

        self.GuitarNeck.NoteToPlayNow = NoteToPlay;
        [self callOutNoteToPlay];
        [self performSelector:@selector(runDrill:) withObject:NoteToPlay afterDelay:delay];
    }
}
- (Note *)generateNoteToPlayNext
{
    if ((self.musicPaused) || (self.musicStopped)) {
        // grab the root note on the string to resume
        self.NoteToPlayNow = [self grabRootNoteForString];

        // reset the flags
        self.musicPaused = NO;
        self.musicStopped = NO;
    } else {
        // Set NoteRingingOut to NoteToPlayNow
        self.NoteRingingOut = self.NoteToPlayNow;

        // Set NoteToPlayNow to NoteToPlayNext
        self.NoteToPlayNow = self.NoteToPlayNext;

        if (!self.NoteToPlayNow) {
            self.NoteToPlayNow = [self grabRootNoteForString];

            // now prep the note's audio player for playback
            [self.NoteToPlayNow.soundFilePlayer prepareToPlay];
        }
    }

    // Load NoteToPlayNext
    self.NoteToPlayNext = [self generateRandomNote];

    return self.NoteToPlayNow;
}
- (void)callOutNoteToPlay
{
    self.GuitarNeck.NoteToPlayNow.soundFilePlayer.delegate = (id)self;
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setVolume:1.0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setCurrentTime:0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer play];
}
Each Note's AVAudioPlayer is loaded as follows:
- (AVAudioPlayer *)buildStringNotePlayer:(NSString *)nameOfNote
{
    NSString *soundFileName = @"S";
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", stringNumber]];
    soundFileName = [soundFileName stringByAppendingString:@"F"];

    if (fretNumber < 10) {
        soundFileName = [soundFileName stringByAppendingString:@"0"];
    }
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", fretNumber]];

    NSString *soundPath = [[NSBundle mainBundle] pathForResource:soundFileName ofType:@"caf"];
    NSURL *fileURL = [NSURL fileURLWithPath:soundPath];
    AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];

    return audioPlayer;
}
Here is where I come a cropper.
According to the Novocaine Github page ...
Playing Audio
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setOutputBlock:^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels) {
// All you have to do is put your audio into "audioToPlay".
}];
But in the downloaded project, you use the following code to load the audio ...
// AUDIO FILE READING OHHH YEAHHHH
// ========================================
NSURL *inputFileURL = [[NSBundle mainBundle] URLForResource:@"TLC" withExtension:@"mp3"];

fileReader = [[AudioFileReader alloc]
              initWithAudioFileURL:inputFileURL
              samplingRate:audioManager.samplingRate
              numChannels:audioManager.numOutputChannels];

[fileReader play];
fileReader.currentTime = 30.0;

[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
    [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
    NSLog(@"Time: %f", fileReader.currentTime);
}];
Here is where I really start to get confused because the first method uses a float and the second one uses a URL.
How do you pass a "caf" file to a float? I am not sure how to implement Novocaine - it is still fuzzy in my head.
My questions that I hope someone can help me with are as follows ...
Are Novocaine objects similar to AVAudioPlayer objects, just more versatile and tweaked to the max for minimum latency? i.e. self contained audio playing (/recording/generating) units?
Can I use Novocaine in my model as it is? i.e. 1 Novocaine object per chromatic note, or should I have 1 Novocaine object that contains all the chromatic Notes? Or do I just store the URL in the Note instead and pass that to a Novocaine player?
How can I put my audio into "audioToPlay" when my audio is a "caf" file and "audioToPlay" takes a float?
If I include and declare a Novocaine property in Note.m, do I then have to rename the file to Note.mm in order to use the Novocaine object?
How do I play multiple Novocaine objects concurrently in order to reproduce chords and intervals?
Can I loop a Novocaine object's playback?
Can I set the playback length of a note? i.e. play a 10 sec note for only 1 sec?
Can I modify the above code to use Novocaine?
Is the method I am using for runMusicGenerator the correct one to use in order to maintain a tempo that is up to professional standards?
Novocaine makes your life easier by eliminating the need to set up the RemoteIO AudioUnit manually. This includes painfully filling a bunch of CoreAudio structs and providing a bunch of callbacks, such as this audio process callback:
static OSStatus PerformThru(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData);
Instead, Novocaine handles that in its implementation and then calls your block, which you set like this:
[audioManager setOutputBlock: ^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels){} ];
Whatever you write to audioToPlay gets played.
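For example, here is a minimal sketch of my own (not from the Novocaine docs) that fills audioToPlay with a quiet 440 Hz sine, just to show that whatever you write into the buffer is what comes out of the speaker. It reuses the audioManager from the snippet above and assumes the buffer is interleaved:

__block float phase = 0.0f;
float sampleRate = audioManager.samplingRate;

[audioManager setOutputBlock:^(float *audioToPlay, UInt32 numFrames, UInt32 numChannels) {
    for (UInt32 frame = 0; frame < numFrames; ++frame) {
        float sample = 0.25f * sinf(phase);                      // quiet 440 Hz tone
        for (UInt32 ch = 0; ch < numChannels; ++ch) {
            audioToPlay[frame * numChannels + ch] = sample;      // interleaved output
        }
        phase += 2.0f * (float)M_PI * 440.0f / sampleRate;
        if (phase > 2.0f * (float)M_PI) phase -= 2.0f * (float)M_PI;
    }
}];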
Novocaine sets up the RemoteIO AudioUnit for you. This is a low-level CoreAudio API, different from the high-level AVFoundation, and very low-latency as expected. You are right in that Novocaine is self-contained. You can record, generate, and process audio in realtime.
Novocaine is a singleton; you cannot have multiple Novocaine instances. One way to do it is to store your guitar sound/sounds in a separate class or array, and then write a bunch of methods that use Novocaine to play them.
You have a bunch of options. You can use Novocaine's AudioFileReader to play your .caf file for you. You do this by allocating an AudioFileReader and then passing the URL of the .caf file you want to play, as per example code. You then stick [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels] in your block, as per example code. Each time your block is called, AudioFileReader grabs and buffers a chunk of audio from disk and puts it in audioToPlay which subsequently gets played. There are some disadvantages with this. For short sounds (such as your guitar sound I'm assuming) repeatedly calling retrieveFreshAudio is a performance hit. It is generally a better idea (for short sounds) to perform a synchronous, sequential read of the entire file into memory. Novocaine does not provide a way to do this (yet). You will have to use ExtAudioFileServices to do this. The Apple example project MixerHost details how to do this.
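To illustrate the read-it-all-into-memory route, here is a rough sketch of my own (not from Novocaine or MixerHost) that loads an entire .caf into a mono float buffer using ExtAudioFile. loadCafIntoMemory is a name I made up, the client format is assumed to be 44.1 kHz mono float, and most error handling is omitted; it needs <AudioToolbox/ExtendedAudioFile.h>:

static float *loadCafIntoMemory(NSURL *fileURL, UInt64 *outFrameCount)
{
    ExtAudioFileRef audioFile = NULL;
    if (ExtAudioFileOpenURL((__bridge CFURLRef)fileURL, &audioFile) != noErr) return NULL;

    // Ask ExtAudioFile to hand back mono, 32-bit float samples at 44.1 kHz.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 1;
    clientFormat.mBitsPerChannel   = 32;
    clientFormat.mBytesPerFrame    = sizeof(float);
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = sizeof(float);
    ExtAudioFileSetProperty(audioFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    // Length in frames (reported in the file's own sample rate; close enough for this sketch).
    SInt64 fileFrames = 0;
    UInt32 propSize = sizeof(fileFrames);
    ExtAudioFileGetProperty(audioFile, kExtAudioFileProperty_FileLengthFrames,
                            &propSize, &fileFrames);

    float *samples = (float *)calloc((size_t)fileFrames, sizeof(float));
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = (UInt32)(fileFrames * sizeof(float));
    bufferList.mBuffers[0].mData = samples;

    UInt32 framesToRead = (UInt32)fileFrames;
    ExtAudioFileRead(audioFile, &framesToRead, &bufferList);
    ExtAudioFileDispose(audioFile);

    if (outFrameCount) *outFrameCount = framesToRead;
    return samples;   // caller owns this buffer; free() it when done
}

You would call something like this once per note at startup, keep the returned buffers around, and copy from them inside the output block instead of hitting the disk on every callback.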
If you are using AudioFileReader, yes. You only rename to .mm when you are #importing Obj-C++ headers or #including C++ headers.
As mentioned earlier, only one Novocaine instance is allowed. You can achieve polyphony by mixing multiple audio sources, which is simply adding buffers together. If you have made multiple versions of the same guitar sound at different pitches, just read them all into memory and mix away. If you only want to have one guitar sound, then you have to change the playback rate, in realtime, of however many notes you are playing and then mix down.
Novocaine is agnostic to what you are actually playing and does not care how long you are playing a sample for. In order to loop a sound, you have to maintain a count of how many samples have elapsed, check if you are at the end of your sound, and then set that count back to 0.
Yes. Assuming a 44.1k sample rate, 1 sec of audio = 44100 samples. You would then reset your count when it reaches 44100.
Yes. It looks something like this. Assuming you have 4 guitar sounds which are mono and longer than 1 second, and you have read them into memory as float *guitarC, *guitarE, *guitarG, *guitarB; (jazzy CMaj7 chord, w00t), and want to mix them down for 1 second and loop that back in mono:
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels){
    static int count = 0;
    // Assumes mono output (numChannels == 1).
    for (int i = 0; i < numFrames; ++i) {
        // Mix each sample of each sound together. Since the result can be 4x louder,
        // divide the total amplitude by 4. You could use vDSP_vadd from the
        // Accelerate framework for added performance.
        data[i] = (guitarC[count] + guitarE[count] + guitarG[count] + guitarB[count]) * 0.25f;
        if (++count >= 44100) count = 0; // Loops the mix every 1 sec
    }
}];
Not exactly. Using performSelector or any mechanism scheduled on a runloop or thread is not guaranteed to be precise. You might experience timing irregularities when the CPU load fluctuates, for example. Use the audio block if you want sample accurate timing.
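To make that concrete, here is a hypothetical sketch (my own illustration, not code from the question) of sample-accurate scheduling inside the output block. noteBuffers, noteLengths, bpm and pickNextNoteIndex() are placeholders for your own preloaded mono buffers and music generator, and the snippet lives in whatever method sets up the audio:

__block UInt32 samplesIntoBeat = 0;
__block UInt32 samplesIntoNote = 0;
__block int    currentNote     = -1;
UInt32 samplesPerBeat = (UInt32)(audioManager.samplingRate * 60.0 / bpm);

[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels) {
    for (UInt32 frame = 0; frame < numFrames; ++frame) {
        if (samplesIntoBeat == 0) {              // a new beat starts on this exact sample
            currentNote = pickNextNoteIndex();   // hypothetical: your generator picks the note
            samplesIntoNote = 0;
        }
        float sample = 0.0f;
        if (currentNote >= 0 && samplesIntoNote < noteLengths[currentNote]) {
            sample = noteBuffers[currentNote][samplesIntoNote++];
        }
        for (UInt32 ch = 0; ch < numChannels; ++ch) {
            data[frame * numChannels + ch] = sample;   // mono source copied to each channel
        }
        if (++samplesIntoBeat >= samplesPerBeat) {
            samplesIntoBeat = 0;
        }
    }
}];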
I want to know how to move the camera nearer and farther (by increasing and decreasing the value of eyeZ, e.g. [self.camera setEyeX:0 eyeY:0 eyeZ:1]; and [self.camera setEyeX:0 eyeY:0 eyeZ:180];) with animation for smoothness, as changing it directly produces jerky zooming.
My suggestion is creating your own subclass of CCActionInterval, say CCCameraZoomAnimation and override its update method. The main advantage of having an action, aside from being able to control the camera movement finely, is also the possibility of using this action through CCEaseOut/CCEaseIn (etc.) to obtain nice graphical effects.
CCCameraZoomAnimation would have the node whose camera you want to modify as a target and another parameter to the constructor specifying the final Z value.
@interface CCCameraZoomAnimation : CCActionInterval <NSCopying>
{
    float _finalZ;
    float _initialZ;
    float _delta;
}
/** creates the action */
+(id) actionWithDuration:(ccTime)t finalZ:(float)finalZ;
/** initializes the action */
-(id) initWithDuration:(ccTime)t finalZ:(float)finalZ;
@end
The update: method is called with an argument t that represents the normalized progress of the action (0 at the start, 1 at the end), which lets you easily calculate the current Z position:
-(void) update: (ccTime) t
{
    CCCamera *camera = [(CCNode *)_target camera];

    // Get the camera's current values.
    float centerX, centerY, centerZ;
    float eyeX, eyeY, eyeZ;
    [camera centerX:&centerX centerY:&centerY centerZ:&centerZ];
    [camera eyeX:&eyeX eyeY:&eyeY eyeZ:&eyeZ];

    // Interpolate towards the final Z using the normalized progress t.
    eyeZ = _initialZ + _delta * t;

    // Set values (the center is left untouched).
    [camera setCenterX:centerX centerY:centerY centerZ:centerZ];
    [camera setEyeX:eyeX eyeY:eyeY eyeZ:eyeZ];
}
You would also need to implement copyWithZone:
-(id) copyWithZone: (NSZone*) zone
{
CCAction *copy = [[[self class] allocWithZone: zone] initWithDuration: [self duration] finalZ:_finalZ];
return copy;
}
and implement startWithTarget: to capture the starting value:
-(void) startWithTarget:(CCNode *)aTarget
{
    [super startWithTarget:aTarget];

    // grab the current value of eyeZ as the starting point
    float eyeX, eyeY, eyeZ;
    [aTarget.camera eyeX:&eyeX eyeY:&eyeY eyeZ:&eyeZ];

    _initialZ = eyeZ;
    _delta    = _finalZ - _initialZ;
}
Nothing more, nothing less.
Sorry if copy/paste/modify produced some bugs, but I hope that the general idea is clear.
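For completeness, a hypothetical usage sketch (myNode, the duration and the final Z value are placeholders) that combines the action with one of the easing wrappers mentioned above:

// Zoom the node's camera out to eyeZ = 180 over 2 seconds with ease-in/ease-out.
CCCameraZoomAnimation *zoom = [CCCameraZoomAnimation actionWithDuration:2.0f finalZ:180.0f];
[myNode runAction:[CCEaseInOut actionWithAction:zoom rate:2.0f]];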
If you increase the Z by 180 right from the off, you are bound to get a jerky animation. Try running this in an animation context loop; increasing the value over a period of time will allow you to have a smooth 'zoom'.
I am currently working on an audio DSP app. The project requires direct access to and modification of audio data. Right now I can successfully access and modify the raw audio data using AudioQueue, but I encounter errors during playback: the output audio after any modification turns out to be noise.
In short, the code is something like this:
(Modified from Speakhere sample code. The rest remains unchanged.)
void AQPlayer::AQBufferCallback(void * inUserData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inCompleteAQBuffer)
{
    AQPlayer *THIS = (AQPlayer *)inUserData;
    if (THIS->mIsDone) return;

    UInt32 numBytes;
    UInt32 nPackets = THIS->GetNumPacketsToRead();
    OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(),
                                           false,
                                           &numBytes,
                                           inCompleteAQBuffer->mPacketDescriptions,
                                           THIS->GetCurrentPacket(),
                                           &nPackets,
                                           inCompleteAQBuffer->mAudioData);
    if (result)
        printf("AudioFileReadPackets failed: %d", (int)result);

    if (nPackets > 0) {
        inCompleteAQBuffer->mAudioDataByteSize = numBytes;
        inCompleteAQBuffer->mPacketDescriptionCount = nPackets;

        // My modification starts from here
        // Modifying audio data
        SInt16 *testBuffer = (SInt16 *)inCompleteAQBuffer->mAudioData;
        for (int i = 0; i < (inCompleteAQBuffer->mAudioDataByteSize) / sizeof(SInt16); i++)
        {
            //printf("before modification %d", (int)*testBuffer);
            *testBuffer = (SInt16)(*testBuffer / 2); // Say some simple modification
            //printf("after modification %d", (int)*testBuffer);
            testBuffer++;
        }
        AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
    }
}
During debugging, the data in the buffer is displayed as expected, but the actual output is nothing but noise.
Here are some other strange behaviors of the code that are driving the whole team crazy:
If there is no change to the data (add/subtract 0, multiply by 1), or the whole buffer is assigned to a constant (say 0, so the audio is muted), the playback behaves normally (of course!). But if I do anything more than that, it still turns out to be noise.
If I hardcode a single tone as test audio, the output noise spreads into the other channel as well.
So where is the bug in this code? Or if I am on the wrong track, what is the correct approach to modify the audio data and perform playback CORRECTLY? Any insight will be sincerely appreciated.
Thank you very much :-)
Cheers,
Manca
Are you SURE the sample format is SInt16? And how many channels are there? You seem to treat the audio as a single-channel short stream, but suppose the format is actually dual-channel Float32 or so and you do the modifications there; then the effect would be exactly as you describe, including the noise on the other channel.
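If it helps, here is the kind of check I mean. The helper name is mine; pass in whatever AudioStreamBasicDescription your AQPlayer keeps (SpeakHere stores one as a CAStreamBasicDescription member):

// A quick sanity check before casting mAudioData to SInt16*.
static void LogSampleFormat(const AudioStreamBasicDescription *fmt)
{
    printf("sampleRate=%.0f channels=%u bits=%u float=%s\n",
           fmt->mSampleRate,
           (unsigned)fmt->mChannelsPerFrame,
           (unsigned)fmt->mBitsPerChannel,
           (fmt->mFormatFlags & kAudioFormatFlagIsFloat) ? "yes" : "no");
}
// Only treat the buffer as interleaved SInt16 when bits == 16, the float flag
// is off, and you step across mChannelsPerFrame samples per frame.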
I have analyzed "SpeakHere" sample code of iPhone dev forum.
There is a code for starting AudioQueue as following..
AudioTimeStamp ats = {0};
AudioQueueStart(mQueue, &ats);
But I have no idea that how to start middle of file.
I changed AudioTimeStamp with various values include negative. But it does not works.
Please let me know your great opinion. Thanks.
AudioQueueStart is not the function that will help you do that. The time parameter there is like a delay; if you pass NULL, it means the queue will start ASAP.
You have to pass the packet you want to play from and enqueue it. To calculate that, you have to know how many packets your file has and the (relative) position you want to play from.
Here are instructions for how to do it in SpeakHere.
In the new (objc++ based) SpeakHere
In AQPlayer.h add a private instance variable:
UInt64 mPacketCount;
and a public method:
void SetQueuePosition(float position) { mCurrentPacket = mPacketCount*position; };
In AQPlayer.mm inside AQPlayer::SetupNewQueue() before mIsInitialized = true; add:
// get the total number of packets
UInt32 sizeOfPacketsCount = sizeof(mPacketCount);
XThrowIfError (AudioFileGetProperty (mAudioFile, kAudioFilePropertyAudioDataPacketCount, &sizeOfPacketsCount, &mPacketCount), "get packet count");
Now you have to use it (In SpeakHereControler.mm add this and link it to a UISlider for example):
- (IBAction) sliderValueChanged:(UISlider *) sender
{
    float value = [sender value];
    player->SetQueuePosition(value);
}
Why this works:
The playback callback (an AudioQueueOutputCallback) that feeds the queue with new packets, which in the new SpeakHere is void AQPlayer::AQBufferCallback( , , ), calls AudioFileReadPackets to read and enqueue a certain part of the file. That read starts at mCurrentPacket, which is exactly what we just adjusted in the methods above, hence the part you wanted to play is read, enqueued and finally played :)
Just for historical reasons :)
In the old (objc based) SpeakHere
In AudioPlayer.h add an instance variable:
UInt64 totalFrames;
AudioPlayer.m inside
- (void) openPlaybackFile: (CFURLRef) soundFile
add:
UInt32 sizeOfTotalFrames = sizeof(UInt64);
AudioFileGetProperty (
[self audioFileID],
kAudioFilePropertyAudioDataPacketCount,
&sizeOfTotalFrames,
&totalFrames
);
Then add a method to AudioPlayer.h and .m
- (void) setRelativePlaybackPosition: (float) position
{
startingPacketNumber = totalFrames * position;
}
Now you have to use it (In AudioViewController add this and link it to a UISlider for example):
- (IBAction) setPlaybackPosition: (UISlider *) sender
{
float value = [sender value];
[audioPlayer setRelativePlaybackPosition: value];
}
When value is 0 you will play from the beginning, 0.5 from the middle, etc.
Hope this helps.
I'd like to build a synthesizer for the iPhone. I understand that it's possible to use custom audio units for the iPhone. At first glance, this sounds promising, since there's lots and lots of Audio Unit programming resources available. However, using custom audio units on the iPhone seems a bit tricky ( see: http://lists.apple.com/archives/Coreaudio-api/2008/Nov/msg00262.html)
This seems like the sort of thing that loads of people must be doing, but a simple google search for "iphone audio synthesis" doesn't turn up anything along the lines of a nice and easy tutorial or recommended tool kit.
So, anyone here have experience synthesizing sound on the iPhone? Are custom audio units the way to go, or is there another, simpler approach I should consider?
I'm also investigating this. I think the AudioQueue API is probably the way to go.
Here's as far as I got, seems to work okay.
File: BleepMachine.h
//
// BleepMachine.h
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include <AudioToolbox/AudioToolbox.h>
// Class to implement sound playback using the AudioQueue APIs.
// Currently just supports playing two sine wave tones, one per
// stereo channel. The sound data is little-endian signed 16-bit @ 44.1kHz
//
class BleepMachine
{
static void staticQueueCallback( void* userData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
BleepMachine* pThis = reinterpret_cast<BleepMachine*> ( userData );
pThis->queueCallback( outAQ, outBuffer );
}
void queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer );
AudioStreamBasicDescription m_outFormat;
AudioQueueRef m_outAQ;
enum
{
kBufferSizeInFrames = 512,
kNumBuffers = 4,
kSampleRate = 44100,
};
AudioQueueBufferRef m_buffers[kNumBuffers];
bool m_isInitialised;
struct Wave
{
Wave(): volume(1.f), phase(0.f), frequency(0.f), fStep(0.f) {}
float volume;
float phase;
float frequency;
float fStep;
};
enum
{
kLeftWave = 0,
kRightWave = 1,
kNumWaves,
};
Wave m_waves[kNumWaves];
public:
BleepMachine();
~BleepMachine();
bool Initialise();
void Shutdown();
bool Start();
bool Stop();
bool SetWave( int id, float frequency, float volume );
};
// Notes by name. Integer value is number of semitones above A.
enum Note
{
A = 0,
Asharp,
B,
C,
Csharp,
D,
Dsharp,
E,
F,
Fsharp,
G,
Gsharp,
Bflat = Asharp,
Dflat = Csharp,
Eflat = Dsharp,
Gflat = Fsharp,
Aflat = Gsharp,
};
// Helper function calculates fundamental frequency for a given note
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave=4 );
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber );
File:BleepMachine.mm
//
// BleepMachine.mm
// WgHeroPrototype
//
// Created by Andy Buchanan on 05/01/2010.
// Copyright 2010 Andy Buchanan. All rights reserved.
//
#include "BleepMachine.h"
void BleepMachine::queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
// Render the wave
// AudioQueueBufferRef is considered "opaque", but it's a reference to
// an AudioQueueBuffer, which is not. All the samples manipulate it
// directly, so I'm not quite sure what they mean by calling it opaque.
SInt16* coreAudioBuffer = (SInt16*)outBuffer->mAudioData;
// Specify how many bytes we're providing
outBuffer->mAudioDataByteSize = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
// Generate the sine waves as Signed 16-Bit Stereo interleaved ( Little Endian )
float volumeL = m_waves[kLeftWave].volume;
float volumeR = m_waves[kRightWave].volume;
float phaseL = m_waves[kLeftWave].phase;
float phaseR = m_waves[kRightWave].phase;
float fStepL = m_waves[kLeftWave].fStep;
float fStepR = m_waves[kRightWave].fStep;
for( int s=0; s<kBufferSizeInFrames*2; s+=2 )
{
float sampleL = ( volumeL * sinf( phaseL ) );
float sampleR = ( volumeR * sinf( phaseR ) );
short sampleIL = (int)(sampleL * 32767.0);
short sampleIR = (int)(sampleR * 32767.0);
coreAudioBuffer[s] = sampleIL;
coreAudioBuffer[s+1] = sampleIR;
phaseL += fStepL;
phaseR += fStepR;
}
m_waves[kLeftWave].phase = fmodf( phaseL, 2 * M_PI ); // Take modulus to preserve precision
m_waves[kRightWave].phase = fmodf( phaseR, 2 * M_PI );
// Enqueue the buffer
AudioQueueEnqueueBuffer( m_outAQ, outBuffer, 0, NULL );
}
bool BleepMachine::SetWave( int id, float frequency, float volume )
{
if ( ( id < kLeftWave ) || ( id >= kNumWaves ) ) return false;
Wave& wave = m_waves[ id ];
wave.volume = volume;
wave.frequency = frequency;
wave.fStep = 2 * M_PI * frequency / kSampleRate;
return true;
}
bool BleepMachine::Initialise()
{
m_outFormat.mSampleRate = kSampleRate;
m_outFormat.mFormatID = kAudioFormatLinearPCM;
m_outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
m_outFormat.mFramesPerPacket = 1;
m_outFormat.mChannelsPerFrame = 2;
m_outFormat.mBytesPerPacket = m_outFormat.mBytesPerFrame = sizeof(UInt16) * 2;
m_outFormat.mBitsPerChannel = 16;
m_outFormat.mReserved = 0;
OSStatus result = AudioQueueNewOutput(
&m_outFormat,
BleepMachine::staticQueueCallback,
this,
NULL,
NULL,
0,
&m_outAQ
);
if ( result < 0 )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Allocate buffers for the audio
UInt32 bufferSizeBytes = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;
for ( int buf=0; buf<kNumBuffers; buf++ )
{
OSStatus result = AudioQueueAllocateBuffer( m_outAQ, bufferSizeBytes, &m_buffers[ buf ] );
if ( result )
{
printf( "ERROR: %d\n", (int)result );
return false;
}
// Prime the buffers
queueCallback( m_outAQ, m_buffers[ buf ] );
}
m_isInitialised = true;
return true;
}
void BleepMachine::Shutdown()
{
Stop();
if ( m_outAQ )
{
// AudioQueueDispose also chucks any audio buffers it has
AudioQueueDispose( m_outAQ, true );
}
m_isInitialised = false;
}
BleepMachine::BleepMachine()
: m_isInitialised(false), m_outAQ(0)
{
for ( int buf=0; buf<kNumBuffers; buf++ )
{
m_buffers[ buf ] = NULL;
}
}
BleepMachine::~BleepMachine()
{
Shutdown();
}
bool BleepMachine::Start()
{
OSStatus result = AudioQueueSetParameter( m_outAQ, kAudioQueueParam_Volume, 1.0 );
if ( result ) printf( "ERROR: %d\n", (int)result );
// Start the queue
result = AudioQueueStart( m_outAQ, NULL );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
bool BleepMachine::Stop()
{
OSStatus result = AudioQueueStop( m_outAQ, true );
if ( result ) printf( "ERROR: %d\n", (int)result );
return true;
}
// A (A4=440)
// A# f(n)=2^(n/12) * r
// B where n = number of semitones
// C and r is the root frequency e.g. 440
// C#
// D frq -> MIDI note number
// D# p = 69 + 12 x log2(f/440)
// E
// F
// F#
// G
// G#
//
// MIDI Note ref: http://www.phys.unsw.edu.au/jw/notes.html
//
// MIDI Note numbers:
// A3 57
// A#3 58
// B3 59
// C4 60 <--
// C#4 61
// D4 62
// D#4 63
// E4 64
// F4 65
// F#4 66
// G4 67
// G#4 68
// A4 69 <--
// A#4 70
// B4 71
// C5 72
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave )
{
semiTones += ( 12 * (octave-4) );
float root = 440.f;
float fn = powf( 2.f, (float)semiTones/12.f ) * root;
return fn;
}
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber )
{
SInt32 semiTones = midiNoteNumber - 69;
return CalculateFrequencyFromNote( semiTones, 4 );
}
//for ( SInt32 midiNote=21; midiNote<=108; ++midiNote )
//{
// printf( "MIDI Note %d: %f Hz \n",(int)midiNote,CalculateFrequencyFromMIDINote( midiNote ) );
//}
Update: Basic usage info
Initialise. Somewhere near the start; I'm using initFromNib: in my code.
m_bleepMachine = new BleepMachine;
m_bleepMachine->Initialise();
m_bleepMachine->Start();
Now the sound playback is running, but generating silence.
In your code, call this when you want to change the tone generation
m_bleepMachine->SetWave( ch, frq, vol );
where ch is the channel ( 0 or 1 )
where frq is the frequency to set in Hz
where vol is the volume ( 0 = -Inf dB, 1 = 0 dB )
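For example (using the Note enum and frequency helper declared above; the particular notes and volume are arbitrary):

// Put a C on the left channel and an E on the right, both at 80% volume.
m_bleepMachine->SetWave( 0, CalculateFrequencyFromNote( C, 4 ), 0.8f );
m_bleepMachine->SetWave( 1, CalculateFrequencyFromNote( E, 4 ), 0.8f );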
At program termination
delete m_bleepMachine;
Since my original post almost a year ago, I've come a long way. After a pretty exhaustive search, I came up with very few high-level synthesis tools suitable for iOS development. There are many which are GPL-licensed, but the GPL license is too restrictive for me to feel comfortable using it. libpd works great, and is what rjdj uses, but I found myself really frustrated by the graphical programming paradigm. JSyn's C-based engine, CSyn, is an option, but it requires licensing, and I'm really used to programming with open-source tools. It does look worth a close look though.
In the end, I'm using STK as my basic framework. STK is a very low-level tool, and requires extensive buffer-level programming to get working. This is in contrast to something higher level like PD or SuperCollider, which allows you to simply plug unit generators together and not worry about handling the raw audio data.
Working this way with STK is certainly a bit slower than with a high level tool, but I'm becoming comfortable with it. Especially now that I'm becoming more comfortable with C/C++ programming in general.
There's a new project under way to create a patching-style add on to Open Frameworks. It's called Cleo I think, out of the University of Vancouver. It hasn't been released yet, but it looks like a very nice mix of patching-style connection of unit generators in C++ rather than requiring the use of another language. And it's tightly integrated with Open Frameworks, which may be appealing or not, depending.
So, to answer my original question, first you need to learn how to write to the output buffer. Here's some good sample code for that:
http://atastypixel.com/blog/using-remoteio-audio-unit/
Then you need to do some synthesis to generate the audio data. If you like patching, I wouldn't hesitate to recommend libpd. It seems to work great, and you can work the way you're accustomed to. If you hate graphical patching (like me), your best starting place for now is probably STK. If STK and low-level audio programming seems a bit over your head (like it was for me), just roll up your sleeves, pack a tent, and set up on a bit of a long hike up the learning curve. You'll be a much better programmer for it in the end.
Another bit of advice I wish I could have given myself a year ago: join Apple's Core Audio mailing list.
============== 2014 Edit ===========
I'm now using (and actively contributing to) the Tonic audio synthesis library. It's awesome, if I don't say so myself.
With the enormous caveat that I have yet to get through all the documentation or finish browsing some classes / sample code, it looks like the fine folks from CCRMA over at Stanford may have put some nice toolkits together for our audio hacking pleasure. No guarantees these will do exactly what you want, but based on what I know about the original STK, they should do the trick. I'm about to embark on an audio synth app myself, and the more code I can reuse, the better.
Links / descriptions from their site...
MoMu : MoMu is a light-weight software toolkit for creating musical instruments and experiences on mobile devices, and currently supports the iPhone platform (iPhone, iPad, iPod Touch). MoMu provides APIs for real-time full-duplex audio, accelerometer, location, multi-touch, networking (via OpenSoundControl), graphics, and utilities. (yada yada)
• and •
MoMu STK : The MoMu release of the Synthesis Toolkit (STK, originally by Perry R. Cook and Gary P. Scavone) is a lightly modified version of STK 4.4.2, and currently supports the iPhone platform (iPhone, iPad, iPod Touches).
I'm just getting into Audio Unit programming for iPhone to build a synth-like app as well. The Apple guide "Audio Unit Hosting Guide for iOS" seems like a good reference:
http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/AudioUnitHostingFundamentals/AudioUnitHostingFundamentals.html#//apple_ref/doc/uid/TP40009492-CH3-SW11
The guide includes links to a couple sample projects. Audio Mixer (MixerHost) and aurioTouch:
http://developer.apple.com/library/ios/samplecode/MixerHost/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010210
http://developer.apple.com/library/ios/samplecode/aurioTouch/Introduction/Intro.html#//apple_ref/doc/uid/DTS40007770
I'm one of the other contributors to Tonic along with morgancodes. For wrangling CoreAudio in a higher-level framework, I can't give enough praise to The Amazing Audio Engine.
We've both used it in tandem with Tonic in a number of projects. It takes so much of the pain out of dealing with CoreAudio directly, letting you focus on the actual content and synthesis instead of the hardware abstraction layer.
Lately I've been using AudioKit.
It's a fresh and well-designed wrapper over Csound, which has been around for ages.
I was using Tonic with openFrameworks and found myself missing programming in Swift.
Although Tonic and openFrameworks are both powerful tools,
I've chosen to get in bed with Swift.
PD has a version that runs on the iPhone, used by RjDj. If you are OK with using someone else's app rather than writing your own, you can do quite a bit in an RjDj scene, and there is a set of objects that lets you patch it out and test it in a regular PD on your own computer.
I should mention: PD is a visual dataflow programming language; that is to say, it is Turing complete and can be used to develop graphical applications. But if you are going to do anything interesting, I would definitely look into best practices for patching.
Last time I checked, you couldn't use custom AUs on iOS in a way that would allow all installed apps to use them (like on Mac OS X).
You could theoretically use a custom AU from inside your iOS app by loading it from the app's bundle and calling the AU's render function directly, but then you could as well add the code directly to your app. Also, I'm pretty sure that loading and calling code that sits in a dynamic library would go against the AppStore policies.
So you will either have to do the processing in your remote IO callback or use the Apple AUs that are preinstalled, within an AUGraph.
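For the second option, here is a rough sketch of my own (error checking omitted) of building an AUGraph around one of the preinstalled Apple AUs, the multichannel mixer used by MixerHost, feeding RemoteIO:

// Assumes <AudioToolbox/AudioToolbox.h> is imported.
static AUGraph BuildPlaybackGraph(void)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc = {0};
    ioDesc.componentType         = kAudioUnitType_Output;
    ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
    ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponentDescription mixerDesc = {0};
    mixerDesc.componentType         = kAudioUnitType_Mixer;
    mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
    mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AUNode ioNode, mixerNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphOpen(graph);

    // Your render callbacks would feed the mixer's input buses
    // (via AUGraphSetNodeInputCallback); the mixer's output goes to RemoteIO.
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}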