Is kAudioFormatFlagIsFloat supported on iPhoneOS?

I am writing an iPhone app that records and plays audio simultaneously using the I/O audio unit as per Apple's recommendations.
I want to apply some sound effects (reverb, etc) on the recorded audio before playing it back. For these effects to work well, I need the samples to be floating point numbers, rather than integers. It seems this should be possible, by creating an AudioStreamBasicDescription with kAudioFormatFlagIsFloat set on mFormatFlags. This is what my code looks like:
AudioStreamBasicDescription streamDescription;
streamDescription.mSampleRate = 44100.0;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat;
streamDescription.mBitsPerChannel = 32;
streamDescription.mBytesPerFrame = 4;
streamDescription.mBytesPerPacket = 4;
streamDescription.mChannelsPerFrame = 1;
streamDescription.mFramesPerPacket = 1;
streamDescription.mReserved = 0;
OSStatus status;
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
if (status != noErr)
    fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned status %ld\n", status);
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription));
if (status != noErr)
    fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned status %ld\n", status);
However, when I run this (on an iPhone 3GS running iPhoneOS 3.1.3), I get this:
AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned error -10868
AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned error -10868
(-10868 is the value of kAudioUnitErr_FormatNotSupported)
I didn't find anything of value in Apple's documentation, apart from a recommendation to stick to 16-bit little-endian integers. However, the aurioTouch example project contains at least some support code related to kAudioFormatFlagIsFloat.
So, is my stream description incorrect, or is kAudioFormatFlagIsFloat simply not supported on iPhoneOS?

It's not supported, as far as I know. You can pretty easily convert to floats, though, using AudioConverter. I do this conversion (both ways) in real time to use the Accelerate framework with iOS audio. (Note: this code is copied and pasted from more modular code, so there may be some minor typos.)
First, you'll need the AudioStreamBasicDescription from the input. Say
AudioStreamBasicDescription aBasicDescription = {0};
aBasicDescription.mSampleRate = self.samplerate;
aBasicDescription.mFormatID = kAudioFormatLinearPCM;
aBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
aBasicDescription.mFramesPerPacket = 1;
aBasicDescription.mChannelsPerFrame = 1;
aBasicDescription.mBitsPerChannel = 8 * sizeof(SInt16);
aBasicDescription.mBytesPerPacket = sizeof(SInt16) * aBasicDescription.mFramesPerPacket;
aBasicDescription.mBytesPerFrame = sizeof(SInt16) * aBasicDescription.mChannelsPerFrame;
Then, generate a corresponding AudioStreamBasicDescription for float.
AudioStreamBasicDescription floatDesc = {0};
floatDesc.mFormatID = kAudioFormatLinearPCM;
floatDesc.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
floatDesc.mBitsPerChannel = 8 * sizeof(float);
floatDesc.mFramesPerPacket = 1;
floatDesc.mChannelsPerFrame = 1;
floatDesc.mBytesPerPacket = sizeof(float) * floatDesc.mFramesPerPacket;
floatDesc.mBytesPerFrame = sizeof(float) * floatDesc.mChannelsPerFrame;
floatDesc.mSampleRate = self.samplerate;
Make some buffers.
UInt32 intSize = inNumberFrames * sizeof(SInt16);
UInt32 floatSize = inNumberFrames * sizeof(float);
float *dataBuffer = (float *)calloc(inNumberFrames, sizeof(float));
Then convert. (ioData is your AudioBufferList containing the int audio)
AudioConverterRef converter;
OSStatus err = noErr;
err = AudioConverterNew(&aBasicDescription, &floatDesc, &converter);
//check for error here in "real" code
err = AudioConverterConvertBuffer(converter, intSize, ioData->mBuffers[0].mData, &floatSize, dataBuffer);
//check for error here in "real" code
//do stuff to dataBuffer, which now contains floats
//convert the floats back by running the conversion the other way
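For instance, the reverse pass might look like this (a hedged sketch reusing the descriptions above; in production you would create both converters once and reuse them across callbacks rather than rebuilding them per buffer):
AudioConverterRef backConverter;
err = AudioConverterNew(&floatDesc, &aBasicDescription, &backConverter);
//check for error here in "real" code
UInt32 outIntSize = intSize;
err = AudioConverterConvertBuffer(backConverter, floatSize, dataBuffer, &outIntSize, ioData->mBuffers[0].mData);
//check for error here in "real" code
AudioConverterDispose(backConverter);
AudioConverterDispose(converter);
free(dataBuffer);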

I'm doing something unrelated to AudioUnits but I am using AudioStreamBasicDescription on iOS. I was able to use float samples by specifying:
dstFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;
The book Learning Core Audio: A Hands-on Guide to Audio Programming for Mac and iOS was helpful for this.

It is supported.
The problem is you must also set kAudioFormatFlagIsNonInterleaved on mFormatFlags. If you don't do this when setting kAudioFormatFlagIsFloat, you will get a format error.
So, you want to do something like this when preparing your AudioStreamBasicDescription:
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat |
kAudioFormatFlagIsNonInterleaved;
As for why iOS requires this, I'm not sure - I only stumbled across it via trial and error.
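For reference, a minimal sketch of the question's description with that change applied (mono, 32-bit float; every other field is carried over from the question and is an assumption, not a verified requirement):
streamDescription.mSampleRate = 44100.0;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
streamDescription.mBitsPerChannel = 8 * sizeof(Float32);
streamDescription.mChannelsPerFrame = 1;
streamDescription.mFramesPerPacket = 1;
// non-interleaved: each AudioBuffer carries one channel, so the
// per-frame and per-packet sizes cover a single channel only
streamDescription.mBytesPerFrame = sizeof(Float32);
streamDescription.mBytesPerPacket = sizeof(Float32);
streamDescription.mReserved = 0;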

From the Core Audio docs:
kAudioFormatFlagIsFloat
Set for floating point, clear for integer.
Available in iPhone OS 2.0 and later.
Declared in CoreAudioTypes.h.
I don't know enough about your stream to comment on its [in]correctness.

You can obtain an interleaved float RemoteIO with the following ASBD setup:
// STEREO_CHANNEL = 2, defaultSampleRate = 44100
AudioStreamBasicDescription const audioDescription = {
    .mSampleRate = defaultSampleRate,
    .mFormatID = kAudioFormatLinearPCM,
    .mFormatFlags = kAudioFormatFlagIsFloat,
    .mBytesPerPacket = STEREO_CHANNEL * sizeof(float),
    .mFramesPerPacket = 1,
    .mBytesPerFrame = STEREO_CHANNEL * sizeof(float),
    .mChannelsPerFrame = STEREO_CHANNEL,
    .mBitsPerChannel = 8 * sizeof(float),
    .mReserved = 0
};
This worked for me.
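If it helps, a sketch of applying that description to a RemoteIO unit, using the usual element numbering (element 0 input scope for audio you supply for playback, element 1 output scope for captured mic audio); error handling omitted:
OSStatus status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input, 0,  // playback data you render
                                       &audioDescription, sizeof(audioDescription));
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output, 1,          // captured mic data
                              &audioDescription, sizeof(audioDescription));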

Related

iOS Audio Unit + Speex encoding is distorted

I am working on an iOS project that needs to encode and decode Speex audio using a remoteIO audio unit as input / output.
The problem I am having is although speex doesn't print any errors, the audio I get is somewhat recognizable as voice but very distorted, sort of sounds like the gain was just cranked up in a robotic way.
Here are the encode and decode functions (input to encode is 320 bytes of signed integers from the audio unit render function; input to decode is 62 bytes of compressed data):
#define AUDIO_QUALITY 10
#define FRAME_SIZE 160
#define COMP_FRAME_SIZE 62
char *encodeSpeexWithBuffer(spx_int16_t *buffer, int *insize) {
    SpeexBits bits;
    void *enc_state;
    char *outputBuffer = (char *)malloc(200);
    speex_bits_init(&bits);
    enc_state = speex_encoder_init(&speex_nb_mode);
    int quality = AUDIO_QUALITY;
    speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &quality);
    speex_bits_reset(&bits);
    speex_encode_int(enc_state, buffer, &bits);
    *insize = speex_bits_write(&bits, outputBuffer, 200);
    speex_bits_destroy(&bits);
    speex_encoder_destroy(enc_state);
    return outputBuffer;
}

short *decodeSpeexWithBuffer(char *buffer) {
    SpeexBits bits;
    void *dec_state;
    speex_bits_init(&bits);
    dec_state = speex_decoder_init(&speex_nb_mode);
    short *outTemp = (short *)malloc(FRAME_SIZE * 2);
    speex_bits_read_from(&bits, buffer, COMP_FRAME_SIZE);
    speex_decode_int(dec_state, &bits, outTemp);
    speex_decoder_destroy(dec_state);
    speex_bits_destroy(&bits);
    return outTemp;
}
And the audio unit format:
// Describe format
audioFormat.mSampleRate = 8000.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
No errors are reported anywhere and I have confirmed that the Audio Unit is processing at a sample rate of 8000
After a few days of going crazy over this I finally figured it out. The trick with Speex is that you must initialize one SpeexBits struct and one encoder state and use them throughout the entire session. Because I was recreating them for every chunk of the encode, the results sounded strange.
Once I moved:
speex_bits_init(&bits);
enc_state = speex_encoder_init(&speex_nb_mode);
out of the while loop, everything worked great.
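In other words, something along these lines (a sketch with hypothetical function names; the point is that the SpeexBits and encoder state live for the whole session):
// One-time setup before the encode loop starts
static SpeexBits bits;
static void *enc_state;

void speexEncoderSetup(void) {
    speex_bits_init(&bits);
    enc_state = speex_encoder_init(&speex_nb_mode);
    int quality = AUDIO_QUALITY;
    speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &quality);
}

// Per-frame encode: reset the bits, but reuse the same state
int speexEncodeFrame(spx_int16_t *buffer, char *out, int maxBytes) {
    speex_bits_reset(&bits);
    speex_encode_int(enc_state, buffer, &bits);
    return speex_bits_write(&bits, out, maxBytes);
}

// One-time teardown when the session ends
void speexEncoderTeardown(void) {
    speex_bits_destroy(&bits);
    speex_encoder_destroy(enc_state);
}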

ExtAudioFileWrite to m4a/aac failing on dual-core devices (iPad 2, iPhone 4S)

I wrote a loop to encode PCM audio data generated by my app to AAC using Extended Audio File Services. The encoding takes place in a background thread, synchronously, and not in real time.
The encoding works flawlessly on the iPad 1 and iPhone 3GS/4 on both iOS 4 and 5. However, on dual-core devices (iPhone 4S, iPad 2) the third call to ExtAudioFileWrite crashes the encoding thread with no stack trace and no error code.
Here is the code in question:
The data formats
AudioStreamBasicDescription AUCanonicalASBD(Float64 sampleRate, UInt32 channel) {
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = sampleRate;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
    audioFormat.mChannelsPerFrame = channel;
    audioFormat.mBytesPerPacket = sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerFrame = sizeof(AudioUnitSampleType);
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
    audioFormat.mReserved = 0;
    return audioFormat;
}

AudioStreamBasicDescription MixdownAAC(void) {
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100.0;
    audioFormat.mFormatID = kAudioFormatMPEG4AAC;
    audioFormat.mFormatFlags = kMPEG4Object_AAC_Main;
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mBytesPerPacket = 0;
    audioFormat.mBytesPerFrame = 0;
    audioFormat.mFramesPerPacket = 1024;
    audioFormat.mBitsPerChannel = 0;
    audioFormat.mReserved = 0;
    return audioFormat;
}
The render loop
OSStatus err;
ExtAudioFileRef outFile;
NSURL *mixdownURL = [NSURL fileURLWithPath:filePath isDirectory:NO];
// internal data format
AudioStreamBasicDescription localFormat = AUCanonicalASBD(44100.0, 2);
// output file format
AudioStreamBasicDescription mixdownFormat = MixdownAAC();
err = ExtAudioFileCreateWithURL((CFURLRef)mixdownURL,
kAudioFileM4AType,
&mixdownFormat,
NULL,
kAudioFileFlags_EraseFile,
&outFile);
err = ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &localFormat);
// prep
AllRenderData *allData = &allRenderData;
AudioBufferList *writeBuffer = malloc(sizeof(AudioBufferList) + (2 * sizeof(AudioBuffer)));
writeBuffer->mNumberBuffers = 2;
writeBuffer->mBuffers[0].mNumberChannels = 1;
writeBuffer->mBuffers[0].mDataByteSize = bufferBytes;
writeBuffer->mBuffers[0].mData = malloc(bufferBytes);
writeBuffer->mBuffers[1].mNumberChannels = 1;
writeBuffer->mBuffers[1].mDataByteSize = bufferBytes;
writeBuffer->mBuffers[1].mData = malloc(bufferBytes);
memset(writeBuffer->mBuffers[0].mData, 0, bufferBytes);
memset(writeBuffer->mBuffers[1].mData, 0, bufferBytes);
UInt32 framesToGet;
UInt32 frameCount = allData->gLoopStartFrame;
UInt32 startFrame = allData->gLoopStartFrame;
UInt32 lastFrame = allData->gLoopEndFrame;
// write one silent buffer
ExtAudioFileWrite(outFile, bufferFrames, writeBuffer);
while (frameCount < lastFrame) {
    // how many frames do we need to get
    if (lastFrame - frameCount > bufferFrames)
        framesToGet = bufferFrames;
    else
        framesToGet = lastFrame - frameCount;
    // get dem frames
    err = theBigOlCallback((void *)&allRenderData,
                           NULL, NULL, 1,
                           framesToGet, writeBuffer);
    // write to output file
    ExtAudioFileWrite(outFile, framesToGet, writeBuffer);
    frameCount += framesToGet;
}
// write one trailing silent buffer
memset(writeBuffer->mBuffers[0].mData, 0, bufferBytes);
memset(writeBuffer->mBuffers[1].mData, 0, bufferBytes);
processLimiterInPlace8p24(limiter, writeBuffer->mBuffers[0].mData, writeBuffer->mBuffers[1].mData, bufferFrames);
ExtAudioFileWrite(outFile, bufferFrames, writeBuffer);
err = ExtAudioFileDispose(outFile);
The PCM frames are properly created, but ExtAudioFileWrite fails on the second or third call.
Any ideas? Thank you!
I had a very similar problem where I was attempting to use Extended Audio File Services in order to stream PCM sound into an m4a file on an iPad 2. Everything appeared to work except that every call to ExtAudioFileWrite returned the error code -66567 (kExtAudioFileError_MaxPacketSizeUnknown). The fix I eventually found was to set the "Codec Manufacturer" to software instead of hardware. So place
UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
ExtAudioFileSetProperty(FileToWrite, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManf);
just before you set the client data format.
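Applied to the question's code, the sequence would look something like this (a sketch reusing the variables above; error checks omitted):
err = ExtAudioFileCreateWithURL((CFURLRef)mixdownURL,
                                kAudioFileM4AType,
                                &mixdownFormat,
                                NULL,
                                kAudioFileFlags_EraseFile,
                                &outFile);
// force the software AAC codec before declaring the client data format
UInt32 codecManf = kAppleSoftwareAudioCodecManufacturer;
err = ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_CodecManufacturer,
                              sizeof(codecManf), &codecManf);
err = ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                              sizeof(localFormat), &localFormat);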
This would lead me to believe that Apple's hardware codecs only support very specific encodings, while the software codecs can more reliably do what you want. In my case, the software codec translation to m4a takes 50% longer than writing the exact same file in LPCM format.
Does anyone know whether Apple specifies somewhere what their audio codec hardware is capable of? It seems that software engineers are stuck playing the hours-long guessing game of setting the ~20 parameters in the AudioStreamBasicDescription and AudioChannelLayout for the client and for the file to every possible permutation until something works...

Stereo playback gets converted to mono (on iPad only) even when using stereo headphones

I'm developing an audio processing app using Core Audio that records sounds through the headset mic and plays them back through the headphones.
I've added a feature for the balance, i.e. to shift the playback onto one ear only.
This works perfectly on the iPods and iPhones I've tested it on. But not on the iPad. On the iPad the location of the sound doesn't change at all.
This is the code used to render the audio output:
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    // Get a pointer to the dataBuffer of the AudioBufferList
    AudioBuffer firstBuffer = ioData->mBuffers[0];
    AudioSampleType *outA = (AudioSampleType *)firstBuffer.mData;
    int numChannels = firstBuffer.mNumberChannels;
    NSLog(@"numChannels = %d, left = %d, right = %d", numChannels, leftBalVolume, rightBalVolume);
    // Loop through the callback buffer, generating samples
    for (UInt32 i = 0; i < inNumberFrames * numChannels; i += numChannels) {
        int outSignal = getFilteredSampleData(sampleDataTail);
        outA[i] = (outSignal * leftBalVolume) / 32768;
        if (numChannels > 1) {
            outA[i + 1] = (outSignal * rightBalVolume) / 32768;
        }
        sampleDataTail = (sampleDataTail + 1) % sampleDataLen;
    }
    return noErr;
}
The output from the NSLog is as follows:
numChannels = 2, left = 16557, right = 32767
...telling me that it is basically working in stereo mode and that I should hear the audio slightly to the right. But even if I put it 100% to the right, I still hear the audio in the middle, at the same volume on both earphones.
Obviously, the iPad 2 mixes the audio signal down to mono and then plays that on both earphones. I thought it might have to do with the fact that the iPad has only one speaker and thus would usually mix to mono... but why does it do that even when stereo headphones are connected?
Any help is greatly appreciated!
Found the culprit:
I've called
desc.SetAUCanonical(1, true);
on the StreamFormat descriptor of the mixer's output. Now I'm just setting values for every property, and it works on the iPad as well...
desc.mSampleRate = kGraphSampleRate;
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
desc.mFramesPerPacket = 1;
desc.mChannelsPerFrame = 2;
desc.mBitsPerChannel = 16;
desc.mBytesPerPacket = 4;
desc.mBytesPerFrame = 4;
It seems that SetAUCanonical does different things on the iPad vs. the iPod Touch and iPhone.

iPhone AudioUnitRender error -50 on device

I am working on a project in which I use AudioUnitRender. It runs fine in the simulator but gives error -50 on the device.
If anyone has faced a similar problem, please suggest a solution.
RIOInterface *THIS = (RIOInterface *)inRefCon;
COMPLEX_SPLIT A = THIS->A;
void *dataBuffer = THIS->dataBuffer;
float *outputBuffer = THIS->outputBuffer;
FFTSetup fftSetup = THIS->fftSetup;
uint32_t log2n = THIS->log2n;
uint32_t n = THIS->n;
uint32_t nOver2 = THIS->nOver2;
uint32_t stride = 1;
int bufferCapacity = THIS->bufferCapacity;
SInt16 index = THIS->index;
AudioUnit rioUnit = THIS->ioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
                            inTimeStamp, bus1, inNumberFrames, THIS->bufferList);
NSLog(@"%d", renderErr);
if (renderErr < 0) {
    return renderErr;
}
data regarding sample size and frame...
bytesPerSample = sizeof(SInt16);
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel = 8 * bytesPerSample;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
//asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
asbd.mBytesPerPacket = bytesPerSample * asbd.mFramesPerPacket;
//asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mSampleRate = sampleRate;
Thanks in advance.
The length of the buffer (inNumberFrames) can differ between the device and the simulator; in my experience it is often larger on the device. When you use your own AudioBufferList, this is something you have to take into account, so I would suggest allocating more memory for the buffer in the AudioBufferList.
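One hedged way to do that (assuming a mono 16-bit stream like the one in the question; kAudioUnitProperty_MaximumFramesPerSlice reports an upper bound on inNumberFrames):
UInt32 maxFrames = 0;
UInt32 propSize = sizeof(maxFrames);
AudioUnitGetProperty(rioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFrames, &propSize);
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = maxFrames * sizeof(SInt16);
bufferList->mBuffers[0].mData = malloc(maxFrames * sizeof(SInt16));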
I know this thread is old, but I just found the solution to this problem.
The buffer duration for the device is different from that on the simulator. So you have to change the buffer duration:
Float32 bufferDuration = ((Float32) <INSERT YOUR BUFFER DURATION HERE>) / sampleRate; // buffer duration in seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferDuration), &bufferDuration);
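Note that this (pre-AVAudioSession) call only takes effect on an initialized session, so the surrounding setup would look roughly like this (the duration value is just an example):
AudioSessionInitialize(NULL, NULL, NULL, NULL);
Float32 bufferDuration = 1024.0 / 44100.0; // e.g. ~23 ms; use your own value
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(bufferDuration), &bufferDuration);
AudioSessionSetActive(true);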
Try adding kAudioFormatFlagsNativeEndian to your list of stream description format flags. Not sure if that will make a difference, but it can't hurt.
Also, I'm suspicious about the use of THIS for the userData member, which definitely does not fill that member with any meaningful data by default. Try running the code in a debugger and see if that instance is correctly extracted and cast. Assuming it is, just for fun, try putting the AudioUnit object into a global variable (yeah, I know...) just to see if it works.
Finally, why use THIS->bufferList instead of the one passed into your render callback? That's probably not good.

AudioConverterConvertBuffer problem with insz error

I have a problem with the function AudioConverterConvertBuffer. Basically, I want to convert from this format:
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 2;
_streamFormat.mBytesPerPacket = 4;
_streamFormat.mBytesPerFrame = 4;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100;
_streamFormat.mReserved = 0;
to this format
_streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked; // | kAudioFormatFlagIsNonInterleaved
_streamFormatOutput.mBitsPerChannel = 16;
_streamFormatOutput.mChannelsPerFrame = 1;
_streamFormatOutput.mBytesPerPacket = 2;
_streamFormatOutput.mBytesPerFrame = 2;
_streamFormatOutput.mFramesPerPacket = 1;
_streamFormatOutput.mSampleRate = 44100;
_streamFormatOutput.mReserved = 0;
What I want to do is extract a single audio channel (the left or the right) from an LPCM buffer based on the input format, making it mono in the output format. Some logic code for the conversion follows.
This is to set the channel map for the PCM output file:
SInt32 channelMap[1] = {0};
status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap);
and this is to convert the buffer in a while loop
AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
    AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
    //frames = audioBuffer.mData;
    NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
    NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);
    numBytesIO = audioBuffer.mDataByteSize;
    convertedBuf = malloc(sizeof(char) * numBytesIO);
    status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf);
    NSLog(@"status audio converter convert %d", status);
    if (status != 0) {
        NSLog(@"Fail conversion");
        assert(0);
    }
    NSLog(@"Bytes converted %d", numBytesIO);
    status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf);
    NSLog(@"status for writebyte %d, bytes written %d", status, numBytesIO);
    free(convertedBuf);
    if (numBytesIO != audioBuffer.mDataByteSize) {
        NSLog(@"Something wrong in writing");
        assert(0);
    }
    countByteBuf = countByteBuf + numBytesIO;
}
But the 'insz' error (kAudioConverterErr_InvalidInputSize) is still there, so it can't convert. I would appreciate any input.
Thanks in advance
First, you cannot use AudioConverterConvertBuffer() to convert anything where the input and output byte sizes differ. You need to use AudioConverterFillComplexBuffer() instead. This includes performing any kind of sample rate conversion, or adding/removing channels.
See Apple's documentation on AudioConverterConvertBuffer(). This was also discussed on Apple's CoreAudio mailing lists, but I'm afraid I cannot find a reference right now.
Second, even if this could be done (which it can't), you are passing the same number of bytes for the output as you had for the input, despite actually requiring half that number of bytes (due to reducing the number of channels from 2 to 1).
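For the stereo-to-mono case above, the AudioConverterFillComplexBuffer() pattern looks roughly like this (a sketch with hypothetical names, not a drop-in replacement; error handling omitted):
// Input proc: feeds one stereo SInt16 buffer to the converter on demand
typedef struct {
    AudioBuffer source;  // the stereo input taken from the sample buffer
    Boolean consumed;
} FeedState;

static OSStatus feedInput(AudioConverterRef inConverter,
                          UInt32 *ioNumberDataPackets,
                          AudioBufferList *ioData,
                          AudioStreamPacketDescription **outPacketDesc,
                          void *inUserData)
{
    FeedState *state = (FeedState *)inUserData;
    if (state->consumed) {           // no input left: report zero packets
        *ioNumberDataPackets = 0;
        return noErr;
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = state->source;
    *ioNumberDataPackets = state->source.mDataByteSize / 4; // 4 bytes per stereo frame
    state->consumed = true;
    return noErr;
}

// Usage for one buffer from the loop above:
FeedState state = { audioBuffer, false };
UInt32 outPackets = audioBuffer.mDataByteSize / 4;   // one mono frame per stereo frame
AudioBufferList outList;
outList.mNumberBuffers = 1;
outList.mBuffers[0].mNumberChannels = 1;
outList.mBuffers[0].mDataByteSize = outPackets * sizeof(SInt16);
outList.mBuffers[0].mData = convertedBuf;            // allocate half the input size
status = AudioConverterFillComplexBuffer(converter, feedInput, &state,
                                         &outPackets, &outList, NULL);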
I'm actually working on using AudioConverterConvertBuffer() right now; my test files are mono while I need to play stereo. I'm currently stuck with the converter performing conversion of only the first chunk of the data. If I manage to get this to work, I'll try to remember to post the code. If I don't post it, please poke me in the comments.