Using AVAssetWriter with raw NAL Units - iPhone

I noticed in the iOS documentation for AVAssetWriterInput that you can pass nil for the outputSettings dictionary to specify that the input data should not be re-encoded.
The settings used for encoding the media appended to the output. Pass nil to specify that appended samples should not be re-encoded.
I want to take advantage of this feature to pass in a stream of raw H.264 NALs, but I am having trouble adapting my raw byte streams into a CMSampleBuffer that I can pass into AVAssetWriterInput's appendSampleBuffer method. My stream of NALs contains only SPS/PPS/IDR/P NALs (types 1, 5, 7, 8). I haven't been able to find documentation or a conclusive answer on how to use pre-encoded H264 data with AVAssetWriter, and the resulting video file cannot be played.
How can I properly package the NAL units into CMSampleBuffers? Do I need to use a start code prefix? A length prefix? Do I need to ensure I only put one NAL per CMSampleBuffer? My end goal is to create an MP4 or MOV container with H264/AAC.
Here's the code I've been playing with:
-(void)addH264NAL:(NSData *)nal
{
    dispatch_async(recordingQueue, ^{
        // Adapting the raw NAL into a CMSampleBuffer
        CMSampleBufferRef sampleBuffer = NULL;
        CMBlockBufferRef blockBuffer = NULL;
        CMFormatDescriptionRef formatDescription = NULL;
        CMItemCount numberOfSampleTimeEntries = 1;
        CMItemCount numberOfSamples = 1;

        CMVideoFormatDescriptionCreate(kCFAllocatorDefault, kCMVideoCodecType_H264, 480, 360, nil, &formatDescription);

        OSStatus result = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, [nal length], kCFAllocatorDefault, NULL, 0, [nal length], kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
        if(result != noErr)
        {
            NSLog(@"Error creating CMBlockBuffer");
            return;
        }

        result = CMBlockBufferReplaceDataBytes([nal bytes], blockBuffer, 0, [nal length]);
        if(result != noErr)
        {
            NSLog(@"Error filling CMBlockBuffer");
            return;
        }

        const size_t sampleSizes = [nal length];
        CMSampleTimingInfo timing = { 0 };

        result = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, numberOfSamples, numberOfSampleTimeEntries, &timing, 1, &sampleSizes, &sampleBuffer);
        if(result != noErr)
        {
            NSLog(@"Error creating CMSampleBuffer");
        }

        [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
    });
}
Note that I'm calling CMSampleBufferSetOutputPresentationTimeStamp on the sample buffer inside of the writeSampleBuffer method with what I think is a valid time before I'm actually trying to append it.
Any help is appreciated.

I managed to get video playback working in VLC but not QuickTime. I used code similar to what I posted above to get H.264 NALs into CMSampleBuffers.
I had two main issues:
1. I was not setting CMSampleTimingInfo correctly (as my comment above states).
2. I was not packing the raw NAL data correctly (not sure where this is documented, if anywhere).
To solve #1, I set timing.duration = CMTimeMake(1, fps); where fps is the expected frame rate. I then set timing.decodeTimeStamp = kCMTimeInvalid; to mean that the samples will be given in decoding order. Lastly, I set timing.presentationTimeStamp by calculating the absolute time, which I also used with startSessionAtSourceTime.
To solve #2, through trial and error I found that giving my NAL units in the following form worked:
[7 8 5] [1] [1] [1]..... [7 8 5] [1] [1] [1]..... (repeating)
Where each NAL unit is prefixed by a 32-bit start code equaling 0x00000001.
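For concreteness, here is a minimal sketch of the packing and timing described above. The grouping of the parameter-set NALs with the IDR into one sample, plus the nalsForThisSample, fps, and secondsSinceStart variables, are my own illustrative assumptions rather than anything documented:

static const uint8_t startCode[4] = { 0x00, 0x00, 0x00, 0x01 };

// Build one sample's worth of data: every NAL gets a 4-byte Annex B start code,
// and the SPS/PPS (7, 8) travel in the same sample as the IDR (5) they describe.
NSMutableData *sampleData = [NSMutableData data];
for (NSData *nal in nalsForThisSample) {
    [sampleData appendBytes:startCode length:sizeof(startCode)];
    [sampleData appendData:nal];
}

// Timing as described in #1 above.
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, fps);                 // fps = expected frame rate
timing.decodeTimeStamp = kCMTimeInvalid;              // samples are appended in decode order
timing.presentationTimeStamp = CMTimeMakeWithSeconds(secondsSinceStart, 600);

// sampleData then gets wrapped in a CMBlockBuffer/CMSampleBuffer exactly as in the
// question's code, with this timing struct passed to CMSampleBufferCreate.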
Presumably for the same reason it's not playing in QuickTime, I'm still having trouble moving the resulting .mov file to the photo album (the ALAssetsLibrary method videoAtPathIsCompatibleWithSavedPhotosAlbum: is failing, stating that the "Movie could not be played."). Hopefully someone with an idea about what's going on can comment. Thanks!

Related

How to edit the default instrument of an AUGraph?

I'm working with the MusicPlayer API. I understand that when you load in a .mid file as a sequence, the API creates a default AUGraph for you that includes an AUSampler. This AUSampler uses a simple sine-wave-based instrument to synthesize the notes in the .mid file.
My question is, how does one change the default instrument in the AUSampler? I understand that you can use SoundFont2 files (.sf2) and add them using the AudioUnitSetProperty method. But, how does one access this default AUGraph? Do you have to open the graph before you can edit the AudioUnit or is opening a graph only for editing connections between nodes?
Thanks :)
I've written a tutorial on this, but here's an outline of the process:
Function to load a Sound Font file (taken from the Apple documentation):
-(OSStatus) loadFromDLSOrSoundFont: (NSURL *)bankURL withPatch: (int)presetNumber {

    OSStatus result = noErr;

    // fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL  = (__bridge CFURLRef) bankURL;
    bpdata.bankMSB  = kAUSampler_DefaultMelodicBankMSB;
    bpdata.bankLSB  = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8) presetNumber;

    // set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty([pointer to your AUSampler node here],
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));

    // check for errors
    NSCAssert (result == noErr,
               @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
               (int) result,
               (const char *)&result);

    return result;
}
Then you need to load the Sound Font from your Resources folder:
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Name of sound font" ofType:@"sf2"]];

// Initialise the sound font
[self loadFromDLSOrSoundFont: (NSURL *)presetURL withPatch: (int)10];
Hope this helps!
You might take a look at the Audiograph example. It doesn't use soundFonts but should give you an idea of how to set up a graph.
When I use the MusicPlayer I always generate the MIDI note data from code/GUI and create the AUGraph (with a mixer) from scratch. There are ways to derive/extract the default generated AUGraph & AUSampler that result from loading a MIDI file (example code below), but I never had success setting a new SoundFont this way. On the other hand, creating the AUGraph from scratch and then loading an .sf2 file works great.
OSStatus result = noErr;

AUGraph graph;
result = MusicSequenceGetAUGraph (sequence, &graph);

MusicTrack firstTrack;
result = MusicSequenceGetIndTrack (sequence, 0, &firstTrack);

AUNode myNode;
result = MusicTrackGetDestNode (firstTrack, &myNode);

AudioUnit mySamplerUnit;
result = AUGraphNodeInfo (graph, myNode, 0, &mySamplerUnit);
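For the from-scratch route, a minimal sketch looks roughly like the following (this is my own outline, not code from the tutorial; error checking is omitted and the node/unit names are placeholders):

AUGraph graph;
AUNode samplerNode, ioNode;
AudioUnit samplerUnit;

AudioComponentDescription samplerDesc = {0};
samplerDesc.componentType         = kAudioUnitType_MusicDevice;
samplerDesc.componentSubType      = kAudioUnitSubType_Sampler;
samplerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponentDescription ioDesc = {0};
ioDesc.componentType         = kAudioUnitType_Output;
ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

NewAUGraph(&graph);
AUGraphAddNode(graph, &samplerDesc, &samplerNode);
AUGraphAddNode(graph, &ioDesc, &ioNode);

// The graph must be opened before you can fetch the AudioUnit out of a node.
AUGraphOpen(graph);
AUGraphConnectNodeInput(graph, samplerNode, 0, ioNode, 0);
AUGraphNodeInfo(graph, samplerNode, NULL, &samplerUnit);

AUGraphInitialize(graph);
AUGraphStart(graph);

// samplerUnit is what you hand to AudioUnitSetProperty in loadFromDLSOrSoundFont:,
// and MusicSequenceSetAUGraph(sequence, graph) attaches the graph to your sequence.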

Programmatically determining the number of presets in a DLS or sf2 file?

Context: iOS5 AUSampler AudioUnit
I've been digging around trying to determine whether there is a programmatic way to determine the number of presets in a DLS or sf2 file. I was hoping it would be available either through 'AudioUnitGetProperty' or 'AudioUnitGetParameter' for an AUSampler. Then, of course, I want to be able to switch presets on the fly. The docs don't indicate whether this is possible or not.
I'm using the standard code for loading DLS/sf2 per TechNote TN2283. The problem is that with lots of sf2 files it is a trial and error process to find out what the presets are.
-(OSStatus) loadFromDLSOrSoundFont: (NSURL *)bankURL withPatch: (int)presetNumber {

    OSStatus result = noErr;

    // fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL  = (CFURLRef) bankURL;
    bpdata.bankMSB  = kAUSampler_DefaultMelodicBankMSB;
    bpdata.bankLSB  = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8) presetNumber;

    // set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty(self.mySamplerUnit,
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));

    // check for errors
    NSCAssert (result == noErr,
               @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
               (int) result,
               (const char *)&result);

    return result;
}
OK - had an answer from an Apple Core Audio engineer:
"There is no API to retrieve the number of presets. The Sampler AU only loads a single instrument at a time from any SF2 or DLS bank, so it does not "digest" the entire bank file (and so has no knowledge of its complete contents)."

Recording Mono on iPhone in IMA4 format

I'm using the SpeakHere sample app from Apple's developer site to create an audio recording app. I'm attempting to record directly to IMA4 format using the kAudioFormatAppleIMA4 system constant. This is listed as one of the usable formats, but every time I set up my audio format variable and pass it in, I get a 'fmt?' error. Here is the code I use to set up the audio format variable:
#define kAudioRecordingFormat kAudioFormatAppleIMA4
#define kAudioRecordingType kAudioFileCAFType
#define kAudioRecordingSampleRate 16000.00
#define kAudioRecordingChannelsPerFrame 1
#define kAudioRecordingFramesPerPacket 1
#define kAudioRecordingBitsPerChannel 16
#define kAudioRecordingBytesPerPacket 2
#define kAudioRecordingBytesPerFrame 2
- (void) setupAudioFormat: (UInt32) formatID {
    // Obtains the hardware sample rate for use in the recording
    // audio format. Each time the audio route changes, the sample rate
    // needs to get updated.
    UInt32 propertySize = sizeof (self.hardwareSampleRate);

    OSStatus err = AudioSessionGetProperty (
                       kAudioSessionProperty_CurrentHardwareSampleRate,
                       &propertySize,
                       &hardwareSampleRate
                   );

    if (err != 0) {
        NSLog(@"AudioRecorder::setupAudioFormat - error getting audio session property");
    }

    audioFormat.mSampleRate = kAudioRecordingSampleRate;
    NSLog (@"Hardware sample rate = %f", self.audioFormat.mSampleRate);

    audioFormat.mFormatID         = formatID;
    audioFormat.mChannelsPerFrame = kAudioRecordingChannelsPerFrame;
    audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket  = kAudioRecordingFramesPerPacket;
    audioFormat.mBitsPerChannel   = kAudioRecordingBitsPerChannel;
    audioFormat.mBytesPerPacket   = kAudioRecordingBytesPerPacket;
    audioFormat.mBytesPerFrame    = kAudioRecordingBytesPerFrame;
}
And here is where I use that function:
- (id) initWithURL: fileURL {
    NSLog (@"initializing a recorder object.");
    self = [super init];

    if (self != nil) {
        // Specify the recording format. Options are:
        //
        //      kAudioFormatLinearPCM
        //      kAudioFormatAppleLossless
        //      kAudioFormatAppleIMA4
        //      kAudioFormatiLBC
        //      kAudioFormatULaw
        //      kAudioFormatALaw
        //
        // When targeting the Simulator, SpeakHere uses linear PCM regardless of the format
        // specified here. See the setupAudioFormat: method in this file.
        [self setupAudioFormat: kAudioRecordingFormat];

        OSStatus result = AudioQueueNewInput (
                              &audioFormat,
                              recordingCallback,
                              self,             // userData
                              NULL,             // run loop
                              NULL,             // run loop mode
                              0,                // flags
                              &queueObject
                          );

        NSLog (@"Attempted to create new recording audio queue object. Result: %d", (int) result);

        // get the recording format back from the audio queue's audio converter --
        // the file may require a more specific stream description than was
        // necessary to create the encoder.
        UInt32 sizeOfRecordingFormatASBDStruct = sizeof (audioFormat);

        AudioQueueGetProperty (
            queueObject,
            kAudioQueueProperty_StreamDescription,  // this constant is only available in iPhone OS
            &audioFormat,
            &sizeOfRecordingFormatASBDStruct
        );

        AudioQueueAddPropertyListener (
            [self queueObject],
            kAudioQueueProperty_IsRunning,
            audioQueuePropertyListenerCallback,
            self
        );

        [self setAudioFileURL: (CFURLRef) fileURL];
        [self enableLevelMetering];
    }

    return self;
}
Thanks for the help!
-Matt
I'm not sure that all the format flags you're passing are correct; IMA4 (which, IIRC, stands for IMA ADPCM 4:1) is 4-bit (4:1 compression from 16 bits) with some headers.
According to the docs for AudioStreamBasicDescription:
mBytesPerFrame should be 0, since the format is compressed.
mBitsPerChannel should be 0, since the format is compressed.
mFormatFlags should probably be 0, since there is nothing to choose.
According to afconvert -f caff -t ima4 -c 1 blah.aiff blah.caf followed by afinfo blah.caf:
mBytesPerPacket should be 34, and
mFramesPerPacket should be 64. You might be able to set these to 0 instead.
The reference algorithm in the original IMA spec is not that helpful (it's an OCR of scans; the site also has the scans).
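Putting those numbers together, a plausible IMA4 recording format for this question would look something like the sketch below (mono at 16 kHz as in the original defines; the 64/34 packet layout comes from the afinfo output above, so treat it as an assumption rather than gospel):

AudioStreamBasicDescription ima4Format = {0};
ima4Format.mFormatID         = kAudioFormatAppleIMA4;
ima4Format.mSampleRate       = 16000.0;
ima4Format.mChannelsPerFrame = 1;
ima4Format.mFramesPerPacket  = 64;   // frames per IMA4 packet (mono)
ima4Format.mBytesPerPacket   = 34;   // 2 header bytes + 32 data bytes per channel
ima4Format.mFormatFlags      = 0;    // compressed format: no PCM flags
ima4Format.mBitsPerChannel   = 0;    // compressed format: leave at 0
ima4Format.mBytesPerFrame    = 0;    // compressed format: leave at 0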
On top of what @tc. has already said, it's easier to automatically populate your descriptions based on the IDs using this:
AudioStreamBasicDescription streamDescription;
UInt32 streamDesSize = sizeof(streamDescription);
memset(&streamDescription, 0, streamDesSize);
streamDescription.mFormatID = kAudioFormatiLBC;
OSStatus status;
status = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &streamDesSize, &streamDescription);
assert(status==noErr);
This way you don't need to bother with guessing the features of certain formats. Be warned: although kAudioFormatiLBC didn't need any other additional info in this example, other formats do (usually the number of channels and the sample rate).
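Applied to this question's IMA4 format, a hedged variant pre-fills what the codec cannot infer and lets Core Audio fill in the packet layout:

AudioStreamBasicDescription ima4Description;
UInt32 ima4DescSize = sizeof(ima4Description);
memset(&ima4Description, 0, ima4DescSize);
ima4Description.mFormatID         = kAudioFormatAppleIMA4;
ima4Description.mSampleRate       = 16000.0;   // from the question's defines
ima4Description.mChannelsPerFrame = 1;
OSStatus ima4Status = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                             0, NULL, &ima4DescSize, &ima4Description);
assert(ima4Status == noErr);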

AAC header and other info in iPhone

I'm building an iPhone Application that records sound. I make use of Audio Queue Services, and everything works great for the recording.
The thing is, I'm using AudioFileWritePackets for file writing, and I'm trying to put the same "AAC + ADTS" packets to a network socket.
The resulting file is different, since some headers (the "ADTS header") might be missing. I am searching for ideas on how to write the ADTS and/or AAC header. Could the community assist me with this or refer me to a guide that demonstrates how to do this?
I currently have my Buffer Handler method:
void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc) {
    AQRecorder *aqr = (AQRecorder *)inUserData;

    try {
        if (inNumPackets > 0) {
            // write packets to file
            XThrowIfError(AudioFileWritePackets(aqr->mRecordFile,
                                                FALSE,
                                                inBuffer->mAudioDataByteSize,
                                                inPacketDesc,
                                                aqr->mRecordPacket,
                                                &inNumPackets,
                                                inBuffer->mAudioData),
                          "AudioFileWritePackets failed");
            fprintf(stderr, "Writing.");

            // We write the Net Buffer.
            [aqr->socket_if writeData :(void *)(inBuffer->mAudioData)
                                      :inBuffer->mAudioDataByteSize];
            aqr->mRecordPacket += inNumPackets;
        }

        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (aqr->IsRunning()) {
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL),
                          "AudioQueueEnqueueBuffer failed");
        }
    }
    catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}
I've found the solution for this:
I implemented the callback
XThrowIfError(
    AudioFileInitializeWithCallbacks(
        this,
        nil,
        BufferFilled_callback,
        nil,
        nil,
        //kAudioFileCAFType,
        kAudioFileAAC_ADTSType,
        &mRecordFormat,
        kAudioFileFlags_EraseFile,
        &mRecordFile),
    "InitializeWithCallbacks failed");
... And voilà! The real callback you have to implement is BufferFilled_callback. Here is my implementation:
OSStatus AQRecorder::BufferFilled_callback(
    void *       inUserData,
    SInt64       inPosition,
    UInt32       requestCount,
    const void * buffer,
    UInt32 *     actualCount) {

    AQRecorder *aqr = (AQRecorder *)inUserData;

    // You can write these bytes to anywhere.
    // You can build a streaming server
    return 0;
}
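For example, a minimal sketch of forwarding the ADTS-framed bytes to the same socket wrapper used in the question (socket_if and its writeData:: method come from the asker's code, so reusing them here is an assumption):

OSStatus AQRecorder::BufferFilled_callback(
    void *       inUserData,
    SInt64       inPosition,
    UInt32       requestCount,
    const void * buffer,
    UInt32 *     actualCount) {

    AQRecorder *aqr = (AQRecorder *)inUserData;

    // `buffer` now contains the packets with their ADTS headers already prepended.
    [aqr->socket_if writeData :(void *)buffer :requestCount];

    // Tell Audio File Services that every requested byte was consumed.
    *actualCount = requestCount;
    return noErr;
}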
If you want to see more about Audio Queue Services, you can get some ideas from Flipzu for iPhone (an ex-app for live audio broadcasting; we had to shut it down because we could not raise money).
https://github.com/lucaslain/Flipzu_iPhone
Best,
Lucas.
I've recently encountered this issue with the iLBC codec, and arrived at the solution as follows:
Record the audio data you want and just write it to a file. Then take that file and do an octal dump on it. You can use the -c flag to see ASCII characters.
Then, create a separate file that you know doesn't contain the header. This is just your data from the buffers on the audio queue. Octal dump that, and compare.
From this, you should have the header and enough info on how to proceed. Hope this helps.
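If you'd rather build the header yourself instead of reverse-engineering it, here is a hedged sketch of the standard 7-byte ADTS header; the profile, sampling-frequency index, and channel configuration are assumptions you must match to your own recording format (AAC LC, 44.1 kHz, mono in this sketch):

static void MakeADTSHeader(uint8_t header[7], size_t aacPacketLength)
{
    const int profile   = 2;                        // AAC LC (written as profile - 1)
    const int freqIndex = 4;                        // 44100 Hz in the MPEG-4 frequency table
    const int chanCfg   = 1;                        // mono
    const size_t frameLength = aacPacketLength + 7; // raw AAC packet + this header

    header[0] = 0xFF;                                              // syncword
    header[1] = 0xF1;                                              // syncword, MPEG-4, layer 0, no CRC
    header[2] = ((profile - 1) << 6) | (freqIndex << 2) | (chanCfg >> 2);
    header[3] = ((chanCfg & 0x3) << 6) | (uint8_t)(frameLength >> 11);
    header[4] = (frameLength >> 3) & 0xFF;
    header[5] = ((frameLength & 0x7) << 5) | 0x1F;                 // + buffer fullness (high bits)
    header[6] = 0xFC;                                              // buffer fullness (low bits) + 1 frame
}

Writing such a header in front of each packet before sending it to the socket should, in principle, match what the kAudioFileAAC_ADTSType callback approach above produces.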

Can anyone provide a working example of AudioFileStreamSeek for the iPhone?

I find Apple's documentation quite limited on AudioFileStreamSeek and I cannot find any examples of actual usage anywhere. I have a working streaming audio player, but I just can't seem to get AudioFileStreamSeek to work as advertised...
Any help tips or a little example would be greatly appreciated!
I am told this works:
AudioQueueStop(audioQueue, true);

UInt32 flags = 0;
err = AudioFileStreamParseBytes(audioFileStream, length, bytes,
                                kAudioFileStreamParseFlag_Discontinuity);

OSStatus status = AudioFileStreamSeek(audioFileStream, framePacket.mPacket,
                                      &currentOffset, &flags);
NSLog(@"Setting next byte offset to: %qi, flags: %d", (long long)currentOffset, flags);

// then read data from the new offset set by AudioFileStreamSeek
[fileHandle seekToFileOffset:currentOffset];
NSData *data = [fileHandle readDataOfLength:4096];

flags = kAudioFileStreamParseFlag_Discontinuity;
status = AudioFileStreamParseBytes(stream, [data length], [data bytes], flags);
if (status != noErr)
{
    NSLog(@"Error parsing bytes: %d", status);
}
Unless I'm mistaken, this is only available in the 3.0 SDK, and therefore under NDA. Maybe you should take this to the Apple Beta forums?
I stand corrected. AudioFileStreamSeek doesn't show up if you do a search in the online 2.2.1 documentation. You have to manually dig into the docs to find it.
Don't forget to add the data offset (kAudioFileStreamProperty_DataOffset) to the byte offset returned by AudioFileStreamSeek. The return value is an offset into the audio data and ignores the data offset.
It's also a good idea to stop and then re-start the AudioQueue before/after seeking.
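In code, that adjustment looks roughly like this (a sketch; audioFileStream and the packet-aligned byte offset are the variables from the snippets above):

SInt64 dataOffset = 0;
UInt32 offsetSize = sizeof(dataOffset);
AudioFileStreamGetProperty(audioFileStream, kAudioFileStreamProperty_DataOffset,
                           &offsetSize, &dataOffset);

SInt64 seekByteOffset = dataOffset + packetAlignedByteOffset;  // offset into the whole file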
Matt Gallagher uses AudioFileStreamSeek in his example "Streaming and playing an MP3 stream".
Look at Matt's code AudioStreamer.m:
SInt64 seekPacket = floor(newSeekTime / packetDuration);
err = AudioFileStreamSeek(audioFileStream, seekPacket, &packetAlignedByteOffset, &ioFlags);
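For reference, packetDuration in that example is derived from the stream's AudioStreamBasicDescription, roughly as follows (a paraphrase, not verbatim code), and the byte offset returned by AudioFileStreamSeek still needs the data offset added, as noted above:

Float64 packetDuration = asbd.mFramesPerPacket / asbd.mSampleRate;  // seconds per packet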