OpenAL alSourceQueueBuffers & alSourceUnqueueBuffers - iPhone

Hi everyone, I have a problem with the alSourceUnqueueBuffers API when using the OpenAL library.
My problem is as follows:
1. I play PCM music through a streaming mechanism.
2. The application queues up one or more buffer names using alSourceQueueBuffers.
When a buffer has been processed, I want to fill it with new audio data in my function getSourceState. But when I call the OpenAL API alSourceUnqueueBuffers, it returns the error AL_INVALID_OPERATION, even though I am doing what the OpenAL documentation describes.
So I tried a workaround: I call alSourceStop(source) before alSourceUnqueueBuffers, and alSourcePlay(source) after filling in the new data with alBufferData and alSourceQueueBuffers. But that is no good either, because it interrupts the music.
Can anyone help me find the cause of this problem?
And where can I find more information and techniques for OpenAL?
Thanks in advance, everyone.
My code is as follows:
.h:
@interface myPlayback : NSObject
{
ALuint source;
ALuint * buffers;
ALCcontext* context;
ALCdevice* device;
unsigned long long offset;
ALenum m_format;
ALsizei m_freq;
void* data;
}
@end
.m
- (void)initOpenAL
{
ALenum error;
// Create a new OpenAL Device
// Pass NULL to specify the system’s default output device
device = alcOpenDevice(NULL);
if (device != NULL)
{
// Create a new OpenAL Context
// The new context will render to the OpenAL Device just created
context = alcCreateContext(device, 0);
if (context != NULL)
{
// Make the new context the Current OpenAL Context
alcMakeContextCurrent(context);
// Create some OpenAL Buffer Objects
buffers = (ALuint*)malloc(sizeof(ALuint) * 5);
alGenBuffers(5, buffers);
if((error = alGetError()) != AL_NO_ERROR) {
NSLog(@"Error Generating Buffers: %x", error);
exit(1);
}
// Create some OpenAL Source Objects
alGenSources(1, &source);
if((error = alGetError()) != AL_NO_ERROR)
{
NSLog(@"Error generating sources! %x\n", error);
exit(1);
}
}
}
// clear any errors
alGetError();
[self initBuffer];
[self initSource];
}
- (void) initBuffer
{
ALenum error = AL_NO_ERROR;
ALenum format;
ALsizei size;
ALsizei freq;
NSBundle* bundle = [NSBundle mainBundle];
// get some audio data from a wave file
CFURLRef fileURL = (CFURLRef)[[NSURL fileURLWithPath:[bundle pathForResource:@"4" ofType:@"caf"]] retain];
if (fileURL)
{
data = MyGetOpenALAudioData(fileURL, &size, &format, &freq);
CFRelease(fileURL);
m_freq = freq;
m_format = format;
if((error = alGetError()) != AL_NO_ERROR) {
NSLog(@"error loading sound: %x\n", error);
exit(1);
}
alBufferData(buffers[0], format, data, READ_SIZE , freq);
offset += READ_SIZE;
alBufferData(buffers[1], format, data + offset, READ_SIZE, freq);
offset += READ_SIZE;
alBufferData(buffers[2], format, data + offset, READ_SIZE, freq);
offset += READ_SIZE;
alBufferData(buffers[3], format, data + offset, READ_SIZE, freq);
offset += READ_SIZE;
alBufferData(buffers[4], format, data + offset, READ_SIZE, freq);
offset += READ_SIZE;
if((error = alGetError()) != AL_NO_ERROR) {
NSLog(@"error attaching audio to buffer: %x\n", error);
}
}
else
NSLog(@"Could not find file!\n");
}
- (void) initSource
{
ALenum error = AL_NO_ERROR;
alGetError(); // Clear the error
// Turn Looping ON
alSourcei(source, AL_LOOPING, AL_TRUE);
// Set Source Position
float sourcePosAL[] = {sourcePos.x, kDefaultDistance, sourcePos.y};
alSourcefv(source, AL_POSITION, sourcePosAL);
// Set Source Reference Distance
alSourcef(source, AL_REFERENCE_DISTANCE, 50.0f);
alSourceQueueBuffers(source, 5, buffers);
if((error = alGetError()) != AL_NO_ERROR) {
NSLog(@"Error attaching buffer to source: %x\n", error);
exit(1);
}
}
- (void)startSound
{
ALenum error;
NSLog(@"Start!\n");
// Begin playing our source file
alSourcePlay(source);
if((error = alGetError()) != AL_NO_ERROR) {
NSLog(@"error starting source: %x\n", error);
} else {
// Mark our state as playing (the view looks at this)
self.isPlaying = YES;
}
while (1) {
[self getSourceState];
}
}
-(void)getSourceState
{
int queued;
int processed;
int state;
alGetSourcei(source, AL_BUFFERS_QUEUED, &queued);
alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
alGetSourcei(source, AL_SOURCE_STATE, &state);
NSLog(@"%d", queued);
NSLog(@"%d", processed);
NSLog(@"===================================");
while (processed > 0) {
for (int i = 0; i < processed; ++i) {
ALuint buf;
alGetError();
// alSourceStop(source);
ALenum y = alGetError();
NSLog(@"%d", y);
alSourceUnqueueBuffers(source, 1, &buf);
ALenum unqueueError = alGetError(); // renamed: the original shadowed the loop variable i
NSLog(@"%d", unqueueError);
processed --;
alBufferData(buf, m_format, data + offset, READ_SIZE, m_freq);
ALenum j = alGetError();
NSLog(@"%d", j);
alSourceQueueBuffers(source, 1, &buf);
ALenum k = alGetError();
NSLog(@"%d", k);
offset += READ_SIZE;
// alSourcePlay(source);
}
}
// [self getSourceState];
}

I found the cause of the problem.
The reason is that I turned looping on: alSourcei(source, AL_LOOPING, AL_TRUE);
If you set this, then when the source has processed a buffer and you try to unqueue it to fill it with new data (or to remove it from the source), you will get this error. A looping source does not mark its buffers as processed, so for streaming you must leave AL_LOOPING off and requeue the buffers yourself.

Related

ZLib in iPhone unable to decompress data

I am trying to decompress data using zlib on the iPhone, but it always throws the error "Invalid header Check".
To compress the data I am using the following in Java
Implementation: Standard Java implementation for Zlib
Deflator : java.util.zip.Deflater
version 1.45, 04/07/06
Compression level: BEST_COMPRESSION
In iPhone the following is the code for decompressing:
- (NSData *)zlibInflate
{
if ([self length] == 0) return self;
unsigned full_length = [self length];
unsigned half_length = [self length] / 2;
NSMutableData *decompressed = [NSMutableData dataWithLength: full_length + half_length];
BOOL done = NO;
int status;
z_stream strm;
strm.next_in = (Bytef *)[self bytes];
strm.avail_in = [self length];
strm.total_out = 0;
strm.zalloc = Z_NULL;
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;
if (inflateInit (&strm) != Z_OK) return nil;
while (!done)
{
// Make sure we have enough room and reset the lengths.
if (strm.total_out >= [decompressed length])
[decompressed increaseLengthBy: half_length];
strm.next_out = [decompressed mutableBytes] + strm.total_out;
strm.avail_out = [decompressed length] - strm.total_out;
// Inflate another chunk.
status = inflate (&strm, Z_SYNC_FLUSH);
if (status == Z_STREAM_END) done = YES;
else if (status != Z_OK) {
NSLog(@"%s", strm.msg);
break;
}
}
if (inflateEnd (&strm) != Z_OK) return nil;
// Set real length.
if (done)
{
[decompressed setLength: strm.total_out];
return [NSData dataWithData: decompressed];
}
else return nil;
}
Following is a sample compressed string:
xÚÝUko²Jþ~?­ó?¥¾?¤?©?´ÚjCMX,Òµ?ª?µßVX¹È­?¿.øë_?¯¶ZÏ%íùxHH&Ã<ÏÌ3ÌÎ
#2.ðE?ºqþpéEzÏ09IoÒ?ª? ?®?£àÌönì$brÛ#fl95?¿»a//Tçáò?¢?¿½
µ©ÊÃÉPÔ¼:8y¦ý.äÎ?µ?¥?¼y?©ã¯9ö?¥½?¢±ÝûwÛ?§ãga?©á8?¨?­m\Õ?»6,'Îe?¬}(L}7ÆÅ6#gJ(¥7´s?¬d.ó,Ë°¦prßýÕÖ? 
Below is the function for compresser:
public static byte[] compress(String s) {
Deflater comp = new Deflater();
//comp.setLevel(Deflater.BEST_COMPRESSION);
comp.setInput(s.getBytes());
comp.finish();
ByteArrayOutputStream bos = new ByteArrayOutputStream(s.length());
// Compress the data
byte[] buf = new byte[1024];
try {
while (!comp.finished()) {
int count = comp.deflate(buf);
bos.write(buf, 0, count);
}
bos.close();
} catch (Exception e) {
//Log.d(TAG, e.getMessage());
e.printStackTrace();
}
// Get the compressed data
byte[] compressedData = bos.toByteArray();
// put in this fix for Symbol scanners
byte[] compressedDataForSymbol = mungeForSymbol(compressedData);
/*
* byte[] decompressedDataForSymbol =
* decompressedDataAfterSymbol(compressedDataForSymbol); // check they
* are the same for(int i=0;i<compressedData.length;i++) { if
* (compressedData[i] != decompressedDataForSymbol[i]) {
* //System.out.println("Error at " + i); } }
*/
return compressedDataForSymbol;
// return s.getBytes();
}
Using the Java Deflater with the default compression level creates output prefixed with the two-byte zlib header 0x78 0x9C. Just remove the first two bytes and try the inflate, i.e. decompression, on iOS, initializing with inflateInit2 and a negative windowBits value so zlib expects a raw deflate stream. It should work.
I had faced the same issue, but in my case I wanted data to go from iOS (compressed) to Android (decompressed).

AudioQueue how to find out playback length of queued data

I am using AudioQueue to stream a song. My question is: how can I tell the playback length of the buffers I have already queued? I want to stream two seconds of data at a time; the problem I am having is knowing how many bytes actually correspond to two seconds of music (so I can always stay two seconds ahead).
Thanks
Daniel
Here is a class that uses Audio File Services to get at the bitrate / packet / frame data and grab the number of bytes from a music file that corresponds to x seconds of audio. The example has been tested with mp3 and m4a files.
Header
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
@interface MusicChunker : NSObject
{
AudioFileID audioFile;
int _sampleRate;
int _totalFrames;
UInt64 _framesPerPacket;
UInt64 _totalPackets;
UInt64 fileDataSize;
AudioFilePacketTableInfo _packetInfo;
int _fileLength;
AudioStreamBasicDescription _fileDataFormat;
NSFileHandle * _fileHandle;
int _packetOffset;
int _totalReadBytes;
int _maxPacketSize;
BOOL firstTime;
BOOL _ism4a;
}
-(id)initWithURL:(NSURL*)url andFileType:(NSString*)ext;
//gets next chunk that corresponds to seconds of audio
-(NSData*)getNextDataChunk:(int)seconds;
@end
Implementation
#import "MusicChunker.h"
void ReportAudioError(OSStatus statusCode);
@implementation MusicChunker
- (id)init
{
self = [super init];
if (self) {
// Initialization code here.
}
return self;
}
void ReportAudioError(OSStatus statusCode) {
switch (statusCode) {
case noErr:
break;
case kAudioFileUnspecifiedError:
[NSException raise:@"AudioFileUnspecifiedError" format:@"An unspecified error occurred."];
break;
case kAudioFileUnsupportedDataFormatError:
[NSException raise:@"AudioFileUnsupportedDataFormatError" format:@"The data format is not supported by the output file type."];
break;
case kAudioFileUnsupportedFileTypeError:
[NSException raise:@"AudioFileUnsupportedFileTypeError" format:@"The file type is not supported."];
break;
case kAudioFileUnsupportedPropertyError:
[NSException raise:@"AudioFileUnsupportedPropertyError" format:@"A file property is not supported."];
break;
case kAudioFilePermissionsError:
[NSException raise:@"AudioFilePermissionsError" format:@"The operation violated the file permissions. For example, an attempt was made to write to a file opened with the kAudioFileReadPermission constant."];
break;
case kAudioFileNotOptimizedError:
[NSException raise:@"AudioFileNotOptimizedError" format:@"The chunks following the audio data chunk are preventing the extension of the audio data chunk. To write more data, you must optimize the file."];
break;
case kAudioFileInvalidChunkError:
[NSException raise:@"AudioFileInvalidChunkError" format:@"Either the chunk does not exist in the file or it is not supported by the file."];
break;
case kAudioFileDoesNotAllow64BitDataSizeError:
[NSException raise:@"AudioFileDoesNotAllow64BitDataSizeError" format:@"The file offset was too large for the file type. The AIFF and WAVE file format types have 32-bit file size limits."];
break;
case kAudioFileInvalidPacketOffsetError:
[NSException raise:@"AudioFileInvalidPacketOffsetError" format:@"A packet offset was past the end of the file, or not at the end of the file when a VBR format was written, or a corrupt packet size was read when the packet table was built."];
break;
case kAudioFileInvalidFileError:
[NSException raise:@"AudioFileInvalidFileError" format:@"The file is malformed, or otherwise not a valid instance of an audio file of its type."];
break;
case kAudioFileOperationNotSupportedError:
[NSException raise:@"AudioFileOperationNotSupportedError" format:@"The operation cannot be performed. For example, setting the kAudioFilePropertyAudioDataByteCount constant to increase the size of the audio data in a file is not a supported operation. Write the data instead."];
break;
case -50:
[NSException raise:@"AudioFileBadParameter" format:@"An invalid parameter was passed, possibly the current packet and/or the inNumberOfPackets."];
break;
default:
[NSException raise:@"AudioFileUnknownError" format:@"An unknown error type %@ occurred. [%s]", [NSNumber numberWithInteger:statusCode], (char*)&statusCode];
break;
}
}
+ (AudioFileTypeID)hintForFileExtension:(NSString *)fileExtension
{
AudioFileTypeID fileTypeHint = kAudioFileAAC_ADTSType;
if ([fileExtension isEqual:@"mp3"])
{
fileTypeHint = kAudioFileMP3Type;
}
else if ([fileExtension isEqual:@"wav"])
{
fileTypeHint = kAudioFileWAVEType;
}
else if ([fileExtension isEqual:@"aifc"])
{
fileTypeHint = kAudioFileAIFCType;
}
else if ([fileExtension isEqual:@"aiff"])
{
fileTypeHint = kAudioFileAIFFType;
}
else if ([fileExtension isEqual:@"m4a"])
{
fileTypeHint = kAudioFileM4AType;
}
else if ([fileExtension isEqual:@"mp4"])
{
fileTypeHint = kAudioFileMPEG4Type;
}
else if ([fileExtension isEqual:@"caf"])
{
fileTypeHint = kAudioFileCAFType;
}
else if ([fileExtension isEqual:@"aac"])
{
fileTypeHint = kAudioFileAAC_ADTSType;
}
return fileTypeHint;
}
-(id)initWithURL:(NSURL*)url andFileType:(NSString*)ext
{
self = [super init];
if (self) {
// Initialization code here.
//OSStatus theErr = noErr;
if([ext isEqualToString:@"mp3"])
{
_ism4a=FALSE;
}
else
_ism4a=TRUE;
firstTime=TRUE;
_packetOffset=0;
AudioFileTypeID hint=[MusicChunker hintForFileExtension:ext];
OSStatus theErr = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, hint, &audioFile);
if(theErr)
{
ReportAudioError(theErr);
}
UInt32 thePropertySize;// = sizeof(theFileFormat);
thePropertySize = sizeof(fileDataSize);
theErr = AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataByteCount, &thePropertySize, &fileDataSize);
if(theErr)
{
ReportAudioError(theErr);
}
theErr = AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount, &thePropertySize, &_totalPackets);
if(theErr)
{
ReportAudioError(theErr);
}
/*
UInt32 size;
size = sizeof(_packetInfo);
theErr = AudioFileGetProperty(audioFile, kAudioFilePropertyPacketTableInfo, &size, &_packetInfo);
if(theErr)
{
ReportAudioError(theErr);
}
*/
UInt32 size;
size=sizeof(_maxPacketSize);
theErr=AudioFileGetProperty(audioFile, kAudioFilePropertyMaximumPacketSize , &size, &_maxPacketSize);
size = sizeof( _fileDataFormat );
theErr=AudioFileGetProperty( audioFile, kAudioFilePropertyDataFormat, &size, &_fileDataFormat );
_framesPerPacket=_fileDataFormat.mFramesPerPacket;
_totalFrames=_fileDataFormat.mFramesPerPacket*_totalPackets;
_fileHandle=[[NSFileHandle fileHandleForReadingFromURL:url error:nil] retain];
_fileLength=[_fileHandle seekToEndOfFile];
_sampleRate=_fileDataFormat.mSampleRate;
_totalReadBytes=0;
/*
AudioFramePacketTranslation tran;//= .mFrame = 0, .mPacket = packetCount - 1, .mFrameOffsetInPacket = 0 };
tran.mFrame=0;
tran.mFrameOffsetInPacket=0;
tran.mPacket=1;
UInt32 size=sizeof(tran);
theErr=AudioFileGetProperty(audioFile, kAudioFilePropertyPacketToFrame, &size, &tran);
*/
/*
AudioBytePacketTranslation bt;
bt.mPacket=4;
bt.mByteOffsetInPacket=0;
size=sizeof(bt);
theErr=AudioFileGetProperty(audioFile, kAudioFilePropertyPacketToByte, &size, &bt);
*/
}
return self;
}
//gets next chunk that corresponds to seconds of audio
-(NSData*)getNextDataChunk:(int)seconds
{
//NSLog(@"%d, total packets",_totalPackets);
if(_packetOffset>=_totalPackets)
return nil;
//sampleRate * seconds = number of wanted frames
int framesWanted= _sampleRate*seconds;
NSData *header=nil;
int wantedPackets= framesWanted/_framesPerPacket;
if(firstTime && _ism4a)
{
firstTime=false;
//when we have a header that was stripped off, we grab it from the original file
int totallen= [_fileHandle seekToEndOfFile];
int dif=totallen-fileDataSize;
[_fileHandle seekToFileOffset:0];
header= [_fileHandle readDataOfLength:dif];
}
int packetOffset=_packetOffset+wantedPackets;
//bound condition
if(packetOffset>_totalPackets)
{
packetOffset=_totalPackets;
}
UInt32 outBytes;
UInt32 packetCount = wantedPackets;
int x=packetCount * _maxPacketSize;
void *data = (void *)malloc(x);
OSStatus theErr=AudioFileReadPackets(audioFile, false, &outBytes, NULL, _packetOffset, &packetCount, data);
if(theErr)
{
ReportAudioError(theErr);
}
//calculate bytes to read
int bytesRead=outBytes;
//update read bytes
_totalReadBytes+=bytesRead;
// NSLog(@"total bytes read %d", _totalReadBytes);
_packetOffset=packetOffset;
NSData *subdata=[[NSData dataWithBytes:data length:outBytes] retain];
free(data);
if(header)
{
NSMutableData *data=[[NSMutableData alloc]init];
[data appendData:header];
[data appendData:subdata];
[subdata release];
return [data autorelease];
}
return [subdata autorelease];
}
@end
If the songs are in arbitrary compressed formats and you want exactly 2-second snips, you may have to convert the songs into raw PCM samples or WAV data first (AVAssetReader, et al.). Then you can count samples at a known sample rate, e.g. 88200 frames at a 44.1 kHz sample rate would be 2 seconds' worth.

iPhone Extended Audio File Services, mp3 -> PCM -> mp3

I would like to use the Core Audio Extended Audio File Services framework to read an mp3 file, process it as PCM, then write the modified file back out as an mp3 file. I am able to convert the mp3 file to PCM, but am NOT able to write the PCM back out as an mp3.
I have followed and analyzed Apple's ExtAudioFileConvertTest sample and cannot get that to work either. The failure point is when I set the client format for the output file (set to a canonical PCM type). This fails with error "fmt?" if the output target type is set to mp3.
Is it possible to do mp3 -> PCM -> mp3 on the iPhone? If I remove the failing line, setting the kExtAudioFileProperty_ClientDataFormat for the output file, the code fails with "pkd?" when I try to write to the output file later. So basically I have 2 errors:
1) "fmt?" when trying to set kExtAudioFileProperty_ClientDataFormat for the output file
2) "pkd?" when trying to write to the output file
Here is the code to set up the files:
NSURL *fileUrl = [NSURL fileURLWithPath:sourceFilePath];
OSStatus error = noErr;
//
// Open the file
//
error = ExtAudioFileOpenURL((CFURLRef)fileUrl, &sourceFile);
if(error){
NSLog(@"AudioClip: Error opening file at %@. Error code %d", sourceFilePath, error);
return NO;
}
//
// Store the number of frames in the file
//
SInt64 numberOfFrames = 0;
UInt32 propSize = sizeof(SInt64);
error = ExtAudioFileGetProperty(sourceFile, kExtAudioFileProperty_FileLengthFrames, &propSize, &numberOfFrames);
if(error){
NSLog(@"AudioClip: Error retrieving number of frames: %d", error);
[self closeAudioFile];
return NO;
}
frameCount = numberOfFrames;
//
// Get the source file format info
//
propSize = sizeof(sourceFileFormat);
memset(&sourceFileFormat, 0, sizeof(AudioStreamBasicDescription));
error = ExtAudioFileGetProperty(sourceFile, kExtAudioFileProperty_FileDataFormat, &propSize, &sourceFileFormat);
if(error){
NSLog(@"AudioClip: Error getting source audio file properties: %d", error);
[self closeAudioFile];
return NO;
}
//
// Set the format for our read. We read in PCM, clip, then write out mp3
//
memset(&readFileFormat, 0, sizeof(AudioStreamBasicDescription));
readFileFormat.mFormatID = kAudioFormatLinearPCM;
readFileFormat.mSampleRate = 44100;
readFileFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
readFileFormat.mChannelsPerFrame = 1;
readFileFormat.mBitsPerChannel = 8 * sizeof(AudioSampleType);
readFileFormat.mFramesPerPacket = 1;
readFileFormat.mBytesPerFrame = sizeof(AudioSampleType);
readFileFormat.mBytesPerPacket = sizeof(AudioSampleType);
readFileFormat.mReserved = 0;
propSize = sizeof(readFileFormat);
error = ExtAudioFileSetProperty(sourceFile, kExtAudioFileProperty_ClientDataFormat, propSize, &readFileFormat);
if(error){
NSLog(@"AudioClip: Error setting read format: %d", error);
[self closeAudioFile];
return NO;
}
//
// Set the format for the output file that we will write
//
propSize = sizeof(targetFileFormat);
memset(&targetFileFormat, 0, sizeof(AudioStreamBasicDescription));
targetFileFormat.mFormatID = kAudioFormatMPEGLayer3;
targetFileFormat.mChannelsPerFrame = 1;
//
// Let the API fill in the rest
//
error = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &propSize, &targetFileFormat);
if(error){
NSLog(@"AudioClip: Error getting target file format info: %d", error);
[self closeAudioFile];
return NO;
}
//
// Create our target file
//
NSURL *writeURL = [NSURL fileURLWithPath:targetFilePath];
error = ExtAudioFileCreateWithURL( (CFURLRef)writeURL, kAudioFileMP3Type,
&targetFileFormat, NULL,
kAudioFileFlags_EraseFile,
&targetFile);
if(error){
NSLog(@"AudioClip: Error opening target file for writing: %d", error);
[self closeAudioFile];
return NO;
}
//
// Set the client format for the output file the same as our client format for the input file
//
propSize = sizeof(readFileFormat);
error = ExtAudioFileSetProperty(targetFile, kExtAudioFileProperty_ClientDataFormat, propSize, &readFileFormat);
if(error){
NSLog(@"AudioClip: Error, cannot set client format for output file: %d", error);
[self closeAudioFile];
return NO;
}
And the code to read/write:
NSInteger framesToRead = finalFrameNumber - startFrameNumber;
while(framesToRead > 0){
//
// Read frames into our data
//
short *data = (short *)malloc(framesToRead * sizeof(short));
if(!data){
NSLog(@"AudioPlayer: Cannot init memory for read buffer");
[self notifyDelegateFailure];
[self closeAudioFile];
return;
}
AudioBufferList bufferList;
OSStatus error = noErr;
UInt32 loadedPackets = framesToRead;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mData = data;
bufferList.mBuffers[0].mDataByteSize = (framesToRead * sizeof(short));
NSLog(@"AudioClip: Before read mNumberBuffers = %d, mNumberChannels = %d, mData = %p, mDataByteSize = %d",
bufferList.mNumberBuffers, bufferList.mBuffers[0].mNumberChannels, bufferList.mBuffers[0].mData,
bufferList.mBuffers[0].mDataByteSize);
error = ExtAudioFileRead(sourceFile, &loadedPackets, &bufferList);
if(error){
NSLog(@"AudioClip: Error %d from ExtAudioFileRead", error);
[self notifyDelegateFailure];
[self closeAudioFile];
return;
}
//
// Now write the data to our file which will convert it into a mp3 file
//
NSLog(@"AudioClip: After read mNumberBuffers = %d, mNumberChannels = %d, mData = %p, mDataByteSize = %d",
bufferList.mNumberBuffers, bufferList.mBuffers[0].mNumberChannels, bufferList.mBuffers[0].mData,
bufferList.mBuffers[0].mDataByteSize);
error = ExtAudioFileWrite(targetFile, loadedPackets, &bufferList);
if(error){
NSLog(@"AudioClip: Error %d from ExtAudioFileWrite", error);
[self notifyDelegateFailure];
[self closeAudioFile];
return;
}
framesToRead -= loadedPackets;
}
Apple doesn't supply an MP3 encoder, only a decoder. The source document is a bit outdated, but AFAIK it is still current: http://developer.apple.com/library/ios/#documentation/MusicAudio/Conceptual/CoreAudioOverview/SupportedAudioFormatsMacOSX/SupportedAudioFormatsMacOSX.html%23//apple_ref/doc/uid/TP40003577-CH7-SW1
I think your best bet might be to use AAC.

Trying to find USB device on iphone with IOKit.framework

I'm working on a project where I need the USB port to communicate with an external device. I have been looking for examples on the net (Apple's and the /developer/IOKit/usb examples) and trying some others, but I can't even find the device.
In my code, I'm blocking at the place where the function looks for the next iterator (a pointer, in fact) with IOIteratorNext; it never returns a good value, so the code blocks. By the way, I am using the toolchain and have added IOKit.framework to my project. All I want right now is to communicate, or do something like a ping to a device on the USB bus! I'm blocking in FindDevice: I can't manage to enter the while loop because the variable usbDevice is always equal to 0. I have tested my code in a small Mac program and it works.
Here is my code :
IOReturn ConfigureDevice(IOUSBDeviceInterface **dev) {
UInt8 numConfig;
IOReturn result;
IOUSBConfigurationDescriptorPtr configDesc;
//Get the number of configurations
result = (*dev)->GetNumberOfConfigurations(dev, &numConfig);
if (!numConfig) {
return -1;
}
// Get the configuration descriptor
result = (*dev)->GetConfigurationDescriptorPtr(dev, 0, &configDesc);
if (result) {
NSLog(@"Couldn't get configuration descriptor for index %d (err=%08x)\n", 0, result);
return -1;
}
#ifdef OSX_DEBUG
NSLog(@"Number of Configurations: %d\n", numConfig);
#endif
// Configure the device
result = (*dev)->SetConfiguration(dev, configDesc->bConfigurationValue);
if (result)
{
NSLog(@"Unable to set configuration to value %d (err=%08x)\n", 0, result);
return -1;
}
return kIOReturnSuccess;
}
IOReturn FindInterfaces(IOUSBDeviceInterface **dev, IOUSBInterfaceInterface ***itf) {
IOReturn kr;
IOUSBFindInterfaceRequest request;
io_iterator_t iterator;
io_service_t usbInterface;
IOUSBInterfaceInterface **intf = NULL;
IOCFPlugInInterface **plugInInterface = NULL;
HRESULT res;
SInt32 score;
UInt8 intfClass;
UInt8 intfSubClass;
UInt8 intfNumEndpoints;
int pipeRef;
CFRunLoopSourceRef runLoopSource;
NSLog(@"Start of FindInterfaces \n");
request.bInterfaceClass = kIOUSBFindInterfaceDontCare;
request.bInterfaceSubClass = kIOUSBFindInterfaceDontCare;
request.bInterfaceProtocol = kIOUSBFindInterfaceDontCare;
request.bAlternateSetting = kIOUSBFindInterfaceDontCare;
kr = (*dev)->CreateInterfaceIterator(dev, &request, &iterator);
usbInterface = IOIteratorNext(iterator);
IOObjectRelease(iterator);
NSLog(@"Interface found.\n");
kr = IOCreatePlugInInterfaceForService(usbInterface, kIOUSBInterfaceUserClientTypeID, kIOCFPlugInInterfaceID, &plugInInterface, &score);
kr = IOObjectRelease(usbInterface); // done with the usbInterface object now that I have the plugin
if ((kIOReturnSuccess != kr) || !plugInInterface)
{
NSLog(@"unable to create a plugin (%08x)\n", kr);
return -1;
}
// I have the interface plugin. I need the interface interface
res = (*plugInInterface)->QueryInterface(plugInInterface, CFUUIDGetUUIDBytes(kIOUSBInterfaceInterfaceID), (LPVOID*) &intf);
(*plugInInterface)->Release(plugInInterface); // done with this
if (res || !intf)
{
NSLog(@"couldn't create an IOUSBInterfaceInterface (%08x)\n", (int) res);
return -1;
}
// Now open the interface. This will cause the pipes to be instantiated that are
// associated with the endpoints defined in the interface descriptor.
kr = (*intf)->USBInterfaceOpen(intf);
if (kIOReturnSuccess != kr)
{
NSLog(@"unable to open interface (%08x)\n", kr);
(void) (*intf)->Release(intf);
return -1;
}
kr = (*intf)->CreateInterfaceAsyncEventSource(intf, &runLoopSource);
if (kIOReturnSuccess != kr)
{
NSLog(@"unable to create async event source (%08x)\n", kr);
(void) (*intf)->USBInterfaceClose(intf);
(void) (*intf)->Release(intf);
return -1;
}
CFRunLoopAddSource(CFRunLoopGetCurrent(), runLoopSource, kCFRunLoopDefaultMode);
if (!intf)
{
NSLog(@"Interface is NULL!\n");
} else
{
*itf = intf;
}
NSLog(@"End of FindInterfaces \n \n");
return kr;
}
unsigned int FindDevice(void *refCon, io_iterator_t iterator) {
kern_return_t kr;
io_service_t usbDevice;
IOCFPlugInInterface **plugInInterface = NULL;
HRESULT result;
SInt32 score;
UInt16 vendor;
UInt16 product;
UInt16 release;
unsigned int count = 0;
NSLog(@"Searching Device....\n");
while ((usbDevice = IOIteratorNext(iterator)))
{
// create intermediate plug-in
NSLog(@"Found a device!\n");
kr = IOCreatePlugInInterfaceForService(usbDevice,
kIOUSBDeviceUserClientTypeID,
kIOCFPlugInInterfaceID,
&plugInInterface, &score);
kr = IOObjectRelease(usbDevice);
if ((kIOReturnSuccess != kr) || !plugInInterface) {
NSLog(@"Unable to create a plug-in (%08x)\n", kr);
continue;
}
// Now create the device interface
result = (*plugInInterface)->QueryInterface(plugInInterface,
CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
(LPVOID)&dev);
// Don't need intermediate Plug-In Interface
(*plugInInterface)->Release(plugInInterface);
if (result || !dev) {
NSLog(@"Couldn't create a device interface (%08x)\n",
(int)result);
continue;
}
// check these values for confirmation
kr = (*dev)->GetDeviceVendor(dev, &vendor);
kr = (*dev)->GetDeviceProduct(dev, &product);
//kr = (*dev)->GetDeviceReleaseNumber(dev, &release);
//if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID) || (release != LegoUSBRelease)) {
if ((vendor != LegoUSBVendorID) || (product != LegoUSBProductID))
{
NSLog(@"Found unwanted device (vendor = %d != %d, product = %d != %d, release = %d)\n",
vendor, LegoUSBVendorID, product, LegoUSBProductID, release);
(void) (*dev)->Release(dev);
continue;
}
// Open the device to change its state
kr = (*dev)->USBDeviceOpen(dev);
if (kr == kIOReturnSuccess) {
count++;
} else {
NSLog(@"Unable to open device: %08x\n", kr);
(void) (*dev)->Release(dev);
continue;
}
// Configure device
kr = ConfigureDevice(dev);
if (kr != kIOReturnSuccess) {
NSLog(@"Unable to configure device: %08x\n", kr);
(void) (*dev)->USBDeviceClose(dev);
(void) (*dev)->Release(dev);
continue;
}
break;
}
return count;
}
// USB rcx Init
IOUSBInterfaceInterface** osx_usb_rcx_init (void)
{
CFMutableDictionaryRef matchingDict;
kern_return_t result;
IOUSBInterfaceInterface **intf = NULL;
unsigned int device_count = 0;
// Create master handler
result = IOMasterPort(MACH_PORT_NULL, &gMasterPort);
if (result || !gMasterPort)
{
NSLog(@"ERR: Couldn't create master I/O Kit port (%08x)\n", result);
return NULL;
}
else {
NSLog(@"Created Master Port.\n");
NSLog(@"Master port 0x%08X \n \n", gMasterPort);
}
// Set up the matching dictionary for class IOUSBDevice and its subclasses
matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
if (!matchingDict) {
NSLog(@"Couldn't create a USB matching dictionary \n");
mach_port_deallocate(mach_task_self(), gMasterPort);
return NULL;
}
else {
NSLog(@"USB matching dictionary : %08X \n", matchingDict);
}
CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID),
CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &LegoUSBVendorID));
CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID),
CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &LegoUSBProductID));
result = IOServiceGetMatchingServices(gMasterPort, matchingDict, &gRawAddedIter);
matchingDict = 0; // this was consumed by the above call
// Iterate over matching devices to access already present devices
NSLog(@"RawAddedIter : 0x%08X \n", &gRawAddedIter);
device_count = FindDevice(NULL, gRawAddedIter);
if (device_count == 1)
{
result = FindInterfaces(dev, &intf);
if (kIOReturnSuccess != result)
{
NSLog(@"unable to find interfaces on device: %08x\n", result);
(*dev)->USBDeviceClose(dev);
(*dev)->Release(dev);
return NULL;
}
// osx_usb_rcx_wakeup(intf);
return intf;
}
else if (device_count > 1)
{
NSLog(@"too many matching devices (%d) !\n", device_count);
}
else
{
NSLog(@"no matching devices found\n");
}
return NULL;
}
int main(int argc, char *argv[])
{
int returnCode;
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
NSLog(@"Program start \n \n");
osx_usb_rcx_init();
NSLog(@"Program end \n \n");
return 0;
// returnCode = UIApplicationMain(argc, argv, @"Untitled1App", @"Untitled1App");
// [pool release];
// return returnCode;
}
IOKit is not available for iPhone applications. If you need to connect with external devices from the iPhone you need to sign up for the MFi Program which will provide you with the needed API's and documentation.
Besides the App Store rules, I don't think you can even touch IOKit on iOS without violating the SDK agreement.

How to play looping sound with OpenAL on iPhone

I'm following a tutorial about playing sound with OpenAL. Now everything works fine except that I can't make the sound loop. I believe I've set AL_LOOPING for the source. Now it plays only once, and when it finishes playing, the app blocks (it doesn't respond to my tap on the play button). Any ideas about what's wrong with the code?
// start up openAL
// init device and context
-(void)initOpenAL
{
// Initialization
mDevice = alcOpenDevice(NULL); // select the "preferred device"
if (mDevice) {
// use the device to make a context
mContext = alcCreateContext(mDevice, NULL);
// set my context to the currently active one
alcMakeContextCurrent(mContext);
}
}
// open the audio file
// returns a big audio ID struct
-(AudioFileID)openAudioFile:(NSString*)filePath
{
AudioFileID outAFID;
// use NSURL instead of a CFURLRef because it is easier
NSURL * afUrl = [NSURL fileURLWithPath:filePath];
// do some platform specific stuff..
#if TARGET_OS_IPHONE
OSStatus result = AudioFileOpenURL((CFURLRef)afUrl, kAudioFileReadPermission, 0, &outAFID);
#else
OSStatus result = AudioFileOpenURL((CFURLRef)afUrl, fsRdPerm, 0, &outAFID);
#endif
if (result != 0) NSLog(@"cannot open file: %@", filePath);
return outAFID;
}
// find the audio portion of the file
// return the size in bytes
-(UInt32)audioFileSize:(AudioFileID)fileDescriptor
{
UInt64 outDataSize = 0;
UInt32 thePropSize = sizeof(UInt64);
OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
if(result != 0) NSLog(@"cannot find file size");
return (UInt32)outDataSize;
}
- (void)stopSound
{
alSourceStop(sourceID);
}
-(void)cleanUpOpenAL:(id)sender
{
// delete the sources
alDeleteSources(1, &sourceID);
// delete the buffers
alDeleteBuffers(1, &bufferID);
// destroy the context
alcDestroyContext(mContext);
// close the device
alcCloseDevice(mDevice);
}
-(IBAction)play:(id)sender
{
alSourcePlay(sourceID);
}
#pragma mark -
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
[super viewDidLoad];
[self initOpenAL];
// get the full path of the file
NSString* fileName = [[NSBundle mainBundle] pathForResource:@"sound" ofType:@"caf"];
// first, open the file
AudioFileID fileID = [self openAudioFile:fileName];
// find out how big the actual audio data is
UInt32 fileSize = [self audioFileSize:fileID];
// this is where the audio data will live for the moment
unsigned char * outData = malloc(fileSize);
// this where we actually get the bytes from the file and put them
// into the data buffer
OSStatus result = noErr;
result = AudioFileReadBytes(fileID, false, 0, &fileSize, outData);
AudioFileClose(fileID); //close the file
if (result != 0) NSLog(@"cannot load effect: %@", fileName);
//NSUInteger bufferID; // buffer is defined in head file
// grab a buffer ID from openAL
alGenBuffers(1, &bufferID);
// jam the audio data into the new buffer
alBufferData(bufferID, AL_FORMAT_STEREO16, outData, fileSize, 8000);
//NSUInteger sourceID; // source is defined in head file
// grab a source ID from openAL
alGenSources(1, &sourceID);
// attach the buffer to the source
alSourcei(sourceID, AL_BUFFER, bufferID);
// set some basic source prefs
alSourcef(sourceID, AL_PITCH, 1.0f);
alSourcef(sourceID, AL_GAIN, 1.0f);
alSourcei(sourceID, AL_LOOPING, AL_TRUE);
// clean up the buffer
if (outData)
{
free(outData);
outData = NULL;
}
}
You should be able to free outData right after your alBufferData() call. To exclude it as the culprit, you can try the static-buffer extension and manage the memory yourself. It's something like:
alBufferDataStaticProcPtr alBufferDataStaticProc = (alBufferDataStaticProcPtr)alcGetProcAddress(NULL, (const ALCchar *)"alBufferDataStatic");
alBufferDataStaticProc(bufferID, bitChanFormat, audioData, audioDataSize, dataFormat.mSampleRate);