Playing PCM data on iPhone

I need to play linear PCM data live on an iPhone.
I get a live data stream via RTSP, and I can currently read it on the iPhone, save it to a file, and play that file in a desktop audio player that supports PCM, so I think the transport is okay.
Now I'm stuck: I have absolutely no idea what to do with my NSData object containing the data.
I did a bit of research and ended up at Audio Units, but I just cannot assign my NSData to the audio buffer, or rather, I have no clue how.
For my instance, I assigned the callback:
AURenderCallbackStruct input;
input.inputProc = makeSound;
input.inputProcRefCon = self;
and have the function 'makeSound':
OSStatus makeSound(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    // so what do I do here?
    // ioData->mBuffers[0].mData = [mySound bytes]; does not work, nor does
    // ioData->mBuffers = [mySound bytes];
    return noErr;
}
Is my approach wrong in general?
What do I need to know/learn/implement? I am a complete audio newbie, so my assumption was that I don't need several buffers: when I get a new sound package via RTSP, the old one has ended, since it's a live stream. (I base this on my recordings, which just appended the bytes without looking up presentation timestamps, since I don't receive any anyway.)
Cheers

I don't know if this is exactly what you are looking for, but some of Matt Gallagher's AudioStreamer code might be helpful to you. In particular, check out how he handles the audio buffering.
http://cocoawithlove.com/2010/03/streaming-mp3aac-audio-again.html
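As for the snippet in the question: you don't point ioData->mBuffers[0].mData at your NSData; you copy your bytes into the buffer Core Audio hands you, and zero-fill whatever you can't supply. A minimal sketch, assuming a hypothetical player object whose pcmBuffer property is an NSMutableData that your RTSP code appends to, already in the stream format you configured on the unit:

static OSStatus makeSound(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    // inRefCon was set to 'self' via inputProcRefCon, so cast it back
    MyStreamPlayer *player = (MyStreamPlayer *)inRefCon;   // hypothetical class
    NSMutableData *pcm = player.pcmBuffer;                 // hypothetical property

    UInt32 bytesWanted    = ioData->mBuffers[0].mDataByteSize;
    UInt32 bytesAvailable = (UInt32)[pcm length];
    UInt32 bytesToCopy    = MIN(bytesWanted, bytesAvailable);

    // copy what we have into the buffer Core Audio gave us ...
    memcpy(ioData->mBuffers[0].mData, [pcm bytes], bytesToCopy);
    // ... and pad the rest with silence if the stream is running dry
    if (bytesToCopy < bytesWanted) {
        memset((char *)ioData->mBuffers[0].mData + bytesToCopy, 0, bytesWanted - bytesToCopy);
    }

    // drop the consumed bytes from the front of the buffer
    [pcm replaceBytesInRange:NSMakeRange(0, bytesToCopy) withBytes:NULL length:0];
    return noErr;
}

In real code you would feed the callback from a lock-free ring buffer rather than touching an NSMutableData on the render thread, but the memcpy/memset pattern is the part that answers the question.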

Related

Saving audio played via AUFilePlayer to an external file

In my iOS app, I am playing some short WAV files and ultimately want to export everything that I played to a single audio file, such as a WAV or CAF file. I have managed to do the playback using AUFilePlayer. How do I save the audio played via AUFilePlayer to a WAV or CAF file?
You'll probably want to look into the ExtAudioFile API. This exposes a function called ExtAudioFileWrite which is designed to tie in nicely with the data your Audio Units are passing around. ExtAudioFileWrite's signature is as follows:
OSStatus ExtAudioFileWrite (
    ExtAudioFileRef        inExtAudioFile,
    UInt32                 inNumberFrames,
    const AudioBufferList  *ioData
);
This coincides nicely with an Audio Unit render callback, which looks like this:
OSStatus (*AURenderCallback)(
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData
);
Notice the shared UInt32 inNumberFrames and AudioBufferList * ioData args.
So, your workflow could be:
1. Get an ExtAudioFile set up for writing.
2. Get your AUGraph set up to render audio.
3. Capture the AudioBufferLists that your Audio Units are passing around.
Step 3 requires a bit more knowledge of how your app is set up, so I can't really help you out too much there. If you want your audio going to the speakers as well as being written to a file, you'll probably want to make use of AUGraphAddRenderNotify, which will let you know whenever a render happens (and let you hook in your own AURenderCallback to write to your ExtAudioFile).
If you're doing things this way (i.e. live in a render callback) make sure to use ExtAudioFileWriteAsync so you don't block the render thread.
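A rough sketch of that notify-and-write approach (the function names, the single global file ref, and the assumption that the graph's output format can go straight into the file are mine, not part of the original answer):

ExtAudioFileRef captureFile;   // hypothetical ivar/global holding the open file

void SetUpCaptureFile(CFURLRef fileURL, const AudioStreamBasicDescription *streamFormat)
{
    // create a CAF file that accepts the same PCM format the graph renders;
    // if the two formats differ, also set kExtAudioFileProperty_ClientDataFormat
    ExtAudioFileCreateWithURL(fileURL, kAudioFileCAFType, streamFormat,
                              NULL, kAudioFileFlags_EraseFile, &captureFile);
    // priming call so later async writes from the render thread don't block
    ExtAudioFileWriteAsync(captureFile, 0, NULL);
}

static OSStatus CaptureRenderNotify(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    // the notify fires before and after each render; only write the rendered data
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        ExtAudioFileWriteAsync(captureFile, inNumberFrames, ioData);
    }
    return noErr;
}

// somewhere in your setup code:
// AUGraphAddRenderNotify(graph, CaptureRenderNotify, NULL);
// ... and when you're done: ExtAudioFileDispose(captureFile);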

Playing audio from a continuous stream of data (iOS)

I've been banging my head against this problem all morning.
I have set up a connection to a data source which returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, as if you had opened a stream to a radio.)
I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.
I've been searching all morning and found different examples, but none were really laid out.
In the
- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
method, the "data" argument is the audio package. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at Matt Gallagher's AudioStreamer, but was still unable to get it working.
Can anyone here help, or share some (preferably working) examples?
Careful: the answer below is only valid if you receive PCM data from the server, which of course never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.
Depending on format, this could be more or less tricky, but in general you should use Audio Converter Services for this step.
You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.
Now, to play the data you 'stored' in memory using the buffer, you need to use RemoteIO and audio units. Here is a good, comprehensive tutorial. You can remove the "record" part from the tutorial, as you don't really need it.
As you can see, they define a callback for playback:
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
and the playbackCallback function looks like this:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write yourself that
            // returns the next byte available in the stream buffer
            frameBuffer[j] = getNextPacket();
        }
    }
    return noErr;
}
Basically what it does is fill up the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player is silenced if there is not enough data in the stream buffer).
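For completeness, here is a deliberately simple (and not thread-safe) sketch of what could sit behind getNextPacket(): didReceiveData: only appends to a shared buffer, and the render callback drains it one byte at a time. The streamBuffer name is made up, and a real implementation would use a lock-free ring buffer instead of NSMutableData.

static NSMutableData *streamBuffer;   // hypothetical shared stream buffer

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    if (!streamBuffer) streamBuffer = [[NSMutableData alloc] init];
    [streamBuffer appendData:data];    // just store the bytes; no playback logic here
}

unsigned char getNextPacket(void) {
    if ([streamBuffer length] == 0) {
        return 0;   // no data yet: return silence so the output stays quiet
    }
    unsigned char byte = ((const unsigned char *)[streamBuffer bytes])[0];
    // drop the consumed byte from the front of the buffer
    [streamBuffer replaceBytesInRange:NSMakeRange(0, 1) withBytes:NULL length:0];
    return byte;
}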
Also, you can achieve the same thing with OpenAL using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.
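Roughly like this (a sketch only: fillFromStream is a hypothetical function that copies the next chunk of 16-bit stereo PCM out of your stream buffer, and the source is assumed to already exist with a few buffers queued):

#include <OpenAL/al.h>
#include <stddef.h>

extern void fillFromStream(unsigned char *dst, size_t len);   // hypothetical stream reader

static void refillStreamingSource(ALuint source, ALsizei sampleRate)
{
    static unsigned char pcmChunk[4096];
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buffer;
        alSourceUnqueueBuffers(source, 1, &buffer);       // reclaim a finished buffer
        fillFromStream(pcmChunk, sizeof(pcmChunk));       // copy the next PCM chunk into it
        alBufferData(buffer, AL_FORMAT_STEREO16, pcmChunk, sizeof(pcmChunk), sampleRate);
        alSourceQueueBuffers(source, 1, &buffer);         // hand it back to the source
    }
    ALint state;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING) alSourcePlay(source);        // restart if the source starved
}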
That's it. Happy coding!

AudioToolbox/OpenAL ExtAudioFile to play compressed audio

I'm currently using OpenAL to play game music. It works fine, except that it doesn't work with anything other than raw WAV files. This means that I end up with a ~9 MB soundtrack.
I'm new to OpenAL, and I'm using code directly from Apple's example (https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9) to get the buffer data.
Question: Is there any way to modify this function so it reads compressed audio and decodes it on the fly?
I'm not so worried about the audio file format, just as long as it can be played and is compressed (like mp3, aac, caf). The only reason I want to do this (obviously) is to reduce file size.
Edit: It seems that the problem is not so much in OpenAL as in the method I'm using to get the buffer. The function at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9 uses AudioFileOpenURL and AudioFileReadBytes. Is there any way to get the framework to decode the audio for me using ExtAudioFileOpenURL and ExtAudioFileRead?
I have tried the code here: https://devforums.apple.com/message/10678#10678, but I don't know what to make of it. The function I use to get the buffer is at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9, and I haven't really modified it, so that's what I need to build on.
I've started a bounty because I really need this, hopefully someone can point me in the right direction.
You'll need to use audio services to load other formats. Bear in mind that OpenAL ONLY supports uncompressed PCM data, so any data you load needs to be decompressed during load.
Here's some code that will load any format supported by iOS: https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/Support/OALAudioFile.m
If you want to stream compressed soundtrack-type audio, use AVAudioPlayer since it plays compressed audio straight from disk.
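For example (a minimal sketch; the file name and extension are made up):

#import <AVFoundation/AVFoundation.h>

// AVAudioPlayer decodes the compressed file from disk itself,
// so nothing needs to be unpacked into memory up front.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"soundtrack" withExtension:@"m4a"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if (!player) {
    NSLog(@"could not create player: %@", error);
} else {
    player.numberOfLoops = -1;   // loop the soundtrack indefinitely
    [player prepareToPlay];
    [player play];
}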
You don't need any third-party library to open compressed files. With a little help from the AudioToolbox/AudioToolbox.h framework you can open and read the data of a .caf file, which is a very good choice by the way (better than mp3 or ogg) in terms of performance (minimal CPU impact during decompression). So, when the data gets to OpenAL it is already PCM, ready to fill the buffers. Here is some sample code showing how you can achieve this:
-(void)prepareFiles:(NSString *)filePath {
    // get the full path of the file
    NSString *fileName = [[NSBundle mainBundle] pathForResource:filePath ofType:@"caf"];
    // open the file using the custom methods created below
    AudioFileID fileID = [self openAudioFile:fileName];
    preparedAudioFileSize = [self audioFileSize:fileID];
    if (preparedAudioFile) {
        free(preparedAudioFile);
        preparedAudioFile = nil;
    }
    preparedAudioFile = malloc(preparedAudioFileSize);
    // read the data from the file into the preparedAudioFile buffer
    AudioFileReadBytes(fileID, false, 0, &preparedAudioFileSize, preparedAudioFile);
    // close the file
    AudioFileClose(fileID);
}

-(AudioFileID)openAudioFile:(NSString *)filePath {
    AudioFileID fileID;
    NSURL *url = [NSURL fileURLWithPath:filePath];
    OSStatus result = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, 0, &fileID);
    if (result != noErr) {
        NSLog(@"failed to open: %@", filePath);
    }
    return fileID;
}

-(UInt32)audioFileSize:(AudioFileID)fileDescriptor {
    UInt64 outDataSize = 0;
    UInt32 thePropSize = sizeof(UInt64);
    OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
    if (result != noErr) NSLog(@"cannot find file size");
    return (UInt32)outDataSize;
}
Based on Karl's reply above, I made a minimal single C++ function which opens a file and gives you back a buffer of PCM audio (suitable for OpenAL) and all the info you need to create an OpenAL sound (format, sample rate, buffer size, etc.).
The two files you need are here:
https://gist.github.com/ofTheo/5171369
hope it helps!
theo
See if this works: http://kcat.strangesoft.net/openal-tutorial.html
You might try using a third-party library to load an mp3/ogg file into a char* buffer, and then give this buffer to OpenAL. That would solve the file-size problem.
For Ogg, you should find the libraries on their website.
For MP3, I honestly don't know where to find a lightweight library that can do that, but one should exist.

MP3 streaming on iOS

I want to use OpenAL to play music in an iOS game. The music files are stored in MP3 format and I want to stream them using a buffer queue. I load audio data into the buffers using AudioFileReadPacketData(). However, playing the buffers only gives me noise. It works perfectly for CAF files, but not for MP3s. Did I miss some vital step in decoding the file?
Code I use to open the sound file:
- (void)openFile:(NSString *)fileName {
    NSBundle *bundle = [NSBundle mainBundle];
    CFURLRef url = (CFURLRef)[[NSURL fileURLWithPath:[bundle pathForResource:fileName ofType:@"mp3"]] retain];
    AudioFileOpenURL(url, kAudioFileReadPermission, 0, &audioFile);
    AudioStreamBasicDescription theFormat;
    UInt32 formatSize = sizeof(theFormat);
    AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &formatSize, &theFormat);
    freq = (ALsizei)theFormat.mSampleRate;
    CFRelease(url);
}
Code I use to fill in buffers:
- (void)loadOneChunkIntoBuffer:(ALuint)buffer {
    char data[STREAM_BUFFER_SIZE];
    UInt32 loadSize = STREAM_BUFFER_SIZE;
    AudioStreamPacketDescription packetDesc[STREAM_PACKETS];
    UInt32 numPackets = STREAM_PACKETS;
    AudioFileReadPacketData(audioFile, NO, &loadSize, packetDesc, packetsLoaded, &numPackets, data);
    alBufferData(buffer, AL_FORMAT_STEREO16, data, loadSize, freq);
    packetsLoaded += numPackets;
}
Because you're reading bytes of MP3 data and treating them as PCM data.
You almost certainly want AudioFileReadPacketData(). EDIT: Except that still gives you MP3 data; it just gives it in packets and (possibly) parses packet headers.
If you don't require OpenAL, AVAudioPlayer is probably the better way to go (according to the Multimedia Programming Guide, there's also Audio Queue services if you want more control).
If you really need to use OpenAL, according to TN2199 you'll need to convert it to PCM in the native byte order. See oalTouch/Classes/MyOpenALSupport.c for an example of using Extended Audio File Services to do this. Note that TN2199 says the format "must ... not use hardware decompression" — according to the Multimedia Programming Guide, software decoding is supported for everything except HE-AAC since OS 3.0. Also note that software MP3 decoding can use a significant amount of CPU time.
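A rough sketch of the Extended Audio File Services approach (this is not the oalTouch code itself; the function name is mine and error handling is omitted):

#import <AudioToolbox/AudioToolbox.h>
#import <OpenAL/al.h>

// Open a compressed file and have ExtAudioFile hand back interleaved 16-bit
// native-endian PCM that OpenAL can use. The caller owns the returned buffer.
void *LoadPCMFromFile(CFURLRef url, ALsizei *outSize, ALenum *outFormat, ALsizei *outRate)
{
    ExtAudioFileRef file = NULL;
    ExtAudioFileOpenURL(url, &file);

    // ask for the file's native format so we can keep channel count / sample rate
    AudioStreamBasicDescription fileFormat;
    UInt32 propSize = sizeof(fileFormat);
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);

    // tell ExtAudioFile to convert to interleaved 16-bit signed PCM on read
    AudioStreamBasicDescription pcmFormat = {0};
    pcmFormat.mSampleRate       = fileFormat.mSampleRate;
    pcmFormat.mFormatID         = kAudioFormatLinearPCM;
    pcmFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcmFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    pcmFormat.mBitsPerChannel   = 16;
    pcmFormat.mBytesPerFrame    = 2 * pcmFormat.mChannelsPerFrame;
    pcmFormat.mBytesPerPacket   = pcmFormat.mBytesPerFrame;
    pcmFormat.mFramesPerPacket  = 1;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(pcmFormat), &pcmFormat);

    // total length in frames tells us how big the PCM buffer must be
    SInt64 fileFrames = 0;
    propSize = sizeof(fileFrames);
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames, &propSize, &fileFrames);

    UInt32 dataSize = (UInt32)fileFrames * pcmFormat.mBytesPerFrame;
    void *pcmData = malloc(dataSize);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = pcmFormat.mChannelsPerFrame;
    bufferList.mBuffers[0].mDataByteSize   = dataSize;
    bufferList.mBuffers[0].mData           = pcmData;

    // ExtAudioFileRead decodes as it reads; a robust version would loop until
    // all frames have been read rather than assuming one call is enough
    UInt32 framesToRead = (UInt32)fileFrames;
    ExtAudioFileRead(file, &framesToRead, &bufferList);
    ExtAudioFileDispose(file);

    *outSize   = (ALsizei)dataSize;
    *outRate   = (ALsizei)pcmFormat.mSampleRate;
    *outFormat = (pcmFormat.mChannelsPerFrame == 2) ? AL_FORMAT_STEREO16 : AL_FORMAT_MONO16;
    return pcmData;
}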
Alternatively, explicitly convert the audio using AudioConverter or (possibly) AudioUnit with kAudioUnitSubType_AUConverter. If you do this, it might be worthwhile decompressing everything once and keeping it in memory to minimize overhead.

Audio Volume in Apple's speakHere example code

I am trying to increase the volume of my audio output using the SpeakHere example from Apple. The volume is already set to max with:
// set the volume of the queue
XThrowIfError (AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, 1.0), "set queue volume");
However, the output is directed to the ear-piece speaker, which is not as loud as the bottom-left speaker on the iPhone. A nice example of this can be seen in the 'Voice Memos' app that comes with the iPhone: it provides a 'Speaker' button that toggles between the two speakers. Does anybody have an idea how that is done? What do I need to do to route my audio output to the bottom speaker?
Any tips, hints, answers will be much appreciated.
Thank you in advance
Al
You need to set the player to speaker mode.
Add this code in AQPlayer.mm:
OSStatus error;
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
error = AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof (audioRouteOverride), &audioRouteOverride);
if (error) printf("couldn't set audio speaker!");
Before this code:
XThrowIfError (AudioQueueSetParameter(mQueue, kAudioQueueParam_Volume, 1.0), "set queue volume");
I hope it helps.
Take a look at AudioSessionSetProperty, the kAudioSessionProperty_OverrideCategoryDefaultToSpeaker property in particular.
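For example (a small sketch; unlike the route override above, this one only takes effect while the session category is PlayAndRecord):

// make the bottom speaker the default output instead of the receiver
UInt32 defaultToSpeaker = TRUE;
OSStatus err = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker,
                                       sizeof(defaultToSpeaker),
                                       &defaultToSpeaker);
if (err) printf("couldn't set default-to-speaker override\n");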
Look at the kAudioSessionProperty_OverrideAudioRoute property.