AVAudioPlayer seems unable to handle some audio files that AudioStreamer (https://github.com/mattgallagher/AudioStreamer) can handle, even when they are played as local files.
My questions:
1) What type of audio file generates the error code "pty?"? Note: the audio file plays fine in QuickTime Player.
2) The following code generates the same error using this audio file:
UInt32 size;
OSStatus err = AudioFileGetPropertyInfo([self audioFileID], kAudioFilePropertyChannelLayout, &size, NULL);
But using the stream API on the same audio file, the following works (granted, a different property is fetched here, but then the question is: why can't the channel layout be queried?):
err = AudioFileStreamGetPropertyInfo(inAudioFileStream, kAudioFileStreamProperty_FormatList, &formatListSize, &outWriteable);
I know that if you stream audio you need to use the stream API, because only part of the file is available at a time. But when the complete file is already in the filesystem, it should be possible to use the file audio API (?)
3) Is it recommended to use the stream API even if the file is local? Good ideas on how to implement it are welcome (a rough sketch of what I have in mind follows at the end of this question).
What puzzles me is why the AudioFile* API fails where the AudioFileStream* API works.
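For reference, here is roughly how I picture driving the stream API from a local file, i.e. reading the file in chunks and feeding them to the parser (untested sketch, callback bodies omitted, all names are placeholders):

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>

// Placeholder callbacks: this is where properties (e.g. the format list from
// question 2) and packets would normally be collected.
static void MyPropertyListener(void *inClientData, AudioFileStreamID inStream,
                               AudioFileStreamPropertyID inPropertyID,
                               AudioFileStreamParseFlags *ioFlags) {}
static void MyPacketsProc(void *inClientData, UInt32 inNumberBytes,
                          UInt32 inNumberPackets, const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions) {}

static void ParseLocalFileWithStreamAPI(const char *path)
{
    AudioFileStreamID stream;
    if (AudioFileStreamOpen(NULL, MyPropertyListener, MyPacketsProc,
                            0 /* no file type hint */, &stream) != noErr) {
        return;
    }

    FILE *f = fopen(path, "rb");
    char chunk[32 * 1024];
    size_t n;
    while (f && (n = fread(chunk, 1, sizeof(chunk), f)) > 0) {
        // Feed the local file to the parser exactly as a network stream would be fed.
        AudioFileStreamParseBytes(stream, (UInt32)n, chunk, 0);
    }
    if (f) fclose(f);
    AudioFileStreamClose(stream);
}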
I am creating a bot that will record Microsoft Teams live sessions. Audio recording works fine, but I am facing problems generating the video file. The process I am following is converting the video data into a byte array and then writing that data to a file with a video format extension.
I am adding the code snippets I have tried so far.
1. Stream videoStream = new FileStream(videoFilePath, FileMode.Create);
BinaryWriter videoStreamWriter = new BinaryWriter(videoStream);
videoStreamWriter.Write(videoBytesArray, 0, videoBytesArray.Length);
videoStreamWriter.Close();
2. System.IO.File.WriteAllBytes(videoFilePath, videoBytesArray);
The files generated by the above code snippets are in an unsupported format.
This may be because of the data received from the session.
I am receiving the data through the local media session's video socket on the VideoMediaReceived event (ICall.ILocalMediaSession.VideoSockets). The video color format of the data the socket receives is H264.
I encountered a similar problem when creating the audio file; for that, I used the WaveFormat package to create the audio file.
So, is there any library/method to convert the byte array to a video file of any format?
@Murtaza, you can try this and see if it helps. If the byte array is already an MP4-encoded video stream, you can simply serialize it to disk with the .mp4 extension:
Stream t = new FileStream("video.mp4", FileMode.Create);
BinaryWriter b = new BinaryWriter(t);
b.Write(videoData);
t.Close();
I'm currently using OpenAL to play game music. It works fine, except that it only works with raw WAV files, which means I end up with a ~9 MB soundtrack.
I'm new to OpenAL, and I'm using code directly from Apple's example (https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9) to get the buffer data.
Question: Is there any way to modify this function so it reads compressed audio and decodes it on the fly?
I'm not so worried about the audio file format, just as long as it can be played and is compressed (like mp3, aac, caf). The only reason I want to do this (obviously) is to reduce file size.
Edit: It seems that the problem is not so much in OpenAL as in the method I'm using to get the buffer. The function at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9 uses AudioFileOpenURL and AudioFileReadBytes. Is there any way to get the framework to decode the audio for me using ExtAudioFileOpenURL and ExtAudioFileRead?
I have tried the code here: https://devforums.apple.com/message/10678#10678, but I don't know what to make of it. The function I use to get the buffer is at https://developer.apple.com/library/ios/#samplecode/MusicCube/Listings/Classes_MyOpenALSupport_h.html%23//apple_ref/doc/uid/DTS40008978-Classes_MyOpenALSupport_h-DontLinkElementID_9, and I haven't really modified it, so that's what I need to build on.
I've started a bounty because I really need this, hopefully someone can point me in the right direction.
You'll need to use audio services to load other formats. Bear in mind that OpenAL ONLY supports uncompressed PCM data, so any data you load needs to be uncompressed during load.
Here's some code that will load any format supported by iOS: https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/Support/OALAudioFile.m
If you want to stream compressed soundtrack-type audio, use AVAudioPlayer since it plays compressed audio straight from disk.
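For the edit in the question: ExtAudioFile does the decoding for you. Below is a rough, untested sketch of that approach (error handling mostly omitted; DecodeToPCM is a made-up name, not an API) which reads any iOS-supported file into interleaved 16-bit PCM that can be handed to alBufferData:

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Sketch: decode an audio file (mp3/aac/caf/...) to interleaved 16-bit PCM.
// Returns a malloc'd buffer the caller must free; *outSize, *outRate and
// *outChannels describe it (choose AL_FORMAT_MONO16 or AL_FORMAT_STEREO16
// from the channel count when filling the OpenAL buffer).
static void *DecodeToPCM(CFURLRef url, UInt32 *outSize, Float64 *outRate, UInt32 *outChannels)
{
    ExtAudioFileRef file = NULL;
    if (ExtAudioFileOpenURL(url, &file) != noErr) return NULL;

    // Keep the file's own sample rate and channel count; only the sample
    // format is converted, so the frame count reported below stays valid.
    AudioStreamBasicDescription fileFormat = {0};
    UInt32 propSize = sizeof(fileFormat);
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat, &propSize, &fileFormat);

    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = fileFormat.mSampleRate;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    pcm.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = pcm.mChannelsPerFrame * sizeof(SInt16);
    pcm.mFramesPerPacket  = 1;
    pcm.mBytesPerPacket   = pcm.mBytesPerFrame;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat, sizeof(pcm), &pcm);

    SInt64 totalFrames = 0;
    propSize = sizeof(totalFrames);
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames, &propSize, &totalFrames);

    void *pcmData = malloc((size_t)totalFrames * pcm.mBytesPerFrame);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers              = 1;
    bufferList.mBuffers[0].mNumberChannels = pcm.mChannelsPerFrame;
    bufferList.mBuffers[0].mDataByteSize   = (UInt32)(totalFrames * pcm.mBytesPerFrame);
    bufferList.mBuffers[0].mData           = pcmData;

    UInt32 frames = (UInt32)totalFrames;
    ExtAudioFileRead(file, &frames, &bufferList);   // decoding happens here
    ExtAudioFileDispose(file);

    *outSize     = frames * pcm.mBytesPerFrame;
    *outRate     = pcm.mSampleRate;
    *outChannels = pcm.mChannelsPerFrame;
    return pcmData;
}

The resulting buffer is then used the same way the MyOpenALSupport helper's buffer is: pass it to alBufferData along with the size and sample rate returned here.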
You don't need any third-party library to open compressed files. With a little help from the AudioToolbox/AudioToolbox.h framework, you can open and read the data of a .caf file, which is a very good choice by the way (better than mp3 or ogg) in terms of performance (minimal CPU impact during decompression). So, when the data gets to OpenAL it is already PCM, ready to fill the buffers. Here is some sample code showing how you can achieve this:
-(void)prepareFiles:(NSString *)filePath {
    // get the full path of the file
    NSString *fileName = [[NSBundle mainBundle] pathForResource:filePath ofType:@"caf"];
    // open the file using the custom methods below
    AudioFileID fileID = [self openAudioFile:fileName];
    preparedAudioFileSize = [self audioFileSize:fileID];
    if (preparedAudioFile) {
        free(preparedAudioFile);
        preparedAudioFile = nil;
    }
    preparedAudioFile = malloc(preparedAudioFileSize);
    // read the data from the file into preparedAudioFile
    AudioFileReadBytes(fileID, false, 0, &preparedAudioFileSize, preparedAudioFile);
    // close the file
    AudioFileClose(fileID);
}
-(AudioFileID)openAudioFile:(NSString *)filePath {
    AudioFileID fileID;
    NSURL *url = [NSURL fileURLWithPath:filePath];
    OSStatus result = AudioFileOpenURL((CFURLRef)url, kAudioFileReadPermission, 0, &fileID);
    if (result != noErr) {
        NSLog(@"failed to open: %@", filePath);
    }
    return fileID;
}
-(UInt32)audioFileSize:(AudioFileID)fileDescriptor {
    UInt64 outDataSize = 0;
    UInt32 thePropSize = sizeof(UInt64);
    OSStatus result = AudioFileGetProperty(fileDescriptor, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
    if (result != noErr) NSLog(@"cannot find file size");
    return (UInt32)outDataSize;
}
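A possible follow-up (not in the original answer): once preparedAudioFile holds the bytes, they can be handed to OpenAL roughly like this, assuming the .caf really contains 16-bit little-endian PCM at 44.1 kHz stereo (adjust the format and rate to match your file, e.g. by querying kAudioFilePropertyDataFormat):

#import <OpenAL/al.h>

// Hypothetical usage of the buffer filled above; format and rate are assumptions.
ALuint alBuffer, alSource;
alGenBuffers(1, &alBuffer);
alBufferData(alBuffer, AL_FORMAT_STEREO16, preparedAudioFile, preparedAudioFileSize, 44100);

alGenSources(1, &alSource);
alSourcei(alSource, AL_BUFFER, alBuffer);
alSourcePlay(alSource);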
Based on Karl's reply above, I made a minimal, single C++ function which opens a file and gives you back a buffer of PCM audio (suitable for OpenAL) and all the info you need to create an OpenAL sound (format, sample rate, buffer size, etc.).
The two files you need are here:
https://gist.github.com/ofTheo/5171369
Hope it helps!
theo
See if this works: http://kcat.strangesoft.net/openal-tutorial.html
You might try using a third-party library to load an mp3/ogg file into a char* buffer and then give that buffer to OpenAL. That would solve the file-size problem.
For Ogg, you should find the libraries on their website.
For mp3, I honestly don't know where to find a lightweight library that can do that, but one should exist.
In my application, I am receiving audio data in LinearPCM format, which I need to play.
I am following the iOS SpeakHere example. However, I cannot work out how and where I should provide a buffer to the AudioQueue.
Can anyone provide me with a working example of playing an audio buffer on iOS via AudioQueue?
In the SpeakHere example, playback is achieved using an AudioQueue.
When the AudioQueue is set up, a callback function is specified that will be called whenever the queue wants more data.
You can see that in this method:
void AQPlayer::SetupNewQueue()
Here's the line that specifies the callback function:
XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");
If you take a look at AQPlayer::AQBufferCallback, you'll see where it gets the data from. In this example, the data has been written out to a file on disk. That's a good solution if you want to save memory, or if there's a possibility the audio file could be quite large.
Anyway, looking at AQPlayer::AQBufferCallback, you'll see a call to a function AudioFileReadPackets. That's what reads in the audio packets from the file on disk. It reads them straight into the buffer that AudioQueue will use:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes, inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
inCompleteAQBuffer->mAudioData);
That buffer is inCompleteAQBuffer->mAudioData.
Finally, the callback function must enqueue the buffer as follows:
if (nPackets > 0) {
inCompleteAQBuffer->mAudioDataByteSize = numBytes;
inCompleteAQBuffer->mPacketDescriptionCount = nPackets;
AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
}
Note first that it has to check that we have some packets to play. It also has to specify how many bytes are in the buffer.
Then, this line here:
THIS->mCurrentPacket = (THIS->GetCurrentPacket() + nPackets);
That keeps track of where we are overall in the audio data. In other words, as more data is copied in from the file, we need to move mCurrentPacket forward so that the next copy puts data in the correct place.
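Since the question is about LPCM that is already in memory, here is a rough sketch (not part of SpeakHere; gPCMData, gPCMDataSize and gReadOffset are just placeholder names) of what the same callback could look like when it is fed from a memory buffer instead of a file:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Placeholder globals: the LPCM bytes to play and a read cursor into them.
static void  *gPCMData;
static UInt32 gPCMDataSize;
static UInt32 gReadOffset;

static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    UInt32 bytesLeft   = gPCMDataSize - gReadOffset;
    UInt32 capacity    = inBuffer->mAudioDataBytesCapacity;
    UInt32 bytesToCopy = (bytesLeft < capacity) ? bytesLeft : capacity;

    if (bytesToCopy == 0) {
        AudioQueueStop(inAQ, false);   // out of data: let queued buffers finish
        return;
    }

    memcpy(inBuffer->mAudioData, (char *)gPCMData + gReadOffset, bytesToCopy);
    inBuffer->mAudioDataByteSize = bytesToCopy;
    gReadOffset += bytesToCopy;

    // LPCM needs no packet descriptions, hence 0 / NULL.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

You would pass MyAQOutputCallback to AudioQueueNewOutput in place of AQPlayer::AQBufferCallback, and prime the queue by calling it once per allocated buffer before AudioQueueStart.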
While there are plenty of tutorials on how to use AVCaptureSession to grab camera data, I can find no information (even on Apple's dev network itself) on how to properly handle microphone data.
I have implemented AVCaptureAudioDataOutputSampleBufferDelegate, and I'm getting calls to my delegate, but I have no idea how the contents of the CMSampleBufferRef I get are formatted. Are the contents of the buffer one discrete sample? What are its properties? Where can these properties be set?
Video properties can be set using [AVCaptureVideoDataOutput setVideoSettings:], but there is no corresponding call for AVCaptureAudioDataOutput (no setAudioSettings or anything similar).
They are formatted as LPCM! You can verify this by getting the AudioStreamBasicDescription like so:
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
and then checking the stream description's mFormatID.
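Continuing from that snippet, a minimal check could look like this (the exact sample rate, channel count and bit depth depend on the device and session configuration, so read them rather than assuming them):

if (streamDescription->mFormatID == kAudioFormatLinearPCM) {
    Float64 sampleRate = streamDescription->mSampleRate;
    UInt32  channels   = streamDescription->mChannelsPerFrame;
    UInt32  bits       = streamDescription->mBitsPerChannel;
    CMItemCount frames = CMSampleBufferGetNumSamples(sampleBuffer);
    NSLog(@"%.0f Hz, %u channel(s), %u bits, %ld frames in this buffer",
          sampleRate, (unsigned)channels, (unsigned)bits, (long)frames);
}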
Hey folks,
I am trying to build an application for real-time voice changing.
As a first step, I managed to record audio data to a specified file and play it back after recording.
Now I am trying to change the code so that the audio buffers are played back in a loop right after they are recorded.
My question is: how can I read the audio data directly from the recording AudioQueue instead of from a file (as shown in the documentation)?
I am thankful for any ideas and can show parts of my code if needed.
Thanks in advance,
Lukas (from Germany)
Have a look at the SpeakHere example. This line sources the audio data:
OSStatus result = AudioFileReadPackets(THIS->GetAudioFileID(), false, &numBytes, inCompleteAQBuffer->mPacketDescriptions, THIS->GetCurrentPacket(), &nPackets,
inCompleteAQBuffer->mAudioData);
So, rather than calling AudioFileReadPackets, you can just use memcpy to copy over the recorded data buffer. Or, alternatively, supply the playback AudioQueue with a pointer to the audio data buffer. As playback continues, advance a mCurrentPacket pointer through the buffer.
To record, you'll do something very similar. Rather than writing out to a file, you'll write out to a buffer in memory, which you'll first need to allocate with malloc. Then, as your incoming AudioQueue captures recorded data, you copy that data into the buffer. As more data is copied, you advance the recording head, i.e. mCurrentPacket, to a new position.
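As a hedged sketch (gRecordBuffer, gRecordCapacity and gWriteOffset are placeholder names, and the fixed-size buffer is a simplification), the recording side could look roughly like this:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Placeholder recording state: a buffer allocated with malloc() before
// recording starts, plus a write cursor (the "recording head").
static void  *gRecordBuffer;
static UInt32 gRecordCapacity;
static UInt32 gWriteOffset;

static void MyAQInputCallback(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDescs)
{
    UInt32 bytes = inBuffer->mAudioDataByteSize;

    // Instead of AudioFileWritePackets, append the captured bytes to memory.
    if (gWriteOffset + bytes <= gRecordCapacity) {
        memcpy((char *)gRecordBuffer + gWriteOffset, inBuffer->mAudioData, bytes);
        gWriteOffset += bytes;   // advance the recording head
    }

    // Hand the buffer back to the queue so capture keeps running.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

The playback queue can then read from gRecordBuffer directly (bounded by gWriteOffset) instead of from a file.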