I have extracted the audio data from an .m4a file using the mp4v2 library (sample by sample). Does this library have a function that decodes the data? Does anybody have experience with this library and can provide some help?
The documentation says:
MP4ReadSample function reads the specified sample from the specified track.
Typically this sample is then decoded in a codec dependent fashion and
rendered in an appropriate fashion.
I am interested in decoding the output.
Thanks in advance.
You tagged MP4 (video data) and M4A (audio data). Since you are extracting from M4A, I can only imagine you actually have either AAC or MP3 audio data.
Each extracted sample (a run of bytes) is an audio frame.
To make a playable MP3 file: simply join all the MP3 frames' bytes together and save the result as .mp3 to play later.
To make a playable AAC file: for each AAC frame, first create an ADTS header (7 bytes) followed by that frame's data. You can check your header bytes with an online ADTS header parser (it shows what your byte values mean). When all your AAC frames each begin with an ADTS header, simply save as .aac to play later using some audio player code.
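As a rough illustration of what those 7 ADTS bytes contain, here is a minimal C sketch that builds a header for one AAC frame. The profile, sampling-frequency index, and channel configuration used below (AAC-LC, 44.1 kHz, stereo) are assumptions; they must match your actual stream, normally taken from the track's decoder configuration in the MP4.

#include <stdint.h>
#include <stddef.h>

/* Minimal sketch: build the 7-byte ADTS header for one AAC frame.
   Assumed stream parameters (adjust to yours): AAC-LC, 44.1 kHz, stereo. */
static void make_adts_header(uint8_t header[7], size_t aac_frame_len)
{
    const unsigned profile        = 1;  /* AAC-LC: audio object type 2, stored as 2 - 1 */
    const unsigned freq_index     = 4;  /* 4 = 44100 Hz in the ADTS sampling-frequency table */
    const unsigned channel_config = 2;  /* 2 = stereo */
    const size_t   full_len       = aac_frame_len + 7;  /* frame-length field includes the header */

    header[0] = 0xFF;                                          /* syncword 0xFFF...          */
    header[1] = 0xF1;                                          /* ...MPEG-4, layer 0, no CRC */
    header[2] = (uint8_t)((profile << 6) | (freq_index << 2) | (channel_config >> 2));
    header[3] = (uint8_t)(((channel_config & 0x3) << 6) | ((full_len >> 11) & 0x03));
    header[4] = (uint8_t)((full_len >> 3) & 0xFF);
    header[5] = (uint8_t)(((full_len & 0x7) << 5) | 0x1F);     /* buffer fullness = 0x7FF    */
    header[6] = 0xFC;
}

Prepend the 7 bytes produced by this function to each frame's data, concatenate the results, and save the output as .aac.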
I have researched this and the answer is NO. There is no decoder in the mp4/mp4v2 libraries. One has to use some other library to do that.
Related
How do I create an MP4 file from raw H.264 data that I am receiving from a live streamer (no predefined duration or moov atom)? Unfortunately I can't use FFmpeg; I have to write my own code using live555. Can somebody help me with the MP4 container and how the H.264 data has to be pushed into it? Thank you in advance :)
There are several operations to be made to store H.264 raw data into MP4, among them:
create the box structures, in particular the moov box
store the NAL units in an mdat box, possibly storing non-VCL NAL units (SPS/PPS) in the moov box
replace Annex B start codes with length fields (see the sketch after this list)
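As a rough sketch of the start-code replacement step, the following C function rewrites one Annex B buffer (NAL units separated by 00 00 01 or 00 00 00 01 start codes) into the length-prefixed form stored in MP4. It assumes 4-byte big-endian length fields, which must agree with the lengthSizeMinusOne value you write into the avcC box; the function name and error handling are just placeholders.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: convert one Annex B access unit into 4-byte length-prefixed NAL units.
   Returns a newly malloc'd buffer (caller frees) and sets *out_len, or NULL on error. */
static uint8_t *annexb_to_length_prefixed(const uint8_t *in, size_t in_len, size_t *out_len)
{
    /* Worst case: every 3-byte start code becomes a 4-byte length field. */
    uint8_t *out = malloc(in_len + in_len / 3 + 4);
    if (!out) return NULL;

    size_t o = 0, nal_start = 0, i = 0;
    int have_nal = 0;

    while (i + 2 < in_len) {
        int sc3 = (in[i] == 0 && in[i+1] == 0 && in[i+2] == 1);
        int sc4 = (i + 3 < in_len && in[i] == 0 && in[i+1] == 0 && in[i+2] == 0 && in[i+3] == 1);
        if (sc3 || sc4) {
            if (have_nal) {                        /* flush the previous NAL unit */
                uint32_t len = (uint32_t)(i - nal_start);
                out[o++] = (uint8_t)(len >> 24); out[o++] = (uint8_t)(len >> 16);
                out[o++] = (uint8_t)(len >> 8);  out[o++] = (uint8_t)len;
                memcpy(out + o, in + nal_start, len); o += len;
            }
            i += sc4 ? 4 : 3;
            nal_start = i;
            have_nal = 1;
        } else {
            i++;
        }
    }
    if (have_nal && in_len > nal_start) {          /* flush the last NAL unit */
        uint32_t len = (uint32_t)(in_len - nal_start);
        out[o++] = (uint8_t)(len >> 24); out[o++] = (uint8_t)(len >> 16);
        out[o++] = (uint8_t)(len >> 8);  out[o++] = (uint8_t)len;
        memcpy(out + o, in + nal_start, len); o += len;
    }
    *out_len = o;
    return out;
}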
It also depends on your requirements. If you want to do the conversion on the fly, you have to use fragmented MP4. If you can store the H.264 first and then do the conversion, you may use non-fragmented MP4, in particular using MP4Box:
MP4Box -add file.264 file.mp4
I'm using AVPlayer to play a local .mp3 file and an audio stream from a server.
I want to play a local .pcm file too.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask,
                                                     YES);
NSString *voiceFile = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"OutPut.pcm"];
But it didn't work; I got an unknown error.
It seems AudioQueue can play .pcm data correctly.
But is there a simple way to let AVPlayer play a .pcm file directly, just like an .mp3?
Neither the .pcm file extension nor raw PCM data specifies a readable format. The player cannot recognize an arbitrary data stream. It is certainly capable of reading file formats which contain PCM data, but a raw PCM dump is missing several things a typical audio file format provides:
Sample Rate
Sample Size
Sample Format
Channel Count
and so on.
You should instead save that PCM data in an audio file format the player supports (e.g. a WAV file).
If you prefer to simply stream PCM audio and you know the stream format, you can approach that problem using an AudioQueue.
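For the AudioQueue route, the items in the list above are exactly what an AudioStreamBasicDescription supplies. A minimal sketch, assuming 16-bit signed integer PCM at 44.1 kHz, mono (adjust the values to whatever your .pcm data actually contains):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: describe raw PCM so Core Audio (AudioQueue, ExtAudioFile, ...) can interpret it.
// Assumed format: 44.1 kHz, mono, 16-bit signed integer, packed, native-endian.
static AudioStreamBasicDescription MakePCMFormat(void)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8);
    fmt.mFramesPerPacket  = 1;                 // uncompressed audio: one frame per packet
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    return fmt;                                // pass this to AudioQueueNewOutput()
}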
I'm using the SpeakHere sample code for audio recording with the audio format kAudioFormatMPEG4AAC.
How can I change the bit rate to 96K, 128K, or 320K?
Regards,
John
I'm not sure if you can do this directly using AudioQueue by setting a parameter. However, I think the following approach will work:
Set up your AudioQueue to record to linear PCM
Set up an ExtAudioFile with a client data format matching the AudioQueue and a file data format of AAC
Set the desired AAC bit rate by getting the AudioConverter associated with the ExtAudioFile (kExtAudioFileProperty_AudioConverter) and setting the converter's bit rate (kAudioConverterEncodeBitRate); see the sketch below.
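A minimal sketch of that last step, assuming extFile is an ExtAudioFileRef that has already been created with an AAC file data format and had its client data format set (error handling mostly omitted):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: set the AAC encode bit rate on the converter that ExtAudioFile uses internally.
static OSStatus SetAACBitRate(ExtAudioFileRef extFile, UInt32 bitsPerSecond /* e.g. 128000 */)
{
    AudioConverterRef converter = NULL;
    UInt32 size = sizeof(converter);
    OSStatus err = ExtAudioFileGetProperty(extFile,
                                           kExtAudioFileProperty_AudioConverter,
                                           &size, &converter);
    if (err != noErr || converter == NULL) return err;

    err = AudioConverterSetProperty(converter,
                                    kAudioConverterEncodeBitRate,
                                    sizeof(bitsPerSecond), &bitsPerSecond);
    if (err != noErr) return err;

    // After touching the converter directly, nudge ExtAudioFile to pick up the new settings.
    CFArrayRef config = NULL;
    return ExtAudioFileSetProperty(extFile,
                                   kExtAudioFileProperty_ConverterConfig,
                                   sizeof(config), &config);
}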
I haven't tried this on iOS, but if the AAC encoder is using a hardware codec I doubt you will be able to set the bit rate. AudioFormat.h provides ways to determine which codecs are hardware vs. software and ways to request one implementation over the other.
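For example, you can enumerate the available AAC encoder implementations and inspect their manufacturer field to tell software from hardware; a sketch using the kAudioFormatProperty_Encoders property from AudioFormat.h (assuming the manufacturer constants are available in your SDK version):

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>
#include <stdlib.h>

// Sketch: list the AAC encoders available and report whether each is software or hardware.
static void ListAACEncoders(void)
{
    UInt32 encoderFormat = kAudioFormatMPEG4AAC;
    UInt32 size = 0;
    if (AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
                                   sizeof(encoderFormat), &encoderFormat, &size) != noErr)
        return;

    UInt32 count = size / sizeof(AudioClassDescription);
    AudioClassDescription *descs = malloc(size);
    if (!descs) return;

    if (AudioFormatGetProperty(kAudioFormatProperty_Encoders,
                               sizeof(encoderFormat), &encoderFormat, &size, descs) == noErr) {
        for (UInt32 i = 0; i < count; i++) {
            const char *kind =
                (descs[i].mManufacturer == kAppleSoftwareAudioCodecManufacturer) ? "software" :
                (descs[i].mManufacturer == kAppleHardwareAudioCodecManufacturer) ? "hardware" :
                "unknown";
            printf("AAC encoder %u: %s\n", (unsigned)i, kind);
        }
    }
    free(descs);
}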
The fact is, AudioQueue uses the same backend as AudioConverter. Although there is no bit-rate key in the AudioQueuePropertyID enum, you can still borrow the one from the converter. Get the bit rate like this:
AudioQueueGetProperty(mQueue, kAudioConverterEncodeBitRate, &bitRate, &propertySize);
and set it like this:
AudioQueueSetProperty(mQueue, kAudioConverterEncodeBitRate, &bitRate, propertySize);
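Put together, and assuming mQueue is a recording queue created with a kAudioFormatMPEG4AAC format description, reading the current rate and switching to 128 kbps would look roughly like this inside your queue setup code (set it before recording starts):

UInt32 bitRate = 0;
UInt32 propertySize = sizeof(bitRate);
AudioQueueGetProperty(mQueue, kAudioConverterEncodeBitRate, &bitRate, &propertySize);   // current rate

bitRate = 128000;   // 128 kbps; 96000 or 320000 for the other rates in the question
AudioQueueSetProperty(mQueue, kAudioConverterEncodeBitRate, &bitRate, propertySize);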
I need help with my iPhone application: I want to convert a byte array or binary data into an audio file. Please help me out and tell me any way I can do it.
If your array is supposed to contain audio data in PCM format, then you can simply add a WAVE header at the beginning to get a ready-to-play audio file. The RIFF/WAVE header format is well documented.
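As a minimal sketch of writing that header, assuming 16-bit signed little-endian PCM samples and the canonical 44-byte RIFF/WAVE layout (sample rate and channel count are parameters you must set to match your data):

#include <stdint.h>
#include <stdio.h>

// Sketch: write a 44-byte canonical WAV header followed by raw PCM data.
// Assumes the PCM samples are 16-bit signed, little-endian, interleaved.
static void write_le32(FILE *f, uint32_t v)
{
    uint8_t b[4] = { (uint8_t)v, (uint8_t)(v >> 8), (uint8_t)(v >> 16), (uint8_t)(v >> 24) };
    fwrite(b, 1, 4, f);
}
static void write_le16(FILE *f, uint16_t v)
{
    uint8_t b[2] = { (uint8_t)v, (uint8_t)(v >> 8) };
    fwrite(b, 1, 2, f);
}

static void write_wav(FILE *f, const uint8_t *pcm, uint32_t pcm_len,
                      uint32_t sample_rate, uint16_t channels, uint16_t bits_per_sample)
{
    uint16_t block_align = channels * (bits_per_sample / 8);
    uint32_t byte_rate   = sample_rate * block_align;

    fwrite("RIFF", 1, 4, f); write_le32(f, 36 + pcm_len); fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); write_le32(f, 16);           /* fmt chunk size      */
    write_le16(f, 1);                                     /* audio format 1 = PCM */
    write_le16(f, channels);
    write_le32(f, sample_rate);
    write_le32(f, byte_rate);
    write_le16(f, block_align);
    write_le16(f, bits_per_sample);
    fwrite("data", 1, 4, f); write_le32(f, pcm_len);
    fwrite(pcm, 1, pcm_len, f);
}

A file written this way can then be played with AVAudioPlayer or any other player that understands WAV.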
I'm trying to figure out the proper technique for performing skipping ahead or seeking within an mp4 (or m4a) audio file while playing it using the AudioFileStream and AudioQueue APIs on the iPhone.
If I pass the complete mp4 header (up to the mdat box) to an open AudioFileStream, the underlying audio file type is properly identified (in my case, AAC) and when I then pass the actual mdat data portion of the file, the AudioFileStream correctly begins generating audio packets and these can be sent to the AudioQueue and playback works.
However, if I try a random-access approach to playing back the file, I can't seem to get it to work properly unless I always send the first frame of the mdat box to the AudioFileStream. If instead, after sending the mp4 header to the AudioFileStream, I attempt to skip ahead to a later frame in the mdat by first calling AudioFileStreamSeek() and then passing the data for the associated packets, the AudioFileStream appears to generate audio packets, but when I pass these on to the AudioQueue and call AudioQueuePrime(), I always get an error of 'nope' returned.
My question is this: am I always required to at least pass in the first packet of the mdat box before attempting to do random playback of other packets in the mp4 file?
I can't seem to find any documentation on doing random playback of sections of an mp4 file while using an AudioFileStream and an AudioQueue. I've found Apple's QuickTime File Format pdf which describes the technique of randomly seeking within an mp4 file, but it's just a high level description and doesn't have any mention of using specific APIs (such as AudioFileStream).
Thanks for any insights.
It turns out the approach I was using with AudioFileStreamSeek() is valid; I just wasn't sending the full initial mp4 header to the AudioFileStreamParseBytes() routine.
The problem was I had assumed the packets began immediately after the mdat box tag. By examining the data offset value (kAudioFileStreamProperty_DataOffset) returned by the AudioFileStream Property Listener callback, I discovered the true start of the packet data was 18 bytes later.
These 18 bytes are considered part of the initial mp4 header that must be sent to the AudioFileStream parser before sending the data of arbitrary packets after calls to AudioFileStreamSeek().
If these extra bytes are left out, then the AudioQueuePrime() call will always fail with a 'nope' error even though you may have sent valid parsed audio packets to the AudioQueue.
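For reference, a rough sketch of how that data offset can be captured in the AudioFileStream property listener and then combined with AudioFileStreamSeek() when jumping to an arbitrary packet; the StreamState struct and function names are placeholders for illustration, and error handling is omitted:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical per-stream state (names are placeholders).
typedef struct {
    SInt64 dataOffset;     // absolute file position where the audio packets begin
} StreamState;

// Property listener: capture kAudioFileStreamProperty_DataOffset when the parser reports it.
static void PropertyListener(void *clientData, AudioFileStreamID stream,
                             AudioFileStreamPropertyID propertyID,
                             AudioFileStreamPropertyFlags *ioFlags)
{
    if (propertyID == kAudioFileStreamProperty_DataOffset) {
        StreamState *state = (StreamState *)clientData;
        UInt32 size = sizeof(state->dataOffset);
        AudioFileStreamGetProperty(stream, kAudioFileStreamProperty_DataOffset,
                                   &size, &state->dataOffset);
        // Everything before dataOffset (the full header, including those extra bytes
        // past the 'mdat' tag) must be parsed before random access will work.
    }
}

// Seeking: translate a packet index into an absolute byte position in the file.
static SInt64 FilePositionForPacket(AudioFileStreamID stream, const StreamState *state,
                                    SInt64 packetIndex)
{
    SInt64 byteOffset = 0;
    AudioFileStreamSeekFlags flags = 0;
    AudioFileStreamSeek(stream, packetIndex, &byteOffset, &flags);
    // byteOffset is relative to the start of the audio data, so add the data offset,
    // then read from that file position and feed the bytes to AudioFileStreamParseBytes().
    return state->dataOffset + byteOffset;
}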