AAC header and other info in iPhone

I'm building an iPhone application that records sound. I make use of Audio Queue Services, and everything works great for the recording.
The thing is, I'm using AudioFileWritePackets for the file writing, and I'm trying to send the same "AAC + ADTS" packets to a network socket.
The resulting file is different, since some headers (the ADTS headers?) seem to be missing. I am looking for ideas on how to write the ADTS and/or AAC headers. Could the community assist me with this, or point me to a guide that demonstrates how to do it?
I currently have my Buffer Handler method:
void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc) {
    AQRecorder *aqr = (AQRecorder *)inUserData;
    try {
        if (inNumPackets > 0) {
            // write packets to file
            XThrowIfError(AudioFileWritePackets(aqr->mRecordFile,
                                                FALSE,
                                                inBuffer->mAudioDataByteSize,
                                                inPacketDesc,
                                                aqr->mRecordPacket,
                                                &inNumPackets,
                                                inBuffer->mAudioData),
                          "AudioFileWritePackets failed");
            fprintf(stderr, "Writing.");
            // We write the net buffer.
            [aqr->socket_if writeData:(void *)(inBuffer->mAudioData)
                                     :inBuffer->mAudioDataByteSize];
            aqr->mRecordPacket += inNumPackets;
        }
        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (aqr->IsRunning()) {
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL),
                          "AudioQueueEnqueueBuffer failed");
        }
    }
    catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}

I've found the solution for this:
I initialize the audio file with callbacks:
XThrowIfError(
    AudioFileInitializeWithCallbacks(
        this,
        nil,
        BufferFilled_callback,
        nil,
        nil,
        //kAudioFileCAFType,
        kAudioFileAAC_ADTSType,
        &mRecordFormat,
        kAudioFileFlags_EraseFile,
        &mRecordFile),
    "InitializeWithCallbacks failed");
... And voilà! The real callback you have to implement is BufferFilled_callback. Here is my implementation:
OSStatus AQRecorder::BufferFilled_callback(
    void       *inUserData,
    SInt64      inPosition,
    UInt32      requestCount,
    const void *buffer,
    UInt32     *actualCount)
{
    AQRecorder *aqr = (AQRecorder *)inUserData;
    // You can write these bytes anywhere: a file, a socket...
    // You can build a streaming server on top of this.
    *actualCount = requestCount;   // report the bytes as consumed
    return 0;
}
If you want to see more about Audio Queue Services, you can get some ideas from Flipzu for iPhone (a former live audio broadcasting app; we had to shut it down because we could not raise money).
https://github.com/lucaslain/Flipzu_iPhone
Best,
Lucas.

I've recently encountered this issue with the iLBC codec, and arrived at the solution as follows:
Record the audio data you want and just write it to a file. Then take that file and do an octal dump on it with od (the -c flag shows ASCII characters).
Then create a separate file that you know doesn't contain the header: just your data from the buffers on the audio queue. Octal-dump that, and compare.
From this, you should have the header and enough info on how to proceed. Hope this helps.
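If comparing the two dumps by eye gets tedious, a small C helper that prints the first bytes of both files side by side does the same job (the file names are placeholders):

#include <stdio.h>

/* Hex-dump the first 128 bytes of two recordings side by side,
   to spot where the header-bearing file diverges from the raw one. */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s with_header.raw raw_buffers.raw\n", argv[0]);
        return 1;
    }
    FILE *a = fopen(argv[1], "rb");
    FILE *b = fopen(argv[2], "rb");
    if (a == NULL || b == NULL) {
        perror("fopen");
        return 1;
    }
    for (int row = 0; row < 8; row++) {
        unsigned char ba[16] = {0}, bb[16] = {0};
        size_t na = fread(ba, 1, sizeof ba, a);
        size_t nb = fread(bb, 1, sizeof bb, b);
        printf("%04x  ", row * 16);
        for (size_t i = 0; i < sizeof ba; i++)
            printf(i < na ? "%02x " : "   ", ba[i]);
        printf("| ");
        for (size_t i = 0; i < sizeof bb; i++)
            printf(i < nb ? "%02x " : "   ", bb[i]);
        printf("\n");
    }
    fclose(a);
    fclose(b);
    return 0;
}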

Related

Extract frames from pcap files (tcpdump output) without using Libraries

I need to parse pcap files and count the packets separately (TCP, UDP, IP). I found a lot of libraries for this, like pcap and jnetpcap, but I want to do this without using any external libraries. I do not need code, just a conceptual explanation.
Question
While parsing pcap files, how should I distinguish between the frames (be they TCP, UDP, or IP)? I tried reading about the format, but what I do not understand is how I would know how many bytes to read for a particular frame, and how I would know what type of frame it is. Only once I am able to extract the packets separately will I be able to filter out other information.
You'd have to parse each frame separately and keep a counter for each value you are trying to count. Assuming the capture you are examining is in pcap/pcapng format, you might find the libpcap sources and file-format documentation helpful even if you don't link against the library.
To give a quick run of what you might have to do (assuming the lowest layer is Ethernet without VLAN tags):
#include <stdint.h>
#include <arpa/inet.h>   /* ntohs */

uint64_t ip_count, tcp_count, udp_count;

void parse_pkt(uint8_t *data, uint32_t data_len)
{
    uint16_t ether_type = ntohs(*(uint16_t *) (data + 12));
    if (ether_type != 0x0800) {   /* not IPv4 */
        return;
    }
    ip_count += 1;
    uint8_t *ip_hdr = data + 14;  /* Ethernet header is 14 bytes */
    uint8_t protocol = ip_hdr[9]; /* protocol is a single byte at offset 9 */
    /* protocol is udp/tcp/sctp... etc. */
    if (protocol == 0x11) {
        udp_count++;
    } else if (protocol == 0x06) {
        tcp_count++;
    }
}
/* for each pkt delivered from the capture file, call parse_pkt with the data and data_len */
This code is fragile. Jumping to direct offsets without the proper length and type checks is not a good idea.
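The other half of the question, how many bytes to read for a particular frame, is answered by the capture file itself: in the classic pcap format every packet record is preceded by a 16-byte record header whose incl_len field gives the number of captured bytes that follow. A minimal reader sketch, assuming a same-endian classic .pcap file (not pcapng):

#include <stdint.h>
#include <stdio.h>

void parse_pkt(uint8_t *data, uint32_t data_len);   /* the counter above */

struct pcap_file_hdr {              /* 24 bytes at the start of the file */
    uint32_t magic;                 /* 0xa1b2c3d4 if written in your byte order */
    uint16_t ver_major, ver_minor;
    int32_t  thiszone;
    uint32_t sigfigs;
    uint32_t snaplen;
    uint32_t linktype;              /* 1 = Ethernet */
};

struct pcap_rec_hdr {               /* 16 bytes before every packet */
    uint32_t ts_sec, ts_usec;
    uint32_t incl_len;              /* bytes stored in the file for this packet */
    uint32_t orig_len;              /* bytes that were on the wire */
};

int count_pcap(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return -1;
    struct pcap_file_hdr fh;
    if (fread(&fh, sizeof fh, 1, f) != 1 || fh.magic != 0xa1b2c3d4) {
        fclose(f);                  /* byte-swapped or pcapng input needs more work */
        return -1;
    }
    struct pcap_rec_hdr rh;
    static uint8_t buf[65536];
    while (fread(&rh, sizeof rh, 1, f) == 1 && rh.incl_len <= sizeof buf) {
        if (fread(buf, 1, rh.incl_len, f) != rh.incl_len)
            break;
        parse_pkt(buf, rh.incl_len);
    }
    fclose(f);
    return 0;
}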

libpcap not receiving in real time, seems to be buffering packets

So I'm working with a device where I need to send and receive raw ethernet frames. It's a wireless radio and it uses ethernet to send status messages to its host. The protocol it uses is actually IPX, but I figured it would be easier to send raw ethernet frames using libpcap than to dig through decades old code implementing IPX (which got replaced by TCP/IP, so it's quite old).
My program sends a request packet (this packet is exactly the same every time, it's stateless) and the device returns a response packet with the data I need. I'm using pcap_inject to send the frame and pcap_loop in another thread to do the receiving. I originally had it in one thread, but tried 2 threads to see if it fixed the issue I'm having.
The issue is that libpcap doesn't seem to be receiving the packets in real time. It seems to buffer about 5 of them and then process them all at once. I want to be able to read them as fast as they come. Is there some way to disable this buffering on libpcap, or increase the refresh rate?
Some example output (I just printed out the time that a packet was received). Notice how there is about a second of time between each group
Time: 1365792602.805750
Time: 1365792602.805791
Time: 1365792602.805806
Time: 1365792602.805816
Time: 1365792602.805825
Time: 1365792602.805834
Time: 1365792603.806886
Time: 1365792603.806925
Time: 1365792603.806936
Time: 1365792603.806944
Time: 1365792603.806952
Time: 1365792604.808007
Time: 1365792604.808044
Time: 1365792604.808055
Time: 1365792604.808063
Time: 1365792604.808071
Time: 1365792605.809158
Time: 1365792605.809194
Time: 1365792605.809204
Time: 1365792605.809214
Time: 1365792605.809223
Here's the inject code:
char errbuf[PCAP_ERRBUF_SIZE];
char *dev = "en0";
if (dev == NULL) {
    fprintf(stderr, "Pcap error: %s\n", errbuf);
    return 2;
}
printf("Device: %s\n", dev);
pcap_t *handle;
handle = pcap_open_live(dev, BUFSIZ, 1, 1000, errbuf);
if (handle == NULL) {
    fprintf(stderr, "Device open error: %s\n", errbuf);
    return 2;
}
// Construct the packet that will get sent to the radio
struct ether_header header;
header.ether_type = htons(0x0170);
int i;
for (i = 0; i < 6; i++) {
    header.ether_dhost[i] = radio_ether_address[i];
    header.ether_shost[i] = my_ether_address[i];
}
unsigned char frame[sizeof(struct ether_header) + sizeof(radio_request_packet)];
memcpy(frame, &header, sizeof(struct ether_header));
memcpy(frame + sizeof(struct ether_header), radio_request_packet, sizeof(radio_request_packet));
if (pcap_inject(handle, frame, sizeof(frame)) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Couldn't send frame: %s\n", errbuf);
    return 2;
}
bpf_u_int32 mask;
bpf_u_int32 net;
if (pcap_lookupnet(dev, &net, &mask, errbuf) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Net mask error: %s\n", errbuf);
    return 2;
}
char *filter = "ether src 00:30:30:01:b1:35";
struct bpf_program fp;
if (pcap_compile(handle, &fp, filter, 0, net) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Filter error: %s\n", errbuf);
    return 2;
}
if (pcap_setfilter(handle, &fp) == -1) {
    pcap_perror(handle, errbuf);
    fprintf(stderr, "Install filter error: %s\n", errbuf);
    return 2;
}
printf("Starting capture\n");
pthread_t recvThread;
pthread_create(&recvThread, NULL, (void *(*)(void *))thread_helper, handle);
while (1) {
    if (pcap_inject(handle, frame, sizeof(frame)) == -1) {
        pcap_perror(handle, errbuf);
        fprintf(stderr, "Couldn't inject frame: %s\n", errbuf);
        return 2;
    }
    usleep(200000);
}
pcap_close(handle);
return 0;
And the receiving code:
void got_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *packet)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    double seconds = (double)tv.tv_sec + ((double)tv.tv_usec) / 1000000.0;
    printf("Time: %.6f\n", seconds);
}

void *thread_helper(pcap_t *handle)
{
    pcap_loop(handle, -1, got_packet, NULL);
    return NULL;
}
Is there some way to disable this buffering on libpcap
There's currently no libpcap API to do that.
However, depending on what OS you're running, there may be ways to do it for that particular OS, i.e. you can do it, but in a non-portable fashion.
For systems that use BPF, including *BSD and OS X (which, given the "en0", I suspect you're using), the way to do it is to do something such as:
Creating a set_immediate_mode.h header file containing:
extern int set_immediate_mode(int fd);
Creating a set_immediate_mode.c source file containing:
#include <sys/types.h>
#include <sys/time.h>
#include <sys/ioctl.h>
#include <net/bpf.h>

#include "set_immediate_mode.h"

int set_immediate_mode(int fd)
{
    int on = 1;
    return ioctl(fd, BIOCIMMEDIATE, &on);
}
Adding #include <string.h> and #include <errno.h> to your program if it's not already including those files, adding #include "set_immediate_mode.h" to your program, and adding, after the pcap_open_live() call succeeds, the following code:
int fd;

fd = pcap_fileno(handle);
if (fd == -1) {
    fprintf(stderr, "Can't get file descriptor for pcap_t (this should not happen)\n");
    return 2;
}
if (set_immediate_mode(fd) == -1) {
    fprintf(stderr, "BIOCIMMEDIATE failed: %s\n", strerror(errno));
    return 2;
}
That will completely disable the buffering that BPF normally does; that's the buffering you're seeing with libpcap (see the BPF(4) man page), and disabling it means packets are delivered as soon as they arrive. It also changes the way buffering is done in ways that might cause BPF's internal buffers to fill up faster than they would with normal buffering, so packets might be lost that wouldn't otherwise be lost. If that happens, using pcap_set_buffer_size(), as suggested by Kiran Bandla, could help; it may not even be necessary, especially given that you're using a filter to keep "uninteresting" packets from being put into BPF's buffer in the first place.
On Linux, this is currently not necessary: what buffering is done doesn't have a timeout for the delivery of packets. On Solaris 11 it would be done similarly (as libpcap uses BPF there), but differently on earlier versions of Solaris (which didn't have BPF, so libpcap uses DLPI). On Windows with WinPcap, pcap_open() has a flag for this.
A future version of libpcap will probably have an API for this; I can't promise when that will happen.
You can set the capture buffer size by using pcap_set_buffer_size(). Make sure you do this before you activate your capture handle.
Lowering the buffer size is not always a good idea, though; watch out for your CPU usage and for dropped packets at high capture rates.
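A sketch of how that looks with the pcap_create()/pcap_activate() family (the device name and buffer size here are illustrative). As a side note, libpcap did eventually grow a portable API for the immediate-delivery behaviour discussed above: pcap_set_immediate_mode(), available from libpcap 1.5.0:

#include <pcap/pcap.h>
#include <stdio.h>

pcap_t *open_capture(const char *dev)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *h = pcap_create(dev, errbuf);
    if (h == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return NULL;
    }
    pcap_set_snaplen(h, 65535);
    pcap_set_promisc(h, 1);
    pcap_set_timeout(h, 1000);                  /* read timeout, in ms */
    pcap_set_buffer_size(h, 2 * 1024 * 1024);   /* must come before activation */
    pcap_set_immediate_mode(h, 1);              /* libpcap >= 1.5 only */
    if (pcap_activate(h) < 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(h));
        pcap_close(h);
        return NULL;
    }
    return h;
}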

Using AVAssetWriter with raw NAL Units

I noticed in the iOS documentation for AVAssetWriterInput you can pass nil for the outputSettings dictionary to specify that the input data should not be re-encoded.
The settings used for encoding the media appended to the output. Pass nil to specify that appended samples should not be re-encoded.
I want to take advantage of this feature to pass in a stream of raw H.264 NALs, but I am having trouble adapting my raw byte streams into a CMSampleBuffer that I can pass into AVAssetWriterInput's appendSampleBuffer method. My stream of NALs contains only SPS/PPS/IDR/P NALs (types 1, 5, 7, 8). I haven't been able to find documentation or a conclusive answer on how to use pre-encoded H.264 data with AVAssetWriter, and the resulting video file cannot be played.
How can I properly package the NAL units into CMSampleBuffers? Do I need to use a start-code prefix? A length prefix? Do I need to ensure I put only one NAL per CMSampleBuffer? My end goal is to create an MP4 or MOV container with H.264/AAC.
Here's the code I've been playing with:
-(void)addH264NAL:(NSData *)nal
{
    dispatch_async(recordingQueue, ^{
        // Adapting the raw NAL into a CMSampleBuffer
        CMSampleBufferRef sampleBuffer = NULL;
        CMBlockBufferRef blockBuffer = NULL;
        CMFormatDescriptionRef formatDescription = NULL;
        CMItemCount numberOfSampleTimeEntries = 1;
        CMItemCount numberOfSamples = 1;
        CMVideoFormatDescriptionCreate(kCFAllocatorDefault, kCMVideoCodecType_H264, 480, 360, nil, &formatDescription);
        OSStatus result = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, [nal length], kCFAllocatorDefault, NULL, 0, [nal length], kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
        if (result != noErr)
        {
            NSLog(@"Error creating CMBlockBuffer");
            return;
        }
        result = CMBlockBufferReplaceDataBytes([nal bytes], blockBuffer, 0, [nal length]);
        if (result != noErr)
        {
            NSLog(@"Error filling CMBlockBuffer");
            return;
        }
        const size_t sampleSizes = [nal length];
        CMSampleTimingInfo timing = { 0 };
        result = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, YES, NULL, NULL, formatDescription, numberOfSamples, numberOfSampleTimeEntries, &timing, 1, &sampleSizes, &sampleBuffer);
        if (result != noErr)
        {
            NSLog(@"Error creating CMSampleBuffer");
        }
        [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
    });
}
Note that I'm calling CMSampleBufferSetOutputPresentationTimeStamp on the sample buffer inside of the writeSampleBuffer method with what I think is a valid time before I'm actually trying to append it.
Any help is appreciated.
I managed to get video playback working in VLC but not QuickTime. I used code similar to what I posted above to get H.264 NALs into CMSampleBuffers.
I had two main issues:
I was not setting CMSampleTimingInfo correctly (as my comment above states).
I was not packing the raw NAL data correctly (not sure where this is documented, if anywhere).
To solve #1, I set timing.duration = CMTimeMake(1, fps); where fps is the expected frame rate. I then set timing.decodeTimeStamp = kCMTimeInvalid; to mean that the samples will be given in decoding order. Lastly, I set timing.presentationTimeStamp by calculating the absolute time, which I also used with startSessionAtSourceTime.
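In code, that timing setup looks roughly like this (fps and frameNumber are hypothetical names for the expected frame rate and a running frame counter):

#include <CoreMedia/CoreMedia.h>

static CMSampleTimingInfo timingForFrame(int64_t frameNumber, int32_t fps)
{
    CMSampleTimingInfo timing;
    timing.duration = CMTimeMake(1, fps);          /* one frame at the expected rate */
    timing.decodeTimeStamp = kCMTimeInvalid;       /* samples are given in decode order */
    timing.presentationTimeStamp = CMTimeMake(frameNumber, fps);  /* also what startSessionAtSourceTime gets */
    return timing;
}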
To solve #2, through trial and error I found that giving my NAL units in the following form worked:
[7 8 5] [1] [1] [1]..... [7 8 5] [1] [1] [1]..... (repeating)
Where each NAL unit is prefixed by a 32-bit start code equaling 0x00000001.
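A minimal sketch of that prefixing step, assuming you assemble each NAL into a fresh buffer before wrapping it in the CMBlockBuffer (sizing dst is the caller's job):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Copy one NAL unit into dst preceded by the 4-byte Annex B start code.
   dst must have room for nal_len + 4 bytes; returns the bytes written. */
static size_t prepend_start_code(uint8_t *dst, const uint8_t *nal, size_t nal_len)
{
    static const uint8_t start_code[4] = { 0x00, 0x00, 0x00, 0x01 };
    memcpy(dst, start_code, sizeof start_code);
    memcpy(dst + sizeof start_code, nal, nal_len);
    return nal_len + sizeof start_code;
}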
Presumably for the same reason it's not playing in QuickTime, I'm still having trouble moving the resulting .mov file to the photo album (the ALAssetsLibrary method videoAtPathIsCompatibleWithSavedPhotosAlbum is failing, stating that the "Movie could not be played"). Hopefully someone with an idea about what's going on can comment. Thanks!

Sending bytes/data from C class to an Objective-C class

I am using the Apple example code SpeakHere. In this code there is a class called AQRecorder, which is a C++ class. I want to send the sound bytes to an Objective-C object I have; this object sends the data over UDP. Here is the function in which I'm trying to call an Objective-C method from inside the C++ code:
void AQRecorder::MyInputBufferHandler(void *inUserData,
                                      AudioQueueRef inAQ,
                                      AudioQueueBufferRef inBuffer,
                                      const AudioTimeStamp *inStartTime,
                                      UInt32 inNumPackets,
                                      const AudioStreamPacketDescription *inPacketDesc)
{
    AQRecorder *aqr = (AQRecorder *)inUserData;
    try {
        if (inNumPackets > 0) {
            // write packets to file
            XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
                                                inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
                          "AudioFileWritePackets failed");
            aqr->mRecordPacket += inNumPackets;

            NSData *data = [NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize];
            [(Udp *)udpp sendData:data];
        }
        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (aqr->IsRunning())
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
}
Although the NSData part works fine, I am not able to add my object (Udp) to the class and therefore can't call its methods. I tried declaring the Udp object everywhere (inside the class, outside the class, in the header of the .mm file, as a void*...), but nothing would compile.
Update: since I wasn't able to add the object in order to send the data, I used NSNotification instead to pass the data from the C++ object to the Objective-C object.
Help please.
Try changing the extension of all .m files to .mm.
If that works, then you have a dependency issue. If an Objective-C file includes a C++ file, then the extension of that Obj-C file should be .mm instead of .m. Moreover, if this Objective-C file now gets included in some other Objective-C file, then the extension of that file should be .mm as well.
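Another option, which avoids renaming files all the way up the include chain, is to expose a plain C function from a single .mm file and call only that from the C++ recorder. A sketch, reusing the asker's Udp class and sendData: method as assumed names:

/* udp_bridge.h -- plain C interface, safe to include from the C++ code */
#ifdef __cplusplus
extern "C" {
#endif
void udp_bridge_send(const void *bytes, unsigned long length);
#ifdef __cplusplus
}
#endif

/* udp_bridge.mm -- the only file that has to see Objective-C */
#import <Foundation/Foundation.h>
#import "Udp.h"                 /* the asker's UDP sender class (assumed) */

static Udp *udpp;               /* set this from your app's setup code */

void udp_bridge_send(const void *bytes, unsigned long length)
{
    NSData *data = [NSData dataWithBytes:bytes length:length];
    [udpp sendData:data];
}

The recorder callback then just calls udp_bridge_send(inBuffer->mAudioData, inBuffer->mAudioDataByteSize), and only udp_bridge.mm needs Objective-C++ compilation.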
Check the file type: select "Objective-C++ Preprocessed". You can set this in the File inspector in the right panel in Xcode 4, or via "Get Info" in the file's context menu in older Xcode versions.

Using FFMPEG Audio conversion in the iPhone

I'm using ffmpeg on the iPhone, reading a WMA stream from an MMS server, and I want to save the stream to an M4A file using ffmpeg's ALAC encoder. The problem is that when I try to save the raw stream (the stream processed with avcodec_decode_audio2), the resulting file is not even recognized as WMA, and obviously not played. So before converting the stream to M4A (using avcodec_encode_audio), I want to be sure the stream is being processed and saved correctly. Has anyone had experience doing this kind of thing? Thanks.
P.S. I'm writing the byte buffer using CFWriteStreamWrite, and everything seems to be OK.
My code :
while (av_read_frame(mms_IOCtx, &_packet) >= 0) {
    if (_packet.stream_index == audioStreamIdx) {
        uint8_t *_packetData = _packet.data;
        int _packetSize = _packet.size;
        // Align output buffer
        uint8_t audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2 + 16];
        int16_t *aligned_buffer;
        size_t buffer_size;
        int audio_size, len;
        buffer_size = sizeof(audio_buf);
        aligned_buffer = align16(audio_buf, &buffer_size);
        while (currentState != STATE_CLOSED && (_packetSize > 0)) {
            audio_size = buffer_size;
            len = avcodec_decode_audio2(mms_CodecCtx, aligned_buffer, &audio_size, _packetData, _packetSize);
            if (len < 0)
                break;          // decode error: skip the rest of this packet
            _packetData += len; // advance past the consumed bytes
            _packetSize -= len; //  (otherwise this loop never terminates)
            // call to the method that writes the bytes ....
        }
    }
}
After decoding, it is no longer a WMA stream, it's a raw audio stream. If you want to write the WMA stream, you'd write out the data before decoding.
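In the loop above, that means dumping _packet.data before any avcodec_decode_audio2 call. A sketch (wma_dump is a hypothetical FILE* opened earlier; note that the raw packets alone still lack the ASF container framing a standalone .wma file would need):

// Inside the av_read_frame loop, before decoding:
if (_packet.stream_index == audioStreamIdx) {
    // Still-encoded WMA bytes, exactly as they came off the wire.
    fwrite(_packet.data, 1, _packet.size, wma_dump);
}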