I'm using some RTSP/RTP code to listen to various RTSP streams using FFmpeg, and it works!
BUT
The audio is being decoded in such a way that every 10 seconds a glitch occurs in the ASF decoding of the stream, where the volume peaks and makes a loud popping sound.
Generally the sound you hear when a packet is corrupted...
I'm just wondering if anyone can point me to where to look when troubleshooting WMA/ASF audio streams.
Any help/tips/pointers are appreciated.
I'm not sure if the problem is in the RTSP parser, the data buffer, or the WMA decoder...
I know nothing about WMA/ASF, but have you checked that the sequence numbers in the RTP headers are contiguous? (How you would find these depends on which RTSP/RTP library you are using.) At least then you will know whether packets are going missing or not.
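If it helps, here's a minimal sketch of that check in C, assuming you can get at the raw RTP packets (e.g. straight off the UDP socket) before they reach the decoder. The offsets follow the fixed RTP header layout in RFC 3550; the logging is just illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Extract the 16-bit sequence number from a raw RTP packet (RFC 3550):
     * bytes 2-3 of the fixed header, network byte order. */
    static uint16_t rtp_seq(const uint8_t *pkt, size_t len)
    {
        if (len < 12) return 0;            /* shorter than the fixed RTP header */
        return (uint16_t)((pkt[2] << 8) | pkt[3]);
    }

    /* Call this for every packet as it arrives; it logs gaps (lost packets)
     * and reordering so you can see whether the pops line up with them. */
    static void check_seq(uint16_t seq)
    {
        static int have_prev = 0;
        static uint16_t prev;

        if (have_prev) {
            uint16_t expected = (uint16_t)(prev + 1);  /* wraps at 65535 */
            if (seq != expected)
                fprintf(stderr, "RTP gap: expected %u, got %u\n", expected, seq);
        }
        prev = seq;
        have_prev = 1;
    }

If the pops coincide with logged gaps, it's a packet-loss/reordering issue rather than the WMA decoder itself.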
I've just started working on a project that requires me to do lots of audio related stuff on iOS.
This is the first time I'm working in the realm of audio, and have absolutely no idea how to go about it. So, I googled for documents, and was mostly relying on Apple docs. Firstly, I must mention that the documents are extremely confusing, and often, misleading.
Anyways, to test a recording, I used AVAudioSession and AVAudioRecorder. From what I understand, these are okay for simple recording and playback. So, here are a couple of questions I have regarding doing anything more complex:
If I wish to do any real-time processing of the audio while recording is in progress, do I need to use Audio Queue Services?
What other options do I have apart from Audio Queue Services?
What are Audio Units?
I actually got Apple's Audio Queue Services Programming Guide and started writing an audio queue for recording. The "diagram" in their Audio Queue Services guide (p. 19 of the PDF) shows recording being done using an AAC codec. However, after some frustration and a lot of wasted time, I found out that AAC recording is not available on iOS (see "Core Audio Essentials", section "Core Audio Plug-ins: Audio Units and Codecs").
Which brings me to another two questions:
What's a suitable format for recording, given the choices of Apple Lossless, iLBC, IMA/ADPCM, Linear PCM, and uLaw/aLaw? Is there a chart somewhere that someone could point me to?
Also, if MPEG4AAC (.m4a) recording is not available using an audio queue, how is it that I can record an MPEG4AAC (.m4a) using AVAudioRecorder?!
Super thanks a ton in advance for helping me out on this. I'll super appreciate any links, directions and/or words of wisdom.
Thanks again and cheers!
For your first question, Audio Queue services or using the RemoteIO Audio Unit are the appropriate APIs for real-time audio processing, with RemoteIO allowing lower and more deterministic latency, but with stricter real-time requirements than Audio Queues.
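If it helps, here's a rough sketch of the Audio Queue route in C: record 16-bit linear PCM and do any processing in the input callback. Error checking is omitted and the buffer sizes are arbitrary example values, so treat it as a starting point rather than production code:

    #include <AudioToolbox/AudioToolbox.h>

    /* Input callback: each filled buffer lands here, which is where any
     * "real-time" processing of the recorded samples would go. */
    static void HandleInput(void *userData, AudioQueueRef queue,
                            AudioQueueBufferRef buf,
                            const AudioTimeStamp *startTime,
                            UInt32 numPackets,
                            const AudioStreamPacketDescription *descs)
    {
        /* buf->mAudioData holds buf->mAudioDataByteSize bytes of 16-bit PCM.
           Process (or stash) it here, then hand the buffer back to the queue. */
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }

    static AudioQueueRef StartRecordingQueue(void)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue = NULL;
        AudioQueueNewInput(&fmt, HandleInput, NULL, NULL, NULL, 0, &queue);

        /* Prime a few buffers (~0.5 s each at 44.1 kHz mono 16-bit). */
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, 44100, &buf);
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }
        AudioQueueStart(queue, NULL);
        return queue;
    }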
For creating AAC recordings, one possibility is to record to raw linear PCM audio, then later use AV file services to convert the buffered raw audio into your desired compressed format.
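As a sketch of that conversion step, Extended Audio File Services can do the PCM-to-AAC encode as you write the file out. The sample rate, mono layout, and the assumption that you already hold the recorded PCM in one buffer are just for illustration:

    #include <AudioToolbox/AudioToolbox.h>

    /* Write previously captured 16-bit mono PCM out as AAC in an .m4a file.
     * "pcm"/"frames" are whatever you buffered during recording. */
    static void WritePcmAsAac(CFURLRef url, const SInt16 *pcm, UInt32 frames)
    {
        /* Destination format: give the essentials, let the codec fill the rest. */
        AudioStreamBasicDescription dst = {0};
        dst.mSampleRate       = 44100.0;
        dst.mFormatID         = kAudioFormatMPEG4AAC;
        dst.mChannelsPerFrame = 1;

        /* Client format: the raw PCM we recorded. */
        AudioStreamBasicDescription src = {0};
        src.mSampleRate       = 44100.0;
        src.mFormatID         = kAudioFormatLinearPCM;
        src.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        src.mChannelsPerFrame = 1;
        src.mBitsPerChannel   = 16;
        src.mBytesPerFrame    = 2;
        src.mFramesPerPacket  = 1;
        src.mBytesPerPacket   = 2;

        ExtAudioFileRef file = NULL;
        ExtAudioFileCreateWithURL(url, kAudioFileM4AType, &dst, NULL,
                                  kAudioFileFlags_EraseFile, &file);
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(src), &src);

        AudioBufferList list;
        list.mNumberBuffers = 1;
        list.mBuffers[0].mNumberChannels = 1;
        list.mBuffers[0].mDataByteSize   = frames * sizeof(SInt16);
        list.mBuffers[0].mData           = (void *)pcm;

        ExtAudioFileWrite(file, frames, &list);   /* converts PCM -> AAC */
        ExtAudioFileDispose(file);
    }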
I'm developing a kind of proxy for video streaming, and I'm now dealing with an issue related to packets received out of order (without losses). This issue may be the reason why there is frequent noise in the video playback.
Do you know by chance whether VLC is able to reorder packets? If so, it would mean that the noise in the playback has some other cause; if not, I should just develop an additional layer that ensures the packets are received in the correct order.
Thanks.
Assuming that you are talking about RTP over UDP: AFAIK VLC uses the live555 libraries for client-side RTSP/RTP functionality, and live555 has a built-in jitter buffer that should take care of re-ordering for you. I can't recall the size of the jitter buffer offhand, but 100 ms seems to ring a bell.
In case you didn't know: when developing media streaming applications (especially over UDP), it is important to increase the size of the receiver buffer. If it fills up, packets get dropped, which could explain your artifacts.
Also, UDP being unreliable means that you will experience artifacts if packets are lost/corrupted and you have no suitable mechanism to deal with it.
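For example, something along these lines on the receiving socket (the 2 MB figure is arbitrary, and on Linux the kernel caps it at net.core.rmem_max unless you raise that too):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Ask the kernel for a larger receive buffer on an RTP/UDP socket so bursts
     * don't overflow it and get dropped. The kernel may silently cap the value,
     * so read it back to see what you really got. */
    static void grow_rcvbuf(int sock)
    {
        int requested = 2 * 1024 * 1024;            /* 2 MB - example value */
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                       &requested, sizeof(requested)) < 0)
            perror("setsockopt(SO_RCVBUF)");

        int actual = 0;
        socklen_t len = sizeof(actual);
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len);
        printf("receive buffer is now %d bytes\n", actual);
    }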
I am developing an iPhone app which sends/receives data through the iPhone's audio/headphone jack. I assume we can send/receive data through the headphone jack, but that data would be an audio file with some codec applied. I want to read an audio file and send the raw data to the headphone jack. How can I do that? Any help or code snippet is appreciated.
Best regards,
Abdul Qavi
Since the headphone jack is on the other side of a couple of D/A converters, I guess you'd need a codec that converts to modem tones. Hope your files are really small, 'cos the transfer rate would be pretty slow, even if you could figure out how to use both channels.
Just 'tooth 'em.
Rgds,
Martin
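To make the "modem tones" suggestion above concrete, here is a toy sketch of FSK encoding: each bit becomes a short burst of one of two tones, giving PCM samples you could then play out of the jack. The frequencies, bit rate, and amplitude are arbitrary example values, nothing iPhone-specific, and a real receiver would need matching tone detectors on the other end:

    #include <math.h>
    #include <stdint.h>
    #include <stddef.h>

    #define SAMPLE_RATE     44100
    #define SAMPLES_PER_BIT 441            /* ~100 bits/sec - deliberately slow */
    #define FREQ_ZERO       1200.0         /* tone for a 0 bit */
    #define FREQ_ONE        2200.0         /* tone for a 1 bit */

    /* Encode one byte, LSB first, as FSK tones into 'out'
     * (8 * SAMPLES_PER_BIT 16-bit samples). This is the
     * "convert data to modem tones" step. */
    static void fsk_encode_byte(uint8_t byte, int16_t *out)
    {
        const double pi = 3.14159265358979;
        size_t n = 0;
        for (int bit = 0; bit < 8; bit++) {
            double freq = ((byte >> bit) & 1) ? FREQ_ONE : FREQ_ZERO;
            for (int s = 0; s < SAMPLES_PER_BIT; s++, n++)
                out[n] = (int16_t)(30000.0 * sin(2.0 * pi * freq * n / SAMPLE_RATE));
        }
    }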
I have a program on the server side that keeps generating a series of JPEG files, and I want to play these files in the client browser as a video stream at a desired frame rate (this video should be playing while the new JPEG files are being generated). Meanwhile, I have a WAV file handy, and I want to play it on the client side while the streaming video is being played.
Is there any way to do it? I have done plenty of research but can't find a satisfactory solution -- they are either just for video streaming or just for audio streaming.
I know mjpg-streamer at http://sourceforge.net/projects/mjpg-streamer/ is capable of serving streaming video in MJPG format from JPEG files, but it doesn't look like it can stream audio.
I am very new to this area, so more detailed explanation will be extremely appreciated. Thank you so much!!!
P.S. A solution/library in C++ is preferred, but anything else would help as well. I am working on Linux.
The browser should be able to do this natively, no? Firefox can certainly do this if you simply give it the correct URL of the streaming MJPEG source. The MJPEG stream should be properly formatted.
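In case it helps, "properly formatted" here means the multipart/x-mixed-replace response that mjpg-streamer and most IP cameras emit: one long HTTP response whose body is an endless series of JPEG parts. A rough server-side sketch, where get_next_jpeg() is a hypothetical stand-in for whatever produces your frames:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BOUNDARY "frame"

    /* Hypothetical hook into the program that keeps generating JPEGs:
     * returns a pointer to the latest frame and its size. */
    extern const unsigned char *get_next_jpeg(size_t *size);

    /* Write an MJPEG stream to an already-accepted HTTP client socket.
     * Browsers such as Firefox render this natively. */
    static void serve_mjpeg(int client_fd)
    {
        const char *hdr =
            "HTTP/1.0 200 OK\r\n"
            "Content-Type: multipart/x-mixed-replace; boundary=" BOUNDARY "\r\n"
            "\r\n";
        write(client_fd, hdr, strlen(hdr));

        for (;;) {
            size_t size;
            const unsigned char *jpeg = get_next_jpeg(&size);

            char part[128];
            int n = snprintf(part, sizeof(part),
                             "--" BOUNDARY "\r\n"
                             "Content-Type: image/jpeg\r\n"
                             "Content-Length: %zu\r\n\r\n", size);
            write(client_fd, part, n);
            write(client_fd, jpeg, size);
            write(client_fd, "\r\n", 2);

            usleep(100000);   /* ~10 fps; adjust to the desired frame rate */
        }
    }

Note that this only covers the video; the WAV would still have to be served separately (e.g. through an HTML5 audio element), which may be why the approach below ended up being preferable.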
I figured it out. The proper way of doing it is to use ffmpeg, libav and an RTMP server, such as red5.
Using FFMPEG, Live555, JSON
Not sure how it works, but if you look at the source files at http://github.com/dropcam/dropcam_for_iphone you can see that they are using a combination of open-source projects like FFMPEG, Live555, JSON, etc. Using Wireshark to sniff the packets sent from one of the public cameras that's available to view with the free "Dropcam For iPhone" app on the App Store, I was able to confirm that the iPhone was receiving H.264 video via RTP/RTSP/RTCP, and even RTMPT, which suggests that maybe some of the stream is tunneled.
Maybe someone could take a look at the open-source files and explain how they got RTSP to work on the iPhone.
Thanks for the info, TinC0ils. After digging a little deeper, I've read that they have modified the Axis camera with custom firmware to limit the streaming to just a single 320x240 H.264 feed, to better provide consistent-quality video over different networks and, as you point out, to be less of a draw on the phone's hardware, etc. My interest was driven by a desire to use my iPhone to view live video and audio from a couple of IP cameras that I own, without the jerkiness of MJPEG or the inherent latency involved with "HTTP Live Streaming". I think Dropcam have done an excellent job with their hardware/software combo; I just don't need any new hardware at the moment.
Oh yeah, I almost forgot the reason for this post: the RTSP PROTOCOL DOES WORK ON THE IPHONE!
They are using open-source projects to receive the frames and decode them in software instead of using the hardware decoders. This will work; however, it runs counter to Apple's requirement that you use their HTTP streaming. It will also require greater CPU resources, so it may not decode video at the desired fps/resolution on older devices, and/or it will decrease battery life compared to HTTP streaming.