Packet order for UDP video streaming

I'm developing a kind of proxy for video streaming, and I'm now dealing with an issue where packets are received out of order (without losses). This may be the reason for the frequent noise in the video playback.
Do you know whether VLC is able to reorder packets? If so, the noise in the playback must have some other cause; if not, I should develop an additional layer that ensures packets are delivered in the correct order.
Thanks.

Assuming that you are talking about RTP over UDP: AFAIK VLC uses the live555 libraries for client-side RTSP/RTP functionality, and live555 has a built-in jitter buffer that should take care of reordering for you. I can't recall the size of the jitter buffer offhand, but 100 ms seems to ring a bell.
In case you didn't know: when developing media streaming applications (especially over UDP), it is important to increase the size of the socket's receive buffer. If it fills up, packets get dropped, which could explain your artifacts.
Also, UDP being unreliable means that you will experience artifacts if packets are lost or corrupted and you have no suitable mechanism to deal with that.
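If it turns out you do need your own reordering layer, the core of it is small: read the 16-bit sequence number from each RTP header and hold packets until their predecessors arrive. Below is a minimal sketch in Swift; ReorderBuffer and its API are made up for illustration, it assumes standard 12-byte RTP headers, and a real implementation would also time out gaps and evict stale packets.

```swift
import Foundation

// Hypothetical reordering buffer for RTP-over-UDP packets (not VLC/live555 code).
final class ReorderBuffer {
    private var pending: [UInt16: Data] = [:]   // sequence number -> packet
    private var nextSeq: UInt16?                // next sequence number we expect

    // The RTP sequence number is big-endian at byte offsets 2-3 of the header.
    private func sequenceNumber(of packet: Data) -> UInt16 {
        let i = packet.startIndex
        return UInt16(packet[i + 2]) << 8 | UInt16(packet[i + 3])
    }

    // Feed a received packet; returns any packets that are now deliverable in order.
    func insert(_ packet: Data) -> [Data] {
        let seq = sequenceNumber(of: packet)
        if nextSeq == nil { nextSeq = seq }     // lock onto the first packet seen
        pending[seq] = packet

        var inOrder: [Data] = []
        while let expected = nextSeq, let next = pending.removeValue(forKey: expected) {
            inOrder.append(next)
            nextSeq = expected &+ 1             // wrapping add handles 16-bit rollover
        }
        return inOrder
    }
}
```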

Related

Streaming audio from a microphone on a Mac to an iPhone

I'm working on a personal project where the iPhone connects to a server-type application running on a Mac. The iPhone sends and receives textual/ASCII data via standard sockets. I now need to stream the microphone from the Mac to the iPhone. I've done some work with AudioServices before but wanted to check my thoughts here before getting too deep.
I'm thinking I can:
1. Create an Audio Queue in the standard Cocoa application on the Mac.
2. In my Audio Queue Callback function, rather than writing it to a file, write it to another socket I open for audio streaming.
3. On the iPhone, receive the raw sampled/encoded audio data from the TCP stream and dump it into an Audio Queue Player which outputs to headphone/speaker.
I know this is no small task and I've greatly simplified what I need to do but could it be as easy as that?
Thanks for any help you can provide,
Stateful
This looks broadly sensible, but you'll almost certainly need to do a few more things:
Buffering. On the "recording" end, you probably don't want to block the audio queue if the buffer is full. On the "playback" end, I don't think you can just pass buffers into the queue (IIRC you'll need to buffer it until you get a callback).
Concurrency. I'm pretty sure AQ callbacks happen on their own thread, so you'll need some sort of locking/barriers around your buffer accesses.
Buffer pools, if memory allocation ends up being a big overhead.
Compression. AQ might be able to give you "IMA4" frames (IMA ADPCM 4:1, or so); I'm not sure if it does hardware MP3 decompression on the iPhone.
Packetization, if e.g. you need to interleave voice chat with text chat.
EDIT: Playback sync (or whatever you're supposed to call it). You need to be able to handle different effective audio clock rates, whether it's due to a change in latency or something else. Skype does it by changing playback speed (with pitch-correction).
EDIT: Packet loss. You might be able to get away with using TCP over a short link, but that depends a lot on the quality of your wireless network. UDP is a minor pain to get right (especially if you have to detect an MTU hole).
Depending on your data rates, it might be worthwhile going for the lower-level (BSD) socket API and potentially even using readv()/writev().
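As a rough idea of what that looks like from Swift: the sketch below sends a 4-byte length prefix plus an audio payload in a single writev() call. The sendPacket name, the length-prefix framing, and the already-connected descriptor fd are assumptions for the example, and partial writes and error handling are ignored.

```swift
import Darwin
import Foundation

// Hypothetical helper: gather a length prefix and payload into one writev() call.
func sendPacket(fd: Int32, payload: Data) -> Bool {
    var header = UInt32(payload.count).bigEndian              // 4-byte length prefix
    return withUnsafeMutableBytes(of: &header) { head -> Bool in
        payload.withUnsafeBytes { body -> Bool in
            guard let payloadBase = body.baseAddress else { return false }
            let iov = [
                iovec(iov_base: head.baseAddress, iov_len: head.count),
                iovec(iov_base: UnsafeMutableRawPointer(mutating: payloadBase),
                      iov_len: body.count)
            ]
            let written = writev(fd, iov, Int32(iov.count))
            return written == head.count + body.count         // ignores partial writes
        }
    }
}
```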
If all you want is an "online radio" service and you don't care about the protocol used, it might be easier to use AVPlayer/MPMoviePlayer to play audio from a URL instead. This involves implementing a server which speaks Apple's HTTP streaming protocol; I believe Apple has some sample code that does this.
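To make the buffering and concurrency points above concrete, here is a rough sketch of the Mac-side capture path (steps 1 and 2 of the plan): the Audio Queue input callback copies each captured buffer into a lock-protected Data that a separate sender thread would drain. The MicCapture class, the 16-bit/44.1 kHz mono LPCM format, the 4 KB buffers, and the pool of three queue buffers are all illustrative assumptions, not a recommendation.

```swift
import AudioToolbox
import Foundation

// Sketch only: capture from the default input device and hand bytes to a sender thread.
final class MicCapture {
    private var queue: AudioQueueRef?
    private let lock = NSLock()
    private var outgoing = Data()               // drained by a network thread (not shown)

    func start() {
        var format = AudioStreamBasicDescription(
            mSampleRate: 44100,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
            mBytesPerPacket: 2, mFramesPerPacket: 1,
            mBytesPerFrame: 2, mChannelsPerFrame: 1,
            mBitsPerChannel: 16, mReserved: 0)

        // C callback: runs on the audio queue's own thread, hence the lock.
        let callback: AudioQueueInputCallback = { userData, queue, buffer, _, _, _ in
            let capture = Unmanaged<MicCapture>.fromOpaque(userData!).takeUnretainedValue()
            let bytes = Data(bytes: buffer.pointee.mAudioData,
                             count: Int(buffer.pointee.mAudioDataByteSize))
            capture.lock.lock()
            capture.outgoing.append(bytes)       // don't block here; drop data if it grows too large
            capture.lock.unlock()
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)   // recycle the buffer
        }

        let selfPtr = Unmanaged.passUnretained(self).toOpaque()
        let status = AudioQueueNewInput(&format, callback, selfPtr, nil, nil, 0, &queue)
        guard status == noErr, let q = queue else { return }

        for _ in 0..<3 {                         // small pool of reusable buffers
            var buf: AudioQueueBufferRef?
            AudioQueueAllocateBuffer(q, 4096, &buf)
            if let b = buf { AudioQueueEnqueueBuffer(q, b, 0, nil) }
        }
        AudioQueueStart(q, nil)
    }
}
```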

How to implement audio streaming with <50 millisecond latency on iPhone

I need to implement audio streaming on the iPhone with latency lower than 50 milliseconds.
Any ideas on how I can make it work?
I bumped into:
http://cocoawithlove.com/2009/06/revisiting-old-post-streaming-and.html
But it's very important to me to know that the latency will be very low.
thanks
One way to minimize latency on the receiving end is to use the RemoteIO Audio Unit with very short buffers, and feed it from raw PCM audio or a decompressor for an audio format that requires extremely low computational complexity to decode, as well as small packets.
You pretty much need complete control over the entire network source and path, including hand-picking all the equipment, as any router or access point can completely destroy latency by buffering packets or prioritizing other traffic, etc.
You probably want to use UDP for the IP protocol, with a packet size tuned to your network equipment and to the audio buffer size.
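As a rough illustration of the receiving end, the sketch below asks the audio session for very short hardware buffers and creates the RemoteIO unit. The 5 ms buffer duration and 48 kHz sample rate are requests (the hardware may round them up), and the render callback that would pull decoded PCM from your network buffer is omitted.

```swift
import AVFoundation
import AudioToolbox

// Sketch only: request short I/O buffers and instantiate the RemoteIO audio unit.
func setUpLowLatencyOutput() throws -> AudioUnit? {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setPreferredIOBufferDuration(0.005)      // ~5 ms buffers (a request, not a guarantee)
    try session.setPreferredSampleRate(48_000)
    try session.setActive(true)

    // Describe the RemoteIO (hardware input/output) audio unit.
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0, componentFlagsMask: 0)

    guard let component = AudioComponentFindNext(nil, &desc) else { return nil }
    var unit: AudioUnit?
    AudioComponentInstanceNew(component, &unit)
    // A render callback installed via AudioUnitSetProperty(kAudioUnitProperty_SetRenderCallback, ...)
    // would feed PCM from the network jitter buffer into these short buffers.
    return unit
}
```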

What Techniques Are Best To Live Stream iPhone Video Camera Data To a Computer?

I would like to stream video from an iPhone camera to an app running on a Mac. Think sorta like video chat but only one way, from the device to a receiver app (and it's not video chat).
My basic understanding so far:
You can use AVFoundation to get 'live' video camera data without saving to a file but it is uncompressed data and thus I'd have to handle compression on my own.
There's no built in AVCaptureOutput support for sending to a network location, I'd have to work this bit out on my own.
Am I right about the above or am I already off-track?
Apple Tech Q&A 1702 provides some info on saving off individual frames as images - is this the best way to go about this? Just saving off 30fps and then something like ffmpeg to compress 'em?
There's a lot of discussion of live streaming to the iPhone but far less info on people that are sending live video out. I'm hoping for some broad strokes to get me pointed in the right direction.
You can use AVCaptureVideoDataOutput and a sampleBufferDelegate to capture compressed frames; then you just need to stream them over the network. AVFoundation provides an API to encode frames to local video files, but it doesn't provide any for streaming to the network. Your best bet is to find a library that streams raw frames over the network. I'd start with ffmpeg; I believe libavformat supports RTSP, so look at the ffserver code.
Note that you should configure AVCaptureVideoDataOutput to give you compressed frames, so you avoid having to compress raw video frames without the benefit of hardware encoding.
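For reference, wiring up the capture side looks roughly like the sketch below; the FrameCapture class and queue label are made up for the example, and the encoding/packetizing step inside the delegate callback is left to whatever streaming library you settle on.

```swift
import AVFoundation
import CoreMedia

// Sketch only: deliver camera frames to a sample-buffer delegate on a background queue.
final class FrameCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "capture.frames")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        output.alwaysDiscardsLateVideoFrames = true      // drop frames rather than back up
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    // Called once per captured frame on `queue`.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand the frame to your encoder / network layer here.
    }
}
```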
This depends a lot on your target resolution and the frame-rate performance you are aiming for.
From an abstract point of view, I would probably have a capture thread to fill a buffer directly from AVCaptureOutput, and a communications thread to send and rezero the buffer (padded if need be) to a previously specified host every x milliseconds.
After you accomplish the initial data transfer, I would work on achieving 15 fps at the lowest resolution and work my way up until the buffer overflows before the communications thread can transmit it. That will require balancing image resolution, buffer size (probably dependent on GSM, and soon CDMA, frame sizes), and finally the maximum rate at which you can transmit that buffer.

RTSP audio clicks/pops/peaks/glitches and ticks (iPhone streaming). Any advice?

I'm using some code related to RTSP/RTP to listen to various RTSP streams using FFMPEG, and it works!
BUT
The audio is being decoded in such a way that every 10 seconds a glitch occurs in the ASF decoding of the stream, where the volume peaks and makes a loud popping sound.
Generally the sound you hear when a packet is corrupted...
I'm just wondering if anyone can help me with where to look when troubleshooting WMA/ASF audio streams.
Any help/tips/pointers are appreciated.
I'm not sure if it's in the RTSP parser, the data buffer, or the WMA decoder...
I know nothing about WMA/ASF, but have you checked that the sequence numbers in the RTP headers are contiguous (how you would find these out will depend on what RTSP/RTP library you are using)? At least then you will know whether packets are going missing or not.
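As a quick, library-independent check, you can pull the sequence number straight out of each RTP packet (bytes 2-3 of the header) before it reaches the decoder and log any discontinuity. This snippet is a hypothetical diagnostic, not part of FFMPEG.

```swift
import Foundation

// Hypothetical diagnostic: log whenever consecutive RTP packets are not contiguous.
// Assumes standard RTP headers with a big-endian sequence number at byte offset 2.
final class SequenceChecker {
    private var lastSeq: UInt16?

    func check(_ packet: Data) {
        let i = packet.startIndex
        let seq = UInt16(packet[i + 2]) << 8 | UInt16(packet[i + 3])
        if let last = lastSeq, seq != last &+ 1 {
            print("RTP discontinuity: expected \(last &+ 1), got \(seq)")
        }
        lastSeq = seq
    }
}
```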

Streaming live H.264 video via RTSP to iPhone does work! w/ example

Using FFMPEG, Live555, JSON
Not sure how it works, but if you look at the source files at http://github.com/dropcam/dropcam_for_iphone you can see that they are using a combination of open source projects like FFMPEG, Live555, JSON, etc. Using Wireshark to sniff the packets sent from one of the public cameras that's available to view with the free "Dropcam For Iphone App" on the App Store, I was able to confirm that the iPhone was receiving H.264 video via RTP/RTSP/RTCP, and even RTMPT, which suggests that some of the stream is tunneled.
Maybe someone could take a look at the open source files and explain how they got RTSP to work on the iphone.
Thanks for the info, TinC0ils. After digging a little deeper I've read that they have modified the Axis camera with custom firmware to limit the streaming to just a single 320x240 H.264 feed, to better provide consistent-quality video over different networks and, as you point out, to be less of a draw on the phone's hardware, etc. My interest was driven by a desire to use my iPhone to view live video and audio from a couple of IP cameras that I own, without the jerkiness of MJPEG or the inherent latency involved with "HTTP live streaming". I think Dropcam have done an excellent job with their hardware/software combo; I just don't need any new hardware at the moment.
Oh yeah, I almost forgot the reason for this post: THE RTSP PROTOCOL DOES WORK ON THE IPHONE!
They are using open source projects to receive the frames and decode them in software instead of using hardware decoders. This will work; however, it runs counter to Apple's requirement that you use their HTTP streaming. It will also require greater CPU resources, so it may not decode video at the desired fps/resolution on older devices and/or will decrease battery life compared to HTTP streaming.