Possible to integrate jcodec's H.264 encoder into red5-screenshare?

I'm trying to minimize red5-screenshare's bandwidth footprint by using jcodec's H.264 encoder. The Screen Video codec takes up very little upload bandwidth, but only when used in 128-color mode. In full 24-bit RGB color mode it requires at least 5-10 Mbps even at a fairly low screen resolution, which is unacceptable. I'm hoping that by switching to H.264 I could at least halve that upload bandwidth requirement.
To ask an actual question: how hard would it be to integrate jcodec into red5-screenshare without rewriting the whole encoding and packaging process?
Keep in mind that I'd like to decode this video on the client side by using Adobe Flash Player.
Red5-screenshare: https://code.google.com/p/red5-screenshare/source/checkout
Jcodec: https://github.com/jcodec/jcodec
Also, could someone please give me some hints as to where I could find some info on how to approach this problem? I'm not very familiar with video codecs, encoding, decoding or packaging frames for streaming, so I'd appreciate some learning resources on that.
That would be all, thank you and have a good day!
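For orientation, here is a rough sketch of what driving jcodec's encoder per captured frame might look like. It is only a sketch: the exact H264Encoder method signatures have shifted between jcodec releases, the frame dimensions are placeholders, and the RGB-to-YUV conversion is only noted in a comment. Also note the encoder emits raw Annex-B H.264; to play it in Flash Player over RTMP you would still have to package it as FLV/AVC video tags (an AVC sequence header carrying the SPS/PPS, followed by the NAL units of each frame), much like red5-screenshare already packages Screen Video frames.

    import java.nio.ByteBuffer;

    import org.jcodec.codecs.h264.H264Encoder;
    import org.jcodec.common.model.ColorSpace;
    import org.jcodec.common.model.Picture;

    public class ScreenH264Sketch {

        private final H264Encoder encoder = new H264Encoder();
        // Reused output buffer; sized generously for worst-case frames.
        private final ByteBuffer out = ByteBuffer.allocate(1920 * 1080 * 3);

        public Picture allocateFrame(int width, int height) {
            // The encoder expects YUV 4:2:0 input; convert your captured RGB
            // pixels into this Picture before each call to encodeFrame().
            return Picture.create(width, height, ColorSpace.YUV420);
        }

        public ByteBuffer encodeFrame(Picture yuvFrame) {
            out.clear();
            // Returns the encoded Annex-B bitstream for this frame in 0.1.x;
            // newer jcodec versions return an EncodedFrame wrapper instead.
            return encoder.encodeFrame(yuvFrame, out);
        }
    }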

Related

Bandwidth estimation / calculation

I'm working on a new project (a web page) where people can upload their creations in video format. Before starting to code, I need to make a plan and some calculations. I know video streaming "eats" a lot of bandwidth, which is why I need to estimate the bandwidth for each video at an acceptable quality.
I know streaming a full HD (1920x1080) video "eats" roughly 7-9 megabits per second per client. I'm trying to find the best trade-off: less bandwidth with acceptable quality.
What's the best acceptable quality (resolution)? 860p or higher?
I found a pretty good company here in my country where I could colocate a dedicated server with a 1 Gbps uplink for an acceptable price. How many video streams could that bandwidth support?
The best acceptable quality depends on the display and the resizing algorithms used... a well-encoded 360p video will often look great to most people on a large 1080p display if it's upscaled well. On a 640p phone display, 160p might look great to most people.
It also depends immensely on the codec used, and greatly on the video content itself (high motion requires more bits to encode well)...
There is no real answer to this, and you haven't given a great starting point for even a rule-of-thumb answer. Sorry, but you'll need to encode and evaluate videos at different resolutions, bitrates, and codecs, and decide for yourself what quality loss is acceptable. "Best acceptable quality" is an entirely subjective question.
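To put rough numbers on the colocation part of the question: assuming the server really has a 1 Gbps uplink (that figure, the per-stream bitrate, and the headroom factor below are all assumptions you would adjust), a back-of-the-envelope calculation gives the ceiling on concurrent viewers.

    public class StreamCapacity {
        public static void main(String[] args) {
            double uplinkMbps = 1000.0;  // assumed 1 Gbps uplink
            double perStreamMbps = 8.0;  // ~7-9 Mbps per 1080p client, per the question
            double headroom = 0.75;      // keep ~25% spare for protocol overhead and bursts

            int maxStreams = (int) Math.floor(uplinkMbps * headroom / perStreamMbps);
            System.out.printf("Roughly %d concurrent %.0f Mbps streams on a %.0f Mbps uplink%n",
                    maxStreams, perStreamMbps, uplinkMbps);
            // Prints: Roughly 93 concurrent 8 Mbps streams on a 1000 Mbps uplink
        }
    }

Lowering the resolution or using a more efficient codec raises that ceiling roughly in proportion to the bitrate saved, which is why the per-stream bitrate is the number worth pinning down first.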

Which encoding produces lower network traffic for video-only streaming: VP8 or JPEG?

I'm making an open source project and I'm using GStreamer. I want to capture the input from a camera and transmit it to another IP address. Which is better for this: vp8enc or jpegenc? And what settings should I use?
Thanks in advance.
The codec you choose doesn't matter much for achieving very low bandwidth usage; it's mostly a matter of setting a very low bitrate in the encoder. Of course, this trades off the quality of the resulting video. Of the two codecs cited, VP8 will give better image quality at the same bitrate. Just keep in mind to use the right parameters for the encoder (using intra-only mode in VP8 could result in image quality worse than Motion JPEG).
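To make the bitrate point concrete, here are two sender pipelines sketched with gst-launch-1.0. Element and property names are from GStreamer 1.x; the camera source, host, port, bitrate, and quality values are placeholders to adjust for your setup.

    # VP8 over RTP/UDP, capped at ~500 kbps (deadline=1 favours encoding speed)
    gst-launch-1.0 v4l2src ! videoconvert ! \
        vp8enc target-bitrate=500000 deadline=1 ! rtpvp8pay ! \
        udpsink host=192.168.0.10 port=5000

    # Motion JPEG over RTP/UDP for comparison; jpegenc only exposes a quality
    # knob, so measure the bitrate it actually produces
    gst-launch-1.0 v4l2src ! videoconvert ! \
        jpegenc quality=40 ! rtpjpegpay ! \
        udpsink host=192.168.0.10 port=5000

Running both at the same resolution and comparing the measured bandwidth against the perceived quality is the quickest way to see the VP8 advantage for yourself.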

What's the best way to live stream the iPhone camera to a media server?

According to "What Techniques Are Best To Live Stream iPhone Video Camera Data To a Computer?" it is possible to get compressed data from the iPhone camera, but from what I've been reading in the AVFoundation reference, you only get uncompressed data.
So the questions are:
1) How to get compressed frames and audio from iPhone's camera?
2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?
Any help will be really appreciated.
Thanks.
You most likely already know....
1) How to get compressed frames and audio from iPhone's camera?
You cannot do this. The AVFoundation API has prevented this from every angle. I even tried named pipes and some other sneaky Unix foo. No such luck. You have no choice but to write it to a file. In your linked post, a user suggests setting up the callback to deliver encoded frames. As far as I am aware, this is not possible for H.264 streams. The capture delegate will deliver images encoded in a specific pixel format; it is the movie writers and AVAssetWriter that do the encoding.
2) Is encoding uncompressed frames with ffmpeg's API fast enough for real-time streaming?
Yes, it is. However, you will have to use libx264, which gets you into GPL territory. That is not exactly compatible with the App Store.
I would suggest using AVFoundation and AVAssetWriter for efficiency reasons.
I agree with Steve. I'd add that if you try it with Apple's API, you're going to have to do some seriously nasty hacking. By default, AVAssetWriter spends a second before spilling its buffer to the file, and I haven't found a way to change that with settings. The way around it seems to be to force small file writes and file closes by using multiple AVAssetWriters, but that introduces a lot of overhead. It's not pretty.
Definitely file a new feature request with Apple (if you're an iOS developer). The more of us that do, the more likely they'll add some sort of writer that can write to a buffer and/or to a stream.
One addition I'd make to what Steve said on the x264 GPL issue: I believe you can get a commercial license for x264, which avoids the GPL but of course costs money. That means you could still use it, get pretty decent results, and not have to open up your own app's source. It's not as good as an augmented Apple API using their hardware codecs, but it's not bad.

What Techniques Are Best To Live Stream iPhone Video Camera Data To a Computer?

I would like to stream video from an iPhone camera to an app running on a Mac. Think of it sort of like video chat, but only one way: from the device to a receiver app (and it's not video chat).
My basic understanding so far:
You can use AVFoundation to get 'live' video camera data without saving to a file, but it is uncompressed data, so I'd have to handle compression on my own.
There's no built-in AVCaptureOutput support for sending to a network location; I'd have to work this bit out on my own.
Am I right about the above or am I already off-track?
Apple Tech Q&A 1702 provides some info on saving individual frames as images. Is that the best way to go about this, just saving off 30 frames per second and then using something like ffmpeg to compress them?
There's a lot of discussion of live streaming to the iPhone, but far less info from people who are sending live video out of it. I'm hoping for some broad strokes to get me pointed in the right direction.
You can use AVCaptureVideoDataOutput and a sampleBufferDelegate to capture raw frames, then you just need to stream them over the network. AVFoundation provides an API to encode frames to local video files, but doesn't provide one for streaming to the network. Your best bet is to find a library that streams frames over the network. I'd start with ffmpeg; I believe libavformat supports RTSP, so look at the ffserver code.
Note that AVCaptureVideoDataOutput hands you uncompressed pixel buffers, so you will have to compress the frames yourself before sending them, without the benefit of Apple's hardware encoding.
This depends a lot on your target resolution and on the frame rate performance you are targeting.
From an abstract point of view, I would probably have a capture thread filling a buffer directly from AVCaptureOutput, and a communications thread that, every x milliseconds, sends the buffer (padded if need be) to a previously specified host and then zeroes it out; a rough sketch of that pattern is below.
Once you have the initial data transfer working, I would aim for 15 fps at the lowest resolution, then work my way up until the buffer overflows before the communications thread can transmit it. That will require balancing image resolution, buffer size (probably dependent on GSM, and soon CDMA, frame sizes), and finally the maximum rate at which you can transmit that buffer.
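The two-thread arrangement described above is an ordinary producer/consumer pattern. Here is a minimal, platform-neutral sketch of it in Java (on the iPhone you would express the same idea with GCD queues; the queue depth, send interval, and sendToHost stub are invented for illustration).

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class CaptureAndSendSketch {
        // Bounded queue: if the sender can't keep up, the capture side sees
        // offer() fail instead of memory growing without limit.
        private final BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(30);

        /** Called by the capture thread for every grabbed frame. */
        public void onFrameCaptured(byte[] frameBytes) {
            if (!frames.offer(frameBytes)) {
                // Buffer overflow: drop the frame, or back off on resolution / frame rate.
            }
        }

        /** The communications thread: drain and transmit every x milliseconds. */
        public void runSender(long intervalMillis) throws InterruptedException {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] frame = frames.poll(intervalMillis, TimeUnit.MILLISECONDS);
                if (frame != null) {
                    sendToHost(frame); // placeholder for the actual network write
                }
            }
        }

        private void sendToHost(byte[] frame) {
            // Open a socket to the previously specified host and write the bytes.
        }
    }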

Lossy compressed format to raw PCM on iPhone

I want to start with an audio file of a modest filesize, and finish with an array of unsigned chars that can be loaded into OpenAL with alBufferData. My trouble is the steps that happen in the middle.
I thought AAC would be the way to go, but according to Apple representative Rincewind (circa 12/08):
Currently hardware assisted compression formats are not supported for decode on iPhone OS. These formats are AAC, MP3 and ALAC.
Using ExtAudioFile with a client format set generates PERM errors, so he's not making things up.
So, brave knowledge-havers, what are my options here? Package the app with .wav files and just suck up having a massive download? Write my own decoder?
Any links to resources or advice you might have would be greatly appreciated.
Offline rendering of compressed audio is now possible; see Apple Technical Q&A QA1562.
While Vorbis and the others suggested are good, they can be fairly slow on the iPhone as there is no hardware acceleration.
One codec that is natively supported (but has only a 4:1 compression ratio) is ADPCM, a.k.a. IMA4. It's handled through the ExtAudioFile interface and is only the tiniest bit slower than loading .wav files directly.
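To give a feel for what that 4:1 ratio buys you, here is a quick size estimate (the one-minute duration and CD-quality PCM source are just example figures, and the IMA4 number ignores the small per-block header overhead).

    public class AudioSizeEstimate {
        public static void main(String[] args) {
            int sampleRate = 44_100;   // Hz, CD quality (example value)
            int bytesPerSample = 2;    // 16-bit PCM
            int channels = 2;          // stereo
            int seconds = 60;          // one minute of audio (example value)

            long pcmWav = (long) sampleRate * bytesPerSample * channels * seconds;
            long ima4   = pcmWav / 4;  // IMA4/ADPCM is roughly 4:1 on 16-bit PCM

            System.out.printf("1 min 16-bit 44.1 kHz stereo WAV: %,d bytes%n", pcmWav);
            System.out.printf("Same audio as IMA4 (approx.): %,d bytes%n", ima4);
        }
    }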
There are some good open source audio decoding libraries that you could use:
mpg123
FAAC
Both are licensed under LGPL, meaning you can use them in closed source applications provided modifications to the library, if any, are open sourced.
You could always make your wave files mono and hence cut your file size in half, but that might not be the best alternative for you.
Another option for doing your own decoding would be Ogg Vorbis. There's even a low-memory version of their library for integer processors called "Tremor".