What is the faster (lower network traffic) encoding for video-only streaming: VP8 or JPEG?

I'm making an open source project using GStreamer. I want to capture the input from a camera and transmit it to another IP address. Which is faster for this: vp8enc or jpegenc? And what settings should I use?
Thanks in advance.

The codec you choose doesn't matter much for achieving very low bandwidth usage; it is just a matter of setting a very low bitrate in the encoder. But, of course, this trades off against the quality of the resulting video. Of the two codecs cited, VP8 will give the better image quality at the same bitrate. Just keep in mind to use the correct parameters for the encoder (using intra-only mode in VP8 could result in image quality worse than Motion JPEG).
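As a rough starting point, here is a minimal pair of gst-launch-1.0 pipelines for the VP8 case, assuming GStreamer 1.x, a V4L2 camera, and RTP over UDP; the bitrate, host and port are placeholder values to tune (target-bitrate is in bits per second, deadline=1 selects the fastest encoding mode):
gst-launch-1.0 v4l2src ! videoconvert ! vp8enc target-bitrate=500000 deadline=1 keyframe-max-dist=60 ! rtpvp8pay ! udpsink host=192.168.1.100 port=5000
And on the receiving machine:
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=VP8,payload=96,clock-rate=90000" ! rtpvp8depay ! vp8dec ! videoconvert ! autovideosink
For the Motion JPEG variant, swap the vp8enc and rtpvp8pay elements for jpegenc quality=30 ! rtpjpegpay on the sender, and use rtpjpegdepay ! jpegdec on the receiver.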

Related

set start bitrate for adaptive stream in libvlcsharp

I am using LibVlcSharp to play an adaptive video stream (HLS) transcoded by Azure Media Services with the EncoderNamedPreset.AdaptiveStreaming setting in my Xamarin.Forms app.
When viewing my video I notice that the first few (5-6) seconds are very blurry.
This is probably because the player starts with a "safe" low bitrate, and after a few chunks of data have been downloaded, it determines that the bandwidth is sufficient for a higher quality video to be displayed and it can switch up in quality.
These first few seconds of low quality bother me, and I would rather have it start at a higher bitrate.
I would be happy if the video would switch up sooner (<2 seconds), which would probably mean I need a different encoder setting that produces smaller "chunks" of video.
But maybe the easier solution is to set the starting bitrate to a higher value.
I have seen this done on other media players in the form of
ams.InitialBitrate = ams.AvailableBitrates.Max<uint>();
Does LibVlc have a similar option?
This is probably because the player starts with a "safe" low bitrate, and after a few chunks of data have been downloaded, it determines that the bandwidth is sufficient for a higher quality video to be displayed and it can switch up in quality.
Probably. You could verify that assumption by checking the bitrate at the start of the video and again when the quality improves, like this:
// Parsing with ParseNetwork populates track metadata for network streams.
// Debug.WriteLine requires using System.Diagnostics.
await media.Parse(MediaParseOptions.ParseNetwork);
// Log the bitrate of every track; run this once at startup and again after the quality switch.
foreach (var track in media.Tracks)
{
    Debug.WriteLine($"{nameof(track.Bitrate)}: {track.Bitrate}");
}
Does LibVlc have a similar option?
Try this
--adaptive-logic={,predictive,nearoptimal,rate,fixedrate,lowest,highest}
Adaptive Logic
You can pass it when constructing your LibVLC instance, like new LibVLC("--adaptive-logic=highest")
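Put together, a minimal LibVLCSharp setup passing that option might look like this sketch (the HLS URL is a placeholder for your Azure Media Services endpoint):
Core.Initialize();
var libVLC = new LibVLC("--adaptive-logic=highest");
var mediaPlayer = new MediaPlayer(libVLC);
// Placeholder manifest URL; substitute your own stream.
using var media = new Media(libVLC, new Uri("https://example.com/stream.m3u8"));
mediaPlayer.Play(media);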
See the docs for more info, or the forums. If that does not work, I'd set it server-side.

Video encoding quality, azure media services

The video quality of the first few seconds of videos encoded using Azure does not look very good: it is low quality and blurry. The quality improves dramatically afterwards.
What settings do you recommend to make sure the first frames are excellent quality? First impressions are very important.
Thanks,
Osama
It may be that you observe this kind of behavior if you have encoded your file for adaptive streaming. In this case, the output asset is composed of several files of different quality (from poor to high quality).
When you play an adaptive stream, the first parts downloaded come from the poor quality files; then the player detects your bandwidth and adapts the stream to a higher quality automatically. If you look at YouTube, Netflix or Dailymotion, you will observe exactly the same behavior. It allows the stream to adapt to the available bandwidth.
If you do not want an adaptive stream, you need to use a preset that encodes the file at a given bitrate / quality.
The list of supported presets is here: https://msdn.microsoft.com/en-us/library/azure/mt269960.aspx
Multiple bitrate presets are for adaptive streaming.
Single bitrate presets are for a single bitrate file.
For example, if your original video is 1080p, you can use this preset: https://msdn.microsoft.com/en-us/library/azure/mt269926.aspx
But be careful: users with low bandwidth may not be able to play your content smoothly.
Agree with Julien,
You are probably seeing the Adaptive Bitrate streaming ramp up when playing back the content. That's normal behavior for adaptive streaming.
You can eliminate some of the lower bitrates from the encoding preset or restrict them at the client side using the Azure Media Player SDK.
Keep in mind that you can always customize your encoding presets. We support a JSON schema for presets, and you can define your own based on the existing presets that we ship as "best practice" starting points.
I would recommend using the Azure Media Explorer tool to play around with different encoding settings and easily launch the player for preview. Go here to access the tool from our download page:
http://aka.ms/amse

Possible to integrate jcodec's h264 encoder into red5-screenshare?

I'm trying to minimize red5-screenshare's bandwidth footprint by using jcodec's h264 encoder. The screenvideo codec takes up very little upload bandwidth, but only when used in 128-color mode. When used in full 24-bit RGB color mode it requires at least 5-10 Mbps on a lower resolution screen, which is unacceptable. I'm hoping that by using h264 I'd at least halve that upload bandwidth requirement.
To ask an actual question, would it be too hard to integrate jcodec into red5's screenshare, without having to rewrite the whole encoding and packaging process?
Keep in mind that I'd like to decode this video on the client side by using Adobe Flash Player.
Red5-screenshare: https://code.google.com/p/red5-screenshare/source/checkout
Jcodec: https://github.com/jcodec/jcodec
Also, could someone please give me some hints as to where I could find some info on how to approach this problem? I'm not very familiar with video codecs, encoding, decoding or packaging frames for streaming, so I'd appreciate some learning resources on that.
That would be all, thank you and have a good day!

What Techniques Are Best To Live Stream iPhone Video Camera Data To a Computer?

I would like to stream video from an iPhone camera to an app running on a Mac. Think sorta like video chat but only one way, from the device to a receiver app (and it's not video chat).
My basic understanding so far:
You can use AVFoundation to get 'live' video camera data without saving to a file but it is uncompressed data and thus I'd have to handle compression on my own.
There's no built-in AVCaptureOutput support for sending to a network location; I'd have to work this bit out on my own.
Am I right about the above or am I already off-track?
Apple Tech Q&A 1702 provides some info on saving off individual frames as images - is this the best way to go about this? Just saving off 30fps and then something like ffmpeg to compress 'em?
There's a lot of discussion of live streaming to the iPhone but far less info from people who are sending live video out. I'm hoping for some broad strokes to get me pointed in the right direction.
You can use AVCaptureVideoDataOutput and a sampleBufferDelegate to capture compressed frames, then you just need to stream them over the network. AVFoundation provides an API to encode frames to local video files, but doesn't provide any for streaming to the network. Your best bet is to find a library that streams raw frames over the network. I'd start with ffmpeg; I believe libavformat supports RTSP, so look at the ffserver code.
Note that you should configure AVCaptureVideoDataOutput to give you compressed frames, so you avoid having to compress raw video frames without the benefit of hardware encoding.
This depends a lot on your target resolution and what type of frame rate performance you are targeting.
From an abstract point of view, I would probably have a capture thread to fill a buffer directly from AVCaptureOutput, and a communications thread to send and rezero the buffer (padded if need be) to a previously specified host every x milliseconds.
After you accomplish the initial data transfer, I would work on achieving 15 fps at the lowest resolution, then work my way up until the buffer overflows before the communication thread can transmit. That would require balancing image resolution, buffer size (probably dependent on GSM, and soon CDMA, frame sizes), and finally the maximum rate at which you can transmit that buffer.
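As a sketch of that two-thread capture/transmit pattern (shown in C# purely to illustrate the buffering design, not as iOS code; the queue depth, host and port are placeholders):
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading.Tasks;

class FrameStreamer
{
    // Bounded queue: if capture outruns the network, Add blocks here,
    // which is where the overflow described above would show up.
    private readonly BlockingCollection<byte[]> _buffer = new(boundedCapacity: 30);

    public void Start(string host, int port)
    {
        // Communications thread: drain the buffer and push frames to the receiver.
        Task.Run(() =>
        {
            using var client = new TcpClient(host, port);
            using var stream = client.GetStream();
            foreach (var frame in _buffer.GetConsumingEnumerable())
                stream.Write(frame, 0, frame.Length);
        });
    }

    // The capture thread calls this for every frame delivered by the camera API.
    public void OnFrameCaptured(byte[] frame) => _buffer.Add(frame);
}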

smallest format for videos in an iphone app

I have a lot of videos I would like to embed into an app, and am currently just streaming them using a UIWebView browser I set up.
I know there are formats available to email videos, where videos can be like 6 mb or less.
What is the best way to do this for an iPhone app, keeping the quality of the picture to some extent while getting smaller file sizes?
thanks
The file format (or container) is not what determines the file size; the bitrate of the video stream used when compressing does. Since you're going to use these for an iPhone app, I would go with .mov since it's Apple's proprietary format.
As for compression, it isn't really a topic that can be explained in one post, but long story short, the bitrate must be chosen according to the resolution of the video that's being compressed. Go for h264 multi-pass encoding, start with a bitrate of 1000 kbps, and see if you're satisfied with the results; then keep pushing the bitrate lower and lower until you get the most satisfying results with the lowest file size. It's really just a matter of finding the right balance, so it's going to take a few tries.
For audio, use AAC with a sample rate of 44.1 kHz and a bitrate of 128 kbps if there is music in the audio, or a sample rate of 32 kHz and a bitrate of 96 kbps, which is pretty decent for when there's only voice/narration, or even lower, as long as you're happy with the results.
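For illustration, a two-pass encode along those lines might look like this with ffmpeg (file names and bitrates are placeholders to tune; on Windows, replace /dev/null with NUL):
ffmpeg -y -i input.mov -c:v libx264 -b:v 1000k -pass 1 -an -f mp4 /dev/null
ffmpeg -i input.mov -c:v libx264 -b:v 1000k -pass 2 -c:a aac -b:a 128k -ar 44100 output.mov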
I explained this process in an answer for a similar question - you can read it here.
Hope this helps! :)