Bandwidth estimation / calculation

I'm working on a new project (a web page) where people can upload their creations in video format. Before starting to code, I need to make a plan and run some calculations. I know video streaming "eats" a lot of bandwidth, which is why I need to calculate the bandwidth for each video at an acceptable quality.
I know streaming a full HD (1080p, i.e. 1920×1080) video "eats" 7~9 megabits/s per client. I'm trying to find the best solution: less bandwidth at an acceptable quality.
What's the best acceptable quality (dimension)? 860p or higher?
I found a pretty good company here in my country where I could colocate a dedicated server with 1 Gbps of bandwidth at an acceptable price. How many simultaneous video streams could that bandwidth support?

The best acceptable quality depends on the display and the resizing algorithms used... a well-encoded 360p video will often look great to most people on a large 1080p display if it's upscaled well. On a 640p phone display, 160p might look great to most people.
It also depends immensely on the codec used, as well as on the video content (high motion requires more bits to encode well)...
There is no real answer to this, and you haven't given a great starting point for even a rule-of-thumb answer. Sorry, but you'll need to encode and evaluate videos at different dimensions, bitrates, and codecs and determine for yourself what quality loss is acceptable. "Best acceptable quality" is an entirely subjective question.
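That said, once you do settle on a per-stream bitrate, the raw capacity math for the uplink is simple division. A back-of-envelope sketch in Swift (every number here is an assumption, and it reads your "1 GB" as a 1 Gbps port):

    // Rough concurrency estimate; all values here are assumptions.
    let linkBps    = 1_000_000_000.0   // a 1 Gbps colocated uplink
    let headroom   = 0.75              // keep ~25% spare for overhead and bursts
    let streamBps  = 5_000_000.0       // e.g. a 5 Mbps 720p H.264 stream
    let maxStreams = (linkBps * headroom / streamBps).rounded(.down)
    print(maxStreams)                  // 150.0 concurrent viewers

Halving the per-stream bitrate roughly doubles the viewer count, which is why the encoding experiments are worth the effort.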

Related

Possible to integrate jcodec's h264 encoder into red5-screenshare?

I'm trying to minimize red5-screenshare's bandwidth footprint by using jcodec's H.264 encoder. The screenvideo codec takes up very little upload bandwidth, but only when used in 128-color mode. When used in full 24-bit RGB color mode it requires at least 5-10 Mbps on a lower-resolution screen, which is unacceptable. I'm hoping that by using H.264 I could at least halve that upload bandwidth requirement.
To ask an actual question: would it be too hard to integrate jcodec into red5-screenshare without having to rewrite the whole encoding and packaging process?
Keep in mind that I'd like to decode this video on the client side by using Adobe Flash Player.
Red5-screenshare: https://code.google.com/p/red5-screenshare/source/checkout
Jcodec: https://github.com/jcodec/jcodec
Also, could someone please give me some hints as to where I could find some info on how to approach this problem? I'm not very familiar with video codecs, encoding, decoding or packaging frames for streaming, so I'd appreciate some learning resources on that.
That would be all, thank you and have a good day!

Is there a way to select the bit rate while using AVPlayer for HTTP live audio streaming?

I'm using AVPlayer to stream audio content delivered in two quality formats.
The problem is that when switching from the lower format to the higher one (done automatically by the framework when Wi-Fi is available), there is a delay in playback.
Is there a way to manually select a desired quality in order to prevent that delay?
It's possible now in iOS 8.
Check out preferredPeakBitRate on AVPlayerItem.
Following copied from Apple's documentation:
The desired limit, in bits per second, of network bandwidth consumption for this item.
SWIFT: var preferredPeakBitRate: Double
OBJECTIVE-C: @property(nonatomic) double preferredPeakBitRate
Set preferredPeakBitRate to non-zero to indicate that the player should attempt to limit item playback to that bit rate, expressed in bits per second.
If network bandwidth consumption cannot be lowered to meet the preferredPeakBitRate, it will be reduced as much as possible while continuing to play the item.
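A minimal usage sketch (the URL is a placeholder and 96 kbps is just an example cap):

    import AVFoundation

    // Cap HLS playback at the lower variant's bitrate so the player
    // never switches up; the URL and the cap are assumed example values.
    let url = URL(string: "https://example.com/audio/master.m3u8")!
    let item = AVPlayerItem(url: url)
    item.preferredPeakBitRate = 96_000   // bits per second; 0 means no preference

    let player = AVPlayer(playerItem: item)
    player.play()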
Update: This answer was accurate at the time, for iOS 4. For iOS 8 and later, see the preferredPeakBitRate answer above.
I've researched this very question myself and have not found an answer, which means I'm pretty positive there is no way to do this. The Apple docs don't always give all the details of what you can do, but if you look at all the available properties, methods, etc., you will find that there is nothing that lets you tweak the stream.
I think this is the whole point of HLS. Apple wants iPhone users to have the best streaming experience possible. If they gave the developer controls to tweak which stream is being used, that would defeat the purpose. The system knows best when it comes to switching streams. If the phone cannot handle the additional bandwidth then it won't (or shouldn't) switch to the higher stream. Some things that I have found that you may want to look at...
Are your files chunked into 10-second increments? If they're longer than that, you might want to shorten them.
Some file conversion programs don't get the bitrates exactly right, and if that's the case your phone may think it has the bandwidth for, say, a 96 kbps feed when in reality your feed is 115 kbps. Take a look at the accepted answer in this post: iPhone - App Rejected again, HTTP Live Streaming 64kbps baseline feed
Use Pantomime, a lightweight framework for iOS, OS X and tvOS that can read and parse HTTP Live Streaming manifests.
Pantomime

What is the smallest audio file format?

I know this is not a specific programming question, but I hope someone can give me a suggestion. My applications (iPhone and BlackBerry applications) use a lot of audio files. I need a solution for my applications in order to save some space.
Is it right that .aac is the most suitable audio format for the iPhone? Is it the smallest one? Is it also suitable for BlackBerry?
Is there any way to make the audio files smaller without losing a lot of sound quality? How about the bitrate, sampling frequency and channels? Do they really matter?
AAC is a good format for the iPhone. iOS is optimized to play AAC.
Yes, things like bitrate, sampling frequency and number of channels are all factors in the audio file's size.
What you should do is take your audio and convert it to different formats with different settings and then just play them on a real device to see if the quality is acceptable.
Sorry, there is no simple answer. Experiment.
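For a rough sense of how those settings translate into file size, the arithmetic is simple (example numbers assumed):

    // Uncompressed PCM: sampleRate * channels * bytesPerSample * seconds
    let pcmBytes = 44_100.0 * 2 * 2 * 60   // ~10.1 MB per minute of 16-bit stereo
    // Compressed (AAC, AMR, ...): bitRate / 8 * seconds
    let aacBytes = 64_000.0 / 8 * 60       // ~0.46 MB per minute at 64 kbps

Dropping to mono or halving the sample rate roughly halves the PCM size; once you compress, the bitrate alone determines the size.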
Depends on what type of audio you're encoding. For speech, AMR is supported by all major smartphones, and will generally give the smallest file sizes. Quality degradation is noticeable enough that it's not suitable for music, but it's optimized for voice recording (the voice notes app on the BlackBerry uses it as its file format), so it'll give you very nice results with spoken audio.

smallest format for videos in an iphone app

I have a lot of videos I would like to embed in an app; currently I'm just streaming them using a UIWebView browser I set up.
I know there are formats available for emailing videos, where a video can be something like 6 MB or less.
What is the best way to do this for an iPhone app, keeping the picture quality to some extent with smaller file sizes?
Thanks.
The file format (or container) is not what determines the file size; the bitrate of the video stream used when compressing does. Since you're going to use these in an iPhone app, I would go with .mov, since it's Apple's proprietary format.
As for compression, it isn't really a topic that can be explained in one post, but long story short, the bitrate must be chosen according to the resolution of the video being compressed. Go for H.264 multi-pass encoding, start with a bitrate of 1000 kbps and see if you're satisfied with the results, then keep pushing the bitrate lower and lower until you get the most satisfying results with the lowest file size. It's really just a matter of finding the right balance, so it's going to take a few tries.
For audio, use AAC with a sample rate of 44.1 kHz and a bitrate of 128 kbps if there is music in the audio, or a sample rate of 32 kHz and a bitrate of 96 kbps, which is pretty decent for when there's only voice/narration, or even lower, as long as you're happy with the results.
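To see how a candidate bitrate maps to file size while you experiment, the estimate is just bitrate times duration (example numbers assumed):

    // size ≈ (video bitrate + audio bitrate) / 8 * duration
    let videoBps  = 1_000_000.0   // the 1000 kbps starting point above
    let audioBps  = 128_000.0     // 128 kbps AAC
    let seconds   = 120.0         // a 2-minute clip (assumed)
    let megabytes = (videoBps + audioBps) / 8 * seconds / 1_048_576
    print(megabytes)              // ≈ 16.1 MB; lower videoBps and re-check quality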
I explained this process in an answer for a similar question - you can read it here.
Hope this helps! :)

iPhone: CPU power to do DSP/Fourier transform/frequency domain?

I want to analyze mic audio on an ongoing basis (not just a snippet or prerecorded sample), display a frequency graph, and filter out certain aspects of the audio. Is the iPhone powerful enough for that? I suspect the answer is yes, given the Google and iPhone voice recognition, Shazam and other music recognition apps, and guitar tuner apps out there. However, I don't know what limitations I'll have to deal with.
Anyone play around with this area?
Apple's sample code aurioTouch has a FFT implementation.
The apps that I've seen do some sort of music/voice recognition need an internet connection, so it's highly likely that they just do some sort of feature extraction on the audio and send those features via HTTP so the recognition happens on the server.
In any case, frequency graphs and filtering were being done on far lesser CPUs a dozen years ago. The iPhone should have no problem.
"Fast enough" may be a function of your (or your customer's) expectations on how much frequency resolution you are looking for and your base sample rate.
An N-point FFT is on the order of N*log2(N) computations, so if you don't have enough MIPS, reducing N is a potential area of concession for you.
In many applications the sample rate is non-negotiable, but when it isn't, lowering it is another possibility.
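To put rough numbers on that (window size, sample rate and overlap are all assumed here):

    import Foundation

    // An N-point FFT costs on the order of N * log2(N) operations.
    let n             = 1024.0
    let opsPerFFT     = n * log2(n)                // ≈ 10,240
    let fftsPerSecond = 44_100.0 / n * 2           // ~86 windows/s at 50% overlap
    let opsPerSecond  = opsPerFFT * fftsPerSecond  // ≈ 0.9 million ops/s

Even with generous window sizes, a continuous live FFT is a tiny load for a modern phone.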
I made an app that calculates the FFT live
http://www.itunes.com/apps/oscope
You can find my code for the FFT on GitHub (although it's a little rough)
http://github.com/alexbw/iPhoneFFT
Apple's new iPhone OS 4.0 SDK allows for built-in computation of the FFT with the Accelerate framework, so I'd definitely start working with the new OS if this is a central part of your app's functionality.
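For reference, here's a minimal sketch of a forward real FFT using Accelerate's vDSP routines (the window size and the synthetic 440 Hz input are assumptions; in a real app the samples would come from the mic):

    import Accelerate
    import Foundation

    let n = 1024                                 // window size (assumed)
    let log2n = vDSP_Length(10)                  // log2(1024)
    let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!

    // Stand-in input: a 440 Hz tone sampled at 44.1 kHz.
    let samples = (0..<n).map { Float(sin(2 * Double.pi * 440 * Double($0) / 44_100)) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    real.withUnsafeMutableBufferPointer { r in
        imag.withUnsafeMutableBufferPointer { i in
            var split = DSPSplitComplex(realp: r.baseAddress!, imagp: i.baseAddress!)
            // Pack the interleaved real samples into split-complex form,
            // then run the real-to-complex FFT in place.
            samples.withUnsafeBytes {
                vDSP_ctoz($0.bindMemory(to: DSPComplex.self).baseAddress!, 2,
                          &split, 1, vDSP_Length(n / 2))
            }
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
        }
    }
    // real/imag now hold the (unnormalized) spectrum bins.
    vDSP_destroy_fftsetup(setup)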
You can't just port FFT code written in C into your app as-is... on older ARM devices, the Thumb compiler option complicates floating-point arithmetic, so you need to compile that code in ARM mode.