Decreasing buffer time for HTML5 video streaming

I'm pushing a live Ogg Theora/Vorbis stream to the browser using Icecast and displaying it in an HTML5 <video> element. The only problem is that the browser insists on keeping a video buffer for smooth playback, which adds to the latency. Chrome seems to buffer about three seconds, while Firefox buffers about five. Is there any way to decrease this buffer length and make the playback closer to real-time?

Related

HLS plays only audio, but video is black

I have an application that uses the nginx RTMP module for streaming, configured with hls_time 1s. Sometimes when I refresh the HLS player, it plays only audio and the video shows black. I was streaming with OBS using a 2-second keyframe interval.
I saw that someone said, "Segments will be clipped at keyframes, so unless a keyframe exists every second, hls_time will not be accepted." So I changed the keyframe interval to 1 second and it worked fine.
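For reference, a rough sketch of how the pieces line up; the port, application name, and paths below are placeholders, and note that the nginx-rtmp-module calls this directive hls_fragment, while hls_time is the equivalent option in ffmpeg's HLS muxer. Whichever tool cuts the segments, the configured segment length only takes effect if the encoder emits a keyframe at least that often:

    # nginx.conf (nginx-rtmp-module) -- sketch only, names and paths are placeholders
    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                hls on;
                hls_path /tmp/hls;
                # 1-second segments, which only works if the encoder (OBS here)
                # is configured with a keyframe interval of 1 second or less
                hls_fragment 1s;
            }
        }
    }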

How to delay video streaming in MPMoviePlayer

I am playing a video stream from URLs in MPMoviePlayer. When I click the play button, JSON parsing happens and the video is played after a buffer. All the videos are 30 seconds long. After the first buffer the video plays for 5-6 seconds and stops, then buffers again and plays, and this continues until the 30th second, so the viewers get disturbed a lot. Is there any way to overcome this? Thanking you.
One shortcut idea is to put
sleep(4);
before starting the player, so that more of the stream is buffered before playback begins.
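A less arbitrary version of that idea, sketched below, is to call prepareToPlay and only start playback once the player reports that enough has been buffered to play through. The notification name and load-state flags are from the MPMoviePlayerController API; self.player and the URL are placeholders.

    #import <MediaPlayer/MediaPlayer.h>

    // Sketch: buffer first, then play once the player thinks it can play through.
    - (void)startStream {
        self.player = [[MPMoviePlayerController alloc]
            initWithContentURL:[NSURL URLWithString:@"http://example.com/video.mp4"]];
        self.player.shouldAutoplay = NO;

        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(loadStateDidChange:)
                                                     name:MPMoviePlayerLoadStateDidChangeNotification
                                                   object:self.player];
        [self.player prepareToPlay];   // starts buffering without starting playback
    }

    - (void)loadStateDidChange:(NSNotification *)note {
        // MPMovieLoadStatePlaythroughOK: the player estimates it can play to the
        // end without stalling again.
        if (self.player.loadState & MPMovieLoadStatePlaythroughOK) {
            [self.player play];
        }
    }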

Record video in a cocos2d iOS game: low resolution for video, high resolution for normal gameplay

I am using cocos2d's CCRenderTexture to record video of my game, but recording video at Retina-display resolution costs a lot of CPU and memory, so I want to use a low resolution for the video recording while keeping the Retina resolution for normal gameplay. Is that possible?
I've tried [[CCDirector sharedDirector] enableRetinaDisplay:NO]; while recording, but it doesn't seem to work; the generated output is totally wrong.
This is not feasible.
You'd have to render each frame twice: once to the screen, then again onto the render texture. A serious drop in framerate is inevitable even if you lower the resolution of the render texture somehow.
The reason is simply that you'll also have to write each render texture to flash memory as an image, which is extremely slow, and you'll end up with a huge amount of data. If each (PNG/JPG) image file ends up a reasonably small 50 KB, then one second of recorded footage at 60 fps consumes 3 megabytes of flash memory, and one minute would be around 180 megabytes.
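For what it's worth, the reduced-resolution double render itself would look roughly like the sketch below (assuming cocos2d 2.x; the half-size factor and file name are placeholders, and the framerate and storage costs described above still apply):

    #import "cocos2d.h"

    // Sketch: draw the running scene into a half-resolution render texture each
    // frame, in addition to the normal on-screen render.
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    CCRenderTexture *rt = [CCRenderTexture renderTextureWithWidth:winSize.width / 2
                                                           height:winSize.height / 2];

    CCNode *scene = [[CCDirector sharedDirector] runningScene];
    [rt begin];
    scene.scale = 0.5f;          // shrink the scene to fit the smaller texture
    [scene visit];
    scene.scale = 1.0f;
    [rt end];

    // Writing each frame out as an image is the slow, storage-hungry part.
    [rt saveToFile:@"frame_0001.png" format:kCCImageFormatPNG];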
To record a demo of your game, most games follow the simple principle of recording the user input and then playing it back as if the user had issued those commands. This requires careful planning, no breaking changes when updating the app (or else old demos are invalidated), and no non-deterministic randomizers (i.e. ones seeded with the current time).
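As a rough illustration of that record-the-input approach (all names here are made up for the sketch; playback would walk the same array and re-dispatch each event at its recorded offset):

    #import <UIKit/UIKit.h>

    // Sketch: store timestamped touch positions so the same inputs can be
    // replayed later at the same offsets. All names are placeholders.
    @interface DemoRecorder : NSObject
    @property (nonatomic, strong) NSMutableArray *events;
    @property (nonatomic, assign) NSTimeInterval startTime;
    - (void)recordTouchAt:(CGPoint)point;
    @end

    @implementation DemoRecorder
    - (instancetype)init {
        if ((self = [super init])) {
            _events = [NSMutableArray array];
            _startTime = [NSDate timeIntervalSinceReferenceDate];
        }
        return self;
    }

    - (void)recordTouchAt:(CGPoint)point {
        [self.events addObject:@{ @"t": @([NSDate timeIntervalSinceReferenceDate] - self.startTime),
                                  @"x": @(point.x),
                                  @"y": @(point.y) }];
    }
    @end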
If you need to record a demo for making a trailer video, there are plenty of screen-grabbing solutions around. Some even specialize in grabbing iPhone video, either from the device (usually requiring a source code/library component) or from the Simulator.
You should check out the Kamcord SDK for recording gameplay; see http://kamcord.com/
Kamcord has built-in gameplay video and audio recording technology for iOS. It allows you, the game developer, to capture gameplay videos with an API. Your users can then replay and share these gameplay videos via YouTube, Facebook, Twitter, and email.

For how long does AVPlayer continue to buffer from a URL after a pause?

I was reading the AVPlayer class documentation and I couldn't find the answer to my question.
I'm playing streamed audio from the Internet in my iPhone app, and I'd like to know whether, after a [myAVPlayer pause]; call, myAVPlayer will keep downloading the audio file in the background for a long time.
If the user pushes the "Pause" button, invoking [myAVPlayer pause];, and then leaves the app, will myAVPlayer keep downloading a large amount of data?
I'm concerned about this when the user is on 3G Network.
I am faced with the same question and have done some experimentation. My observations are only valid for video, but if they also hold for audio, then AVPlayer will try to buffer around 30 seconds of content when you press pause. If you have access to the web server, you can run tcpdump/wireshark and see how long after you press pause the server continues to send data.
You can manage how long AVPlayer continues to buffer by setting preferredForwardBufferDuration on the player's currentItem. If you want to keep buffering to a minimum, set the value to 1; if you set it to 0, the player chooses an appropriate buffer duration automatically:
self.avPlayer.currentItem.preferredForwardBufferDuration = 1;
From Apple documentation: This property defines the preferred forward buffer duration in seconds. If set to 0, the player will choose an appropriate level of buffering for most use cases. Setting this property to a low value will increase the chance that playback will stall and re-buffer, while setting it to a high value will increase demand on system resources.
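A small sketch putting both answers together: cap the forward buffer on the item, and after a pause inspect loadedTimeRanges to see how much the player has actually buffered. The URL and the one-second cap are placeholders, and preferredForwardBufferDuration requires iOS 10 or later.

    #import <AVFoundation/AVFoundation.h>

    // Sketch: limit forward buffering, then check what is buffered after a pause.
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:
        [NSURL URLWithString:@"https://example.com/stream.m3u8"]];
    item.preferredForwardBufferDuration = 1.0;   // 0 lets the player decide on its own

    AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
    [player play];
    // ... later, when the user taps Pause ...
    [player pause];

    // How much has actually been downloaded around the playhead?
    for (NSValue *value in item.loadedTimeRanges) {
        CMTimeRange range = [value CMTimeRangeValue];
        NSLog(@"buffered %.1fs starting at %.1fs",
              CMTimeGetSeconds(range.duration),
              CMTimeGetSeconds(range.start));
    }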

Audio Toolbox playback only plays part of output buffers

I'm working on a project which is using Audio Toolbox for recording and playback of PCM data, and I'm having trouble with playback. In the simulator, I can record and play audio just fine, using a custom class to handle storing and sourcing PCM bytes for the recording and playback buffers as needed. On device (iPhone (3.0.1) and iPod 2G (3.1.2)) recording works fine, the audio files produced are correct, but in-app playback stutters, like it's only playing part of each playback buffer. My buffers are one second long, and I've got 3 buffers, which are preloaded before playback starts; stuttering occurs during those first 3 seconds as well, which I think rules out a latency problem.
I've written Audio Toolbox code before that worked, and I'm not doing anything strange here except that I'm using my own class to source PCM data instead of AudioFileReadBytes().
I know the data that comes out of my source is good, because it plays correctly in the simulator and it writes to disk as a correct audio file.
I've played around with sample rates a bit; I normally use 11025 Hz sampling to cut down on file size (it's all voice, so it sounds fine). At 44100 Hz, with the same buffer size, I get the same stuttering problem, but the audio segments come a lot faster, about four times faster. That's why I think it's only playing part of each buffer.
The only reason I can think of for it playing only part of each buffer is a latency problem, i.e. the Audio Toolbox code running out of full buffers while I'm still filling an empty one. But that would cause it to play the preloaded buffers correctly and then start stuttering, and that doesn't happen; it stutters the whole way through.
I've tried humongous buffers, like 10 MB, and I just get silence and a single stutter of audio at the end of playback. I've also tried preloading more buffers than normal, like 10 seconds' worth of audio, and it behaves the same.
The audio session is set up with AVAudioSession rather than the Audio Toolbox calls, and it's set to the Playback category for playback.
I have no idea how to try and attack this problem, it makes no sense to me that it works fine on the simulator but not the device.
Code for the playing callback and the set up for the audio queue services: http://pastebin.com/mfaa546c
It turns out that the use of NSData's getBytes:length: was causing the problem: the buffer filled with that method played back incorrectly. However, doing a memcpy from that buffer to another buffer prevented the problem.
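In other words, the working version of the playback callback ends up looking roughly like the sketch below: copy the PCM bytes into the queue's buffer with memcpy and set mAudioDataByteSize to exactly the number of bytes copied. MyPCMSource and its readBytesUpTo: method stand in for the custom data-source class mentioned in the question.

    #import <AudioToolbox/AudioToolbox.h>
    #import <Foundation/Foundation.h>

    // Placeholder for the custom PCM source class described in the question.
    @interface MyPCMSource : NSObject
    - (NSData *)readBytesUpTo:(NSUInteger)maxLength;
    @end

    // Sketch of the AudioQueue output callback.
    static void PlaybackCallback(void *inUserData,
                                 AudioQueueRef inQueue,
                                 AudioQueueBufferRef inBuffer) {
        MyPCMSource *source = (__bridge MyPCMSource *)inUserData;

        // Pull at most one buffer's worth of PCM from the custom source.
        NSData *chunk = [source readBytesUpTo:inBuffer->mAudioDataBytesCapacity];
        if (chunk.length == 0) {
            AudioQueueStop(inQueue, false);
            return;
        }

        // Copy into the queue's buffer and record how many bytes are valid;
        // a mismatch here is one way to hear only part of each buffer.
        memcpy(inBuffer->mAudioData, chunk.bytes, chunk.length);
        inBuffer->mAudioDataByteSize = (UInt32)chunk.length;

        AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
    }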