I have a library of videos (in .avi format). I'd like to make a webapp where I could watch these videos YouTube-style, but without having to keep all of them converted to FLV format all the time -- so, basically, I want to use the app to choose one to play and transcode it on the fly. I'd also like to be able to pause, seek, etc.
Is this possible? If so, what would an overview of the process be? I know ffmpeg can be used to convert AVI to FLV, but I'm not sure about the rest of the process. Would I create one thread that starts the transcoding, and then another that starts playing the output file as it gets transcoded? Or would that cause playback problems, since it would only be a partial file? Is there a better way of doing this?
For reference, I'll be using grails to write the webapp.
For on-the-fly transcoding, use VLC or FFmpeg together with Red5.
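For what it's worth, here is a minimal sketch of that idea (the path and the CGI framing are assumptions, not a tested setup). FLV, unlike MP4, is written front-to-back with no trailing index, so a player can start on a partial stream; the sketch spawns ffmpeg and relays its FLV output straight into the HTTP response:

#include <stdio.h>

int main(void) {
    /* "-f flv -" makes ffmpeg mux FLV to stdout; extra codec flags
       such as "-ar 44100" may be needed depending on the build. */
    FILE *flv = popen("ffmpeg -i /videos/chosen.avi -ar 44100 -f flv -", "r");
    if (!flv) return 1;

    /* CGI-style header, then raw FLV bytes as they are produced. */
    printf("Content-Type: video/x-flv\r\n\r\n");
    fflush(stdout);

    char buf[8192];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, flv)) > 0)
        fwrite(buf, 1, n, stdout);
    return pclose(flv);
}

Plain progressive download like this gives you pause for free, but seeking into material that has not been transcoded yet is exactly what a streaming server such as Red5 adds on top.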
Hello all, I have been working on a project for a while:
I have a non-standard MP4 video file I want to play off a server in an iPhone app (I am using Flash Builder to create it).
Due to a combination of server problems (the MIME type is not correctly identified and can't be changed) and iPhone limitations (e.g. not being able to force the player to play files with the wrong extension), I have had to set up a process that reads the file in, saves it locally, and then points the video player at the local file.
Although this sort of works, I am having an issue with some of the larger files (94 MB for a 17-minute video) and a slow server, which takes 120 seconds to transfer the whole file.
I thought that if you started playing the video, the transfer rate would be faster than the playback rate, so the video would play OK.
However, sometimes the video just crashes, which I am guessing is a result of the player reading beyond what has been written.
If the internal file were played using progressive download, I think it would not crash but would resume once more data had been read; however, I understand that progressive download is triggered by a URL beginning with http://.
Can you make an internal file play using progressive download? I know this would not normally be expected, since the system logically assumes a local file has already been fully downloaded.
Any help appreciated
Thanks
Toby
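As an aside, the rate assumption in the question can be sanity-checked with simple arithmetic; a throwaway sketch using only the figures quoted above (94 MB, 17 minutes, 120-second transfer):

#include <stdio.h>

int main(void) {
    double file_mb = 94.0;         /* file size from the post         */
    double video_s = 17.0 * 60.0;  /* playback duration in seconds    */
    double xfer_s  = 120.0;        /* time to transfer the whole file */

    double play_rate = file_mb / video_s; /* MB consumed per second   */
    double xfer_rate = file_mb / xfer_s;  /* MB received per second   */

    printf("playback needs %.3f MB/s, transfer delivers %.3f MB/s\n",
           play_rate, xfer_rate);

    /* If transfer were slower than playback, you would have to
       pre-buffer the cumulative deficit before starting playback. */
    if (xfer_rate < play_rate)
        printf("pre-buffer %.1f MB first\n",
               file_mb - xfer_rate * video_s);
    return 0;
}

With these numbers the transfer (roughly 0.78 MB/s) outpaces playback (roughly 0.09 MB/s) by a wide margin, so the crashes are more likely caused by bursty transfer or the player reading ahead than by the average rates.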
Try this to check whether the file download is complete or not:
HCDownload
It is very easy to use; you only need to implement its delegate methods.
Edit
Also see StitchedStreamPlayer.
These days I have been researching the software architecture for iPhone streaming (based on the MMS protocol).
As we know, in order to play back an MMS audio stream, we call libMMS to read WMA stream data from the remote media server, then call FFmpeg to decode the stream data from WMA format into a PCM data buffer, and finally enqueue the PCM data buffer into the iPhone's AudioQueue to generate the actual sound.
The introduction above just describes how iPhone streaming works. If we only need to implement this simple functionality, it is not difficult: just follow the introduction above and call libMMS, FFmpeg, and AudioQueue step by step to achieve the streaming function. Actually, I implemented that code last week.
But what I need is not only a simple streaming function! I need a software architecture that makes FFmpeg access libMMS just like it accesses the local filesystem!
Does anybody know how to hook the libMMS interfaces like mms_read/mms_seek onto FFmpeg filesystem interfaces like av_read_frame/av_seek_frame?
I think I have to answer my own question again this time…
After several weeks of research and debugging, I finally got to the truth.
Actually, we don't need to "hook" libMMS onto FFmpeg. Why? Because FFmpeg already has its own native MMS protocol module, "mms_protocol" (see mms_protocol.c in FFmpeg).
All we need to do is configure FFmpeg to enable the MMS module, like this (see config.h in FFmpeg):
#define ENABLE_MMS_PROTOCOL 1
#define CONFIG_MMS_PROTOCOL 1
After this configuration, FFmpeg adds the MMS protocol to its protocol list (which already contains the "local filesystem protocol"). As a result, FFmpeg can treat the "mms://hostserver/abc" media source just like a local media file. Therefore, we can still open and read the MMS media source using:
av_open_input_file();
av_read_frame();
just as we did with local media files before!
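To illustrate, here is a minimal sketch against the same-era libavformat API used above (the mms:// URL is the placeholder from this answer; error handling trimmed):

#include <libavformat/avformat.h>

int read_mms_stream(void)
{
    AVFormatContext *ctx = NULL;
    AVPacket pkt;

    av_register_all(); /* registers demuxers and protocols, MMS included */

    if (av_open_input_file(&ctx, "mms://hostserver/abc", NULL, 0, NULL) != 0)
        return -1;
    if (av_find_stream_info(ctx) < 0)
        return -1;

    while (av_read_frame(ctx, &pkt) >= 0) {
        /* hand pkt.data / pkt.size to the decoder here, then release */
        av_free_packet(&pkt);
    }

    av_close_input_file(ctx);
    return 0;
}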
By the way, in my FFmpeg version there are still many bugs in the libavformat module's MMS protocol handling. It took me a week to debug them, but I think it will take much less time for someone as smart as you :-)
I have a web service returning an .flv file, and it has to be played in an iPhone application. How do I play a .flv (Flash) file on the iPhone?
Has anyone faced this scenario? Is it possible to programmatically convert it to some other format and play it on the iPhone?
Thanks.
The iPhone doesn't support Flash content and, judging by Apple's official statements, never will (or at least not in the foreseeable future).
Converting the content to another format on the server side should be easy to do and would allow content playback on an iDevice.
Since the video is probably already H.264-encoded inside the FLV container, you may want to try FLV Extract on the server to avoid recompression:
http://www.videohelp.com/tools/FLV_Extract
Basically you just need to run it once for each of the videos on the server and keep the results around.
I would recommend setting up your web service to use something like ffmpeg ( http://www.ffmpeg.org/ ) to convert the .flv file to an MP4 file, which can be played directly from the iPhone's web browser.
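A rough sketch of that server-side step, driven from C purely for illustration (filenames hypothetical). If the .flv already carries H.264/AAC, "-vcodec copy -acodec copy" just remuxes the streams into MP4 with no recompression, which matters for the quality point raised below; otherwise drop the copy flags and let ffmpeg transcode:

#include <stdlib.h>

int main(void)
{
    /* Stream-copy remux: the container changes from FLV to MP4, but
       the compressed audio/video bits are untouched (assumes the FLV
       already contains iPhone-compatible H.264/AAC streams). */
    return system("ffmpeg -i uploaded.flv -vcodec copy -acodec copy uploaded.mp4");
}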
Pioto and Josaih are on the right track in suggesting that you should convert the video server-side using a tool like FFMpeg. As far as I know there is zero support for flv in any part of iOS, so you'd be unable to transcode it locally. Even if you could, it would make your users angry, since transcoding is a resource-intensive process that would kill their battery life and take a significant amount of time.
So, your solution is to transcode your videos to H.264 server-side. However, I'd caution against transcoding from FLV to H.264 if there are any other options available. If you have the original, uncompressed (or at least less-compressed) source video available, you'll get higher-quality video by transcoding that to H.264 instead. Each time lossy compression (e.g. Squeeze or H.264) is applied to a file, you lose some information and quality. If you've ever seen a 3rd- or 4th-generation copy of a VHS tape, you understand what I'm getting at.
Once you have an H.264-formatted video, you can play it on iOS. I'm not sure about the exact details of this, though.
You may be able to use ffmpeg or something on your server to transcode it to H.264. I'm not so sure you would really want to do that transcoding on the phone. Given Apple's current stance on Flash, this is probably your best option.
For FLV files, what I do is upload them to Google Drive and watch them from the Google Drive app.
I have come across some sample code where a set of images is added to make a QTMovie.
I am targeting OS X, but without using any QuickTime frameworks.
I have a vague idea of creating a file with some extension, embedding the appropriate metadata, and finding a way to insert images and audio in the required format, so that once the file is created it can simply be played.
I am not sure which format/extension would be best.
Pointers are much appreciated.
Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution whereby you store your image sequence as a set of JPEG files (but then you would require libjpeg; use raw RGB or PPM if you must), store the audio track as raw WAV data, and then have another file (a text file you define) that stores timing information. You would then simply stream out the audio and keep the frame numbers of the images stored with their corresponding timecode/sample offset, as in the sketch below. That is a very simple solution that could be made to work without too much effort.
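For illustration, here is what generating such a self-defined timing file could look like; the file format, names, and rates are all made up for the sketch:

#include <stdio.h>

int main(void)
{
    const int fps         = 24;    /* assumed frame rate      */
    const int sample_rate = 44100; /* assumed WAV sample rate */
    const int frames      = 240;   /* ten seconds of video    */

    /* One line per frame: JPEG filename plus the audio sample offset
       at which that frame should appear. */
    FILE *idx = fopen("timing.txt", "w");
    if (!idx) return 1;

    for (int i = 0; i < frames; i++) {
        long sample_offset = (long)i * sample_rate / fps;
        fprintf(idx, "frame%05d.jpg %ld\n", i, sample_offset);
    }

    fclose(idx);
    return 0;
}

The player side would then stream the WAV data and flip to the next JPEG whenever the audio clock passes its sample offset.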
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.
If you want to write a program to do this, you could use Xuggler in Java to do it. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, QuickTime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art
This is related to another question of mine.
Here I'd like to ask whether, in theory (given video file formats, codecs, etc.), the following scenario is possible:
1) A client on an iPhone has a reference to a video in FLV format. It sends an HTTP request to a converting "proxy", like http://convproxy.com?source=url_of_original_video.flv, just by clicking such a link in Safari.
2) The converting proxy starts downloading that FLV file and converting it to MP4 (which the iPhone understands) on the fly, returning the converted portion as the HTTP response, so the iPhone can start playing it immediately, before the entire FLV is downloaded and converted.
I was playing with ffmpeg trying to do exactly this, and it does indeed convert the FLV and produce an MP4 file; however, that MP4 file cannot be played until the conversion is finished or ffmpeg is stopped. If I just kill the ffmpeg process, the MP4 file cannot be played at all. If I let it finish, or press Ctrl-C to stop it, the part that was downloaded and converted can be played. It seems ffmpeg does some work after it receives the stop signal. Is that a necessary part of the MP4 format, or can it be done differently? I can see that the iPhone is able to stream video, starting playback before the entire file has been downloaded, so in general this seems like a possible scenario to me.
In short: I can convert an FLV file to an MP4 file; the question is whether I can convert an FLV stream to an MP4 stream.
According to Wikipedia, the MP4 container format requires a separate "hint track" to enable streaming. I assume ffmpeg writes this at the end of the conversion. If the iPhone OS requires this track to stream, I don't see a way to stream live video outside of using a different format and having a custom decoder on the iPhone side, similar to how the Orb client for iPhone does it.
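For what it's worth, later ffmpeg builds can sidestep the end-of-conversion pass with fragmented MP4 ("-movflags frag_keyframe+empty_moov"), which writes the header up front and self-contained fragments afterwards, so the output is playable while it is still being produced; whether a given iPhone player accepts fragmented MP4 is a separate question. A hedged sketch of the proxy loop (the URL and codec flags are assumptions):

#include <stdio.h>

int main(void)
{
    /* Fragmented MP4 can be muxed to a pipe because no seek-back
       is needed to finalize the file. */
    FILE *src = popen(
        "ffmpeg -i http://example.com/original.flv "
        "-vcodec libx264 -acodec aac -strict experimental "
        "-movflags frag_keyframe+empty_moov -f mp4 -", "r");
    if (!src) return 1;

    char buf[4096];
    size_t n;
    /* A real proxy would write into the HTTP response body;
       this sketch just relays the bytes to stdout. */
    while ((n = fread(buf, 1, sizeof buf, src)) > 0)
        fwrite(buf, 1, n, stdout);
    return pclose(src);
}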