I've got what seems like it should be a really simple problem, but it's proving much harder than I expected. Here's the issue:
I've got a fairly large image sequence consisting of numbered frames (output from Maya, for what it's worth). The images are currently in Targa (.tga) format, but I could convert them to PNG or another format if that matters. The important thing is, they've got an alpha channel.
What I want to do is programmatically turn them into a video clip. The format doesn't really matter, but it needs to be lossless and have an alpha channel. Uncompressed video in a Quicktime container would probably be ideal.
My initial thought was ffmpeg, but after wasting most of a day on it, it seems it has no support at all for alpha channels. Either I'm missing something, or the underlying libavcodec just doesn't do it.
So, what's the right way here? A command line tool like ffmpeg would be nice, but any solution that runs on Windows and could be called from a script would be fine.
Note: Having an alpha channel in your video isn't actually all that uncommon, and it's really useful if you want to composite it on top of another video clip or a still image. As far as I know, uncompressed video, the Quicktime Animation codec, and the Sorenson Video 3 codec all support transparency, and I've heard H.264 does as well. All we're really talking about is 32-bit color depth, and that's pretty widely supported; both Quicktime .mov files and Windows .avi files can handle it, and probably a lot more besides.
Quicktime Pro is more than happy to turn an image sequence into a 32-bit .mov file. Hit export, change the color depth to "Millions of Colors+", select the Animation codec, crank the quality up to 100, and there you are: losslessly compressed video with an alpha channel, and it'll play back almost anywhere, since the codec has been part of Quicktime since version 1.0. The problem is, Quicktime Pro doesn't have any sort of command-line interface (at least on Windows). ffmpeg supports encoding with the Quicktime Animation codec (which it calls qtrle), but it only supports a bit depth of 24 bits.
The issue isn't finding a video format that supports an alpha channel. Quicktime Animation would be ideal, but even uncompressed video should work. The problem is finding a tool that supports it.
Yes, ffmpeg certainly does support alpha channels in a video file, though not all of its codecs support alpha yet. Motion PNG in a .mov file is one good combination for alpha.
To encode/import images with alpha to a video with alpha try: ffmpeg -i %d.png -vcodec png z.mov
Quicktime will play that.
To decode/export a video with alpha to images with alpha try: ffmpeg -i z.mov -f image2 export2\%d.png
Note that I exported them to a directory called 'export2'. Be sure to leave the %d parts in there. These commands will work as-is on a Windows system; Linux/Mac users may need to add quote marks and swap some \ for / as usual.
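As a sketch of a fuller version of the encode command above: the frame names and the 24 fps rate here are assumptions, but `-framerate` (to set the input rate) and `-pix_fmt rgba` (to keep the alpha plane explicit rather than relying on auto-selection) are worth knowing about:

```shell
# Hypothetical frame names frame0001.png, frame0002.png, …
# -framerate sets the rate the image sequence is read at;
# -pix_fmt rgba makes the alpha-carrying pixel format explicit.
ffmpeg -framerate 24 -i frame%04d.png -vcodec png -pix_fmt rgba out.mov
```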
I know this topic is a bit old, but I am posting anyway.
FFMPEG with Quicktime Animation (RLE) or FFVHUFF/HUFFYUV will do.
ffmpeg -i yoursequence%d.png -vcodec qtrle movie_with_alpha.mov
ffmpeg -i yoursequence%d.png -vcodec ffvhuff movie_with_alpha.avi
ffmpeg -i yoursequence%d.png -vcodec huffyuv movie_with_alpha.avi
You will get video files with the transparency (alpha channel) preserved.
I have also heard the On2 VP6 variant (not WebM/VP8 yet) can handle alpha, but I do not have their codec at hand.
This also works:
ffmpeg -i yoursequence%d.png -vcodec png movie_with_alpha.mov
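To confirm that the resulting file actually kept its alpha, you can inspect the stream's pixel format with ffprobe (shipped alongside ffmpeg); alpha-capable formats have an "a" in the name, e.g. argb, rgba, or yuva420p:

```shell
# Print the stream details and pick out the pixel format line.
# An alpha-capable value (argb, rgba, yuva420p, …) means the
# transparency survived the encode.
ffprobe -show_streams movie_with_alpha.mov 2>&1 | grep pix_fmt
```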
For web developers reaching this question and banging their heads against the wall in frustration… It is possible to create a transparent WebM video, but at the moment you might need to compile ffmpeg and the required libraries from source.
I wanted to display a rendered Blender video in a website but preserve the transparency. The first step was to render the Blender output as individual PNG files. After that, I spent quite a while trying to coerce ffmpeg to convert those PNG files into a single video. The basic command is simple:
ffmpeg -i input%04d.png output.webm
This command loads all PNGs with the filenames input0000.png through input9999.png and turns them into a video. The transparency was promptly lost. Combing through the output I realized ffmpeg was helpfully selecting a non-transparent format:
Incompatible pixel format 'yuva420p' for codec 'flv', auto-selecting format 'yuv420p'
At this point I realized I might have to recompile ffmpeg from scratch. I struggled with a few other tools, but ultimately ended up back with ffmpeg. After compiling libvpx and ffmpeg from the latest source, everything worked like a charm.
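For reference, a sketch of the command that preserves transparency with a libvpx-enabled build: the key part is forcing the alpha-capable yuva420p pixel format so ffmpeg does not silently fall back to yuv420p (the frame-name pattern is an assumption):

```shell
# Assuming frames named input0000.png … input9999.png.
# -c:v libvpx selects the VP8 encoder; -pix_fmt yuva420p keeps
# the alpha plane instead of letting ffmpeg auto-select yuv420p.
ffmpeg -i input%04d.png -c:v libvpx -pix_fmt yuva420p output.webm
```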
Check your version of ffmpeg:
ffmpeg -version
ffmpeg version n4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
You'll need to update to v4 for alpha support. Use:
sudo snap install ffmpeg
N.B. you'll need to remove the old ffmpeg from your system first:
sudo apt-get remove ffmpeg
ffmpeg -version
ffmpeg version n4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-avisynth --enable-cuda --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
N.B. you can stop the banner above from appearing every time you run ffmpeg by passing
-hide_banner
Related
I am going to record H264 encoded video stream data in iOS using swift.
I am not familiar with video codec formats, so I don't know how to do this. I tried writing the raw H264 video data to a file sequentially and looking at its file info, and was surprised that it has almost all the info of a standard video file (compared with standard MP4 or MOV files). The only missing info is the video duration, file size, overall bit rate, and so on. So I am wondering whether the video will play if I add the MOV file header to this file manually. I spent a few hours googling how to add a MOV file header with ffmpeg but got stuck. Any help would be appreciated. Thanks
You can nominally use ffmpeg to do this:
ffmpeg -i in.h264 -c copy out.mov
However, due to a bug in ffmpeg relating to generation of PTS for video streams with multiple B-frames, the output video may not play smoothly. Test and check.
If it doesn't, there's a workaround which involves using mp4box from GPAC:
mp4box -add in.h264 -new out.mp4
and then
ffmpeg -i out.mp4 -c copy out.mov
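One more hedged note: a raw Annex-B H.264 stream carries no container timestamps, so if the muxed output plays at the wrong speed you may also need to tell ffmpeg the frame rate when reading it (the 30 fps value here is an assumption; use your stream's actual rate):

```shell
# -framerate tells ffmpeg what rate to assume when it generates
# timestamps for the raw, timestamp-less H.264 input.
ffmpeg -framerate 30 -i in.h264 -c copy out.mov
```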
Hello, I am very new to HTML5 video and I'm having a problem with videos not playing on my iPhone 4 (running iOS 6.1.2) when using the HTML5 video tag.
The video runs fine on a computer with Google Chrome. The browser I am using on the iPhone is Safari.
I have tried using multiple file formats together, such as .ogg, .webm, and H.264 MP4.
Something that is really confusing me, however, is that I tried playing an HTML5 video at the bottom of this article on my iPhone and it still does not work.
I thought this blog would be the example to follow for HTML5 video, but now I really can't work out what is going wrong. I also tried it on another iPhone4 and it did not work.
Does anyone know what is going on here, or what the problem is with HTML5 video on the iPhone? Could someone suggest a good way to display video without using something like YouTube, but more along the lines of HTML5?
Thank you!!
This may help you. We need three video formats for HTML5 video to work on all browsers, including mobile:
mp4 encoded with H.264
webm
ogv
Encoding video using ffmpeg:
ffmpeg -i input.mp4 -codec:v libx264 -profile:v high -preset slow -b:v 500k -maxrate 500k -bufsize 1000k -vf scale=-1:480 -threads 0 -codec:a libfdk_aac -b:a 128k output.mp4
detail here http://skillrow.com/html5-video-for-all-browsers/
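The linked article gives only the MP4 line above; as a sketch, the matching WebM and Ogg encodes might look like the following (the bitrate values are illustrative assumptions, chosen to roughly match the MP4 command):

```shell
# WebM: VP8 video + Vorbis audio (bitrates are illustrative).
ffmpeg -i input.mp4 -c:v libvpx -b:v 500k -c:a libvorbis -b:a 128k output.webm

# Ogg: Theora video + Vorbis audio.
ffmpeg -i input.mp4 -c:v libtheora -b:v 500k -c:a libvorbis -b:a 128k output.ogv
```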
I chanced upon the article at Google Speech API which suggested a mechanism for extracting text from an audio file through Perl. I have recorded an audio file, which you will find at http://vocaroo.com/i/s0lPN5d3YQJj. It is a simple piece of audio of me reading "I love you". When I go to the Google Speech API in Chrome and speak those words, I get the right result. When I try the code at the above-mentioned link with the audio file I pointed out, it returns strange results, like "logan". How can I make it more accurate? This is just a sample audio; what I am generally doing is extracting the audio from a video file through FFmpeg using something like ffmpeg -i input.avi -vn -ar 44100 -ac 2 -ab 192 -f mp3 output.mp3, followed by ffmpeg -i input.mp3 output.flac.
Have you tried playing the audio files you are creating?
You are setting an audio bitrate of 192 bits/second which is ridiculously low.
For 192 kbps you need -ab 196608 (or simply -ab 192k).
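Putting that together, a sketch of a corrected extraction pipeline (the mono 16 kHz choice is an assumption: speech recognizers have generally worked best with mono input at a modest sample rate, and going straight to FLAC skips the lossy MP3 intermediate):

```shell
# Extract audio only (-vn), downmix to mono (-ac 1) at 16 kHz (-ar 16000),
# and write lossless FLAC directly instead of bouncing through MP3.
ffmpeg -i input.avi -vn -ar 16000 -ac 1 output.flac
```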
The iPhone app I am working on captures images in series within certain user-defined time interval, I am looking for a way to combine these images into H264 encoded videos. I have done some research on Google, it looks like I will have to use something like ffmpeg/mencoder on iPhone? (Also found someone ported ffmpeg to iPhone, ffmpeg4iPhone)
However, I found that x264 is under the GPL license, which requires me to open-source my project if I use ffmpeg. I also found some people suggesting Ogg Theora, but I would need to port it to the iPhone myself if I use it (and I am not sure how to do that).
Is there any workaround for this? Any ideas? Thanks.
I think you are in a GPL-bind there and have two suggestions:
Just go ahead and GPL your project. There is no reason you cannot sell open source software, and the app store's delay/penalty period will give you a nice lead time over any potential competing project with the GPL'd code. Your place on the iTunes store, your motivation, and any branding are probably more valuable than the source code. Plus, you can get other people to fix bugs for you. Update: As of January 2011, GPL and App Store do not mix.
Have the iPhone app upload the raw images to a server and do the processing there. That way you are not releasing and distributing the FFmpeg and x264 code, and are hence not required to distribute it.
Good luck and let us know here if you get it published!
It appears ffmpeg now has support for Cisco's "openh264" (BSD-licensed, FWIW) encoder:
https://www.ffmpeg.org/ffmpeg-codecs.html#libopenh264
FWIW here is what I get from my LGPL build:
ffmpeg.exe -codecs | grep h264
...
ffmpeg version n3.1.2 Copyright (c) 2000-2016 the FFmpeg developers
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_qsv ) (encoders: libopenh264 h264_nvenc h264_qsv nvenc nvenc_h264 )
which mentions a few other encoders, FWIW, and ffmpeg might support even more.
I believe you'll only find commercial x264 implementations if you don't intend to use ffmpeg (a few other open-source versions may exist, but with very low quality). Also bear in mind that if you use those codecs instead of the platform/iPhone ones, you will have to pay royalties because of the patents (I think it's roughly 1 dollar per download).
If this is still affordable to you, then I believe you might be able to find an older version of ffmpeg that was LGPL'd. You can use it in your code without having to open-source the whole project; you only need to open-source changes you make to ffmpeg itself.
Hope this helps!
OK. If one uses the System Sounds Services for iPhone sound effects, there is no way to alter the level of the resulting sounds programmatically. Even worse, if one reduces the volume using the ringer volume control on the side of the iPhone to a very low level, then starts the app, the sound effects will be inaudible. On the other hand, if one increases the hardware level to the max before starting the app, sound effects will be uncomfortably loud. To all intents and purposes, this renders the System Sounds APIs useless (or at least ill-advised.) All of this is moot with regard to the iPod, as it does not show this behavior (after all, it isn't a phone.)
I decided to use an AVAudioPlayer to play sound effects. Under iPhone SDK 3.1, my existing AIFF files (mostly u-law format) work fine, but they won't play under SDK 3.0; I get an error message saying the codec can't be found. According to Apple's documentation, I can use any supported file format under the CAF umbrella, but there must be a codec available.
I have searched diligently, but although I have found lists of codecs available for Mac OS X, I can't find a list of codecs for the iPhone, particularly for SDK 3.0. Can anyone point me to such a list? I want my game (Imp or Oaf?) to work on OS 3.0 and later. I can use mp3 files, but there are latency problems there.
Thanks,
Dan
I can't point you to a list, but ima4 has worked for me since day 1. ima4 compresses to about a quarter of the original size (a 4:1 ratio).
In Terminal, do this:
afconvert -f caff -d ima4 original_filename.wav
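afconvert handles one file at a time, so for a game with many sound effects a small shell loop saves typing (this assumes your source files are .wav in the current directory; afconvert names each output after its input by default):

```shell
# Convert every .wav in the current directory to an ima4-encoded .caf.
for f in *.wav; do
  afconvert -f caff -d ima4 "$f"
done
```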