This command line works perfectly to make .png files:
ffmpeg -i path/video.mp4 -f image2 -vf fps=fps=1/60 path/%03d.png
But the problem is I would like the output to be .jpg. I tried the same line with a different extension, but I get errors in the command line. Any suggestions?
i.e. ffmpeg -i path/video.mp4 -f image2 -vf fps=fps=1/60 path/%03d.jpg
The .jpg version fails with the errors below.
{*Not* an error, but it seemed important:
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 854x480
[SAR 1:1 DAR 427:240], 1352 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
}
Errors in the command line:
[mjpeg @ 000000000482c9a0] bitrate tolerance too small for bitrate
[mjpeg @ 000000000481d020] ff_frame_thread_encoder_init failed
Stream mapping: Stream #0:0 -> #0:0 (h264 -> mjpeg)
Error while opening encoder for output stream #0:0 - maybe incorrect parameters
such as bit_rate, rate, width or height
ffmpeg -i ./video -f image2 -vf fps=1 ./%03d.jpg
This worked quite nicely. Make sure the folders in the output path already exist; ffmpeg won't create the directories for you.
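If you want to keep the original one-frame-per-minute rate instead of dropping to fps=1, one possible (untested) workaround is to raise the MJPEG encoder's bitrate tolerance so its rate-control check passes, and pin the JPEG quality explicitly; the 20M tolerance below is just an assumed, generously large value:
# -q:v 2 = high JPEG quality; -bt 20M is an assumed tolerance, not a tuned value
ffmpeg -i path/video.mp4 -f image2 -vf fps=1/60 -q:v 2 -bt 20M path/%03d.jpg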
I have a webradio streamed by Liquidsoap+Icecast on a DigitalOcean droplet (Ubuntu 16.04), and I want to combine this audio stream with a simple jpeg image with ffmpeg, transform it to a video stream and send it to Facebook live.
Facebook Live specifications:
Video Format:
We accept video in maximum 720p (1280 x 720) resolution, at 30 frames
per second. (or 1 key frame every 2 seconds). You must send an I-frame
(keyframe) at least once every two seconds throughout the stream.
Recommended max bit rate is 4000 Kbps. Titles must be less than 255
characters otherwise the stream will fail. The Live API accepts H264
encoded video and AAC encoded audio only.
Video Length:
240 minute maximum length, with the exception of continuous live (see
above). 240 minute maximum length for preview streams (either through
Live dialog or publisher tools). After 240 minutes, a new stream key
must be generated.
Advanced Settings:
Pixel Aspect Ratio: Square. Frame Types: Progressive Scan. Audio
Sample Rate: 44.1 KHz. Audio Bitrate: 128 Kbps stereo. Bitrate
Encoding: CBR.
And the ffmpeg command I tried:
ffmpeg -loop 1 -i radio-background.jpg -thread_queue_size 20480 -i http://localhost:8000/radio -framerate 30 -r 30 -acodec aac -strict -2 -c:v libx264 -strict experimental -b:a 128k -pix_fmt yuvj444p -x264-params keyint=60 -b:v 256k -minrate 128k -maxrate 512k -bufsize 768k -f flv 'rtmp://rtmp-api.facebook.com:80/rtmp/<fb-streaming-key>'
This is actually working, as Facebook receives the live video and allows me to publish it. But I can't figure out why there is a lag almost every 2 or 3 seconds. I asked different people to watch the test video, and everyone gets the same problem: every 2 or 3 seconds the playback "freezes" for half a second and seems to buffer, and I can even see the loading icon spinning on the screen.
I tried different combinations of values for the following options: -thread_queue_size / -b:v / -minrate / -maxrate / -bufsize. Nothing seems to produce any change.
Video streaming is new for me, and I'm not really comfortable with the options listed above, so I think I'm missing something here...
Also, note that the Icecast audio stream works perfectly, and according to the DigitalOcean graphs the server is not overloaded, so I think my ffmpeg command is wrong.
What ffmpeg parameters would be working for that case?
Specify a frame rate for the image. This goes before the input item:
-r 30 -loop 1 -i radio-background.jpg
If your radio stream is already AAC you can just stream copy it; there is no need to re-encode the audio. Use -c:a copy:
-c:a copy
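To check whether the Icecast mount is actually AAC already (so that stream copying applies), something like this should work; the ffprobe options are standard and the mount URL is the one from the question:
# prints e.g. codec_name=aac or codec_name=mp3 for the first audio stream
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=noprint_wrappers=1 http://localhost:8000/radio
If it reports mp3 rather than aac, you do need to re-encode, since the Facebook Live API only accepts AAC audio.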
If you still need to encode to AAC you should switch to libfdk_aac. ffmpeg uses a 128k audio bitrate by default, so there is no need to specify -b:a:
-c:a libfdk_aac
ffmpeg will use the input frame rate of the first input for the output by default, so you don't need to specify any more frame rates. (You had the output frame rate specified twice; -framerate 30 and -r 30 are the same.)
Use the ultrafast preset for better CPU performance, along with the tune and pixel format options. You can also use -g for the keyframe interval:
-c:v h264 -preset ultrafast -tune stillimage -pix_fmt yuvj444p -g 60
Set the profile and profile level (B-frames are left at the x264 defaults):
-profile:v high444 -level 4.2
Use either -b:v or -minrate/-maxrate/-bufsize, but not both:
-b:v 768k
And out we go:
-f flv rtmp://rtmp-api.facebook.com:80/rtmp/streamkey
Now to put it all together:
ffmpeg -r 30 -loop 1 -i radio-background.jpg \
-i http://localhost:port/mount -c:a libfdk_aac -c:v h264 -b:v 768k \
-preset ultrafast -tune stillimage -pix_fmt yuvj444p -g 60 \
-profile:v high444 -level 4.2 -f flv rtmp://rtmp-api.facebook.com:80/rtmp/streamkey
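One caveat: libfdk_aac is not included in most prebuilt ffmpeg packages (it needs a build configured with --enable-libfdk-aac), so it is worth checking that your build actually has it; if it is missing, the native encoder via -c:a aac is the fallback:
# look for 'libfdk_aac' in the encoder list
ffmpeg -hide_banner -encoders | grep fdk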
I wrote a simple OpenCV desktop application to receive a multicast stream from my Raspberry Pi.
On the Pi I want to use avconv to send the multicast.
This one works with my app and also with VLC-Player:
avconv -i video.mp4 -f mpegts udp://225.0.0.37:4030
But this one is not working:
avconv -i video.h264 -f mpegts udp://225.0.0.37:4030
The error message is as follows:
avconv version 9.18-6:9.18-0ubuntu0.14.04.1, Copyright (c) 2000-2014 the Libav developers
built on Mar 16 2015 13:20:58 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
[h264 @ 0x8986980] Estimating duration from bitrate, this may be inaccurate
Input #0, h264, from 'video.h264':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: h264 (High), yuv420p, 320x240, 25 fps, 25 tbr, 25 tbn
Output #0, mpegts, to 'udp://225.0.0.37:4030':
Metadata:
encoder : Lavf54.20.4
Stream #0.0: Video: mpeg2video, yuv420p, 320x240, q=2-31, 200 kb/s, 90k tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (h264 -> mpeg2video)
Press ctrl-c to stop encoding
[fps @ 0x8a5cac0] Discarding initial frame(s) with no timestamp.
Last message repeated 445 times
frame= 0 fps= 0 q=0.0 Lsize= 0kB time=10000000000.00 bitrate= 0.0kbits/s
video:0kB audio:0kB global headers:0kB muxing overhead -nan%
Could anybody explain where the problem is and how to solve this issue?
My aim is to get a live stream with the v4l2 driver, like this:
avconv -i /dev/video0 -f mpegts udp://225.0.0.37:4030
If you want to use /dev/video0 with avconv you have to tell avconv that the source is a video4linux2 source/stream.
And for good results you should tell v4l2 to set the resolution, e.g. to 640x480; otherwise it uses 320x240:
avconv -f video4linux2 -s 640x480 -i /dev/video0 -f mpegts udp://225.0.0.37:4030
But remember, I think you have to purchase an MPEG-2 license for that.
If you recompile avconv with --enable-omx-rpi you can use the hardware H.264 encoder from OpenMAX:
avconv -f video4linux2 -s 640x480 -i /dev/video0 -f mpegts -an \
-c:v h264_omx -b:v 750k udp://225.0.0.37:4030
-an = disable audio
This will reduce the CPU usage on your Pi by 70% or more.
For compiling instructions:
https://ubuntu-mate.community/t/hardware-h264-video-encoding-with-libav-openmax-il/4997/6
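Before pointing the multicast at it, a quick local test (ten seconds from the camera into a file, reusing the options above; test.mp4 is just a throwaway name) will tell you whether the rebuilt avconv can actually open the OpenMAX encoder:
# writes a 10-second test file; if h264_omx is missing, avconv will report an unknown encoder
avconv -f video4linux2 -s 640x480 -i /dev/video0 -c:v h264_omx -b:v 750k -an -t 10 test.mp4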
I use this command line on Windows to encode all my videos:
ffmpeg -i MyInputFile.wmv -c:v mpeg4 -q:v 1 -c:a libvo_aacenc -q:a 100 MyOutPutFile.mp4
All these original videos are .wmv and they have a bitrate of 1200 kb/s.
I have to use MPEG-4 to play the encoded videos on Android / iOS. With x264 it doesn't work; I get a black screen and audio only.
The command line works fine, but my output files are too big (bigger than the .wmv files, even though MPEG-4 normally compresses better). How can I change my settings? Select the bitrate? I'm trying this, but it doesn't work; the bitrate is still too high:
ffmpeg -i TestBitRate.wmv -c:v mpeg4 -q:v 1 -b 500k -c:a libvo_aacenc -q:a 100 TestBitRate.mp4
Thank you in advance for your help. :)
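A hedged note on the command above: -q:v 1 puts the mpeg4 encoder into constant-quantizer mode at its highest quality, which effectively overrides the bitrate target, so the -b 500k is ignored. If the goal is to hit a bitrate, a sketch would be to drop -q:v and give the target as -b:v instead (500k is just the figure from the question; pick whatever your devices need):
# no -q:v here, so the 500k target is actually enforced by rate control
ffmpeg -i TestBitRate.wmv -c:v mpeg4 -b:v 500k -c:a libvo_aacenc -q:a 100 TestBitRate.mp4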
I'm trying to interleave MPEG-TS segments but failing. One set of segments was captured using the built-in camera in the laptop, then encoded using FFmpeg with the following command:
ffmpeg -er 4 -y -f video4linux2 -s 640x480 -r 30 -i %s -isync -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
And the other one is an avi file that was encoded using the following command:
ffmpeg -er 4 -y -f avi -s 640x480 -r 30 -i ./DSCF2021.AVI -vbsf dump_extra -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 640x480 -vcodec libx264 -fflags +genpts -b 386k -coder 0 -me_range 16 -keyint_min 25 -i_qfactor 0.71 -bt 386k -maxrate 386k -bufsize 386k -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -aspect 640:480
Then the output is segmented into ts segments using an open source segmenter.
If both come from the same source (both from the camera) they work fine. However, in this case the second set of segments freezes: time passes, but the video does not move.
So I think it's an encoding problem. My question is: how should I change the ffmpeg command for this to work?
By interleave I mean having a playlist with the first set of segments and another playlist with the other set, and having the client call one and then the other (HTTP Live Streaming).
The ffprobe output of one of the first set of segments:
Input #0, mpegts, from 'live1.ts':
Duration: 00:00:09.76, start: 1.400000, bitrate: 281 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 29.92 fps, 29.92 tbr, 90k tbn, 59.83 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 111 kb/s
The ffprobe output of one of the second set of segments:
Input #0, mpegts, from 'ad1.ts':
Duration: 00:00:09.64, start: 1.400000, bitrate: 578 kb/s
Program 1 Service01
Metadata:
name : Service01
provider_name : FFmpeg
Stream #0.0[0x100]: Video: h264, yuv420p, 640x480 [PAR 1:1 DAR 4:3], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0.1[0x101]: Audio: aac, 48000 Hz, stereo, s16, 22 kb/s
Thank you,
I have seen quite a few questions on the subject; see:
HTTP Live Streaming MPEG TS segment and
Update .m3u8 playlist file for HTTP Live streaming?
I am not sure of the exact problem, but I think most people report that when you mix content from two different sources there is freezing.
I think this situation may arise if the PTS and/or PCR is discontinuous and the player does not recognize this or flush its buffers. You could probably inspect the sequence of timestamps and see whether fixing that solves the problem.
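One way to inspect those timestamps (a sketch using ffprobe, with the segment names from the question) is to dump the video packet PTS values at the end of one segment and the start of the next, then look for a backwards jump or a large gap:
# compare the last pts_time of live1.ts with the first pts_time of ad1.ts
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of default=noprint_wrappers=1 live1.ts | tail -n 3
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of default=noprint_wrappers=1 ad1.ts | head -n 3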
Also, see section 3.3.11 of https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming-07: EXT-X-DISCONTINUITY
The EXT-X-DISCONTINUITY tag indicates an encoding discontinuity
between the media segment that follows it and the one that preceded
it. The set of characteristics that MAY change is:
o file format
o number and type of tracks
o encoding parameters
o encoding sequence
o timestamp sequence
So a discontinuity tag in the playlist file might well help if the problem is any of the above. Please try some of this and post more details; I guess this will help a lot of other people as well.
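For illustration, a minimal playlist that splices the two sets with the discontinuity tag might look like this (segment names and durations are taken from the ffprobe output above; the version and target duration are assumed values):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:9.76,
live1.ts
# the next segment comes from the other encode, so flag the discontinuity
#EXT-X-DISCONTINUITY
#EXTINF:9.64,
ad1.ts
#EXT-X-ENDLIST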
I'm trying to set up a simple mobile page for a client with a link to an .mp4 video file, like so:
Watch MP4 Video
And then I've obviously got my video file sourced properly and the .mp4 has the following characteristics:
Dimension: 480 * 272
Codecs: AAC, H.264, MPEG-4 SDSM, MPEG-4 ODSM
Channel Count: 2
Total Bitrate: 991
Size: 11.4MB
But the problem is that when I click on the link, the iPhone says "Movie cannot be played." and doesn't tell me why.
Any help?
The problem was partially to do with encoding but more to do with the dimensions.
I found out that if your .mp4 file is larger in dimension than 640*360 then the iPhone (iPad, iPod) won't even give the user the option to attempt to play it. They just get the X'd out play button icon.
Also, these devices only support .mp4's that are encoded with the baseline H.264 profile, or they can't be played.
Also, there's a bitrate limit of 1.5 Mbps for the iPhone, but it's suggested to keep the bitrate below 900 kbps.
If quality is less of a concern than size then you can use m4v's of larger dimensions but I believe the bitrate rules still apply.
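If you want to check an existing file against these limits before uploading it, something along these lines should do it (standard ffprobe options; input.mp4 is a placeholder name):
# expect profile=Baseline (or Constrained Baseline) and level=30 (i.e. 3.0) for these older devices
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,level,width,height,bit_rate -of default=noprint_wrappers=1 input.mp4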
I encountered a similar issue and my guess was encoding. I had tried the "iPhone" preset with Adobe Premiere CS4 (Adobe Media Encoder) with no luck.
Running it through ffmpeg with the following did the trick:
ffmpeg -i INPUT -s 320x240 -r 30000/1001 -b 200k -bt 240k -vcodec libx264 -coder 0 -bf 0 -refs 1 -flags2 -wpred-dct8x8 -level 30 -maxrate 10M -bufsize 10M -acodec libfaac -ac 2 -ar 48000 -ab 192k OUTPUT.mp4
I found the above (and many other configurations) here: http://rodrigopolo.com/ffmpeg/cheats.html (I corrected a few typos in their "iPod-iPhone 640 width, without presset" [sic].)
Other searching around will probably yield more information about the encoding requirements (h.264 baseline 3.0) and size requirements for the movie to play on the iPhone.
The official Apple reference on the subject: http://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html
You need the H.264 video to be progressive, not lower field first. Choose the H.264 preset and change the field order from Lower to Progressive.
This did it for me:
ffmpeg -an -i movie.mp4 -vcodec libx264 -codec:a libmp3lame -qscale:a 1 -pix_fmt yuv420p -profile:v baseline -level 3 output.mp4
I used mp3 codec here. This fixed my iPhone mp4 problem!
I ran into a similar situation with video that I was generating. It would play fine on my local machine, or through a browser that supports .mp4; however when I tried viewing it on my iPhone it would invariably bring up the crossed-out play button. After reading ffmpeg docs I tried using the following and it worked beautifully on my iPhone as well as the other devices I've been able to try.
ffmpeg -i input.mkv -c:v libx264 -crf 28 -preset veryslow -tune fastdecode \
-profile:v baseline -level 3.0 -movflags +faststart -c:a libfdk_aac -ac 2 \
-ar 44100 -ab 64k -threads 0 -f mp4 output.mp4
The video that I'm dealing with is 1280x720 at 30fps, and the option that finally got it working was
-profile:v baseline -level 3.0