I'm able to extract all frames that are not similar to the previous frame from a video file using ffmpeg -i video.mp4 -vf "select=gt(scene\,0.003),setpts=N/(30*TB)" frame%d.jpg
I would like to overlay the frame number onto each selected frame. I tried adding drawtext=fontfile=/Windows/Fonts/Arial.ttf: text='frame\: %{frame_num}': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1: fontsize=30 to the filter after select and setpts; however, %{frame_num} returns 1, 2, 3, ...
If I put drawtext before select and setpts, I get something like 16, 42, 181, ... as frame numbers (which is exactly what I want), but since the scene detection runs after adding the text overlay, changes in the overlay may be detected as well.
Is it possible to run the scene detection and the overlay independently of one another? [in] split [out0][out1] can be used to apply filters separately, but I don't know how to "combine" the results again.
You are on the right track. Use split first to create two streams. Run scene detection on one, and draw the text on the other. Then use overlay to paint the numbered stream onto the pruned stream: only the numbered frames corresponding to the selected frames will be emitted.
ffmpeg -i video.mp4 -vf "split=2[num][raw];[raw]select=gt(scene\,0.003)[raw];[num]drawtext=fontfile=/Windows/Fonts/Arial.ttf: text='frame\: %{frame_num}': x=(w-tw)/2: y=h-(2*lh): fontcolor=white: box=1: boxcolor=0x00000000@1: fontsize=30[num];[raw][num]overlay=shortest=1,setpts=N/(30*TB)" -r 30 frame%d.jpg
Related
I have a small issue when generating interpolated videos from a short image sequence for a VQGAN+CLIP art project.
The problem is that the first frame sticks for a moment, then it jumps to the second one; the second one is also stuck for a moment, but then it starts and plays nicely. My biggest problem is the harsh transition from the 1st to the 2nd frame; the 2nd frame's "start delay" is not that big an issue to me, but it would also be nice to get rid of.
This is my command for generating the video from the image sequence:
ffmpeg -framerate 1 -i Upscaled\%d_out.png -vcodec h264_nvenc -pix_fmt yuv420p -strict -2 -vf minterpolate="mi_mode=mci:me=hexbs:me_mode=bidir:mc_mode=aobmc:vsbmc=1:mb_size=8:search_param=32:fps=30" InterpolatedVideo.mp4
You can see the result >> HERE <<
Now my question is whether that's fixable by editing the command and, if so, how?
I'd like to keep the first frame, but have it interpolate to the second frame.
I want to avoid cutting the video manually afterwards, as I'd need to know the exact time to cut.
Thanks for any help in advance
Greetings from Vienna
Okay, so the problem was the scene change detection, i.e. the scd parameter of the minterpolate filter. It is set to fdiff (frame difference) by default. Setting it to none with scd=none in the filter options gets rid of the freeze.
I also had to append the 1st frame TWICE at the end to create a smooth loop. With only one copy, it entirely missed the last (copied first) frame. After appending it once more at the end, it works super smoothly. I guess the very last frame could be anything, as it is dropped anyway.
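Putting that fix into the original command, the minterpolate call would look something like the sketch below; scd=none is the only change relative to the command in the question:

```shell
# Same interpolation settings as before, but with scene-change detection
# disabled (scd=none) so the first-to-second frame transition is interpolated.
ffmpeg -framerate 1 -i Upscaled\%d_out.png -vcodec h264_nvenc -pix_fmt yuv420p -strict -2 \
  -vf minterpolate="mi_mode=mci:me=hexbs:me_mode=bidir:mc_mode=aobmc:vsbmc=1:mb_size=8:search_param=32:fps=30:scd=none" \
  InterpolatedVideo.mp4
```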
I am trying to run GIMP using console commands. All I want it to do is rotate an image 90 degrees. I added GIMP to my PATH environment variable so that I can call it from a console window. I also put the picture I want to rotate in my console's working directory to make it easy to open.
I read the GIMP Batch Mode guide and came up with this command:
gimp-2.10 -i -b '(gimp-image-rotate plot.png 0)' -b '(gimp-quit)'
The "0" after "plot.png" is supposed to tell it to rotate 90 degrees. This opens a GIMP output window and outputs two messages saying "batch command executed successfully". However, it never rotates the image.
Any idea why the command I have entered is not working?
gimp-image-rotate rotates a loaded image and not a file that contains an image. So you have to
obtain an image by loading it from a file (see the gimp-file-load or gimp-file-{type}-load calls),
rotate the image,
save the result with the gimp-file-{type}-save calls (caution: these calls save a single layer, not the whole image, so flatten the image first).
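A sketch of those three steps as a single Script-Fu batch call (assuming GIMP 2.10, a PNG input named plot.png, and that overwriting the input file in place is acceptable):

```shell
# Load plot.png, flatten it to a single drawable, rotate the image 90°
# clockwise (ROT-90 is rotation type 0), and save it back over the input.
gimp-2.10 -i -b '(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "plot.png" "plot.png")))
                        (drawable (car (gimp-image-flatten image))))
                   (gimp-image-rotate image ROT-90)
                   (file-png-save RUN-NONINTERACTIVE image drawable "plot.png" "plot.png" 0 9 1 1 1 1 1))' \
            -b '(gimp-quit 0)'
```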
But for simple manipulations you are better off using a toolbox designed to be called from scripts such as ImageMagick:
magick mogrify -rotate 90 plot.png
Consider the following MCVE:
B = 50 - randi(100,100,100,4);
% Show each of the 4 layers of B for 0.50 seconds each, and save image frames:
fig = figure();
for idx = 1:size(B,3)
    imagesc( B(:,:,idx) ); title(num2str(idx)); caxis([-50 50]); drawnow;
    frame = getframe(fig);
    img{idx} = frame2im(frame);
    pause(0.50);
end
% Write a .gif file, show each image 1 second in an infinite loop.
filename = 'whatsgoingon.gif'; dlyt = 1;
for idx = 1:length(img)
    [A,map] = rgb2ind(img{idx},256);
    if idx==1; imwrite(A,map,filename,'gif','LoopCount',Inf,'DelayTime',dlyt);
    else; imwrite(A,map,filename,'gif','WriteMode','append','DelayTime',dlyt);
    end
end
Each image shows a layer of the cube B. I wrote some code to make a .gif file out of this to make it easier to share. The problem I have with that is: each time I open the .gif file, it will skip the second frame (i.e. the one associated with B(:,:,2)) on the first loop of showing. Essentially, the .gif shows the following frames in chronological order:
1, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, etc.
This is not a huge problem, just a bit embarrassing when I am sharing results with others. I can't find any topic on a similar issue (neither here nor on MATLAB's website), so I would be very curious to hear whether you see the same issue when you make a gif using the above code, and whether you have any idea where it originates.
FYI: I am using Matlab R2018a on a Windows machine.
EDIT: here is an example image I created:
Just a summary of the comments for future readers...
You can check the delays and details in an animated GIF with ImageMagick on the command line like this:
magick identify -format "%f[%s] %T\n" matlab.gif
Sample Output
matlab.gif[0] 100 <--- this frame has a 100 centisecond delay
matlab.gif[1] 100
matlab.gif[2] 100
matlab.gif[3] 100
This command is similar - use FINDSTR on Windows in place of grep:
magick identify -verbose matlab.gif | grep Delay
Delay: 100x100
Delay: 100x100
Delay: 100x100
Delay: 100x100
If you want to debug an animated GIF but it is too fast to see, you can reset all the timings - say to 3s per frame - like this:
magick input.gif -coalesce -set delay 300 slooooow.gif
Note that some applications display animated GIFs incorrectly, so try using Open->File in a web-browser to check. Try Chrome, Firefox, Opera, Safari etc.
If you are really having problems passing GIFs to colleagues and getting them understood, you can make a cartoon strip out of an animation like this:
magick input.gif -coalesce +append result.gif
Or, you can make a montage on a grid like this:
magick input.gif -coalesce miff:- | magick montage -geometry +10+10 - result.gif
I am trying to change the dimensions of the video file through FFMPEG.
I want to convert any video file to 480*360 .
This is the command that I am using...
ffmpeg -i oldVideo.mp4 -vf scale=480:360 newVideo.mp4
After this command, a 1280*720 video comes out as 640*360.
I have also attached a video; it will take less than a minute for any expert out there. Is there anything wrong?
You can see it here (in the video, after 20 seconds, jump directly to 1:35; the rest is just processing time).
UPDATE :
I found the command in this tutorial.
Every video has a Sample Aspect Ratio associated with it. A video player will multiply the video width with this SAR to produce the display width. The height remains the same. So, a 640x720 video with a SAR of 2 will be displayed as 1280x720. The ratio of 1280 to 720 i.e. 16:9 is labelled the Display Aspect Ratio.
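That multiplication is easy to sanity-check in a shell, using the 640x720 storage size and SAR of 2 from the example above:

```shell
# Display width = storage width * SAR; the height is left unchanged.
width=640; sar=2; height=720
echo "$((width * sar))x${height}"   # prints 1280x720
```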
The scale filter preserves the input's DAR in the output, so that the output does not look distorted. It does this by adjusting the SAR of the output. The remedy is to reset the SAR after scaling.
ffmpeg -i oldVideo.mp4 -vf scale=480:360,setsar=1 newVideo.mp4
Since the DAR may no longer be the same, the output can look distorted. One way to avoid this is by scaling proportionally and then padding with black to achieve target resolution.
ffmpeg -i oldVideo.mp4 -vf scale=480:360:force_original_aspect_ratio=decrease,pad=480:360:(ow-iw)/2:(oh-ih)/2,setsar=1 newVideo.mp4
I've read a lot of tutorials on how to overlay an image in AviSynth, but wonder if there is a way to place several images on a video at specific time positions. I've been able to render videos with a transparent png logo, but didn't find any tutorial how to place different images at different frame positions.
I believe you have to figure out frame positions from the frame rate. For instance, the sample below will show the overlay image between frames 101 and 200 (the 4th to the 8th second):
AviSource("sample.avi", false).AssumeFPS(25).ConvertToRGB
img = ImageSource("sample.png")
Trim(0, 100) + Trim(101, 200).Overlay(img, 20, 30, opacity = 0.5) + Trim(201, 0)
Thanks!
Depending on your input codec you might need to replace
AviSource("sample.avi", false).AssumeFPS(25).ConvertToRGB
with
DirectShowSource("sample.avi")
If you use the wrong one you might get an error along the lines of AVISource couldn't locate a decompressor for fourcc mjpg.