I'm trying to overlay multiple sticker URLs onto a video. This is the command I'm using for a single sticker:

command = "-y -i ${video.path} -i ${sticker.url} -filter_complex \"[1:v]scale=100:100[ovrl];[0:v][ovrl]overlay=${sticker.offset!.dx}:${sticker.offset!.dy}\" -frames:v 900 -codec:a copy -codec:v libx264 -max_muxing_queue_size 2048 -preset ultrafast $outputFilePath",
This command adds one video on top of another:
ffmpeg -i 1.mp4 -i over.mp4 -filter_complex
"[0:v]setpts=PTS-STARTPTS,scale=224x400[top];[1:v]setpts=PTS-STARTPTS,scale=100x44[bottom];[top][bottom]overlay=x=115:y=346:eof_action=pass;[0]volume=0.7[a1];[1]volume=0.3[a2];[a1][a2]amix=inputs=2[a]"
-acodec aac -vcodec libx264 -map 0:v -map "[a]" out.mp4
This command adds a watermark and username text to a video:
ffmpeg -i 1.mp4 -i watermark.png -filter_complex
"overlay=main_w-overlay_w-5:main_h-overlay_h-15,drawtext=fontfile=/path/to/font.ttf:text=‘#Unknown':
fontcolor=white: fontsize=10: box=1: boxcolor=black#0.0: boxborderw=5:
x=160: y=380" -codec:a copy output.mp4
If I want to execute these two commands together, what do I have to do? How can I join these two commands into one?
Add the watermark and drawtext after the overlay.
ffmpeg -i 1.mp4 -i over.mp4 -i watermark.png -filter_complex "[0:v]setpts=PTS-STARTPTS,scale=224x400[top];[1:v]setpts=PTS-STARTPTS,scale=100x44[bottom];[top][bottom]overlay=x=115:y=346:eof_action=pass[vid];[vid][2]overlay=main_w-overlay_w-5:main_h-overlay_h-15,drawtext=fontfile=/path/to/font.ttf:text='#Unknown': fontcolor=white: fontsize=10: box=1: boxcolor=black@0.0: boxborderw=5: x=160: y=380;[0]volume=0.7[a1];[1]volume=0.3[a2];[a1][a2]amix=inputs=2" -acodec aac -vcodec libx264 out.mp4
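The final video and audio pads here are left unlabeled, so ffmpeg maps them to the output automatically. If you prefer the explicit mapping style of the original command, the same graph can label its last pads; a sketch of that variant:

ffmpeg -i 1.mp4 -i over.mp4 -i watermark.png -filter_complex "[0:v]setpts=PTS-STARTPTS,scale=224x400[top];[1:v]setpts=PTS-STARTPTS,scale=100x44[bottom];[top][bottom]overlay=x=115:y=346:eof_action=pass[vid];[vid][2]overlay=main_w-overlay_w-5:main_h-overlay_h-15,drawtext=fontfile=/path/to/font.ttf:text='#Unknown': fontcolor=white: fontsize=10: box=1: boxcolor=black@0.0: boxborderw=5: x=160: y=380[vout];[0]volume=0.7[a1];[1]volume=0.3[a2];[a1][a2]amix=inputs=2[aout]" -map "[vout]" -map "[aout]" -acodec aac -vcodec libx264 out.mp4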
I'm trying to re-encode some of my old videos to "archive" them.
I do not need to keep the audio in 5.1, but I would like to down-mix it to 2.1 rather than stereo, which sounds just too dull.
This is the relevant part, which takes care of the down-mix to stereo and re-encodes the audio; I would like to adjust it to down-mix to 2.1 instead:
-ac 2 -c:a libfdk_aac -vbr 3
I did some research and it seems that there is a -layouts switch which does support 2.1, but I don't know how to use it. Which channel should go where?
Just for illustration, and so you get the whole picture, I'm currently using this script:
#!/bin/bash
for i in *.mkv; do
    # Output new files by prepending "x265_" to the names
    /cygdrive/c/media-autobuild_suite/local32/bin-video/ffmpeg.exe -y -i "$i" -c:v libx265 -preset slow -b:v 512k -x265-params pass=1 -c:s copy -c:a copy -f matroska NUL && \
    /cygdrive/c/media-autobuild_suite/local32/bin-video/ffmpeg.exe -i "$i" -c:v libx265 -preset slow -b:v 512k -x265-params pass=2 -c:s copy -ac 2 -c:a libfdk_aac -vbr 3 x265_"$i"
done
The FDK AAC encoder does not support 2.1, but the native AAC encoder does:
ffmpeg -i "$i" ... -c:s copy -af pan=2.1 -c:a aac x265_"$i"
When I try to encode a video file with two-pass encoding in ffmpeg using VP9, the output file of the first pass is empty, so I cannot proceed with the second pass.
Code for the two-pass:
Pass 1:
ffmpeg -y -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -crf 20
-pass 1 -an -f avi NULL && \
Pass 2:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9
-pass 2 -b:v 1000K -f avi out.avi
Any help would be greatly appreciated. Thanks.
You don't need to generate a file for the first pass; its purpose is simply to send the frames to the encoder so that it can log stats. So you can discard the muxed output by writing to the null muxer.
So, pass 1:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -pass 1 -an -f null -
Pass 2:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -pass 2 -b:v 1000K out.avi
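By default, pass 1 writes its stats to ffmpeg2pass-0.log in the working directory and pass 2 reads them from there, so the two commands are typically chained:

ffmpeg -y -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -b:v 1000k -pass 1 -an -f null - && \
ffmpeg -s:v 3840x1920 -framerate 30 -i video_framerate_resolution.yuv -c:v libvpx-vp9 -pass 2 -b:v 1000k out.avi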
I have a 25MP uncompressed video file of 100 frames.
I tried to encode it with ffmpeg and the H.264 encoder into a .mp4 file, but the encoding got stuck around the 10th frame.
This is the script:
avconv -y -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 1 -c:a libfdk_aac -b:a 5000K -f mp4 /dev/null && \
avconv -i input.avi -c:v libx264 -preset medium -b:v 5000K -pass 2 -c:a libfdk_aac -b:a 5000K output.mp4
I am running it on a Jetson TK1 with an NVIDIA GPU; is there any way to use hardware-accelerated encoding to make the encoding possible?
If you can, please give me a sample script of something that might work.
Right now, I don't care how much time the encoding takes, as long as it works.
Thank you in advance! :)
I'm transcoding an RTMP stream from a Red5 server for live streaming to an iPhone or iPad. I built the latest ffmpeg from the git repo and use its built-in segmenter to create .ts files and an m3u8 playlist file, as follows:
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
This works fine, but I can't get the segment duration below about 12 seconds, even when it is set to 3 (-segment_time 3). It seems to be caused by the libx264 vcodec.
Am I missing a flag?
By the way, you can simply run the ffmpeg command above by starting the Red5 SimpleBroadcaster example.
I suspect it is because of the GOP size: the segmenter needs an I-frame boundary to be able to create segments.
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -g 90 -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
I added -g 90, which could help: assuming an input around 30 fps, a 90-frame GOP forces a keyframe every 3 seconds, matching -segment_time 3.
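If the input frame rate varies, another option (a sketch, not part of the original answer) is to force keyframes at the segment interval directly instead of relying on GOP length:

# -force_key_frames inserts an I-frame every 3 seconds to line up with -segment_time 3
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -force_key_frames "expr:gte(t,n_forced*3)" -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts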