ffmpeg cut video loses sound in the last seconds - command-line

Hello, I use this command and the sound in the last seconds of the video is muted.
ffmpeg -ss <start_time> -i <input_file> -t <duration_of_video> -c copy <name_of_a_file.mp4>
So the last 2-3 to 5-6 seconds have no sound, only video. When I play the file in VLC it stops 1 second before the end; when I post it on Instagram it plays to the end, but the sound stops 2-3 to 5-6 seconds before the video does. I am using Ubuntu 16.04 LTS. Any suggestions? Thank you.

The problem was the -c copy option, which stream-copies with the default codec selection. Instead I re-encode with -acodec libmp3lame -vcodec libx264, so the command is:
ffmpeg -ss <start_time> -i <input_file> -t <duration_of_video> -acodec libmp3lame -vcodec libx264 <name_of_a_file.mp4>
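For example, a concrete invocation (the file names and timestamps here are placeholders; adjust them to your clip):
# Re-encoding instead of stream-copying lets ffmpeg cut at the exact
# requested times, so the audio runs to the end of the trimmed clip:
ffmpeg -ss 00:00:10 -i input.mp4 -t 00:01:30 -acodec libmp3lame -vcodec libx264 cut.mp4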

Related

Issue regarding slides not being shown in video converted to mp4 in BigBlueButton

We are using BigBlueButton 2.4 for webinars. When a webinar video is processed by BigBlueButton, the presentation shows the slides that were uploaded in the webinar, but the converted video that we download does not show those slides (the rest of the video is fine).
Does anyone know how to fix this for this particular version?
The code that we are using is below, in case it helps:
#!/bin/sh
# Convert the deskshare and webcam to a combined video stream including logo
cd /var/bigbluebutton/published/presentation/
meetingId="$1"
cd "$meetingId"
# add webcam sound to deskshare
if [ -e deskshare/deskshare.webm ]
then
ffmpeg -nostdin -threads 4 -i video/webcams.webm -i deskshare/deskshare.webm -af afftdn deskshare_with_sound.mp4
else
ffmpeg -nostdin -threads 4 -i video/webcams.webm -af afftdn deskshare_with_sound.mp4
fi
ffmpeg -nostdin -threads 4 -i video/webcams.webm -vf

Flutter: How to use flutter_ffmpeg to add overlays like watermarks and texts to a video?

I'm trying to add a video-editing feature to my app. I tried the Tapioca package and the Video_Manipulation package, but neither meets my criteria, so I put my last hope in the flutter_ffmpeg package.
But as I read through its official docs on pub.dev, all I could think was "WHAT THE HECK": I can't understand what those commands are used for, and I can't find anything related to adding widget overlays to a video. There is almost no tutorial on the web that explains how to use it.
So if you have successfully added watermarks/text to a video with the ffmpeg package, please show me how. Thanks!
ffmpeg -i video.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=5:5,drawtext=text='ryanwangTV':x=(w-0)/8:y=(h-4)/10:fontsize=64:fontcolor=white" -c:a copy -movflags +faststart output.mp4
ffmpeg -i video.mp4 -i logo.png
These are the video in question and the PNG image that we want to apply as a watermark.
video.mp4 has two "parts", a video stream and an audio stream; keep that in mind.
logo.png is a single image, but ffmpeg treats it as a "video" whose duration is a few milliseconds.
How do you refer to the parts of video.mp4 and logo.png?
By mapping: the first input (video.mp4) is called [0] and the second input (logo.png) is [1].
If you want the video stream of video.mp4 you write [0:v], and the video stream of the PNG is [1:v].
For the watermark, use a filter_complex to "mix" the image onto the video:
"[0:v][1:v]overlay=5:5,drawtext=text='ryanwangTV':x=(w-0)/8:y=(h-4)/10:fontsize=64:fontcolor=white"
[0:v][1:v] selects the video of video.mp4 and the image of logo.png.
overlay=5:5 places the image 5 pixels from the left edge and 5 pixels from the top edge of the main video.
x=(w-0)/8 is the x coordinate of the text and y=(h-4)/10 is its y coordinate.
fontsize=64 and fontcolor=white set the text size and color, and text='ryanwangTV' is the text drawn on the video.
-c:a copy means: copy the audio of the first input without re-encoding.
-movflags +faststart moves the index to the beginning of the file, so playback can start quickly for users watching over the internet in a browser.
output.mp4 is the final file name.
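As a variation, overlay also exposes the main and overlay dimensions as W/H and w/h, so a watermark pinned to the bottom-right corner with a 10-pixel margin could look like this (a sketch reusing the same file names as above):
# Pin logo.png 10 px from the bottom-right corner instead of the top-left:
ffmpeg -i video.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=W-w-10:H-h-10" -c:a copy output.mp4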
// Replace the audio on a video
String commandToExecute = '-r 15 -f mp4 -i ${AllUrl.VIDEO_PATH} -f mp3 -i ${AllUrl.AUDIO_PATH} -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -t $timeLimit -y ${AllUrl.OUTPUT_PATH}';
// Combine audio with an image
String commandToExecute = '-r 15 -f mp3 -i ${AllUrl.AUDIO_PATH} -f image2 -i ${AllUrl.IMAGE_PATH} -pix_fmt yuv420p -t $timeLimit -y ${AllUrl.OUTPUT_PATH}';
// Overlay an image on a video
String commandToExecute = "-i ${AllUrl.VIDEO_PATH} -i ${AllUrl.IMAGE_PATH} -filter_complex overlay=10:10 -codec:a copy ${AllUrl.OUTPUT_PATH}";
// Combine audio with a GIF
String commandToExecute = '-r 15 -f mp3 -i ${AllUrl.AUDIO_PATH} -f gif -re -stream_loop 5 -i ${AllUrl.GIF_PATH} -y ${AllUrl.OUTPUT_PATH}';
// Combine audio with a sequence of images
String commandToExecute = '-r 30 -pattern_type sequence -start_number 01 -f image2 -i ${AllUrl.IMAGES_PATH} -f mp3 -i ${AllUrl.AUDIO_PATH} -y ${AllUrl.OUTPUT_PATH}';
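For reference, the first of these strings corresponds to the following plain ffmpeg invocation, which can be tested on a desktop before wiring it into Flutter (video.mp4, audio.mp3, output.mp4, and the 30-second limit are placeholders standing in for the AllUrl values and $timeLimit):
# Keep the video stream, re-encode the mp3 to AAC, and take one stream from each input:
ffmpeg -r 15 -f mp4 -i video.mp4 -f mp3 -i audio.mp3 -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -t 30 -y output.mp4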

No such file or directory - Windows 10 PowerShell

The command I use to convert HDR videos to SDR has worked just fine until now. Now I always get the error message "F:\_4k_Movies_\Dolittle: no such file or directory". Any idea what's wrong?
.\ffmpeg.exe -i F:\_4k_Movies_\Dolittle 4K.mkv -vf zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p -c:v libx265 -crf 10 -preset fast F:\_4k_Movies_\Dolittle.SDR.mkv
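The path contains a space, so ffmpeg treats everything up to the space ("F:\_4k_Movies_\Dolittle") as the input file name, which is exactly what the error message shows. Quoting the paths should fix it; a sketch of the corrected call (quoting the -vf value as well, since PowerShell would otherwise split the unquoted commas into separate arguments):
# Quote any path that contains spaces, and quote the filter chain:
.\ffmpeg.exe -i "F:\_4k_Movies_\Dolittle 4K.mkv" -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -c:v libx265 -crf 10 -preset fast "F:\_4k_Movies_\Dolittle.SDR.mkv"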

Text streaming with RTMP?

I'm trying to get the output of a bash script into an RTMP stream.
I've successfully done it with FFmpeg using a filter, but the stream stops at random intervals.
I assume that FFmpeg is reading null data from the file.
Currently I write a second file "output.txt", delete "input.txt" (which FFmpeg is reading), and rename "output.txt" to "input.txt".
Is there a way to do this more atomically in bash so that it works? Or is there a more elegant way to turn a text that changes (at most once per second) into an FFmpeg stream?
Here is my current script:
ffmpeg -s 1920x1080 -f rawvideo -pix_fmt rgb24 -r 10 -i /dev/zero -f lavfi -i anullsrc -vcodec h264 -pix_fmt yuv420p -r 10 -b:v 2500k -qscale:v 3 -b:a 712000 -bufsize 512k -vf "drawtext=fontcolor=0xFFFFFF:fontsize=15:fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:textfile=input.txt:x=0:y=0:reload=1" -f flv "rtmp://example.com/key"
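One detail worth knowing: on POSIX filesystems, renaming over an existing file is atomic, so it is the explicit delete step that opens the window in which FFmpeg finds no data. A minimal sketch of the writer loop, assuming the temp file lives on the same filesystem as input.txt (generate_text is a placeholder for whatever produces your text):
# Write the new text to a temp file, then atomically replace input.txt;
# mv never leaves a moment in which input.txt is missing or empty:
while true; do
  generate_text > input.txt.tmp   # generate_text: placeholder for your bash source
  mv -f input.txt.tmp input.txt
  sleep 1
done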

ffmpeg rtmp webcam live stream iphone/ipad segment size too big

I'm transcoding an RTMP stream from a red5 server to live stream on an iPhone or iPad. I built the latest ffmpeg version from the git repo and use the built-in segmenter to create .ts files and an m3u8 playlist file, as follows:
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
This works fine, but I can't get the segment size smaller than about 12 seconds, even though it is set to 3 (-segment_time 3). It seems to be caused by the libx264 vcodec.
Am I missing a flag?
By the way, you can simply run the ffmpeg command above by starting the red5 SimpleBroadcaster example.
I suspect it is because of the GOP size: the segmenter needs an I-frame boundary to be able to create a segment.
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -g 90 -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts
I added -g 90; that could help.
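The reasoning: at roughly 30 fps, -g 90 puts a keyframe every 3 seconds, which lines up with -segment_time 3. If the stream's frame rate varies, a time-based keyframe expression may be more reliable; a sketch using the same inputs as above:
# Force a keyframe every 3 seconds regardless of frame rate, so the
# segment muxer always finds a cut point at each segment boundary:
ffmpeg -probesize 50k -i "rtmp://localhost/oflaDemo/red5StreamDemo live=1" \
-c:v libx264 -b:v 128k -force_key_frames "expr:gte(t,n_forced*3)" -vpre ipod320 -flags -global_header -map 0 \
-f segment -segment_time 3 -segment_list foo.m3u8 -segment_list_flags +live \
-segment_list_type m3u8 -segment_list_size 5 -segment_format mpegts foo%d.ts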