Slides not shown in video converted to MP4 in BigBlueButton

We are using BigBlueButton 2.4 for webinars. When a webinar recording is processed by BigBlueButton, the playback presentation shows the slides that were uploaded in the webinar, but the converted video that we download does not show those slides (the rest of the video is fine).
Does anyone know how to fix this for this particular version?
The script we are using is below, in case it helps:
#!/bin/sh
# Combine the deskshare and webcam recordings into a single video stream, including the logo
cd /var/bigbluebutton/published/presentation/ || exit 1
meetingId=$1
cd "$meetingId" || exit 1
# Add the webcam audio (denoised with afftdn) to the deskshare video
if [ -e deskshare/deskshare.webm ]
then
    ffmpeg -nostdin -threads 4 -i video/webcams.webm -i deskshare/deskshare.webm -af afftdn deskshare_with_sound.mp4
else
    ffmpeg -nostdin -threads 4 -i video/webcams.webm -af afftdn deskshare_with_sound.mp4
fi
ffmpeg -nostdin -threads 4 -i video/webcams.webm -vf
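Note that the script above only muxes the webcam and deskshare streams. In a published BigBlueButton recording the slides are stored separately (as per-slide images referenced by shapes.svg) and are rendered by the web player at playback time, so they will not appear in the MP4 unless they are composited in explicitly. A minimal sketch for checking which assets a recording actually contains before converting (file names assume BigBlueButton's standard published-recording layout):

```shell
#!/bin/sh
# Hypothetical helper: report which published-recording assets exist
# under a given recording directory before attempting a conversion.
check_recording_assets() {
  rec=$1
  for f in video/webcams.webm deskshare/deskshare.webm shapes.svg; do
    if [ -e "$rec/$f" ]; then
      echo "found: $f"
    else
      echo "missing: $f"
    fi
  done
}
```

For a real recording you would call it as check_recording_assets /var/bigbluebutton/published/presentation/"$meetingId"; if shapes.svg is present but the MP4 lacks slides, the conversion step is simply never reading the slide assets.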

Related

Flutter: How to use flutter_ffmpeg to add overlays like watermarks and texts to a video?

I am trying to implement a video-editing feature in my app. I tried the Tapioca package and the Video_Manipulation package, but neither meets my criteria, so I put my last hope in the flutter_ffmpeg package.
But as I read through its official docs on pub.dev, I couldn't make sense of the commands, and I couldn't find anything about adding widget overlays to a video. There are almost no tutorials on the web explaining how to use it.
So if you have successfully added watermarks/text to a video with the ffmpeg package, please show me how. Thanks!
ffmpeg -i video.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=5:5,drawtext=text='ryanwangTV':x=(w-0)/8:y=(h-4)/10:fontsize=64:fontcolor=white" -c:a copy -movflags +faststart output.mp4
ffmpeg -i video.mp4 -i logo.png
These are the inputs: the video we want to work on and the PNG image we want to apply as a watermark.
video.mp4 has two "parts", a video stream and an audio stream; keep that in mind.
logo.png is a single image, but ffmpeg treats it as a "video" whose duration is a few milliseconds.
How do you refer to the parts of video.mp4 and logo.png?
By mapping: the first input (video.mp4) is [0] and the second input (logo.png) is [1].
To use the video stream of video.mp4 you write [0:v], and the image stream of the PNG is [1:v].
For the watermark, use -filter_complex to "mix" the image onto the video:
"[0:v][1:v]overlay=5:5,drawtext=text='ryanwangTV':x=(w-0)/8:y=(h-4)/10:fontsize=64:fontcolor=white"
[0:v][1:v] selects the video stream of video.mp4 and the image of logo.png.
overlay=5:5 places the image 5 pixels from the left edge and 5 pixels from the top edge of the main video.
x=(w-0)/8 is the x coordinate and y=(h-4)/10 is the y coordinate of the drawn text.
fontsize=64 and fontcolor=white set the size and color, and text= holds the text you want to draw on the video (here 'ryanwangTV').
-c:a copy means: copy the audio from the first file unchanged.
-movflags +faststart moves the index to the front of the file so playback can start quickly in browsers.
output.mp4 is the final file name.
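If it helps to see how the pieces fit together, here is a small sketch that assembles the same filtergraph from named parts (the text 'ryanwangTV' is taken from the command above; everything else is plain string concatenation):

```shell
#!/bin/sh
# Sketch only: build the answer's filtergraph from its components, so
# each option in the explanation above maps to one variable.
text="ryanwangTV"
overlay="[0:v][1:v]overlay=5:5"   # logo 5 px from the top-left corner
drawtext="drawtext=text='$text':x=(w-0)/8:y=(h-4)/10:fontsize=64:fontcolor=white"
filter="$overlay,$drawtext"
echo "$filter"
```

You would then pass the result to ffmpeg as -filter_complex "$filter".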
// Replace the audio on a video
String commandToExecute = '-r 15 -f mp4 -i ${AllUrl.VIDEO_PATH} -f mp3 -i ${AllUrl.AUDIO_PATH} -c:v copy -c:a aac -map 0:v:0 -map 1:a:0 -t $timeLimit -y ${AllUrl.OUTPUT_PATH}';
// Combine audio with an image
String commandToExecute = '-r 15 -f mp3 -i ${AllUrl.AUDIO_PATH} -f image2 -i ${AllUrl.IMAGE_PATH} -pix_fmt yuv420p -t $timeLimit -y ${AllUrl.OUTPUT_PATH}';
// Overlay an image on a video
String commandToExecute = "-i ${AllUrl.VIDEO_PATH} -i ${AllUrl.IMAGE_PATH} -filter_complex overlay=10:10 -codec:a copy ${AllUrl.OUTPUT_PATH}";
// Combine audio with a GIF
String commandToExecute = '-r 15 -f mp3 -i ${AllUrl.AUDIO_PATH} -f gif -re -stream_loop 5 -i ${AllUrl.GIF_PATH} -y ${AllUrl.OUTPUT_PATH}';
// Combine audio with a sequence of images
String commandToExecute = '-r 30 -pattern_type sequence -start_number 01 -f image2 -i ${AllUrl.IMAGES_PATH} -f mp3 -i ${AllUrl.AUDIO_PATH} -y ${AllUrl.OUTPUT_PATH}';

How to http stream FFMPEG encoded frames with VLC

I have a Python script that writes images (NumPy arrays) to standard output.
I want to take these frames, encode them to H.264 with FFmpeg using the GPU, and then hand the result to VLC to expose a stream over HTTP.
Here is a working example of my approach, without the H.264 encoding part:
python3 script.py | ffmpeg -r 24 -s 1920x1080 -f rawvideo -i - -vcodec copy -f avi - | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1920 --rawvid-height=1080 --rawvid-chroma=RV24 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'
Now I'm having trouble writing a working pipeline that does what I need.
Here is the pipeline I'm currently working on. The FFmpeg process does run on the GPU, but VLC cannot handle the incoming stream correctly: I can connect to it from another VLC instance used as a client, but the client then fails with a "cannot open MRL" error.
Here is the pipeline:
python3 script.py | ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -f rawvideo -s 1920x1080 -i - -c:a copy -c:v h264_nvenc -f h264 - | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1920 --rawvid-height=1080 --rawvid-chroma=RV24 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'
I don't understand how to set the VLC parameters to handle the incoming stream. I may also have made mistakes in the ffmpeg part of the pipeline; any suggestion is welcome.
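One way to keep the VLC side manageable is to build the --sout chain from its components. This sketch just reassembles the exact chain used in the pipelines above (all values copied from the question), so each module of the chain (transcode, access, mux) sits on its own line:

```shell
#!/bin/sh
# Sketch: assemble the question's --sout chain piecewise so the
# transcode module, HTTP access module, and muxer are each visible.
transcode="#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}"
access="http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a}"
sout="$transcode:standard{access=$access,mux=mpjpeg,dst=:10001/}"
echo "$sout"
```

You would then launch the receiver as cvlc ... --sout "$sout", which makes it easier to change one module at a time while debugging.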

ffmpeg: cut video loses sound in the last seconds

Hello, I use this command and the last seconds of the video's sound are muted.
ffmpeg -ss <start_time> -i <output_result> -t <duration_of_video> -c copy <name_of_a_file.mp4>
The last 2-3 (sometimes 5-6) seconds are muted: video only, no sound. When I play the file in VLC, playback stops 1 second early; when I post it to Instagram it plays to the end, but the sound stops 2-6 seconds before that. I am using Ubuntu 16.04 LTS. Any suggestions? Thank you.
The problem was the -c copy option, which copies the original streams and can only cut on existing keyframes, so the audio and video can end at slightly different points. Re-encoding with -acodec libmp3lame -vcodec libx264 instead fixed it. So the command is:
ffmpeg -ss <start_time> -i <input_file> -t <duration_of_video> -acodec libmp3lame -vcodec libx264 <name_of_a_file.mp4>

How do you generate a looping animated gif for Facebook with ffmpeg

When I paste a giphy url (like this one) into a facebook post or comment, the gif plays immediately and loops indefinitely. When I upload gifs I make with ffmpeg, neither of those things happens: you have to click a play button to start the animation, and it ends after one pass.
I have a couple of ffmpeg commands I use to create the gifs. They are:
ffmpeg -ss 10 -t 5 -i input.m4v -vf "fps=15,scale=800:-2:flags=lanczos,palettegen" -y palette.png
and
ffmpeg -ss 10.6 -t 5 -i input.m4v -i palette.png -filter_complex "fps=15,scale=800:-1:lanczos[video];[video][1:v]paletteuse" output.gif
The first one generates a custom color palette that is used by the second one to create a high-quality animated gif. I've also tried adding -loop 0 (i.e. ffmpeg -ss 10.6 -t 5 -i input.m4v -i palette.png -filter_complex "fps=15,scale=800:-1:lanczos[video];[video][1:v]paletteuse" -loop 0 output.gif), but that didn't work either.
I also tried uploading the ffmpeg generated images to a personal website and calling them from there but those didn't load at all.
In case it helps, here's a copy of one of the gifs (which autostarts and loops on StackOverflow for me but not on FB)
How does one go about creating a gif that will autostart and loop indefinitely for facebook?
(Note: I've got no problem doing something with a personal website, but I don't want to use Giphy or the other animated-gif sites directly if at all possible. It's also worth pointing out that if I download the image from Giphy and re-upload it, it doesn't autostart either. So this may be something internal to FB, but I'd still like to figure it out.)
The file from Giphy appears to be a WebP, not a GIF:
ffmpeg -i giphy
[…]
Stream #0:0: Video: webp, none, 25 tbr, 25 tbn, 25 tbc
Firefox and mediainfo concur.
So try making a WebP instead:
ffmpeg -ss 10.6 -t 5 -i input.m4v -loop 0 output.webp
Your ffmpeg will have to be compiled with --enable-libwebp.
See ffmpeg -h encoder=libwebp and ffmpeg -h muxer=webp for more options.

Use of FFmpeg in the project

I am taking some screenshots and want to convert them into a video in some format. I know FFmpeg is the solution for this, but I am unable to find the code for using it in my project, and I am also unable to find the library itself. Please give me a step-by-step process with coding details.
An example to help you get started:
ffmpeg -f image2 -pattern_type glob -r .1 -i 'pics*.jpg' -i coolmusic.mp3 -s 640x480 -b:v 180k -b:a 256k final-video.mpg
Read the ffmpeg man page for further details.
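The image2 demuxer expects either a glob pattern (as above) or a zero-padded numbered sequence such as frame%03d.jpg. If your screenshots have arbitrary names, a small helper can rename them into such a sequence first (the frame%03d.jpg naming here is just an illustrative choice):

```shell
#!/bin/sh
# Hypothetical helper: rename arbitrarily named .jpg screenshots in a
# directory into the zero-padded sequence frame001.jpg, frame002.jpg, ...
# that ffmpeg's image2 demuxer accepts with -i frame%03d.jpg.
to_sequence() {
  dir=$1
  n=1
  for f in "$dir"/*.jpg; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    mv "$f" "$dir/$(printf 'frame%03d.jpg' "$n")"
    n=$((n + 1))
  done
}
```

After running to_sequence pics/, the images can be fed to ffmpeg with -i pics/frame%03d.jpg instead of a glob.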