ffmpeg images to video with different start times and durations - command-line

I've recently learned of FFmpeg's existence and I am trying to use it on my WordPress site.
On the site I am working on an HTML/PHP/JS form page that lets users upload pictures and set when each image shows and for how long.
Right now the code I have is only showing one image for the entire video.
<?php if (isset($_POST['button'])) {
echo shell_exec('ffmpeg -t '.$cap_1.' -i /myurl/beach-1866431.jpg -t '.$cap_2.' -i /myurl/orlando-1104481-1.jpg -filter_complex "scale=1280:-2" -i /myurl/audio.mp3 -c:v libx264 -pix_fmt yuv420p -t 30 -y /myurl/'.$v_title.'.mp4 2>&1');
} ?>
I tried setting -t for the duration with my PHP variables, but nothing changes, and I can't figure out what to use for the start time of each image.
Also, when writing shell_exec commands, instead of putting it all on one line, is there a way to write a working command in PHP files with line breaks? For example -
<?php if (isset($_POST['button'])) {
echo shell_exec('ffmpeg -t '.$cap_1.' -i /myurl/beach-1866431.jpg
-t '.$cap_2.' -i /myurl/orlando-1104481-1.jpg
-filter_complex "scale=1280:-2"
-i /myurl/audio.mp3
-c:v libx264 -pix_fmt yuv420p -t 30 -y /myurl/'.$v_title.'.mp4 2>&1');
} ?>
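As a side note on the line breaks: shell_exec() hands the whole string to the shell, and the shell treats an unescaped newline as the end of a command, so a multi-line command only works if every line except the last ends with a backslash. A minimal shell-side sketch (file names and durations are placeholders, not the real form values):
ffmpeg \
-t 5 -i /myurl/image1.jpg \
-t 5 -i /myurl/image2.jpg \
-i /myurl/audio.mp3 \
-c:v libx264 -pix_fmt yuv420p -t 30 -y /myurl/output.mp4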
EDIT
So far the concat text file seems to work; however, I do not know how to set the start times for each image --
ffconcat version 1.0
file /path/beach-1866431.jpg
duration 3
file /path/orlando-1104481-1.jpg
duration 5
file /path/beach-1866431.jpg
And the ffmpeg command -
shell_exec('ffmpeg -f concat -safe 0 -i /path/file.txt -filter_complex "scale=1280:-2" -i /path/audio.mp3 -c:v libx264 -pix_fmt yuv420p -t 30 -y /path/'.$v_title.'.mp4 2>&1');
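For reference, the concat demuxer has no explicit start-time directive: each entry begins where the previous one ended, so the start times are just the running total of the durations. In the file above the second image therefore starts at t=3 and the first image returns at t=8. A minimal sketch with illustrative paths and durations, where the second image starts at 4 seconds (the last file is repeated because the demuxer may ignore the final duration; the output -t caps the total length anyway):
ffconcat version 1.0
file /path/first.jpg
duration 4
file /path/second.jpg
duration 6
file /path/second.jpg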
EDIT 2
Using the concat method suggested, my code now looks like this --
<?php if (isset($_POST['button'])) {
echo shell_exec('ffmpeg \
-loop 1 -framerate 24 -t 10 -i goldmetal.jpg \
-i 3251.mp3 \
-loop 1 -framerate 24 -t 10 -i cash-register-1885558.jpg \
-loop 1 -framerate 24 -t 10 -i ice-1915849.jpg \
-filter_complex "[0:v][1:a][2:v][3:v]concat=n=4:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 /path/'.$v_title.'.mp4 2>&1');
} ?>
But I'm getting this error --
Stream specifier ':v' in filtergraph description [0:v][1:a][2:v][3:v]concat=n=4:v=1:a=1[v][a] matches no streams.
EDIT 3
I almost got it working as needed, using two commands: one for the images and fades, the other to combine the audio. The only issue I'm having is changing the time each image shows up. --
echo shell_exec('ffmpeg \
-loop 1 -t 5 -i '.$thepath .'/'.$v_pix1.' \
-loop 1 -t 5 -i ' .$thepath . '/cash-register-1885558.jpg \
-loop 1 -t 5 -i ' .$thepath . '/ice-1915849.jpg \
-loop 1 -t 5 -i '.$thepath .'/'.$v_pix1.' \
-loop 1 -t 5 -i ' .$thepath . '/ice-1915849.jpg \
-filter_complex \
"[0:v]setpts=PTS-STARTPTS,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v0]; \
[1:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v1]; \
[2:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v2]; \
[3:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v3]; \
[4:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v4]; \
[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" -y '.$thepath .'/fadeout.mp4 2>&1');
echo shell_exec('ffmpeg \
-i '.$thepath .'/fadeout.mp4 \
-i '.$thepath .'/3251.mp3 \
-filter_complex "[0:v:0][1:a:0] concat=n=1:v=1:a=1 [vout] [aout]" -map "[vout]" -map "[aout]" -c:v libx264 -r 1 -y '.$thepath .'/mergetest.mp4 2>&1');

This answer addresses the ffmpeg-specific questions in your broad, multi-part question.
This example will take images of any arbitrary size, fit them into a 1280x720 box, fade between images, and play the audio at the same time. The video ends when the images or the audio end, whichever is shorter.
ffmpeg \
-i audio1.mp3 \
-i audio2.wav \
-loop 1 -t 5 -i image1.jpg \
-loop 1 -t 5 -i image2.png \
-loop 1 -t 5 -i image3.jpg \
-filter_complex \
"[2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=out:st=4:d=1[v1]; \
[3:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
[4:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
[0:a][1:a]amerge=inputs=2[a]; \
[v1][v2][v3]concat=n=3:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -map "[a]" -ac 2 -shortest -movflags +faststart output.mp4
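If each image should stay on screen for a different length of time, a hedged variation (the 5- and 8-second figures are just examples): change that input's -t and move its fade-out start to the duration minus the fade length. A trimmed sketch with two images and one audio track:
ffmpeg \
-i audio1.mp3 \
-loop 1 -t 5 -i image1.jpg \
-loop 1 -t 8 -i image2.png \
-filter_complex \
"[1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=out:st=4:d=1[v1]; \
[2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=7:d=1[v2]; \
[v1][v2]concat=n=2:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -map 0:a -shortest -movflags +faststart output.mp4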

Related

Alternative to long lines

Context: I have a script in which long lines are being stripped.
In bash, I could do this
CURL=/usr/bin/curl
declare -a ARGS=(
--silent
--location
--output /dev/null
--write-out "%{http_code}"
--request GET
--max-time 30
--retry 3
"https://httpbin.org/status/200"
)
http_code=$("$CURL" "${ARGS[@]}")
However, ash does not have arrays. Is there an alternative in ash or sh to avoid long lines, like I can in bash?
Long lines can be split with backslashes:
CURL=/usr/bin/curl
curlWithArgs(){
"$CURL" \
--silent \
--location \
--output /dev/null \
--write-out "%{http_code}" \
--request GET \
--max-time 30 \
--retry 3 \
"https://httpbin.org/status/200" \
;
}
http_code=$(curlWithArgs)
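For illustration, the captured status code can then be used like any other variable in plain sh (the 200 check is just an example):
if [ "$http_code" -eq 200 ]; then
    echo "endpoint is healthy"
else
    echo "unexpected status: $http_code" >&2
fi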

How to convert STL to rotating GIF using OpenSCAD?

Given an STL file, how can you convert it to an animated gif using the command line (bash)?
I've discovered a few articles that vaguely describe how to do this through the GUI. I've been able to generate the following; however, the animation is very rough and the shadows jump around.
for ((angle=0; angle <=360; angle+=5)); do
openscad /dev/null -o dump$angle.png -D "cube([2,3,4]);" --imgsize=250,250 --camera=0,0,0,45,0,$angle,25
done
# https://unix.stackexchange.com/a/489210/39263
ffmpeg \
-framerate 24 \
-pattern_type glob \
-i 'dump*.png' \
-r 8 \
-vf scale=512:-1 \
out.gif \
;
OpenSCAD has a built-in --animate X parameter; however, using that likely won't work when passing in the camera angle as a parameter.
Resources
https://github.com/openscad/openscad/issues/1632#issuecomment-219203658
https://blog.prusaprinters.org/how-to-animate-models-in-openscad_29523/
https://github.com/openscad/openscad/issues/1573
https://github.com/openscad/openscad/pull/1808
https://forum.openscad.org/Product-Video-produced-with-OpenSCAD-td15783.html
Bash + Docker
Converting an STL to a GIF requires several steps:
1. Center the STL at the origin
2. Convert the STL into a collection of .PNG files from different angles
3. Combine those PNG files into a .gif file
Assuming you have Docker installed, you can run the following to convert an STL into an animated GIF.
(Note: a more up-to-date version of this script is available at spuder/CAD-scripts/stl2gif.)
This depends on 3 Docker containers:
spuder/stl2origin
openscad/openscad:2021.01
linuxserver/ffmpeg:version-4.4-cli
# 1. Use spuder/stl2origin:latest docker container to center the file at origin
# A file with the offsets will be saved to `${MYTMPDIR}/foo.sh`
file=/tmp/foo.stl
MYTMPDIR="$(mktemp -d)"
trap 'rm -rf -- "$MYTMPDIR"' EXIT
docker run \
-e OUTPUT_BASH_FILE=/output/foo.sh \
-v $(dirname "$file"):/input \
-v $MYTMPDIR:/output \
--rm spuder/stl2origin:latest \
"/input/$(basename "$file")"
cp "${file}" "$MYTMPDIR/foo.stl"
# 2. Read ${MYTMPDIR}/foo.sh and load the offset variables ($XTRANS, $XMID,$YTRANS,$YMID,$ZTRANS,$ZMID)
# Save the new centered STL to `$MYTMPDIR/foo-centered.stl`
source $MYTMPDIR/foo.sh
docker run \
-v "$MYTMPDIR:/input" \
-v "$MYTMPDIR:/output" \
openscad/openscad:2021.01 openscad /dev/null -D "translate([$XTRANS-$XMID,$YTRANS-$YMID,$ZTRANS-$ZMID])import(\"/input/foo.stl\");" -o "/output/foo-centered.stl"
# 3. Convert the STL into 60 .PNG images with the camera rotating around the object. Note `$t` is a built in openscad variable that is automatically set based on time when --animate option is used
# OSX users will need to replace `openscad` with `/Applications/OpenSCAD.app/Contents/MacOS/OpenSCAD`
# Save all images to $MYTMPDIR/foo{0..60}.png
# This is not yet running in a docker container due to a bug: https://github.com/openscad/openscad/issues/4028
openscad /dev/null \
-D '$vpr = [60, 0, 360 * $t];' \
-o "${MYTMPDIR}/foo.png" \
-D "import(\"${MYTMPDIR}/foo-centered.stl\");" \
--imgsize=600,600 \
--animate 60 \
--colorscheme "Tomorrow Night" \
--viewall --autocenter
# 4. Use ffmpeg to combine all images into a .GIF file
# Tune framerate (15) and -r (60) to produce a faster/slower/smoother image
yes | ffmpeg \
-framerate 15 \
-pattern_type glob \
-i "$MYTMPDIR/*.png" \
-r 60 \
-vf scale=512:-1 \
"${file}.gif" \
;
rm -rf -- "$MYTMPDIR"
(The original post includes images of the source STL file, the GIF without centering, and the GIF with centering.)

ffmpeg transcoding to flv low bitrate

Trying to combine audio and a picture into an mp4, I can hit my target bitrate:
ffmpeg -y \
-loop 1 -r 4 -i ./image.jpg \
-stream_loop -1 -i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-filter:v "crop=2560:1440:0:0" \
-video_size 2560x1440 \
-b:v 13000k -minrate 13000k -maxrate 16000k -bufsize 26000k \
-preset ultrafast \
-tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f mp4 tmp.mp4
When I change the output container from mp4 to flv, the bitrate drops:
ffmpeg -y \
-loop 1 -r 4 -i ./image.jpg \
-stream_loop -1 -i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-filter:v "crop=2560:1440:0:0" \
-video_size 2560x1440 \
-b:v 13000k -minrate 13000k -maxrate 16000k -bufsize 26000k \
-preset ultrafast \
-tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f flv tmp.flv
When I write to tmp.flv, the bitrate drops to about 1500k, but with tmp.mp4 the bitrate stays close to what I specify with -b:v 13000k.
Why does flv cause the bitrate to drop so low?

cannot open connection network is unreachable

I want to live stream to YouTube from my Raspberry Pi 3. The script works properly when I run it manually from the shell. When I add the script to /etc/rc.local (sudo nano /etc/rc.local) so it runs automatically on startup, it only works the first time the Raspberry Pi starts; after that it stops working and gives the error 'cannot open connection network is unreachable'.
Here is the script that I use for live streaming to YouTube from the Raspberry Pi.
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | avconv -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[your-secret-key-here]
I want to run this script automatically, without any error, each time the Raspberry Pi starts up.
For more information, check this link.
After carrying out many attempts I found the solution.
It is very simple: just make a new Python file with any name (in my case livestream.py) and paste in the code.
import os
# the whole capture-and-stream pipeline is passed to the shell as a single string
os.system("raspivid -o - -t 0 -vf -hf -fps 25 -b 600000 | avconv -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[your-secret-key-here]")
It can also be run with PHP, by installing PHP on the Raspberry Pi:
sudo apt-get install php5-fpm php5-mysql
and running the file livestream.php; the PHP code is
<?php
exec("raspivid -o - -t 0 -vf -hf -fps 25 -b 600000 | avconv -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[your-secret-key-here]");
?>
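For completeness, a minimal sketch of launching it from /etc/rc.local is below; the path /home/pi/livestream.py is an assumption, and the ping loop is one common workaround for the 'network is unreachable' error when rc.local runs before the network is up:
# added to /etc/rc.local just before the final "exit 0" (illustrative)
(
  # wait until the RTMP endpoint is reachable before starting the stream
  until ping -c 1 a.rtmp.youtube.com >/dev/null 2>&1; do sleep 2; done
  python /home/pi/livestream.py
) &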

Weka: doing bagging from the command line

I can train a model using Bagging from the command line like this --
java -Xmx512m -cp $CLASSPATH weka.classifiers.meta.Bagging -P 100 -S 1 -num-slots 1 -I 10 \
-split-percentage 66 \
-t $traindata \
-d $model \
-W weka.classifiers.trees.REPTree -- -M 2 -V 0.001 -N 3 -S 1 -L -1 -I 0.0 \
> $out
But I can't reuse the same model to do prediction from the command line. I guess the command should be something like --
java -Xmx512m -cp $CLASSPATH weka.classifiers.meta.Bagging \
-l $model \
-T $testdata \
-W weka.classifiers.trees.REPTree \
-p 0 \
> $wkresult
But it does not work. Any ideas?
EDIT: However, when I do this with a single classifier (i.e. no bagging), it works. The commands were like this --
java -Xmx512m -cp $CLASSPATH weka.classifiers.bayes.NaiveBayesMultinomial \
-split-percentage 66 \
-t $traindata \
-d $model \
> $out
java -Xmx512m -cp $CLASSPATH weka.classifiers.bayes.NaiveBayesMultinomial \
-T $testdata \
-l $model \
-p 0 \
> $wkresult
You need to call a different class to evaluate the model. The command line should be something like
java -cp $CLASSPATH weka.classifiers.Evaluation weka.classifiers.meta.Bagging \
-T $testdata -l $model
You may need to specify some of the additional options you gave when training the classifier. Also have a look at the command-line options for the evaluation class. More information here.
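For example, to also dump per-instance predictions the way the single-classifier command did, something like the following sketch should work, assuming the Evaluation wrapper accepts the same -p option as the classifier classes (not verified here):
java -Xmx512m -cp $CLASSPATH weka.classifiers.Evaluation weka.classifiers.meta.Bagging \
-T $testdata \
-l $model \
-p 0 \
> $wkresult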