cannot open connection network is unreachable - raspberry-pi

I want to live stream to YouTube from my Raspberry Pi 3. The script works properly when I run it manually from the shell. When I add the script to /etc/rc.local (via sudo nano /etc/rc.local) to run it automatically on startup, it works only the first time the Raspberry Pi starts; after that it stops working and gives the error 'cannot open connection network is unreachable'.
Here is the script that I use for live streaming to YouTube from the Raspberry Pi:
raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | avconv -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[your-secret-key-here]
I want to run this script automatically every time the Raspberry Pi starts up, without any error.
For more information, check this link.

After carrying out many attempts, I found the solution.
It is very simple: just make a new Python file with any name (in my case livestream.py) and paste in this code:
import os

# os.system() takes the whole shell pipeline as a single string
os.system("raspivid -o - -t 0 -vf -hf -fps 25 -b 600000 | avconv -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[your-secret-key-here]")
Alternatively, it can be run with PHP. Install PHP on the Raspberry Pi with:
sudo apt-get install php5-fpm php5-mysql
and run the file livestream.php; the PHP code is:
<?php
exec("raspivid -o - -t 0 -vf -hf -fps 25 -b 600000 | avconv -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/[your-secret-key-here]");
?>
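The likely root cause of the error is that /etc/rc.local runs before the network interface is up, so the RTMP connection fails at boot. A minimal sketch of a wait loop you could place in front of the stream command (assuming ping is available; 8.8.8.8 is a placeholder host, substitute one you expect to be reachable):

#!/bin/sh
# Block until the network is reachable, then start the stream.
# 8.8.8.8 is an assumption; use any host you expect to be up.
until ping -c 1 -W 2 8.8.8.8 >/dev/null 2>&1; do
    sleep 2
done
# ...now launch the raspivid | avconv pipeline shown above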

Related

ffmpeg transcoding to flv low bitrate

Trying to combine an audio file and a picture into an mp4, I can hit my target bitrate:
ffmpeg -y \
-loop 1 -r 4 -i ./image.jpg \
-stream_loop -1 -i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-filter:v "crop=2560:1440:0:0" \
-video_size 2560x1440 \
-b:v 13000k -minrate 13000k -maxrate 16000k -bufsize 26000k \
-preset ultrafast \
-tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f mp4 tmp.mp4
When I change the output container from mp4 to flv, the bitrate drops:
ffmpeg -y \
-loop 1 -r 4 -i ./image.jpg \
-stream_loop -1 -i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-filter:v "crop=2560:1440:0:0" \
-video_size 2560x1440 \
-b:v 13000k -minrate 13000k -maxrate 16000k -bufsize 26000k \
-preset ultrafast \
-tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f flv tmp.flv
When I write to tmp.flv, the bitrate drops to about 1500k, but with tmp.mp4 the bitrate stays close to what I specify with -b:v 13000k.
Why does flv cause the bitrate to drop so low?
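One way to check the achieved rates is to read the container-level bit rate back with ffprobe; a quick sketch, assuming the output files are named as above:

for f in tmp.mp4 tmp.flv; do
  echo "$f:"
  ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1 "$f"
done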

How can I do Facebook Live (page) using FFMPEG

SOLVED & EDITED -
Catch: doing a live stream on Facebook with FFMPEG.
In the past it was easy; I did it many times, as Facebook was using RTMP.
But now Facebook uses RTMPS, so I am getting various errors; I have tried 100 commands.
I have an image test.png and an audio file test.m4a (it's a podcast), and the Facebook stream key is 1234.
(I have tried 100 kinds of commands, so I can't post them here, and I can't post the errors either.)
So please, can someone help me go live on my Facebook page with the image + m4a file?
I prefer CentOS, but I will manage Ubuntu if you prefer.
Regards..
Solved: see my answer; it might help someone.
SOLVED
Hope it will be useful for someone.
I was trying all possible results from Google and Stack Overflow.
Nothing worked.
Then I did it my own way, and it worked after 2 hours.
I will stream the video out.mp4 from my server to Facebook.
Install FFmpeg 4 (older versions have issues with RTMPS):
ffmpeg -re -y -i out.mp4 -c:a copy -ac 1 -ar 44100 -b:a 96k -vcodec libx264 -pix_fmt yuv420p -tune zerolatency -f flv -maxrate 2000k -preset veryfast "rtmps://live-api-s.facebook.com:443/rtmp/key"
You can stream to as many platforms as you want.
PS: if you want to stream image + audio, replace out.mp4.
But I used ffmpeg to make a video from the m4a file first (it will stream without lag or buffering).
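For the image + audio case from the question (test.png and test.m4a), a sketch of that intermediate step could look like this (filenames are taken from the question; the audio bitrate is a placeholder):

# Loop the still image for the length of the audio, then stop (-shortest)
ffmpeg -loop 1 -i test.png -i test.m4a -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -b:a 96k -shortest out.mp4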
If you are on CentOS or Red Hat, you may find it difficult to install ffmpeg, or your installation may be missing some of the libraries/protocols ffmpeg needs to do FB Live. For this purpose, running a Docker image is a great idea:
Install docker and pull ffmpeg docker image
Run docker ffmpeg image with necessary arguments
To install Docker, run the following commands.
Remove older versions of Docker, if installed:
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
Install the Docker yum repository:
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker and start/enable the Docker service:
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
Reference : https://docs.docker.com/engine/install/centos/
To run ffmpeg for FB Live, run the following command:
docker run -v $(pwd):$(pwd) -w $(pwd) jrottenberg/ffmpeg -re -y -i [VIDEO_FILE] -c:a copy -ac 1 -ar 44100 -b:a 96k -vcodec libx264 -pix_fmt yuv420p -tune zerolatency -f flv -maxrate 2000k -preset veryfast "rtmps://live-api-s.facebook.com:443/rtmp/[KEY]"
Replace [VIDEO_FILE] with the file you want to stream live to FB and [KEY] with the stream key from Facebook. Reference: https://hub.docker.com/r/jrottenberg/ffmpeg/
Please note that you need to visit facebook live page (https://www.facebook.com/live/producer/) before you can run the above command.
Thanks, mate, for your answer. Even though you are late, I did something similar anyway.
I had already figured it out using DOCKER,
using Restreamer: https://datarhei.github.io/restreamer/
It works on both CentOS and Ubuntu.
I also tried it manually; it was very difficult to install and set up. I did it, but I would not recommend it.

(FFmpeg) VP9 Vaapi encoding to a .mp4 or .webm container from given official ffmpeg example

I'm trying to implement a VP9 hardware-accelerated encoding process. I followed FFmpeg's official GitHub example (here -> vaapi_encode.c).
But the given example only encodes a .yuv file to a .h264 file; I would like to save the frames to either an .mp4 or a .webm container, and to have the ability to control the quality, etc.
I'm not reading frames from a file; I'm collecting frames from a live feed. When I have a full 5 seconds of frames from the live feed, I encode those frames with vp9_vaapi into a 5-second .mp4 file.
I'm able to save all 5 seconds of frames from my live feed to an .mp4 or .webm file, but they can't be played back correctly (more precisely: the player keeps loading and never plays them).
(Screenshots were attached here comparing the result from the official example with the CPU-encoded VP9 .mp4 result.)
You will need to use FFmpeg directly, where you may optionally add the vp9_superframe and the vp9_raw_reorder bitstream filters in the same command line if you enable B-frames in the vp9_vaapi encoder.
Example:
ffmpeg -threads 4 -vaapi_device /dev/dri/renderD128 \
-hwaccel vaapi -hwaccel_output_format vaapi \
-i http://server:port \
-c:v vp9_vaapi -global_quality 50 -bf 1 \
-bsf:v vp9_raw_reorder,vp9_superframe \
-f segment -segment_time 5 -segment_format_options movflags=+faststart output%03d.mp4
Adjust your input and output paths/urls as needed.
What this command does:
It will create 5 second long mp4 segments, via the segment muxer.
Note the usage of movflags=+faststart, and how it is passed as a format option to the underlying mp4 muxer via the -segment_format_options flag above.
The segment lengths may not be exactly 5 seconds long, as each segment begins (gets cut) on a keyframe.
However, I'd not recommend enabling B-frames in that encoder, as these bitstream filters have other undesired effects, such as mucking around with the encoder's rate control and triggering bugs like this one. This is not desirable in a production environment. This is why the scripts below do not have that option enabled, and instead, we define a set rate control mode directly in the encoder options.
If you need to take advantage of 1:N encoding with VAAPI, use these snippets:
If you need to deinterlace, call up the deinterlace_vaapi filter:
ffmpeg -loglevel debug -threads 4 \
-init_hw_device vaapi=va:/dev/dri/renderD128 -hwaccel vaapi \
-hwaccel_device va -filter_hw_device va \
-hwaccel_output_format vaapi \
-i 'http://server:port' \
-filter_complex "[0:v]deinterlace_vaapi,split=3[n0][n1][n2]; \
[n0]scale_vaapi=1152:648[v0]; \
[n1]scale_vaapi=848:480[v1];
[n2]scale_vaapi=640:360[v2]" \
-b:v:0 2250k -maxrate:v:0 2250k -bufsize:v:0 360k -c:v:0 vp9_vaapi -g:v:0 50 -r:v:0 25 -rc_mode:v:0 2 \
-b:v:1 1750k -maxrate:v:1 1750k -bufsize:v:1 280k -c:v:1 vp9_vaapi -g:v:1 50 -r:v:1 25 -rc_mode:v:1 2 \
-b:v:2 1000k -maxrate:v:2 1000k -bufsize:v:2 160k -c:v:2 vp9_vaapi -g:v:2 50 -r:v:2 25 -rc_mode:v:2 2 \
-c:a aac -b:a 128k -ar 48000 -ac 2 \
-flags -global_header -f tee -use_fifo 1 \
-map "[v0]" -map "[v1]" -map "[v2]" -map 0:a \
"[select=\'v:0,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path0/output%03d.mp4| \
[select=\'v:1,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path1/output%03d.mp4| \
[select=\'v:2,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path2/output%03d.mp4"
Without deinterlacing:
ffmpeg -loglevel debug -threads 4 \
-init_hw_device vaapi=va:/dev/dri/renderD128 -hwaccel vaapi \
-hwaccel_device va -filter_hw_device va -hwaccel_output_format vaapi \
-i 'http://server:port' \
-filter_complex "[0:v]split=3[n0][n1][n2]; \
[n0]scale_vaapi=1152:648[v0]; \
[n1]scale_vaapi=848:480[v1];
[n2]scale_vaapi=640:360[v2]" \
-b:v:0 2250k -maxrate:v:0 2250k -bufsize:v:0 2250k -c:v:0 vp9_vaapi -g:v:0 50 -r:v:0 25 -rc_mode:v:0 2 \
-b:v:1 1750k -maxrate:v:1 1750k -bufsize:v:1 1750k -c:v:1 vp9_vaapi -g:v:1 50 -r:v:1 25 -rc_mode:v:1 2 \
-b:v:2 1000k -maxrate:v:2 1000k -bufsize:v:2 1000k -c:v:2 vp9_vaapi -g:v:2 50 -r:v:2 25 -rc_mode:v:2 2 \
-c:a aac -b:a 128k -ar 48000 -ac 2 \
-flags -global_header -f tee -use_fifo 1 \
-map "[v0]" -map "[v1]" -map "[v2]" -map 0:a \
"[select=\'v:0,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path0/output%03d.mp4| \
[select=\'v:1,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path1/output%03d.mp4| \
[select=\'v:2,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path2/output%03d.mp4"
Using Intel's QuickSync (on supported platforms):
On Intel Icelake and above, you can use the vp9_qsv encoder wrapper with the following known limitations (for now):
(a). You must enable low_power mode because only the VDENC decode path is exposed by the iHD driver for now.
(b). Coding option1 and extra_data are not supported by MSDK.
(c). The IVF header will be inserted by MSDK by default, but it is not needed for FFmpeg, so it is disabled here by default.
See the examples below:
If you need to deinterlace, call up the vpp_qsv filter:
ffmpeg -nostdin -y -fflags +genpts \
-init_hw_device vaapi=va:/dev/dri/renderD128,driver=iHD \
-filter_hw_device va -hwaccel vaapi -hwaccel_output_format vaapi \
-threads 4 -vsync 1 -async 1 \
-i 'http://server:port' \
-filter_complex "[0:v]hwmap=derive_device=qsv,format=qsv,vpp_qsv=deinterlace=2:async_depth=4,split[n0][n1][n2]; \
[n0]vpp_qsv=w=1152:h=648:async_depth=4[v0]; \
[n1]vpp_qsv=w=848:h=480:async_depth=4[v1];
[n2]vpp_qsv=w=640:h=360:async_depth=4[v2]" \
-b:v:0 2250k -maxrate:v:0 2250k -bufsize:v:0 360k -c:v:0 vp9_qsv -g:v:0 50 -r:v:0 25 -low_power:v:0 2 \
-b:v:1 1750k -maxrate:v:1 1750k -bufsize:v:1 280k -c:v:1 vp9_qsv -g:v:1 50 -r:v:1 25 -low_power:v:1 2 \
-b:v:2 1000k -maxrate:v:2 1000k -bufsize:v:2 160k -c:v:2 vp9_qsv -g:v:2 50 -r:v:2 25 -low_power:v:2 2 \
-c:a aac -b:a 128k -ar 48000 -ac 2 \
-flags -global_header -f tee -use_fifo 1 \
-map "[v0]" -map "[v1]" -map "[v2]" -map 0:a \
"[select=\'v:0,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path0/output%03d.mp4| \
[select=\'v:1,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path1/output%03d.mp4| \
[select=\'v:2,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path2/output%03d.mp4"
Without deinterlacing:
ffmpeg -nostdin -y -fflags +genpts \
-init_hw_device vaapi=va:/dev/dri/renderD128,driver=iHD \
-filter_hw_device va -hwaccel vaapi -hwaccel_output_format vaapi \
-threads 4 -vsync 1 -async 1 \
-i 'http://server:port' \
-filter_complex "[0:v]hwmap=derive_device=qsv,format=qsv,split=3[n0][n1][n2]; \
[n0]vpp_qsv=w=1152:h=648:async_depth=4[v0]; \
[n1]vpp_qsv=w=848:h=480:async_depth=4[v1];
[n2]vpp_qsv=w=640:h=360:async_depth=4[v2]" \
-b:v:0 2250k -maxrate:v:0 2250k -bufsize:v:0 2250k -c:v:0 vp9_qsv -g:v:0 50 -r:v:0 25 -low_power:v:0 2 \
-b:v:1 1750k -maxrate:v:1 1750k -bufsize:v:1 1750k -c:v:1 vp9_qsv -g:v:1 50 -r:v:1 25 -low_power:v:1 2 \
-b:v:2 1000k -maxrate:v:2 1000k -bufsize:v:2 1000k -c:v:2 vp9_qsv -g:v:2 50 -r:v:2 25 -low_power:v:2 2 \
-c:a aac -b:a 128k -ar 48000 -ac 2 \
-flags -global_header -f tee -use_fifo 1 \
-map "[v0]" -map "[v1]" -map "[v2]" -map 0:a \
"[select=\'v:0,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path0/output%03d.mp4| \
[select=\'v:1,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path1/output%03d.mp4| \
[select=\'v:2,a\':f=segment:segment_time=5:segment_format_options=movflags=+faststart]$output_path2/output%03d.mp4"
Note that we use the vpp_qsv filter with the async_depth option set to 4. This massively improves transcode performance over using scale_qsv and deinterlace_qsv. See this commit on FFmpeg's git.
Note: If you use the QuickSync path, note that MFE (Multi-Frame encoding mode) will be enabled by default if the Media SDK library on your system supports it.
Formulae used to derive the snippet above:
Optimal bufsize:v = target bitrate (-b:v value)
Set GOP size as: 2 * fps (GOP interval set to 2 seconds).
We limit the thread counts for the video encoder(s) via -threads:v to prevent VBV overflows.
Resolution ladder used: 648p, 480p and 360p in 16:9; see this link.
Adjust this as needed.
Substitute the variables above ($output_path{0-2}, the input, etc) as needed.
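For example, a minimal sketch of that substitution (the paths are placeholders, not part of the snippets above):

# Hypothetical output directories for the three renditions
output_path0=/srv/vod/648p
output_path1=/srv/vod/480p
output_path2=/srv/vod/360p
mkdir -p "$output_path0" "$output_path1" "$output_path2"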
Test and report back.
Current observations:
On my system, I'm able to encode up to 5 streams in real time with VP9, using Apple's recommended resolutions and bit rates for HEVC encoding for HLS as a benchmark. (A screenshot of system load, etc. was attached here.)
Platform details:
I'm on a Coffee-lake system, using the i965 driver for this workflow:
libva info: VA-API version 1.5.0
libva info: va_getDriverName() returns 0
libva info: User requested driver 'i965'
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_5
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.5 (libva 2.4.0.pre1)
vainfo: Driver version: Intel i965 driver for Intel(R) Coffee Lake - 2.4.0.pre1 (2.3.0-11-g881e67a)
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointEncSliceLP
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointEncSliceLP
VAProfileH264MultiviewHigh : VAEntrypointVLD
VAProfileH264MultiviewHigh : VAEntrypointEncSlice
VAProfileH264StereoHigh : VAEntrypointVLD
VAProfileH264StereoHigh : VAEntrypointEncSlice
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileNone : VAEntrypointVideoProc
VAProfileJPEGBaseline : VAEntrypointVLD
VAProfileJPEGBaseline : VAEntrypointEncPicture
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileVP8Version0_3 : VAEntrypointEncSlice
VAProfileHEVCMain : VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointEncSlice
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointEncSlice
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileVP9Profile0 : VAEntrypointEncSlice
VAProfileVP9Profile2 : VAEntrypointVLD
My ffmpeg build info:
ffmpeg -buildconf
ffmpeg version N-93308-g1144d5c96d Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-27ubuntu1~18.04)
configuration: --pkg-config-flags=--static --prefix=/home/brainiarc7/bin --bindir=/home/brainiarc7/bin --extra-cflags=-I/home/brainiarc7/bin/include --extra-ldflags=-L/home/brainiarc7/bin/lib --enable-cuda-nvcc --enable-cuvid --enable-libnpp --extra-cflags=-I/usr/local/cuda/include/ --extra-ldflags=-L/usr/local/cuda/lib64/ --enable-nvenc --extra-cflags=-I/opt/intel/mediasdk/include --extra-ldflags=-L/opt/intel/mediasdk/lib --extra-ldflags=-L/opt/intel/mediasdk/plugins --enable-libmfx --enable-libass --enable-vaapi --disable-debug --enable-libvorbis --enable-libvpx --enable-libdrm --enable-opencl --enable-gpl --cpu=native --enable-opengl --enable-libfdk-aac --enable-libx265 --enable-openssl --extra-libs='-lpthread -lm' --enable-nonfree
libavutil 56. 26.100 / 56. 26.100
libavcodec 58. 47.103 / 58. 47.103
libavformat 58. 26.101 / 58. 26.101
libavdevice 58. 6.101 / 58. 6.101
libavfilter 7. 48.100 / 7. 48.100
libswscale 5. 4.100 / 5. 4.100
libswresample 3. 4.100 / 3. 4.100
libpostproc 55. 4.100 / 55. 4.100
configuration:
--pkg-config-flags=--static
--prefix=/home/brainiarc7/bin
--bindir=/home/brainiarc7/bin
--extra-cflags=-I/home/brainiarc7/bin/include
--extra-ldflags=-L/home/brainiarc7/bin/lib
--enable-cuda-nvcc
--enable-cuvid
--enable-libnpp
--extra-cflags=-I/usr/local/cuda/include/
--extra-ldflags=-L/usr/local/cuda/lib64/
--enable-nvenc
--extra-cflags=-I/opt/intel/mediasdk/include
--extra-ldflags=-L/opt/intel/mediasdk/lib
--extra-ldflags=-L/opt/intel/mediasdk/plugins
--enable-libmfx
--enable-libass
--enable-vaapi
--disable-debug
--enable-libvorbis
--enable-libvpx
--enable-libdrm
--enable-opencl
--enable-gpl
--cpu=native
--enable-opengl
--enable-libfdk-aac
--enable-libx265
--enable-openssl
--extra-libs='-lpthread -lm'
--enable-nonfree
And output from inxi:
inxi -F
System: Host: cavaliere Kernel: 5.0.0 x86_64 bits: 64 Desktop: Gnome 3.28.3 Distro: Ubuntu 18.04.2 LTS
Machine: Device: laptop System: ASUSTeK product: Zephyrus M GM501GS v: 1.0 serial: N/A
Mobo: ASUSTeK model: GM501GS v: 1.0 serial: N/A
UEFI: American Megatrends v: GM501GS.308 date: 10/01/2018
Battery BAT0: charge: 49.3 Wh 100.0% condition: 49.3/55.0 Wh (90%)
CPU: 6 core Intel Core i7-8750H (-MT-MCP-) cache: 9216 KB
clock speeds: max: 4100 MHz 1: 2594 MHz 2: 3197 MHz 3: 3633 MHz 4: 3514 MHz 5: 3582 MHz 6: 3338 MHz
7: 3655 MHz 8: 3684 MHz 9: 1793 MHz 10: 3651 MHz 11: 3710 MHz 12: 3662 MHz
Graphics: Card-1: Intel Device 3e9b
Card-2: NVIDIA GP104M [GeForce GTX 1070 Mobile]
Display Server: x11 (X.Org 1.19.6 ) drivers: modesetting,nvidia (unloaded: fbdev,vesa,nouveau)
Resolution: 1920x1080@144.03hz
OpenGL: renderer: GeForce GTX 1070/PCIe/SSE2 version: 4.6.0 NVIDIA 418.43
Audio: Card-1 Intel Cannon Lake PCH cAVS driver: snd_hda_intel Sound: ALSA v: k5.0.0
Card-2 NVIDIA GP104 High Definition Audio Controller driver: snd_hda_intel
Card-3 Kingston driver: USB Audio
Network: Card: Intel Wireless-AC 9560 [Jefferson Peak] driver: iwlwifi
IF: wlo1 state: up mac: (redacted)
Drives: HDD Total Size: 3050.6GB (94.5% used)
ID-1: /dev/nvme0n1 model: Samsung_SSD_960_EVO_1TB size: 1000.2GB
ID-2: /dev/sda model: Crucial_CT2050MX size: 2050.4GB
Partition: ID-1: / size: 246G used: 217G (94%) fs: ext4 dev: /dev/nvme0n1p5
ID-2: swap-1 size: 8.59GB used: 0.00GB (0%) fs: swap dev: /dev/nvme0n1p6
RAID: No RAID devices: /proc/mdstat, md_mod kernel module present
Sensors: System Temperatures: cpu: 64.0C mobo: N/A gpu: 61C
Fan Speeds (in rpm): cpu: N/A
Info: Processes: 412 Uptime: 3:32 Memory: 4411.3/32015.5MB Client: Shell (bash) inxi: 2.3.56
Why that last bit is included:
I'm running the latest Linux kernel to date, version 5.0.
The same also applies to the graphics driver stack, on Ubuntu 18.04LTS.
FFmpeg was built as shown here because this laptop has both NVIDIA and Intel GPUs enabled via Optimus. That way, I can tap into the VAAPI, QuickSync and NVENC hwaccels as needed. Your mileage may vary even if our hardware is identical.
References:
See the encoder options, including rate control methods supported:
ffmpeg -h encoder=vp9_vaapi
See the deinterlace_vaapi filter usage options:
ffmpeg -h filter=deinterlace_vaapi
On the vpp_qsv filter usage, see:
ffmpeg -h filter=vpp_qsv
For instance, if you want field-rate output rather than frame-rate output from the deinterlacer, you could pass the rate=field option to it instead:
-vf "deinterlace_vaapi=rate=field"
This feature, for instance, is tied to encoders that support MBAFF. Others, such as the NVENC-based ones in FFmpeg, do not have this implemented (as of the time of writing).
Tips on getting ahead with FFmpeg:
Where possible, refer to the built-in docs, as with the examples shown above.
They can uncover potential pitfalls that you may be able to avoid by understanding how filter chaining and encoder initialization work, which features are unsupported, and what the impact on performance is.
For example, you'll see that in the snippets above we call up the deinterlacer only once, then split its output via the split filter to the separate scalers. This is done to lower the overhead that would be incurred had we called up the deinterlacer more than once, which would have been wasteful.
Warning:
Note that the SDK requires at least 2 threads to prevent deadlock, see this code block. This is why we set -threads 4 in ffmpeg.

ffmpeg images to video with different start times and durations

I've recently learned of FFMPEG's existence and I am trying to use it on my WordPress site.
On the site I am working on an HTML/PHP/JS form page that lets users upload pictures and set when each image shows and for how long.
Right now the code I have shows only one image for the entire video.
<?php if (isset($_POST['button'])) {
echo shell_exec('ffmpeg -t '.$cap_1.' -i /myurl/beach-1866431.jpg -t '.$cap_2.' -i /myurl/orlando-1104481-1.jpg -filter_complex "scale=1280:-2" -i /myurl/audio.mp3 -c:v libx264 -pix_fmt yuv420p -t 30 -y /myurl/'.$v_title.'.mp4 2>&1');
} ?>
I tried setting -t for the duration with my PHP variables, but nothing changes, and I can't figure out what to use for the start time of each image.
Also, when writing shell_exec commands, instead of it all being on one line, is there a way to write a working command in PHP files with line breaks? For example:
<?php if (isset($_POST['button'])) {
echo shell_exec('ffmpeg -t '.$cap_1.' -i /myurl/beach-1866431.jpg
-t '.$cap_2.' -i /myurl/orlando-1104481-1.jpg
-filter_complex "scale=1280:-2"
-i /myurl/audio.mp3
-c:v libx264 -pix_fmt yuv420p -t 30 -y /myurl/'.$v_title.'.mp4 2>&1');
} ?>
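Note on the line breaks: the shell treats a backslash at the very end of a line as a continuation, and a PHP single-quoted string passes the backslash and newline through to the shell unchanged, so the multi-line form works if each line ends with ' \' (a sketch with placeholder durations, not a tested drop-in):

ffmpeg -t 5 -i /myurl/beach-1866431.jpg \
-t 5 -i /myurl/orlando-1104481-1.jpg \
-i /myurl/audio.mp3 \
-filter_complex "scale=1280:-2" \
-c:v libx264 -pix_fmt yuv420p -t 30 -y /myurl/output.mp4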
EDIT
So far the concat text file seems to work; however, I do not know how to set the start time for each image:
ffconcat version 1.0
file /path/beach-1866431.jpg
duration 3
file /path/orlando-1104481-1.jpg
duration 5
file /path/beach-1866431.jpg
And the ffmpeg command:
shell_exec('ffmpeg -f concat -safe 0 -i /path/file.txt -filter_complex "scale=1280:-2" -i /path/audio.mp3 -c:v libx264 -pix_fmt yuv420p -t 30 -y /path/'.$v_title.'.mp4 2>&1');
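For reference, the concat demuxer has no explicit start-time directive: each entry begins when the previous entry's duration ends, so the list above shows the first image at 0-3 s and the second at 3-8 s. To make an image appear later, increase the durations of the entries before it, e.g. (hypothetical timings):

ffconcat version 1.0
file /path/beach-1866431.jpg
duration 6
file /path/orlando-1104481-1.jpg
duration 5
file /path/beach-1866431.jpg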
EDIT 2
Using the concat method suggested, my code now looks like this:
<?php if (isset($_POST['button'])) {
echo shell_exec('ffmpeg \
-loop 1 -framerate 24 -t 10 -i goldmetal.jpg \
-i 3251.mp3 \
-loop 1 -framerate 24 -t 10 -i cash-register-1885558.jpg \
-loop 1 -framerate 24 -t 10 -i ice-1915849.jpg \
-filter_complex "[0:v][1:a][2:v][3:v]concat=n=4:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 /path/'.$v_title.'.mp4 2>&1');
} ?>
But I'm getting this error:
Stream specifier ':v' in filtergraph description [0:v][1:a][2:v][3:v]concat=n=4:v=1:a=1[v][a] matches no streams.
EDIT 3
I almost got it working as needed, using two commands: one for the images and fades, the other to combine the audio. The only issue I'm having is changing the time each image shows up:
echo shell_exec('ffmpeg \
-loop 1 -t 5 -i '.$thepath .'/'.$v_pix1.' \
-loop 1 -t 5 -i ' .$thepath . '/cash-register-1885558.jpg \
-loop 1 -t 5 -i ' .$thepath . '/ice-1915849.jpg \
-loop 1 -t 5 -i '.$thepath .'/'.$v_pix1.' \
-loop 1 -t 5 -i ' .$thepath . '/ice-1915849.jpg \
-filter_complex \
"[0:v]setpts=PTS-STARTPTS,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v0]; \
[1:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v1]; \
[2:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v2]; \
[3:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v3]; \
[4:v]setpts=PTS-STARTPTS,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1,scale=1280:720,setdar=16/9,setsar=sar=300/300[v4]; \
[v0][v1][v2][v3][v4]concat=n=5:v=1:a=0,format=yuv420p[v]" -map "[v]" -y '.$thepath .'/fadeout.mp4 2>&1');
echo shell_exec('ffmpeg \
-i '.$thepath .'/fadeout.mp4 \
-i '.$thepath .'/3251.mp3 \
-filter_complex "[0:v:0][1:a:0] concat=n=1:v=1:a=1 [vout] [aout]" -map "[vout]" -map "[aout]" -c:v libx264 -r 1 -y '.$thepath .'/mergetest.mp4 2>&1');
This answer addresses the ffmpeg-specific questions in your broad multi-question.
This example will take images of any arbitrary size, fit them into a 1280x720 box, fade between images, and play the audio at the same time. The video will end whenever the images or the audio end: whichever is shortest.
ffmpeg \
-i audio1.mp3 \
-i audio2.wav \
-loop 1 -t 5 -i image1.jpg \
-loop 1 -t 5 -i image2.png \
-loop 1 -t 5 -i image3.jpg \
-filter_complex \
"[2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=out:st=4:d=1[v1]; \
[3:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v2]; \
[4:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fade=t=in:st=0:d=1,fade=t=out:st=4:d=1[v3]; \
[0:a][1:a]amerge=inputs=2[a];
[v1][v2][v3]concat=n=3:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -map "[a]" -ac 2 -shortest -movflags +faststart output.mp4

Raspivid save to disk and stream concurrently

I am trying to run a home security camera using a Raspberry Pi Model B.
I want to save the stream to a file locally (to USB if possible) and also stream it so I can pick it up on my network.
The command I have is not working for both; any suggestions?
raspivid -o security.h264 -t 0 -n -w 600 -h 400 -fps 12 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264
Try this command:
raspivid -o - -t 0 -n -w 600 -h 400 -fps 12 | tee security.h264 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264
The tee command writes its input both to standard output and to the specified file(s).
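If the USB drive is mounted (for example at /media/usb, which is an assumption; check your actual mount point), point tee's file argument there:

raspivid -o - -t 0 -n -w 600 -h 400 -fps 12 | tee /media/usb/security.h264 | cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264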