Capturing images with mjpg-streamer at a higher resolution than 640x480

I'm using mjpg-streamer on Angstrom Linux on a BeagleBone and have been able to capture images from the stream. However, I can't seem to get the resolution above 640x480. When I set a higher resolution, the output claims it is streaming at the resolution I chose, but the software doesn't actually save any images.
For example, this works:
# ./mjpg_streamer -i "./input_uvc.so -d /dev/video0 -r 640x480 -yuv -n -f 1 -q 80" -o "./output_file.so -f ./tests/ -d 5000"
MJPG Streamer Version: svn rev:
i: Using V4L2 device.: /dev/video0
i: Desired Resolution: 640 x 480
i: Frames Per Second.: 1
i: Format............: YUV
i: JPEG Quality......: 80
o: output folder.....: ./tests
o: input plugin.....: 0: ./input_uvc.so
o: delay after save..: 5000
o: ringbuffer size...: no ringbuffer
o: command...........: disabled
While this does not:
# ./mjpg_streamer -i "./input_uvc.so -d /dev/video0 -r 1280x720 -yuv -n -f 1 -q 80" -o "./output_file.so -f ./tests/ -d 5000"
MJPG Streamer Version: svn rev:
i: Using V4L2 device.: /dev/video0
i: Desired Resolution: 1280 x 720
i: Frames Per Second.: 1
i: Format............: YUV
i: JPEG Quality......: 80
o: output folder.....: ./tests
o: input plugin.....: 0: ./input_uvc.so
o: delay after save..: 5000
o: ringbuffer size...: no ringbuffer
o: command...........: disabled
I was, however, able to set the resolution lower than what appears to be the default:
# ./mjpg_streamer -i "./input_uvc.so -d /dev/video0 -r 320x240 -yuv -n -f 1 -q 80" -o "./output_file.so -f ./tests/ -d 5000"
MJPG Streamer Version: svn rev:
i: Using V4L2 device.: /dev/video0
i: Desired Resolution: 320 x 240
i: Frames Per Second.: 1
i: Format............: YUV
i: JPEG Quality......: 80
o: output folder.....: ./tests
o: input plugin.....: 0: ./input_uvc.so
o: delay after save..: 5000
o: ringbuffer size...: no ringbuffer
o: command...........: disabled
I have tried playing with the frame rate to no avail.
Any help is appreciated.

I faced the same issue before (though I'm using a Raspberry Pi); just adjust the permissions of the destination folder. I set the folder's permissions to 777, just for testing purposes, and ran a command similar to yours:
./mjpg_streamer -i "input_uvc.so -y --device /dev/video0" -o "output_file.so -f /home/pi/images -d 1500"
And it worked like a charm.
P.S. I'm not really sure why it still shows o: ringbuffer size...: no ringbuffer, but it works!
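For reference, a minimal sketch of checking and loosening the output folder's permissions before starting the streamer (using the ./tests path from the question; 777 is for testing only, a narrower mode plus the right owner is safer in production):
# confirm the output folder exists and is writable by the streamer's user
ls -ld ./tests
mkdir -p ./tests
chmod 777 ./tests   # testing only; prefer chown <streamer-user> ./tests && chmod 755 ./tests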

For me the solution was simple: don't specify the framerate at all, only the resolution. Then it started to work.
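Applied to the failing command from the question, that would look something like the sketch below (untested; same paths as above, with only the -f flag dropped):
./mjpg_streamer -i "./input_uvc.so -d /dev/video0 -r 1280x720 -yuv -n -q 80" -o "./output_file.so -f ./tests/ -d 5000"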

Related

Raspberry Pi libcamera-vid to YouTube

I'm setting up a nature cam using a Raspberry Pi 4 livestreaming to YouTube. I can live stream video to YouTube using:
raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 4000000 -g 50 | ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/<mykey>
but this requires legacy camera support to be enabled, which means I can't remote to my Pi using VNC. I can use PuTTY to run the raspivid command, but I then need another computer running YouTube in a browser to enable the live stream. I'd rather have the Pi do all of this, but I can't open Chromium from the PuTTY command line. If I turn off legacy support, I can use VNC and run Chromium, but I can't run raspivid. libcamera-vid is meant to replace raspivid, but I have not found anything that tells me what settings to use.
libcamera-vid -o - -t 0 --width 854 --height 480 --brightness 0.1 --inline --autofocus --framerate 25 -g 50 | ffmpeg -f lavfi -i anullsrc -thread_queue_size 1024 -use_wallclock_as_timestamps 1 -i pipe:0 -c:v copy -b:v 2500k -f flv rtmp://a.rtmp.youtube.com/live2/mykey
gives errors, particularly around the audio settings (my Pi isn't recording audio).
I'd be grateful if someone could give me a newbie's guide to converting raspivid commands to libcamera-vid!
Thanks
Yes, you have to define null audio, like this: -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100
So I have something similar to yours:
libcamera-vid --inline --nopreview -t 0 --width 640 --height 480 --framerate 15 --codec h264 -o - | ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -thread_queue_size 1024 -use_wallclock_as_timestamps 1 -i pipe:0 -c:v copy -c:a aac -preset fast -strict experimental -f flv rtmp://0.0.0.0:1935/live/1
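To go to YouTube rather than a local RTMP endpoint, the same pipeline should work with the destination swapped out. A sketch under that assumption (untested; resolution and framerate raised to match the original raspivid command, stream key elided as in the question):
libcamera-vid --inline --nopreview -t 0 --width 1280 --height 720 --framerate 25 --codec h264 -o - | ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -thread_queue_size 1024 -use_wallclock_as_timestamps 1 -i pipe:0 -c:v copy -c:a aac -b:a 128k -f flv rtmp://a.rtmp.youtube.com/live2/<mykey>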

Slides not being shown in video converted to mp4 in BigBlueButton

We are using BigBlueButton 2.4 for webinars. When a webinar video is processed by BigBlueButton, the presentation playback shows the slides that were uploaded in the webinar, but the converted video that we download does not show those slides (the rest of the video is okay).
Does anyone know how to fix this for this particular version?
The script we are using is below, in case it helps:
#!/bin/sh
# Convert the deskshare and webcam to a combined video stream including logo
cd /var/bigbluebutton/published/presentation/
meetingId=$1
cd "$meetingId"
# add webcam sound to deskshare
if [ -e deskshare/deskshare.webm ]; then
    ffmpeg -nostdin -threads 4 -i video/webcams.webm -i deskshare/deskshare.webm -af afftdn deskshare_with_sound.mp4
else
    ffmpeg -nostdin -threads 4 -i video/webcams.webm -af afftdn deskshare_with_sound.mp4
fi
ffmpeg -nostdin -threads 4 -i video/webcams.webm -vf

How to http stream FFMPEG encoded frames with VLC

I have a Python script that writes images (numpy arrays) to standard output.
I want to take these frames, encode them as H.264 with FFmpeg using the GPU, and then feed the result to VLC to expose a stream over HTTP.
Here is a working example of my approach, without the H.264 encoding part:
python3 script.py | ffmpeg -r 24 -s 1920x1080 -f rawvideo -i - -vcodec copy -f avi - | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1920 --rawvid-height=1080 --rawvid-chroma=RV24 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'
Now I'm having trouble writing a working pipeline that does what I need.
Here is the pipe I'm currently working on. The FFmpeg process does run on the GPU, but VLC apparently cannot handle the incoming flow: I can connect from another VLC instance used as a client, but the client then fails with an error saying it cannot open the MRL.
Here is the pipe:
python3 script.py | ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -f rawvideo -s 1920x1080 -i - -c:a copy -c:v h264_nvenc -f h264 - | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1920 --rawvid-height=1080 --rawvid-chroma=RV24 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'
I don't understand how to set the VLC parameters to handle the incoming stream. I may also have made mistakes in the FFmpeg pipe; any suggestion is welcome.
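One thing that stands out: the second ffmpeg emits an H.264 elementary stream (-f h264), yet cvlc is still told --demux=rawvideo --rawvid-chroma=RV24, i.e. to expect raw RGB frames. A sketch of a plausible fix under that assumption (untested; note the added -pix_fmt rgb24 on the rawvideo input and VLC's h264 demuxer in place of rawvideo):
python3 script.py | ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -f rawvideo -pix_fmt rgb24 -s 1920x1080 -i - -c:v h264_nvenc -f h264 - | cvlc --demux=h264 --h264-fps=25 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'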

Better way to use ffmpeg with vidstab and 2-pass encoding

I scan old 8mm films, so I end up with folders full of JPEGs. I turn them into films using ffmpeg (I chose x264 two-pass encoding).
# On every folder that starts with 1, run x264 pass 1
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 1 -an -f mp4 /dev/null; cd ..; done
# On every folder that starts with 1, run x264 pass 2
for f in 1*/ ; do cd "$f"; ffmpeg -y -r 18 -i img%05d.jpg -c:v libx264 -s 1200x898 -b:v 3000k -pass 2 "../${PWD##*/}.mp4"; cd ..; done
Before, I have a set of folders of JPEGs:
1965-FamilyStuff01\img1111.jpg,..,img9999.jpg
1965-FamilyStuff02\img1111.jpg,..,img9999.jpg
and I get:
1965-FamilyStuff01.mp4
1965-FamilyStuff02.mp4
Then I discovered vidstab, which also needs two passes:
# Stabilize every video in a folder
mkdir stab
for f in ./*.mp4 ; do
    echo "Stabilize $f"
    ffmpeg -i "$f" -vf vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2 -y -f mp4 /dev/null
    ffmpeg -i "$f" -vf vidstabtransform=smoothing=30:input="transforms.trf":interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4 -y "stab/$f"
done
rm transforms.trf
But I wonder whether the order is correct, whether there is a way to do the encoding with vidstab in fewer than 4 passes (2 passes for x264 encoding, then 2 passes for vidstab), or whether the order should be changed to optimize the quality of the output film.
You will need to run two commands to use vidstab. But x264 does not need two passes for best quality: two-pass encoding is used to target a specific output file size. Just use a single pass with the -crf option.
So you only need to use two commands:
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2" -f null -
ffmpeg -i input.mp4 -vf "scale=1200:-2,vidstabtransform=smoothing=30:interpol=linear:crop=black:zoom=0:optzoom=1,unsharp=5:5:0.8:3:3:0.4,format=yuv420p" -crf 23 -preset medium output.mp4
See FFmpeg Wiki: H.264 for more info on -crf and -preset.
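Adapted to the folder-of-JPEGs workflow from the question, a sketch (untested; assumes the same img%05d.jpg naming, 18 fps, and 1200x898 output as above, and that vidstabdetect writes its default transforms.trf in each folder):
for f in 1*/ ; do
    cd "$f"
    # pass 1: analyze shake, writes transforms.trf
    ffmpeg -y -r 18 -i img%05d.jpg -vf vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3 -f null -
    # pass 2: stabilize, sharpen, scale, and encode with CRF in one go
    ffmpeg -y -r 18 -i img%05d.jpg -vf "vidstabtransform=smoothing=30:input=transforms.trf:interpol=linear:crop=black:optzoom=1,unsharp=5:5:0.8:3:3:0.4,scale=1200:898,format=yuv420p" -c:v libx264 -crf 23 -preset medium "../${PWD##*/}.mp4"
    rm -f transforms.trf
    cd ..
done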

Play online Radio Station as a Music on Hold in Asterisk

Is there a way (a tool or any idea) to play a radio station (streamed via IceCast) as Music On Hold in Asterisk? I have a streaming server and an Asterisk server, both running and working independently very well; I just want to integrate the two.
Your help please, thanks in advance.
My OS: Linux - CentOS
My Music On Hold Class:
mode=custom
application=/usr/bin/sox mystreamingurl -b 64000 -r 44100 -t ogg -
This setup produces abnormal, noisy sound, totally different from the sound produced by the streaming server (IceCast).
Update: I used the MPG123 player and it worked like a charm.
Updated MOH class:
mode=custom
application=/usr/bin/mpg123 -q -r 8000 -f 8192 --mono -s http://mystreamingurl
Asterisk's internal sound format is 8 kHz mono PCM.
You should explicitly tell sox which format to use for input and output.
Also, sox is NOT a streaming utility; you should use something like MPlayer.
https://www.voip-info.org/asterisk-config-musiconholdconf/#StreamradiousingMPlayerforMOH
#!/bin/bash
# Remove any stale pipes left over from previous runs
if [ -n "$(ls /tmp/asterisk-moh-pipe.* 2>/dev/null)" ]; then
    rm /tmp/asterisk-moh-pipe.*
fi
PIPE="/tmp/asterisk-moh-pipe.$$"
mknod "$PIPE" p
# mplayer resamples the stream to 8 kHz mono and writes it to the named pipe,
# while cat forwards the pipe to stdout for the MOH application to consume
mplayer http://address_of_radio_station -really-quiet -quiet -ao pcm:file="$PIPE" -af resample=8000,channels=1,format=mulaw 2>/dev/null | cat "$PIPE" 2>/dev/null
rm "$PIPE"
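For completeness, a sketch of how this script could be wired into musiconhold.conf, following the same custom-mode pattern as the classes above (the class name and script path are hypothetical; Asterisk reads raw 8 kHz audio from the application's stdout):
[radio-moh]
mode=custom
; hypothetical location where the script above was saved
application=/usr/local/bin/asterisk-moh-radio.sh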