How to replicate libcamera-still with a GStreamer pipeline?

My goal is to use a GStreamer pipeline instead of libcamera-still. The problem is that the frames produced by the GStreamer pipeline look concave.
Gstreamer Pipeline
# Issue: the sensor formats used by the Raspberry Pi 4B and the NVIDIA Jetson
# Nano B01 are different. On the Raspberry Pi 4B, this command
#   $ libcamera-still --width 1280 --height 1280 --mode 1280:1280
# uses the 2328x1748 sensor format; however, v4l2-ctl --list-formats-ext
# does not list such a format.
def gstreamer_pipeline(
    sensor_id=0,
    capture_width=1920,
    capture_height=1080,
    display_width=640,
    display_height=360,
    framerate=21,
    flip_method=0,
):
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM),width=(int)%d,height=(int)%d,"
        "format=(string)NV12,framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw,width=(int)%d,height=(int)%d,format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw,format=(string)BGR ! "
        "appsink"
        % (
            sensor_id,
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
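I read frames from this pipeline with OpenCV; a minimal sketch, assuming an OpenCV build with GStreamer support (the output filename is just an example):

import cv2

# Open the appsink pipeline through OpenCV's GStreamer backend.
cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
ret, frame = cap.read()  # grab a single BGR frame
if ret:
    cv2.imwrite("gst_test.jpg", frame)  # example output name
cap.release()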
The result with the GStreamer pipeline:
The command I run to capture a frame:
libcamera-still -t 5000 --width 1280 --height 1280 --mode 1280:1280 --autofocus-on-capture -o test.jpg
The result with libcamera-still:

Related

Raspberry Pi libcamera-vid to Youtube

I'm setting up a nature cam using a Raspberry Pi 4 livestreaming to YouTube. I can live-stream video to YouTube using:
raspivid -o - -t 0 -w 1280 -h 720 -fps 25 -b 4000000 -g 50 | ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/<mykey>
but this requires legacy support to be enabled, which means I can't remote to my Pi using VNC. I can use PuTTY to run the raspivid command, but I then need another computer running YouTube in a browser to enable the live stream. I'd rather have the Pi do all of this, but I can't open Chromium from the PuTTY command line. If I turn off legacy support, I can use VNC and run Chromium, but I can't run raspivid. libcamera-vid is meant to replace raspivid, but I have not found anything that tells me what settings to use.
libcamera-vid -o - -t 0 --width 854 --height 480 --brightness 0.1 --inline --autofocus --framerate 25 -g 50 | ffmpeg -f lavfi -i anullsrc -thread_queue_size 1024 -use_wallclock_as_timestamps 1 -i pipe:0 -c:v copy -b:v 2500k -f flv rtmp://a.rtmp.youtube.com/live2/mykey
gives errors, particularly around audio settings (my Pi isn't recording audio).
I'd be grateful if someone could give me a newbie's guide to converting raspivid commands to libcamera-vid!
Thanks
Yes, you have to define a null audio source, like this: -i anullsrc=channel_layout=stereo:sample_rate=44100
So I have something similar to yours:
libcamera-vid --inline --nopreview -t 0 --width 640 --height 480 --framerate 15 --codec h264 -o - | ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -thread_queue_size 1024 -use_wallclock_as_timestamps 1 -i pipe:0 -c:v copy -c:a aac -preset fast -strict experimental -f flv rtmp://0.0.0.0:1935/live/1
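If you would rather have the Pi launch the whole pipeline itself instead of keeping a shell one-liner running, here is a minimal Python sketch of the same libcamera-vid | ffmpeg pipe (the RTMP URL and stream key are placeholders; the legacy -preset/-strict flags are omitted, as modern ffmpeg does not need them for the built-in AAC encoder):

import subprocess

RTMP_URL = "rtmp://a.rtmp.youtube.com/live2/<mykey>"  # placeholder stream key

# libcamera-vid writes H.264 to stdout...
libcamera = subprocess.Popen(
    ["libcamera-vid", "--inline", "--nopreview", "-t", "0",
     "--width", "640", "--height", "480", "--framerate", "15",
     "--codec", "h264", "-o", "-"],
    stdout=subprocess.PIPE,
)
# ...and ffmpeg muxes it with a null audio track and pushes it out over RTMP.
ffmpeg = subprocess.Popen(
    ["ffmpeg",
     "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
     "-thread_queue_size", "1024", "-use_wallclock_as_timestamps", "1",
     "-i", "pipe:0",
     "-c:v", "copy", "-c:a", "aac",
     "-f", "flv", RTMP_URL],
    stdin=libcamera.stdout,
)
libcamera.stdout.close()  # let ffmpeg own the pipe end
ffmpeg.wait()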

How to http stream FFMPEG encoded frames with VLC

I have a Python script that writes images (NumPy arrays) to standard output.
I want to take these frames, encode them as H.264 with FFmpeg using the GPU, and then hand the result to VLC to expose a stream over HTTP.
Here is a working example of my approach, without the H.264 encoding part:
python3 script.py | ffmpeg -r 24 -s 1920x1080 -f rawvideo -i - -vcodec copy -f avi - | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1920 --rawvid-height=1080 --rawvid-chroma=RV24 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'
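For reference, a minimal stand-in for script.py could look like this (solid-colour 1920x1080 frames at 24 bits per pixel, matching the rawvid-chroma=RV24 the VLC side expects; purely illustrative):

import sys
import numpy as np

# Emit raw 1920x1080 24-bit frames to stdout until the pipe closes.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame[..., 2] = 255  # a solid colour, just for illustration
try:
    while True:
        sys.stdout.buffer.write(frame.tobytes())
except BrokenPipeError:
    pass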
Now I'm having trouble writing working pipes to do what I need.
Here is the pipe I'm currently working on. The FFmpeg process is handled by the GPU, but VLC cannot manage the incoming flow correctly, I suppose: I can connect to it from another VLC instance used as a client, but then I get an error saying the VLC client cannot open the MRL.
Here is the pipe:
python3 script.py | ffmpeg -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -f rawvideo -s 1920x1080 -i - -c:a copy -c:v h264_nvenc -f h264 - | cvlc --demux=rawvideo --rawvid-fps=25 --rawvid-width=1920 --rawvid-height=1080 --rawvid-chroma=RV24 - --no-audio --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{user=pippo,pwd=pluto,mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:10001/}'
I don't understand how to set the VLC parameters to manage the incoming stream. I may also have made errors in the FFmpeg pipe; any suggestion is welcome.

Play online Radio Station as a Music on Hold in Asterisk

Is there a way (a tool or any idea) to play a radio station (streamed via IceCast) as Music On Hold in Asterisk? I have a streaming server and an Asterisk server, each running and working very well independently; I just want to integrate the two.
Your help please, thanks in advance.
My OS: Linux (CentOS)
My Music On Hold Class:
mode=custom
application=/usr/bin/sox mystreamingurl -b 64000 -r 44100 -t ogg -
This produces abnormal and noisy sound, totally different from the sound produced by the streaming server (IceCast).
I used the mpg123 player and it worked like a charm.
Updated MOH class:
mode=custom
application=/usr/bin/mpg123 -q -r 8000 -f 8192 --mono -s http://mystreamingurl
Asterisk's internal sound format is 8 kHz mono PCM.
You should explicitly specify for sox which format to use for input and output.
Also, sox is NOT a streaming utility; you should use something like MPlayer.
https://www.voip-info.org/asterisk-config-musiconholdconf/#StreamradiousingMPlayerforMOH
#!/bin/bash
# Clean up any pipes left over from a previous run
if [ -n "$(ls /tmp/asterisk-moh-pipe.* 2>/dev/null)" ]; then
    rm /tmp/asterisk-moh-pipe.*
fi
# Create a named pipe unique to this process
PIPE="/tmp/asterisk-moh-pipe.$$"
mknod $PIPE p
# mplayer resamples the stream to 8 kHz mono mu-law and writes it into the
# pipe; cat forwards the pipe to stdout for Asterisk to consume
mplayer http://address_of_radio_station -really-quiet -quiet -ao pcm:file=$PIPE -af resample=8000,channels=1,format=mulaw 2>/dev/null | cat $PIPE 2>/dev/null
rm $PIPE
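On the Asterisk side, a MOH class in musiconhold.conf would then point at this script; a minimal sketch (the script path is hypothetical):
mode=custom
application=/usr/local/bin/moh-radio.sh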

FFmpeg VP9 - Different Quantisation Parameters but same output files

I want to encode a video with VP9 at different quantisation parameters (qp=[16,20,24,28,32]). Unfortunately, the output files all have the same data rate after encoding and show no quality differences.
This is my code for qp=20:
ffmpeg -s:v 3840x1920 -framerate 30 -i video_3840x1920_30fps_8bit_420_erp.yuv -c:v libvpx-vp9 -qp 20 -f avi out.avi
Many thanks for any pointers you can give me.
-qp only works for internal mpegvideoenc-derived encoders, such as FFmpeg's built-in MPEG-1/2/4 encoders. Libvpx, like x264/5, uses -crf to do this instead. See the Wiki for more details. You can also type ffmpeg -h encoder=libvpx-vp9:
$ ffmpeg -h encoder=libvpx-vp9
[..]
-crf <int> E..V.... Select the quality for constant quality mode (from -1 to 63) (default -1)
So for qp=20, you would use ffmpeg -s:v 3840x1920 -framerate 30 -i video_3840x1920_30fps_8bit_420_erp.yuv -c:v libvpx-vp9 -crf 20 -b:v 0 out.avi.
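To produce the whole sweep, you can loop over the CRF values; a minimal Python sketch (output filenames are just examples, and the input is the same raw YUV file as in the question):

import subprocess

# One encode per CRF value; -b:v 0 is required so libvpx-vp9
# runs in constant-quality mode instead of capping the bitrate.
for crf in (16, 20, 24, 28, 32):
    subprocess.run(
        ["ffmpeg", "-y",
         "-s:v", "3840x1920", "-framerate", "30",
         "-i", "video_3840x1920_30fps_8bit_420_erp.yuv",
         "-c:v", "libvpx-vp9",
         "-crf", str(crf), "-b:v", "0",
         f"out_crf{crf}.avi"],
        check=True,
    )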

Using gstreamer with videomixer & 2 cameras streaming over UDP

I have a Raspberry Pi Compute Module with 2 cameras. I'm trying to use GStreamer with v4l2src, selecting /dev/video0 and /dev/video1, to run continually at about 20 FPS, use videomixer to combine the images side by side, and then output H.264 over RTP to a UDP port (read by another host).
The default (current) RPi v4l2src driver does not support two cameras, but as of today a beta is available that does; however, it requires the beta 4.4.6 kernel.
The problem I'm having is in getting the mixer connected.
#!/bin/bash -x
#
# Script to start RPi Compute Module streaming over RTP (RFC3984)
# from both cameras
#
FPS=20 # Frames per second
WIDTH=640 # Image width
HEIGHT=480 # Image height
UPLINK_HOST=192.168.1.73 # Receiving host
PORT=5200 # UDP port
#
# TESTING WITH ONE CAMERA ONLY FOR THE MOMENT
#
function start_streaming
{
    gst-launch-1.0 -ve videomixer name=mixer \
        ! x264enc \
        ! h264parse \
        ! rtph264pay config-interval=10 pt=96 \
        ! udpsink host=$UPLINK_HOST port=$PORT \
        v4l2src device=/dev/video0 \
        ! video/x-raw,format=AYUV,width=$WIDTH,height=$HEIGHT,framerate=$FPS/1 \
        ! mixer.
}
# Start streaming on both cameras simultaneously
echo Image size: $WIDTH x $HEIGHT
echo Frame rate: $FPS
echo Starting cameras 0 and 1 streaming to $UPLINK_HOST:$PORT
start_streaming
# Wait until everything has finished
wait
exit 0
# end
What I'm getting is the rather useless message:
WARNING: erroneous pipeline: could not link v4l2src0 to mixer
I've fiddled about rather a lot and got nowhere. It's probably something trivial, but I'll be blowed if I can see it!
Many thanks
Nick
I think the problem is the chosen format: you use AYUV, but your camera does not support it. Try replacing AYUV with I420.
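A minimal sketch of the corrected single-camera test from the question, with the caps changed to I420 (wrapped in Python purely for convenience; host, port, and sizes are the values from the script above):

import subprocess

WIDTH, HEIGHT, FPS = 640, 480, 20
UPLINK_HOST, PORT = "192.168.1.73", 5200

# Same pipeline as in the question's start_streaming function,
# but with the v4l2src caps requesting I420 instead of AYUV.
pipeline = (
    "videomixer name=mixer "
    "! x264enc ! h264parse "
    "! rtph264pay config-interval=10 pt=96 "
    f"! udpsink host={UPLINK_HOST} port={PORT} "
    "v4l2src device=/dev/video0 "
    f"! video/x-raw,format=I420,width={WIDTH},height={HEIGHT},framerate={FPS}/1 "
    "! mixer."
)
subprocess.run(["gst-launch-1.0", "-ve", *pipeline.split()], check=True)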