I have a Raspberry Pi Compute Module with 2 cameras. I'm trying to use gstreamer with v4l2src selecting /dev/video0 & /dev/video1 to run continually at about 20 FPS, use videomixer to combine the images side-by-side, then output H264 over RTP to a UDP port (read by another host).
The default (current) RPi v4l2src driver does not support two cameras, but as of today a beta that does is available; however, it requires the beta 4.4.6 kernel.
The problem I'm having is in getting the mixer connected.
#!/bin/bash -x
#
# Script to start RPi Compute Module streaming over RTP (RFC3984)
# from both cameras
#
FPS=20 # Frames per second
WIDTH=640 # Image width
HEIGHT=480 # Image height
UPLINK_HOST=192.168.1.73 # Receiving host
PORT=5200 # UDP port
#
# TESTING WITH ONE CAMERA ONLY FOR THE MOMENT
#
function start_streaming
{
gst-launch-1.0 -ve videomixer name=mixer \
! x264enc \
! h264parse \
! rtph264pay config-interval=10 pt=96 \
! udpsink host=$UPLINK_HOST port=$PORT \
v4l2src device=/dev/video0 \
! video/x-raw,format=AYUV,width=$WIDTH,height=$HEIGHT,framerate=$FPS/1 \
! mixer.
}
# Start streaming on both cameras simultaneously
echo Image size: $WIDTH x $HEIGHT
echo Frame rate: $FPS
echo Starting cameras 0 and 1 streaming to $UPLINK_HOST:$PORT
start_streaming
# Wait until everything has finished
wait
exit 0
# end
What I'm getting is the rather useless message:
WARNING: erroneous pipeline: could not link v4l2src0 to mixer
I've fiddled about rather a lot and got nowhere - it's probably something trivial, but I'll be blowed if I can see it!
Many thanks
Nick
I think the problem is the chosen format. You are using AYUV, which your camera does not support. Try replacing AYUV with I420.
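For example, the caps line in the script above would become something like this (a minimal sketch, only the format string changes; untested on your exact hardware):

v4l2src device=/dev/video0 \
! video/x-raw,format=I420,width=$WIDTH,height=$HEIGHT,framerate=$FPS/1 \
! mixer.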
My goal is to use a gstreamer pipeline instead of libcamera-still. The problem is that the frames generated by the gstreamer pipeline look concave.
Gstreamer Pipeline
def gstreamer_pipeline(
    # Issue: the sensor formats used by the Raspberry Pi 4B and the NVIDIA Jetson Nano B01 are different.
    # On the Raspberry Pi 4B, this command
    #   $ libcamera-still --width 1280 --height 1280 --mode 1280:1280
    # uses sensor format 2328x1748.
    # However, v4l2-ctl --list-formats-ext does not list such a format.
    sensor_id=0,
    capture_width=1920,
    capture_height=1080,
    display_width=640,
    display_height=360,
    framerate=21,
    flip_method=0,
):
    return (
        "nvarguscamerasrc sensor-id=%d ! "
        "video/x-raw(memory:NVMM),format=(string)NV12,framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw,width=(int)%d,height=(int)%d,format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw,format=(string)BGR ! "
        "appsink"
        % (
            sensor_id,
            framerate,
            flip_method,
            display_width,
            display_height
        )
    )
The result with gstreamer pipeline
The code I run to take a frame:
libcamera-still -t 5000 --width 1280 --height 1280 --mode 1280:1280 --autofocus-on-capture -o test.jpg
The result with libcamera-still
Is there a way (a tool or any idea) to play a radio station (streamed via IceCast) as Music On Hold in Asterisk? I have a streaming server and an Asterisk server running and working independently very well; I just want to integrate the two.
Your help please, thanks in advance.
My OS: Linux - CentOS
My Music On Hold Class:
mode=custom
application=/usr/bin/sox mystreamingurl -b 64000 -r 44100 -t ogg -
This script produces abnormal, noisy sound which is totally different from the sound produced by the streaming server (IceCast).
I used the MPG123 player and it worked like a charm.
Updated MOH Class:
mode=custom
application=/usr/bin/mpg123 -q -r 8000 -f 8192 --mono -s http://mystreamingurl
Asterisk's internal sound format is 8 kHz mono PCM.
You should explicitly specify for sox which format to use for input and output.
Also, sox is NOT a streaming utility; you should use something like MPlayer.
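If you do stay with sox, a minimal wrapper sketch (hypothetical: it assumes the Icecast stream is MP3 and that curl plus sox's MP3 support are installed) that outputs the 8 kHz mono signed 16-bit raw PCM Asterisk's custom MOH mode expects would look roughly like:

#!/bin/bash
# Hypothetical wrapper for the application= line in musiconhold.conf.
# curl fetches the Icecast stream; sox transcodes stdin (assumed MP3) to raw
# signed 16-bit PCM on stdout, with the rate and channels effects forcing 8 kHz mono.
curl -s http://mystreamingurl | sox -t mp3 - -t raw -e signed-integer -b 16 - rate 8000 channels 1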
https://www.voip-info.org/asterisk-config-musiconholdconf/#StreamradiousingMPlayerforMOH
#!/bin/bash
if [ -n "`ls /tmp/asterisk-moh-pipe.*`" ]; then
rm /tmp/asterisk-moh-pipe.*
fi
PIPE="/tmp/asterisk-moh-pipe.$$"
mknod $PIPE p
mplayer http://address_of_radio_station -really-quiet -quiet -ao pcm:file=$PIPE -af resample=8000,channels=1,format=mulaw 2>/dev/null | cat $PIPE 2>/dev/null
rm $PIPE
I currently have two command-line pipelines set up to stream video from a Raspberry Pi camera (ArduCam module) to a PC over ethernet; these work great:
gst-sender.sh
./video2stdout | gst-launch-1.0 -v fdsrc fd=0 ! \
video/x-h264, width=1280, height=800, framerate=60/1 ! \
h264parse ! rtph264pay ! \
udpsink host=xxx.xxx.xx.xxx port=xxxx
gst-reciever.sh
gst-launch-1.0 -v -e udpsrc port=xxxx \
caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, \
encoding-name=(string)H264, payload=(int)96" ! \
rtph264depay ! h264parse ! mp4mux ! filesink location=video.mp4
However, I will ultimately be running multiple cameras, synchronized via an external hardware trigger, and since I can't guarantee that the streams will begin at the same time I need timestamps--either for the stream start time or for each frame.
By adding 'identity silent=false' between h264parse and rtph264pay in gst-sender.sh, I can access the stream's buffer data, and with the following command I can retrieve the frame timestamps:
./gst-sender.sh | grep -oP "(?<=dts: )(\d+:){2}\d+.\d+"
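(For reference, the modified gst-sender.sh is the same pipeline as above with the identity element added between h264parse and rtph264pay:)

./video2stdout | gst-launch-1.0 -v fdsrc fd=0 ! \
    video/x-h264, width=1280, height=800, framerate=60/1 ! \
    h264parse ! identity silent=false ! rtph264pay ! \
    udpsink host=xxx.xxx.xx.xxx port=xxxx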
But these timestamps are relative to the start of the stream, so I can't use them to line up saved videos from multiple streams!
Start video encoding...
0:00:00.000000000
0:00:00.016666666
0:00:00.033333332
0:00:00.049999998
0:00:00.066666664
0:00:00.083333330
0:00:00.099999996
0:00:00.116666662
0:00:00.133333328
0:00:00.149999994
0:00:00.166666660
0:00:00.183333326
It looks like gstreamer has an "absolute" clock time that it uses for latency calculations [1], but I have been unable to find any way to access it from the command line.
Is there a way to access gstreamer's absolute/system clock from the command line? Or another way to get the stream start timestamp?
Is my code correct?
I'm trying to convert .mp4 to .mkv (H.264) with gst-launch-1.0 on a Raspberry Pi:
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! omxmpeg4videodec ! omxh264enc ! matroskamux ! filesink location=out.mkv
Do you get any errors? Please remember to mention that in future questions, as it helps narrow down the problem.
It is probably not right: .mp4 is usually an extension for the MP4 container format, not for the MPEG-4 video codec. You need something like:
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! qtdemux ! omxmpeg4videodec ! queue ! videoconvert ! omxh264enc ! matroskamux ! filesink location=out.mkv
This will only convert the video; audio in the original media file will be lost. It might also be more practical to just use uridecodebin for the decoding part:
gst-launch-1.0 -v uridecodebin uri=file:///path/to/sample.mp4 ! queue ! videoconvert ! omxh264enc ! matroskamux ! filesink location=out.mkv
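If you also need to keep the audio, a rough sketch (untested; it assumes the source carries an AAC audio track and that qtdemux exposes the usual video_0/audio_0 pads) would be along these lines:

gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! qtdemux name=demux \
    demux.video_0 ! queue ! omxmpeg4videodec ! videoconvert ! omxh264enc ! matroskamux name=mux ! filesink location=out.mkv \
    demux.audio_0 ! queue ! aacparse ! mux.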
I've read https://stackoverflow.com/a/23869705/4073836 and it was very useful to me.
At least I am able to play HD video from my filesystem. But:
When I use software decoder
$ gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! avdec_h264 ! eglglessink
I get a normal picture on my screen, although it is very slow.
Using omxplayer gives me a brilliant picture; it is fast and correct.
And my own goal:
$ gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! omxh264dec ! eglglessink
also plays smoothly. But it flips the picture upside-down! :'(
I've tried omxh263dec and omxmjpegdec with the same result.
decodebin and playbin gave no better result either.
I could use videoflip, but it crashes my pipeline as reliably as an AK-74 would:
*** glibc detected *** gst-launch-1.0: free(): invalid pointer: 0x004aaf50 ***
Aborted
My gpu_mem in config.txt is set to 256
$ gst-launch-1.0 --version
gst-launch-1.0 version 1.2.0
GStreamer 1.2.0
http://packages.qa.debian.org/gstreamer1.0
I've installed it via apt-get install.
Thanks in advance!
The video is actually playing "correctly"; it is the OpenGL coordinate system that is flipped.
I have had success with this issue by adding a format string as a work-around:
gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! avdec_h264 ! "video/x-raw, format=(string)I420" ! eglglessink
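Presumably the same caps filter can be tried with the hardware decoder as well (an untested sketch based on the pipelines above; I can't confirm it fixes the flip with omxh264dec):

gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! omxh264dec ! "video/x-raw, format=(string)I420" ! eglglessink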