gst-launch-1.0 two pipelines/sinkfiles - streaming

I am working on a flying drone that sends a live stream from a Raspberry Pi 2 to my computer through a 3G modem/Wi-Fi, and the stream is made with this command:
sudo raspivid -t 999999999 -w 320 -h 240 -fps 20 -rot 270 -b 100000 -o - | gst-launch-1.0 -e -vvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=192.168.0.103 port=5000
The stream works very well, but I have a problem: while raspivid is running I want to take a picture every five seconds, and when I execute this command while raspivid is running I get this:
root@raspberrypi:/var/www/camera# /usr/bin/raspistill -o cam2.jpg
mmal: mmal_vc_component_enable: failed to enable component: ENOSPC
mmal: camera component couldn't be enabled
mmal: main: Failed to create camera component
mmal: Failed to run camera app. Please check for firmware updates
Now what solutions do I have? Another idea is to use GStreamer with both udpsink and filesink to a .avi, but I get an error again:
WARNING: erroneous pipeline: could not link multifilesink0 to filesink0
What can i do in this case?
Thanks.

AFAIK, only one Raspberry Pi program can grab the camera at a time. Since you're always streaming live video, that precludes you from adding the five-second snapshots on the Pi side (unless you write something custom from scratch).
What I'd suggest instead is handling the five-second snapshots on the receiving side, using the same encoded video data you're already using for the live stream. This eases battery usage on your drone, and all the data you need is being sent already.
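For example, a receiver-side pipeline along these lines (a sketch, assuming the H.264/RTP stream arrives on UDP port 5000 as in your sender; the snapshot filename pattern is just an example) can show the live video and write a JPEG every five seconds:
gst-launch-1.0 -e udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! tee name=t t. ! queue ! videoconvert ! autovideosink t. ! queue ! videorate ! video/x-raw,framerate=1/5 ! videoconvert ! jpegenc ! multifilesink location=snapshot-%05d.jpg
The tee splits the decoded frames, so the live view keeps running while videorate drops the second branch to one frame every five seconds for multifilesink.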

Related

Setting up a USB webcam RTSP stream with GStreamer

I'm using GStreamer to send the camera feed of /dev/video1 (a Raspberry Pi's USB webcam) through an RTSP server that I can connect to from another Raspberry Pi.
Result of v4l2-ctl -d /dev/video1 --list-formats:
ioctl: VIDIOC_ENUM_FMT
Type: Video Capture
[0]: 'MJPG' (Motion-JPEG, compressed)
[1]: 'YUYV' (YUYV 4:2:2)
The pipeline I'm using is
./gst-rtsp-launch --port 8555 '( v4l2src device=/dev/video1 ! image/jpeg,width=800,height=600,framerate=30/1 ! jpegparse ! rtpjpegpay name=pay0 pt=96 )' --gst-debug-level=3
When running it, and letting the other machine connect, the console gives this message:
0:00:02.097412343 3234 0xb4c1c0c0 FIXME default gstutils.c:3981:gst_pad_create_stream_id_internal:<appsrc0:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
0:00:02.102907578 3234 0xb5a07600 WARN v4l2src gstv4l2src.c:692:gst_v4l2src_query:<v4l2src0> Can't give latency since framerate isn't fixated !
0:00:02.170888076 3234 0xb4c1b980 WARN v4l2bufferpool gstv4l2bufferpool.c:790:gst_v4l2_buffer_pool_start:<v4l2src0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:02.410829991 3234 0x166ba90 FIXME rtspmedia rtsp-media.c:3581:gst_rtsp_media_suspend: suspend for dynamic pipelines needs fixing
0:00:02.414457433 3234 0x166ba90 FIXME rtspmedia rtsp-media.c:3581:gst_rtsp_media_suspend: suspend for dynamic pipelines needs fixing
0:00:02.414551635 3234 0x166ba90 WARN rtspmedia rtsp-media.c:3607:gst_rtsp_media_suspend: media 0xb5a34130 was not prepared
0:00:03.878249884 3234 0x166ba90 WARN rtspmedia rtsp-media.c:3868:gst_rtsp_media_set_state: media 0xb5a34130 was not prepared
On the client Raspberry Pi, using VLC against the server's static IP (vlc rtsp://192.168.0.10:8555/video) gives this error (and triggers the previous one on the other board):
mmal: mmal_component_create_core: could not create component 'vc.ril.hvs' (1)
mmal: mmal_vc_component_create: failed to create component 'vc.ril.hvs' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.hvs' (1)
mmal: mmal_vc_component_create: failed to create component 'vc.ril.hvs' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.hvs' (1)
mmal: mmal_vc_component_create: failed to create component 'vc.ril.hvs' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.hvs' (1)
mmal: mmal_vc_port_info_set: failed to set port info (3:0): EINVAL
mmal: mmal_vc_port_set_format: mmal_vc_port_info_set failed 0x909bcaa0 (EINVAL)
Falha de segmentação
The last line means "Segmentation fault". The screen on the client board flickers black before giving this error, and the board connected to the webcam only shows its error after the client has connected.
Connecting to localhost on the same board using vlc rtsp://127.0.0.1:8555/video works for a little bit, then it breaks.
How can I fix this pipeline, so the video can be shown correctly through connection between the two boards?
For the record:
I asked in the comments which version of GStreamer you were using, to which the answer was "1.14.4".
I suggested you update to the latest version (1.20.1), because a segmentation fault where you see it sounds like a potential bug in GStreamer.
It turns out that was correct: updating GStreamer (to 1.18.4) resolved the problem!
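If you want to check what you are running before updating, something like this works (a sketch for a Debian-based Raspberry Pi OS; note the distribution pins the GStreamer version, so getting to 1.18+ may mean moving to a newer OS release):
gst-launch-1.0 --version
sudo apt update && sudo apt install --only-upgrade gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good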

Electron RPi: Requested device not found (camera)

I'm building an Electron streaming app that I deploy on a Raspberry Pi 3 with an attached camera (OV5647, 5 Mpx) that supposedly supports YUV/RAW RGB formats. When I try to access it via:
const constraints = { "video": true }
navigator.mediaDevices.getUserMedia(constraints)
  .then((stream) => { /* attach stream to a <video> element */ })
  .catch((err) => console.error(err)) // rejects with the DOMException below
I get an error: DOMException: Requested device not found and basically no further details.
I also have a different app based on gstreamer and it is able to stream image from this camera with following input settings (it works on the same device):
gst_parse_launch ("webrtcbin bundle-policy=max-bundle name=sendrecv "
TURN_SERVER
"v4l2src ! video/x-raw,width=640,height=480,framerate=30/1 ! v4l2h264enc ! video/x-h264,level=(string)3.1,stream-format=(string)byte-stream ! h264parse ! rtph264pay ! "
"" RTP_CAPS_H264 "96 ! sendrecv. "
"alsasrc device=hw:0 ! audioamplify amplification=15 ! audio/x-raw,channels=(int)2,format=(string)S32LE,rate=(int)44100,layout=(string)interleaved ! deinterleave name=d d.src_0 ! queue ! audioconvert ! audioresample ! webrtcdsp echo-suppression-level=2 noise-suppression-level=3 voice-detection=true ! queue ! audioconvert ! opusenc ! rtpopuspay ! "
"queue ! " RTP_CAPS_OPUS "97 ! sendrecv. ", &error);
My question is: what would be the further steps to analyse why Chromium can't recognise my camera? Is it a matter of drivers or rather of app configuration?
Also, would it be possible to acquire the video stream directly from gstreamer somehow? And pass it further to WebRTC PeerConnection within Electron app, not using the UserMedia at all?
EDIT:
I've tried opening the same application via a browser on a desktop version of Raspbian (the same code hosted as a webpage on an external server). It failed with the same error on Chrome, but worked on Firefox. So it seems the Chrome backend cannot communicate with the camera. How is it different from the Firefox backend, and how do I add the missing bits manually?
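One way to narrow this down (a sketch; the device path is an example) is to compare what the V4L2 layer reports against what Chromium enumerates, since Chromium's getUserMedia only lists devices that advertise the video-capture capability:
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --info
In the devtools console, navigator.mediaDevices.enumerateDevices() then shows what Chromium itself sees; if the camera is missing there, that points at the driver/backend rather than the app configuration.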
FYI:
I've filed a bug report in Chromium, as it appears to be a problem since version 89.0.4389.128: https://bugs.chromium.org/p/chromium/issues/detail?id=1259138

How to stream via RTMP using GStreamer?

I am attempting to stream video and audio using GStreamer to an RTMP server (Wowza), but there are a number of issues.
There is almost no documentation about how to properly utilise rtmpsink, a plugin that sends media via RTMP to a specified server. Not only that, but crafting a correct, rtmpsink-compatible GStreamer pipeline is currently simply a trial-and-error exercise.
My current GStreamer pipeline is:
sudo gst-launch-1.0 -e videotestsrc ! queue ! videoconvert ! x264enc ! flvmux streamable=true ! queue ! rtmpsink location='rtmp://<ip_address>/live live=true'
Running the above on my Linux machine spits out this error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstRTMPSink:rtmpsink0: Could not open resource for writing.
Additional debug info:
gstrtmpsink.c(246): gst_rtmp_sink_render (): /GstPipeline:pipeline0/GstRTMPSink:rtmpsink0:
Could not connect to RTMP stream "rtmp://31.24.217.8/live live=true" for writing
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
ERROR: from element /GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0:
streaming task paused, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstQueue:queue0: Internal data flow error.
Additional debug info:
gstqueue.c(992): gst_queue_handle_sink_event (): /GstPipeline:pipeline0/GstQueue:queue0:
streaming task paused, reason error (-5)
Due to the lack of documentation on the Wowza side, another issue is pinpointing the correct IP address to point rtmpsink at; and due to the lack of documentation on the GStreamer side, proper RTMP authentication remains elusive, aside from some forum examples that cannot be confirmed as working because of other variables.
What is the correct GStreamer pipeline for streaming via RTMP using rtmpsink, and how do I properly implement rtmpsink for this, with and without authentication?
Actually, the pipeline you're using is working fine.
However, disabling Wowza's RTMP security is a must, as is pointing the pipeline at the correct address.
Follow the guidelines on this page: https://www.wowza.com/forums/content.php?36-How-to-set-up-live-streaming-using-an-RTMP-based-encoder
Re-check that RTMP is enabled in the application's Playback Types.
Disable all security options to ensure GStreamer compatibility: in the Playback Security tab, check that No client restrictions is selected (the default).
In the Sources tab, in the left column, you can check the server settings.
Once all these steps are done, we can launch the previous pipeline:
gst-launch-1.0 -e videotestsrc ! queue ! videoconvert ! x264enc ! flvmux streamable=true ! queue ! rtmpsink location='rtmp://192.168.1.40:1935/livertmp/myStream'
It works, and it is possible to check the result by clicking the Test Players button.
Although it is probably out of scope, it is possible to add audio to the pipeline and to improve it by adding some properties that were missing:
gst-launch-1.0 videotestsrc is-live=true ! videoconvert ! x264enc bitrate=1000 tune=zerolatency ! video/x-h264 ! h264parse ! video/x-h264 ! queue ! flvmux name=mux ! rtmpsink location='rtmp://192.168.1.40:1935/livertmp/myStream' audiotestsrc is-live=true ! audioconvert ! audioresample ! audio/x-raw,rate=48000 ! voaacenc bitrate=96000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.
Regarding password-protected content, it is not straightforward to achieve with GStreamer.
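One untested avenue (only a sketch: rtmpsink hands its location string to librtmp, so librtmp connection options such as pubUser and pubPasswd can be appended after the URL, assuming the server uses authmod-style source authentication; the address and credentials below are placeholders):
gst-launch-1.0 -e videotestsrc is-live=true ! videoconvert ! x264enc tune=zerolatency ! flvmux streamable=true ! rtmpsink location='rtmp://192.168.1.40:1935/livertmp/myStream live=true pubUser=myUser pubPasswd=myPassword'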

Raspberry Pi webcam server performance issue

I am able to access my Raspberry Pi webcam server from all the PCs on the same network, but I found lag (>5 s) in the streaming. Is it possible to reduce the lag? Any ideas? Please share...
Thanks,
Hema Chowdary
Well, this depends on what you are using to send and receive the stream, but for fun let's say you are using GStreamer. People have reported sub-100 ms latency over Wi-Fi with the following setup, even though I never got below 110 ms.
Sender:
gst-launch-0.10 alsasrc device=hw:0 ! audio/x-raw-int, rate=48000, channels=1, endianness=1234, width=16, depth=16, signed=true ! udpsink host=192.168.1.255 port=5000
Receiver:
gst-launch-0.10 udpsrc buffer-size=1 port=5000 ! audio/x-raw-int, rate=48000, channels=1, endianness=1234, width=16, depth=16, signed=true ! alsasink sync=false
I did not come up with this configuration for GStreamer but rather SWAP_File did on the official Raspberry Pi forum.
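Note these are gst-launch-0.10 pipelines; on GStreamer 1.x the raw-audio caps syntax changed, so a rough, untested equivalent would be:
Sender:
gst-launch-1.0 alsasrc device=hw:0 ! audio/x-raw,format=S16LE,rate=48000,channels=1,layout=interleaved ! udpsink host=192.168.1.255 port=5000
Receiver:
gst-launch-1.0 udpsrc buffer-size=1 port=5000 caps="audio/x-raw,format=S16LE,rate=48000,channels=1,layout=interleaved" ! alsasink sync=false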

RTSP error - Segmentation fault

I get "segmentation fault" error when I play the following RTSP plugin:
./TEST "( v4l2src always-copy=FALSE input-src=composite ! video/x-raw-yuv,format=\(fourcc\)NV12, width=320,height=240 ! queue ! dmaiaccel ! dmaienc_h264 encodingpreset=2 ratecontrol=2 intraframeinterval=23 idrinterval=46 targetbitrate=1000000 ! rtph264pay name=pay0 pt=96 )"
TEST is the test-launch application from the RTSP server examples. I get the following error:
davinci_resizer davinci_resizer.2: RSZ_G_CONFIG:0:1:124
vpfe-capture vpfe-capture: IPIPE Chained
vpfe-capture vpfe-capture: Resizer present
tvp514x 1-005d: tvp5146 (Version - 0x03) found at 0xba (DaVinci I2C adapter)
vpfe-capture vpfe-capture: dma_alloc_coherent size 7168000 failed
Segmentation fault
Can anyone tell me what is going wrong?
Thanks,
Maz
See vpfe-capture vpfe-capture: dma_alloc_coherent size 7168000 failed
Memory allocation has failed somewhere in your capture driver. This question is better suited for TI's e2e list, no? I don't think this is a generic GStreamer issue, but one specific to the embedded hardware.
Why don't you get a simple filesrc ! h264parse ! rtph264pay pipeline up first and then slowly make it more and more complicated? [Replace the bitstream with YUV and do the encoding, then add the capture.]
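For instance, a minimal test along those lines (a sketch; sample.h264 stands in for any raw H.264 bitstream file you have):
./TEST "( filesrc location=sample.h264 ! h264parse ! rtph264pay name=pay0 pt=96 )"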