How to create an MPEG-2 Transport Stream pipeline using Python and GStreamer

While developing a streaming audio application, I used the gst-launch-1.0 command-line tool to generate an MPEG transport stream for testing. This worked as intended (I was able to serve the stream from a simple HTTP server and hear it in VLC media player). I then tried to replicate the encoding part of that stream in Python GStreamer code. The Python version connected to the server OK, but no audio could be heard. I'm trying to understand why the command-line implementation worked but the Python one did not. I am working on Mac OS 10.11 with Python 2.7.
The command line that worked was as follows:
gst-launch-1.0 audiotestsrc freq=1000 ! avenc_aac ! aacparse ! mpegtsmux ! tcpclientsink host=127.0.0.1 port=9999
The Python code that creates the GStreamer pipeline is below. It instantiated without producing any errors and it connected successfully to the HTTP server, but no sound could be heard through VLC. I verified that the AppSrc in the Python code was working by using it with a separate GStreamer pipeline that played the audio directly; that worked fine.
def create_mpeg2_pipeline():
    play = Gst.Pipeline()
    src = GstApp.AppSrc(format=Gst.Format.TIME, emit_signals=True)
    src.connect('need-data', need_data, samples())  # need_data and samples defined elsewhere
    play.add(src)

    capsFilterOne = Gst.ElementFactory.make('capsfilter', 'capsFilterOne')
    capsFilterOne.props.caps = Gst.Caps('audio/x-raw, format=(string)S16LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterOne)
    src.link(capsFilterOne)

    audioConvert = Gst.ElementFactory.make('audioconvert', 'audioConvert')
    play.add(audioConvert)
    capsFilterOne.link(audioConvert)

    capsFilterTwo = Gst.ElementFactory.make('capsfilter', 'capsFilterTwo')
    capsFilterTwo.props.caps = Gst.Caps('audio/x-raw, format=(string)F32LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterTwo)
    audioConvert.link(capsFilterTwo)

    aacEncoder = Gst.ElementFactory.make('avenc_aac', 'aacEncoder')
    play.add(aacEncoder)
    capsFilterTwo.link(aacEncoder)

    aacParser = Gst.ElementFactory.make('aacparse', 'aacParser')
    play.add(aacParser)
    aacEncoder.link(aacParser)

    mpegTransportStreamMuxer = Gst.ElementFactory.make('mpegtsmux', 'mpegTransportStreamMuxer')
    play.add(mpegTransportStreamMuxer)
    aacParser.link(mpegTransportStreamMuxer)

    tcpClientSink = Gst.ElementFactory.make('tcpclientsink', 'tcpClientSink')
    tcpClientSink.set_property('host', '127.0.0.1')
    tcpClientSink.set_property('port', 9999)
    play.add(tcpClientSink)
    mpegTransportStreamMuxer.link(tcpClientSink)
My question is: how does the GStreamer pipeline that I've implemented in Python differ from the command-line pipeline? And more generally, how do you DEBUG this sort of thing? Does GStreamer have any 'verbose' mode?
Thanks.

One question at a time:
1) How does it differ from gst-launch-1.0?
It is hard to tell without seeing your full code but I'll try to guess:
gst-launch-1.0 does proper pad linking. When you have a muxer, as you do here, you can't link to it directly, because the muxer is created without any sink pads; you need to request one before you can link. Take a look at dynamic pads: https://gstreamer.freedesktop.org/documentation/application-development/basics/pads.html
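For example, here is a minimal sketch of what that could look like with the element names from your code, assuming the sink_%d request-pad template that mpegtsmux normally advertises in gst-inspect-1.0:

# Instead of aacParser.link(mpegTransportStreamMuxer), request a sink pad
# from the muxer and link the pads by hand.
mux_sink_pad = mpegTransportStreamMuxer.get_request_pad('sink_%d')
parser_src_pad = aacParser.get_static_pad('src')
if parser_src_pad.link(mux_sink_pad) != Gst.PadLinkReturn.OK:
    raise RuntimeError('Failed to link aacparse to mpegtsmux')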
Also, gst-launch-1.0 has error handling, so it checks that every action succeeded and reports an error otherwise. I'd recommend you add a GstBus message handler so that you are at least notified of error messages. You should also check the return values of the GStreamer functions you call; that would let you catch this linking error in your program.
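A rough sketch of both, assuming a GLib main loop is running somewhere in your application:

def on_bus_message(bus, message):
    # Print errors and end-of-stream notifications posted by any element.
    if message.type == Gst.MessageType.ERROR:
        err, debug = message.parse_error()
        print('GStreamer error: %s (%s)' % (err, debug))
    elif message.type == Gst.MessageType.EOS:
        print('End of stream')

bus = play.get_bus()
bus.add_signal_watch()
bus.connect('message', on_bus_message)

# Element.link() returns a boolean, so every link can be checked:
if not aacEncoder.link(aacParser):
    raise RuntimeError('Could not link avenc_aac to aacparse')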
2) GStreamer debugging?
Mostly done by setting the GST_DEBUG environment variable: https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html#the-debug-log
Run your application with: GST_DEBUG=6 ./yourapplication and you should see lots of logging.
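Level 6 is very noisy; you can scope the level to individual categories, e.g. GST_DEBUG=3,mpegtsmux:6 ./yourapplication. If your Python bindings expose the debug functions, you can also set thresholds programmatically; a sketch, assuming the gst_debug_* functions are available through GObject introspection:

Gst.debug_set_active(True)
Gst.debug_set_default_threshold(Gst.DebugLevel.WARNING)
Gst.debug_set_threshold_for_name('mpegtsmux', Gst.DebugLevel.LOG)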

Related

Streaming arbitrary data with GStreamer over the network

How can we use GStreamer to stream arbitrary data?
In this very informative talk (https://www.youtube.com/watch?v=ZphadMGufY8) the speaker mentions that GStreamer is media-agnostic and describes a use case where GStreamer is used for a non-media application, so this should be possible, but I haven't found anything useful on the internet so far.
The particular use case I am interested in: a high-speed USB Bayer camera is connected to an RPi4, and the RPi4 reads the camera frames and forwards them over the network. Now, GStreamer doesn't support (as far as I know) sending Bayer-formatted frames via UDP/RTP, so I need to convert them to something else, i.e. to RGB using the bayer2rgb element. This conversion, however, consumes part of the RPi4's processing power, so the rate at which the RPi4 can read and send frames from the camera is significantly lower.
On top of that, I am using the RPi4 as a data acquisition system for other sensors as well, so it would be great if I could use GStreamer to stream them all.
The sender pipeline is
gst-launch-1.0 videotestsrc ! video/x-bayer,format=bggr,width=1440,height=1080,framerate=10/1 ! rtpgstpay mtu=1500 ! queue ! multiudpsink clients=127.0.0.1:5000
The receiver pipeline is
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp, media=(string)application, clock-rate=(int)90000, encoding-name=(string)X-GST" ! queue ! rtpgstdepay ! bayer2rgb ! videoconvert ! autovideosink
Take care with the MTU: as far as I know, the Pi supports 1500 bytes only, no jumbo frames.
Also, expect some missing packets.
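For the other sensors, the same rtpgstpay approach should work with arbitrary buffers pushed in through appsrc. A rough Python sketch (the application/x-sensor-data caps string is made up for illustration; rtpgstpay accepts arbitrary caps and carries them across to rtpgstdepay):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# appsrc lets the application push arbitrary bytes into the pipeline;
# rtpgstpay wraps them (caps included) in RTP so rtpgstdepay can restore them.
pipeline = Gst.parse_launch(
    'appsrc name=sensorsrc is-live=true caps=application/x-sensor-data ! '
    'rtpgstpay mtu=1500 ! udpsink host=127.0.0.1 port=5001')
appsrc = pipeline.get_by_name('sensorsrc')
pipeline.set_state(Gst.State.PLAYING)

# Push one sensor reading as a buffer (normally this runs in a loop or callback).
appsrc.emit('push-buffer', Gst.Buffer.new_wrapped(b'\x01\x02\x03\x04'))
appsrc.emit('end-of-stream')

On the receive side, the caps from the receiver pipeline above (encoding-name=X-GST) followed by rtpgstdepay and an appsink should recover the same buffers.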

How to measure intra GStreamer/gst-launch latency

I'm currently building a GStreamer pipeline on my Raspberry Pi as follows:
v4lsrc - h264enc - mpegtsmux - udpsink
I'm trying to figure out how to measure the time in milliseconds spent in one or several elements, e.g. the time consumed by h264enc and mpegtsmux.
I did find the following entry, which solves this problem by using a GStreamer plugin:
How can I quantitatively measure gstreamer H264 latency between source and display?
But I'm not really sure how to compile such a plugin, especially on the Raspberry Pi! I've read that every GStreamer element needs to calculate its latency anyway. It would be super cool if anybody could help me with that!
Cheers,
Markus
I don't think there's an out-of-the-box way to do this. In the past we used to wrap the element to be measured between a pair of identity elements and add the whole thing to a bin. Very clunky.
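If you don't want to compile anything, that identity trick can still give rough numbers from Python by listening to the handoff signal on an identity element placed on each side of the element under test. A sketch (element names are illustrative; x264enc stands in for whatever encoder you use, and matching on PTS is only approximate if the encoder reorders frames):

import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# The two identity elements fire a handoff per buffer; the time between the
# 'before' and 'after' handoffs for the same PTS approximates the encoder's cost.
pipeline = Gst.parse_launch(
    'videotestsrc ! identity name=before signal-handoffs=true ! '
    'x264enc ! identity name=after signal-handoffs=true ! fakesink')

entry_times = {}

def on_before(element, buf):
    entry_times[buf.pts] = time.time()

def on_after(element, buf):
    start = entry_times.pop(buf.pts, None)
    if start is not None:
        print('encoder took %.1f ms' % ((time.time() - start) * 1000.0))

pipeline.get_by_name('before').connect('handoff', on_before)
pipeline.get_by_name('after').connect('handoff', on_after)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()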
I went ahead and uploaded some code that uses the new GstMeta API and two new elements: markin and markout. So you can use it like so:
export GST_DEBUG=markout:5
gst-launch-1.0 -f -e videotestsrc pattern=ball ! video/x-raw,width=320,height=240 ! markin name=moo ! videoscale ! markout ! video/x-raw,width=1280,height=720 ! ximagesink
https://github.com/mpekar/gstreamer_timestamp_marking

How to get application process to wait until the socket has data to read using libevent bufferevents?

I'm working with libevent for the first time and have been having an issue trying to get my application not to run until the read callback is called. I am using bufferevents as well. Essentially what I am trying to do is avoid the sleep in my main application loop and instead have the OS wake up the process (via libevent) when there is data to be read off the socket. Does anyone know how to do this? I found that in an alpha build of libevent you can set a base event loop to be EVLOOP_NO_EXIT_ON_EMPTY, but from looking at the libevent code I believe that will just use up my whole proc. I also read on this question that it is a bad idea to set a socket to blocking on Windows, which is why I haven't done that as a solution either. I will mark this with libuv and libev too since they are similar and might contribute to my solution.
You have to use the following API. Some of it may be outdated; you can search for the newer equivalents.
struct event_base *base;
struct event g_eve;

base = event_init();
/* After binding the socket, register it for read events using the API below. */
event_set(&g_eve, SockFd, EV_READ | EV_PERSIST, CallbackFunction, &g_eve);
event_add(&g_eve, NULL);
event_base_dispatch(base);

Video streaming over RTP using GStreamer

I am trying to stream a video file from one device to another over RTP using GStreamer. On the sender side I am using the following command:
gst-launch filesrc location=/home/kuber/Desktop/MELT.MPG ! mpegparse ! rtpsend ip=localhost
But this gives the following error: no element "rtpsend". I downloaded all the RTP tools and still get the same error. Am I using rtpsend in some wrong way?
Also, can someone give me the command line for streaming a video file (stored locally on my laptop, not the videotestsrc test source) from one device to another?
Assuming this is an MPEG-1/2 elementary stream (because you are using mpegparse) that you want to send out, you need to use rtpmpvpay after mpegparse and then give its output to udpsink.
mpegparse ! rtpmpvpay ! udpsink host="hostipaddr" port="someport"
I am not aware of any rtpsend plugin as such. The above holds true for any streaming over RTP.
Do a gst-inspect | grep rtp to see all the payloaders and depayloaders.
If it is an MPEG-PS stream, you need to put mpegpsdemux first, before the rest of the pipeline.
EDIT:
Why not remove mpegparse? I don't see why you need it. You should learn to look at the source and sink pad requirements in gst-inspect for each component; that tells you what compatibility is needed between elements. Receiving will be the reverse: udpsrc port="portno" ! capsfilter caps="application/x-rtp, pt=32, ...enter caps here..." ! rtpmpvdepay ! ...
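For reference, here is a receive-side sketch in Python using Gst.parse_launch, with the RTP caps I would expect for an MPEG video elementary stream (MPV, payload type 32, 90 kHz clock); treat the caps and the decodebin choice as assumptions to verify against your own stream:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# udpsrc needs the RTP caps up front because raw UDP carries no stream description.
receiver = Gst.parse_launch(
    'udpsrc port=5000 caps="application/x-rtp, media=video, clock-rate=90000, '
    'encoding-name=MPV, payload=32" ! rtpmpvdepay ! decodebin ! '
    'videoconvert ! autovideosink')
receiver.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()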

Correct GStreamer pipeline for a particular RTSP stream

I'm trying to convert this RTSP URL to something else (anything!) using this gst pipeline:
gst-launch rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov ! rtpmp4vdepay ! filesink location=somebytes.bin
This gives the following error:
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2791): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-linked (-1)
So I guess it's something about connecting the RTSP source to the depayloader. If I change the pipeline to use rtpmp4gdepay rather than rtpmp4vdepay, it works and produces something, but I'm not sure what the output format is.
Does anyone know what pipeline I should be using to get at the video from this URL? I'm assuming it's mp4/h264/aac, but maybe it's not.
Try this first:
gst-launch-0.10 -v playbin2 uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov
or
gst-launch-1.0 -v playbin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov
A .mov file is not directly streamable, so your RTSP source is probably sending you two elementary streams (I am guessing this is Darwin Streaming Server or something similar). So you may have to set up two outputs from rtspsrc, one for audio and one for video.
rtpmp4vdepay is for elementary MPEG-4 video streams. Does your source file actually use the MPEG-4 video codec? If it is H.264, replace it with rtph264depay. You can pass the output to a decoder and play it if you want; feed it to decodebin. To dump it as raw H.264 you will first have to parse it and add NAL headers (h264parse, perhaps?).
rtpmp4gdepay is most probably what the audio stream needs.
I am guessing your file is H.264/AAC, which is why rtpmp4vdepay won't work and rtpmp4gdepay will. But you are not doing anything with the video when you set up rtpmp4gdepay, so you need to handle that too.
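Because rtspsrc creates its source pads dynamically (one per negotiated stream), the usual way to handle both streams in code is a pad-added handler that routes each pad to the right depayloader. A Python sketch, assuming the guess above (H.264 video plus AAC audio) is correct:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.Pipeline()
src = Gst.ElementFactory.make('rtspsrc', 'src')
src.set_property('location', 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov')

# Video branch: depayload, convert to byte-stream H.264 with start codes, dump to a file.
vdepay = Gst.ElementFactory.make('rtph264depay', 'vdepay')
vparse = Gst.ElementFactory.make('h264parse', 'vparse')
vcaps = Gst.ElementFactory.make('capsfilter', 'vcaps')
vcaps.set_property('caps', Gst.Caps.from_string('video/x-h264, stream-format=byte-stream'))
vsink = Gst.ElementFactory.make('filesink', 'vsink')
vsink.set_property('location', 'somebytes.h264')

# Audio branch: depayload and discard, just so the audio stream is consumed.
adepay = Gst.ElementFactory.make('rtpmp4gdepay', 'adepay')
asink = Gst.ElementFactory.make('fakesink', 'asink')

for e in (src, vdepay, vparse, vcaps, vsink, adepay, asink):
    pipeline.add(e)
vdepay.link(vparse)
vparse.link(vcaps)
vcaps.link(vsink)
adepay.link(asink)

def on_pad_added(rtspsrc, pad):
    # rtspsrc pads only appear after SDP negotiation; route each one by media type.
    caps = pad.get_current_caps() or pad.query_caps(None)
    media = caps.get_structure(0).get_string('media')
    if media == 'video':
        pad.link(vdepay.get_static_pad('sink'))
    elif media == 'audio':
        pad.link(adepay.get_static_pad('sink'))

src.connect('pad-added', on_pad_added)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()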