Correct gstreamer pipeline for particular rtsp stream - mp4

I'm trying to convert this RTSP URL to something else (anything!) using this gst pipeline:
gst-launch rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov ! rtpmp4vdepay ! filesink location=somebytes.bin
This gives the following error:
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2791): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-linked (-1)
So I guess it's something about connecting the RTSP source to the depayloader. If I change the pipeline to use rtpmp4gdepay rather than rtpmp4vdepay, it works and produces something, but I'm not sure what the output format is.
Does anyone know what pipeline I should be using to get at the video from this URL? I'm assuming it's mp4/h264/aac, but maybe it's not.

Try this first:
gst-launch-0.10 -v playbin2 uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov
or
gst-launch-1.0 -v playbin uri=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov

A .mov file is not directly streamable, so your RTSP source is probably sending you two elementary streams (I am guessing this is Darwin Streaming Server or something similar). That means you may have to set up two outputs from rtspsrc: one for audio and one for video.
rtpmp4vdepay is for elementary MPEG-4 video streams. Does your source file actually use an MPEG-4 video codec? If it is H.264, replace it with rtph264depay. You can pass the output to a decoder and play it if you want; just feed it to decodebin. To dump it as raw H.264 you will first have to parse it and add NAL headers (h264parse, perhaps).
rtpmp4gdepay is most probably matching the audio stream.
I am guessing your file is H.264/AAC, which is why rtpmp4vdepay won't work and rtpmp4gdepay will. But you are not doing anything about the video when you set up rtpmp4gdepay, so you need to handle that as well; see the sketch below.
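For example, if the stream really is H.264 video with AAC audio, something along these lines should depayload both branches from rtspsrc (the element choice, the mp4mux container, and the output file name are assumptions on my part; -e is there so mp4mux can finalize the file when you stop the pipeline):
gst-launch-1.0 -e rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov name=src \
  src. ! rtph264depay ! h264parse ! mp4mux name=mux ! filesink location=bunny.mp4 \
  src. ! rtpmp4gdepay ! aacparse ! mux.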

Related

Streaming arbitrary data with Gstreamer over network

How can we use Gstreamer to stream arbitrary data?
In this very informative talk (https://www.youtube.com/watch?v=ZphadMGufY8) the lecturer mentions that GStreamer is media-agnostic and describes a use case where GStreamer is used for a non-media application, so this should be possible, but I haven't found anything useful on the internet so far.
The particular use case I am interested in: a high-speed USB Bayer camera is connected to an RPi4, which reads the camera frames and forwards them over the network. Now, GStreamer doesn't (as far as I know) support sending Bayer-formatted frames via UDP/RTP, so I need to convert them to something else, i.e. to RGB using the bayer2rgb element. This conversion, however, consumes part of the RPi4's processing power, so the rate at which the RPi4 can read and send camera frames drops significantly.
On top of that, I am using the RPi4 as a data acquisition system for other sensors as well, so it would be great if I could use GStreamer to stream them all.
The sender pipeline is
gst-launch-1.0 videotestsrc ! video/x-bayer,format=bggr,width=1440,height=1080,framerate=10/1 ! rtpgstpay mtu=1500 ! queue ! multiudpsink clients=127.0.0.1:5000
The receiver pipeline is
gst-launch-1.0 -v udpsrc port=5000 caps="application/x-rtp, media=(string)application, clock-rate=(int)90000, encoding-name=(string)X-GST" ! queue ! rtpgstdepay ! bayer2rgb ! videoconvert ! autovideosink
Take care with the MTU; as far as I know, the Pi supports 1500 bytes only, no jumbo frames.
Also expect missing packets.
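If your camera driver exposes the Bayer frames through V4L2, the same payloader should let you skip bayer2rgb on the Pi entirely and keep the conversion on the receiving side, roughly like this (v4l2src, the device node, and the Bayer caps are assumptions about your hardware):
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-bayer,format=bggr,width=1440,height=1080,framerate=10/1 ! rtpgstpay mtu=1500 ! queue ! multiudpsink clients=127.0.0.1:5000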

How to create MPEG2 Transport Stream Pipeline Using Python and Gstreamer

In developing a streaming audio application I used the gst-launch-1.0 command-line tool to generate an MPEG Transport stream for testing. This worked as intended (I was able to serve the stream from a simple http server and hear it using VLC media player). I then tried to replicate the encoding part of that stream in Python gstreamer code. The Python version connected to the server ok, but no audio could be heard. I'm trying to understand why the command-line implementation worked, but the Python one did not. I am working on Mac OS 10.11 and Python 2.7.
The command line that worked was as follows:
gst-launch-1.0 audiotestsrc freq=1000 ! avenc_aac ! aacparse ! mpegtsmux ! tcpclientsink host=127.0.0.1 port=9999
The Python code that created the gstreamer pipeline is below. It instantiated without producing any errors and it connected successfully to the http server, but no sound could be heard through VLC. I verified that the AppSrc in the Python code was working, by using it with a separate gstreamer pipeline that played the audio directly. This worked fine.
def create_mpeg2_pipeline():
    play = Gst.Pipeline()
    src = GstApp.AppSrc(format=Gst.Format.TIME, emit_signals=True)
    src.connect('need-data', need_data, samples())  # need_data and samples defined elsewhere
    play.add(src)
    capsFilterOne = Gst.ElementFactory.make('capsfilter', 'capsFilterOne')
    capsFilterOne.props.caps = Gst.Caps('audio/x-raw, format=(string)S16LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterOne)
    src.link(capsFilterOne)
    audioConvert = Gst.ElementFactory.make('audioconvert', 'audioConvert')
    play.add(audioConvert)
    capsFilterOne.link(audioConvert)
    capsFilterTwo = Gst.ElementFactory.make('capsfilter', 'capsFilterTwo')
    capsFilterTwo.props.caps = Gst.Caps('audio/x-raw, format=(string)F32LE, rate=(int)44100, channels=(int)2')
    play.add(capsFilterTwo)
    audioConvert.link(capsFilterTwo)
    aacEncoder = Gst.ElementFactory.make('avenc_aac', 'aacEncoder')
    play.add(aacEncoder)
    capsFilterTwo.link(aacEncoder)
    aacParser = Gst.ElementFactory.make('aacparse', 'aacParser')
    play.add(aacParser)
    aacEncoder.link(aacParser)
    mpegTransportStreamMuxer = Gst.ElementFactory.make('mpegtsmux', 'mpegTransportStreamMuxer')
    play.add(mpegTransportStreamMuxer)
    aacParser.link(mpegTransportStreamMuxer)
    tcpClientSink = Gst.ElementFactory.make('tcpclientsink', 'tcpClientSink')
    tcpClientSink.set_property('host', '127.0.0.1')
    tcpClientSink.set_property('port', 9999)
    play.add(tcpClientSink)
    mpegTransportStreamMuxer.link(tcpClientSink)
My question is, how does the gstreamer pipeline that I've implemented in Python differ from the command-line pipeline? And more generally, how do you DEBUG this sort of thing? Does gstreamer have any 'verbose' mode?
Thanks.
One question at a time:
1) How does it differ from gst-launch-1.0?
It is hard to tell without seeing your full code, but I'll try to guess:
gst-launch-1.0 does proper pad linking. When you have a muxer like you do, you can't link to it directly, as it is created without any sink pads. You need to request one before you can link. Take a look at dynamic pads: https://gstreamer.freedesktop.org/documentation/application-development/basics/pads.html
Also, gst-launch-1.0 has error handling, so it checks that every action succeeded and reports an error otherwise. I'd recommend you add a GstBus message handler to get notified of error messages at least. You should also check the return values of the functions you call in GStreamer; that would allow you to catch this linking error in your program. See the sketch below.
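For instance, a minimal sketch of request-pad linking plus a bus watch, reusing the variable names from your code (the 'sink_%d' template name is an assumption, confirm it with gst-inspect-1.0 mpegtsmux; the bus watch also needs a GLib main loop running to deliver messages):
# Request a sink pad from the muxer instead of relying on element.link()
mux_pad = mpegTransportStreamMuxer.get_request_pad('sink_%d')
parser_pad = aacParser.get_static_pad('src')
if parser_pad.link(mux_pad) != Gst.PadLinkReturn.OK:
    raise RuntimeError('could not link aacparse to mpegtsmux')

# Report pipeline errors instead of failing silently
bus = play.get_bus()
bus.add_signal_watch()
def on_error(bus, msg):
    err, debug = msg.parse_error()
    print('GStreamer error: %s (%s)' % (err, debug))
bus.connect('message::error', on_error)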
2) Gstreamer debugging?
Mostly done by setting the GST_DEBUG variable: https://gstreamer.freedesktop.org/documentation/tutorials/basic/debugging-tools.html#the-debug-log
Run your application with: GST_DEBUG=6 ./yourapplication and you should see lots of logging.

Video streaming over RTP using gstreamer

I am trying to stream a video file using gstreamer from one device to another over RTP. At the sender side I am using the following command:
gst-launch filesrc location=/home/kuber/Desktop/MELT.MPG ! mpegparse ! rtpsend ip=localhost
But this gives the following error: no element "rtpsend". I downloaded all the RTP tools and still get the same error. Am I using rtpsend in some wrong way?
Also, can someone give me the command-line code for streaming a video file (stored locally on my laptop, not the videotestsrc test source) from one device to another?
Assuming this is an MPEG-1/2 elementary stream (because you are using mpegparse) that you want to send out, you need to use rtpmpvpay after mpegparse and then give the output to udpsink.
mpegparse ! rtpmpvpay ! udpsink host="hostipaddr" port="someport"
I am not aware of any rtpsend plugin as such. The above holds true for any streaming over RTP.
Do a gst-inspect | grep rtp to see all the payloaders and depayloaders.
If it is an MPEG-PS stream, you first need a mpegpsdemux before the rest of the pipeline.
EDIT:
Why not remove mpegparse? I don't see why you need it. You should learn to look at the source and sink requirements in the gst-inspect output of each component; that will tell you what compatibility is needed between nodes. Receiving will be the reverse: udpsrc port="portno" ! capsfilter caps="application/x-rtp, pt=32, ..enter caps here" ! rtpmpvdepay ! ...
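For instance, a complete receiver might look like this (the caps string must be copied from the sender's -v output; the values here and the mpeg2dec/autovideosink choice are only illustrative):
gst-launch-0.10 -v udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MPV, payload=(int)32" ! rtpmpvdepay ! mpeg2dec ! autovideosink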

Lossless compressed JPEG gstreamer element?

The pipeline below works fine for saving compressed JPEG images but is there a way to save lossless compressed JPEG images using gstreamer?
gst-launch v4l2src always-copy=false num-buffers=1 chain-ipipe=true ! 'video/x-raw-yuv,format=(fourcc)NV12, width=2176, height=1944' ! dmaiaccel ! dm365facedetect draw-square=true ! dmaienc_jpeg ! filesink location=$FILE_NAME
Assuming you have all the GStreamer plugins installed (good, bad, and ugly), you have an impressive number of lossless video compressors at your disposal via the FFmpeg-based GStreamer elements. These include ffenc_png (for PNG encoding), ffenc_jpegls (the lossless JPEG algorithm), and many less common ones.
However, if I'm reading your GStreamer command line correctly, you seem to be calling a series of custom components that are tied to a particular type of hardware (I have been Googling but haven't quite nailed down what it is). The JPEG encoder component is 'dmaienc_jpeg'. It's possible that the element preceding it in the chain (dm365facedetect) only outputs data that dmaienc_jpeg can interpret. However, if it outputs a general colorspace, then you can send it through an FFmpeg lossless encoder, possibly with a colorspace conversion in between. The answer can be ascertained by invoking 'gst-inspect' on the elements and studying the output (the src and sink data types).
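For instance, to see what dmaienc_jpeg will actually accept on its sink pad (gst-inspect-0.10 here, since those are 0.10-era elements):
gst-inspect-0.10 dmaienc_jpeg | grep -A 10 'SINK template'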
Update, pursuant to new data: Good news: that dm365facedetect element outputs raw YUV in an NV12 format. Very flexible, and you have a lot of options.
What platform are you on? If you are using Ubuntu Linux, install a bunch of GStreamer plugins using:
apt-get install gstreamer0.10-plugins-good \
gstreamer0.10-plugins-bad gstreamer0.10-plugins-ugly gstreamer0.10-ffmpeg
Some lossless codec options: PNG, via either 'pngenc' or 'ffenc_png' (although this may technically incur a tiny bit of loss due to YUV -> RGB colorspace conversion), 'ffenc_huffyuv', 'ffenc_jpegls', or 'ffenc_ljpeg'. When you encode these, send them through the avimux component. So, an example amendment to the end of your command line:
... ! dm365facedetect draw-square=true ! ffenc_ljpeg ! \
avimux ! filesink location=$FILE_NAME
Expect the lossless codec data to be somewhat larger than the JPEG data you were getting before. Experiment with different codecs to see what you like, and make sure you can decode the data on the other side using your preferred toolchain (FFmpeg and VLC should always be able to handle it).
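As a quick sanity check on the decoding side (assuming FFmpeg is installed and reusing the $FILE_NAME variable from your command line):
ffprobe "$FILE_NAME"    # or simply open the file in VLC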

RTSP Source Filter with GDCL MP4 Muxer incompatibility

I'm trying to use the GDCL MP4 Muxer with my RTSP source filter. They work fine together except that after stopping the graph, the muxer doesn't finalize the file and write the required tables to the end of the file via the file writer (some parts are written, starting from moov, but not the time table values). When I try another RTSP source filter (for which I don't have the source code), the table values are created with the GDCL MP4 Muxer.
But when I try Elecard's MP4 Muxer, it works fine with my RTSP source filter. So there is an incompatibility. I examined GDCL's source code but couldn't find what it was expecting from me. I already calculate and set timestamp values on the samples using the SetTime method, but GDCL still doesn't finalize the file. Is it caused by missing information or a missing signal when the graph stops? What could be the problem, any ideas?
One thing you should be aware of regarding Geraint's MP4 mux is that it checks that incoming media samples have both a start and a stop time. You might be setting only .tStart/AM_SAMPLE_TIMEVALID, which still makes sense for video, but it would be a problem here.
So the samples have to carry a stop time, or you need to change this check in the multiplexer code.
A typical symptom of the problem is that the generated files are empty or have zero duration.
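For illustration, in the source filter's output code that would mean giving every sample both timestamps before delivery, roughly like this (a sketch against the standard IMediaSample interface; the variable names are assumptions about your filter):
// Hypothetical sketch: every outgoing sample gets a start AND a stop time
REFERENCE_TIME rtStart = m_rtCurrentSample;             // assumed: the timestamp you already compute
REFERENCE_TIME rtStop  = rtStart + m_rtSampleDuration;  // assumed: duration of this frame/sample
HRESULT hr = pSample->SetTime(&rtStart, &rtStop);       // IMediaSample::SetTime sets both times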