How can I record only part of a live source with gstreamer?

Application consists of two pipelines:
Sending pipeline
filesrc ! decodebin ! encoder ! payloader ! udpsink
Receiving pipeline
udpsrc ! rtpbin ! depayloader ! decoder ! encoder ! filesink
The wanted behavior is that the sending pipeline plays a file, and when that has finished, another file plays and recording starts.
The actual behavior varies. With some approaches the recording starts from the same time as the first playback. I believe this is because the two pipelines share the same GSocket (which was needed to get it to work at all), so data arriving at the socket must be getting buffered somewhere.
Other approaches result in a few frames from before the recording should start, then a jump to after the recording begins, which produces a messy picture (inter-frames arriving without a preceding keyframe).
I've tried a couple of different approaches to try to get the recording to start at the right time:
Start the receiving pipeline when the second file starts playing
Start both pipelines at the same time and have a valve element dropping everything until the second file starts playing
Start both pipelines at the same time and Seek to the time where the second file starts playing
Start both pipelines at the same time, keep the receiving pipeline connected to a fakesink, and switch to the real filter chain when the second file starts playing
Set an offset on the receiving pipeline
I would be very grateful for any help with this!

Start both pipelines at the same time and have a valve element dropping everything until the second file starts playing
This actually works. The problem I had was that no picture fast update was sent, so it took a while for the next keyframe to arrive on its own.
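For reference, here is a rough sketch of how the valve approach can be driven from the application (GStreamer 1.x C API). The element names "dropvalve" and "encoder", and the recv_pipeline/send_pipeline handles, are assumptions about your application; the force-key-unit event is one way to request the "picture fast update" mentioned above:

#include <gst/gst.h>
#include <gst/video/video.h>

/* While the first file is playing: keep the valve closed so nothing
   reaches the recording branch. */
GstElement *valve = gst_bin_get_by_name (GST_BIN (recv_pipeline), "dropvalve");
g_object_set (valve, "drop", TRUE, NULL);

/* When the second file starts playing: open the valve ... */
g_object_set (valve, "drop", FALSE, NULL);

/* ... and ask the sending pipeline's encoder for an immediate keyframe so the
   recording does not start with undecodable inter-frames. */
GstElement *enc = gst_bin_get_by_name (GST_BIN (send_pipeline), "encoder");
gst_element_send_event (enc,
    gst_video_event_new_upstream_force_key_unit (GST_CLOCK_TIME_NONE, TRUE, 0));
gst_object_unref (enc);
gst_object_unref (valve);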

Related

AVB streaming a stored file from server to client: timestamping issue

I am working on an AVB application. We have created GStreamer plugins for the talker side and the listener side, and we use those plugins to transfer stored media.
I am using the pipelines below.
Talker side :
gst-launch-1.0 filesrc location=/home/input.mp4 ! queue ! avbsink interface=eth0 fd=0 (here avbsink is the element we created to transmit AVB packets)
Listener side :
gst-launch-1.0 avbsrc interface=eth0 dataSync=1 mediaType=0 fd=0 ! queue ! qtdemux name=mux mux.video_0 ! queue ! avdec_h264 ! autovideosink mux.audio_0 ! queue ! decodebin ! autoaudiosink
(I tried vaapidecode and vaapisink instead of avdec_h264 and autovideosink for hardware acceleration.)
The error coming up on the listener side is:
"WARNING: from element /GstPipeline:pipeline0/GstVaapisink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2683) : gst_base_sink_is_too_late(): /GstPipeline:pipeline0/GstVaapiSink:vaapisink0;
There may be a timestamping problem, or this computer is too slow. "
I have seen one suggestion to use sync=false, and after adding sync=false to vaapisink the error message went away, but the video still does not play smoothly; it continuously stops and starts again.
Is there any solution to play the video continuously? (Only high-quality video, 720p or more, fails to play; the application works for low-quality video.)
It looks like the buffer size is not sufficient, since a frame of HD video has more pixels. The other point I can propose is to apply some sort of compression prior to sending the frames to the listener, but I am not sure whether compression conflicts with any of the AVB protocols.
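If the drops are simply the listener-side queues overflowing on HD frames, one cheap thing to try is giving those queues more room. This is only a sketch based on the stock queue element's max-size properties, reusing the exact avbsrc/qtdemux pipeline from the question:
gst-launch-1.0 avbsrc interface=eth0 dataSync=1 mediaType=0 fd=0 ! queue max-size-buffers=0 max-size-bytes=20000000 max-size-time=0 ! qtdemux name=mux mux.video_0 ! queue max-size-buffers=0 max-size-bytes=20000000 max-size-time=0 ! avdec_h264 ! autovideosink mux.audio_0 ! queue ! decodebin ! autoaudiosink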

Video streaming over RTP using gstreamer

I am trying to stream a video file using gstreamer from one device to another over RTP. At the sender side I am using the following command:
gst-launch filesrc location=/home/kuber/Desktop/MELT.MPG ! mpegparse ! rtpsend ip=localhost
But this gives the following error: no element "rtpsend". I downloaded all the RTP plugins and still get the same error. Am I using rtpsend in some wrong way?
Also, can someone give me the command-line code for streaming a video file (stored locally on my laptop, not the videotestsrc test pattern) from one device to another?
Assuming this is an MPEG-1/2 elementary stream (because you are using mpegparse) that you want to send out, you need to use rtpmpvpay after your mpegparse and then give the output to udpsink.
mpegparse ! rtpmpvpay ! udpsink host="hostipaddr" port="someport"
I am not aware of any rtpsend plugin as such. The above holds true for any streaming over RTP.
Do a gst-inspect | grep rtp to see all the payloaders and depayloaders.
If it is an MPEG-PS stream you need to do a mpegpsdemux first, before the rest of the pipeline.
EDIT:
Why not remove mpegparse? I don't see why you need it. You should learn to look at the source and sink pad requirements in the gst-inspect output of each component; that will tell you what compatibility is needed between elements. Receiving will be the reverse: udpsrc port="portno" ! capsfilter caps="application/x-rtp, pt=32, ..enter caps here" ! rtpmpvdepay ! ...
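To make that concrete, here is a minimal end-to-end sketch in the same gst-launch (0.10) syntax as the question; the port, the <receiver-ip> placeholder, and the mpeg2dec/autovideosink choices on the receiving side are assumptions, and the caps shown are the standard RTP caps for MPEG video (payload type 32):
Sender:
gst-launch filesrc location=/home/kuber/Desktop/MELT.MPG ! mpegparse ! rtpmpvpay ! udpsink host=<receiver-ip> port=5000
Receiver:
gst-launch udpsrc port=5000 ! capsfilter caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=MPV, payload=32" ! rtpmpvdepay ! mpeg2dec ! autovideosink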

Limiting gstreamer pipeline throughput to simulate live source

I'm developing an RTSP server that should emulate a live source, while streaming the data from a file.
What I currently have is mostly based on gst-rtsp-server example test-readme.c, only with the following pipeline:
gst_rtsp_media_factory_set_launch(factory, "( "
"filesrc location=stream.mkv ! matroskademux name=demuxer "
"demuxer. ! queue ! rtph264pay name=pay0 pt=96 "
"demuxer. ! queue ! rtpmp4gpay name=pay1 pt=97 "
")");
This works very well, except for one problem: when the RTSP client (which uses RTSP/TCP interleaved transport) is not able to receive data, the whole pipeline locks up until the client is ready again, and then resumes at the original position without any jump.
Since I want to emulate live source which cannot buffer its video indefinitely, the desired behavior in this case is to continue playing the file, so when the client blocks for 5 seconds, it will lose 5 seconds of recording.
I've attempted to achieve this by limiting queue sizes and making them leaky (by setting queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream, which should buffer roughly one second of video and no more). This did not work entirely as I hoped: the source and demuxer filled the queue and then emptied themselves completely within about 0.1 s.
I figured I need some way to throttle pipeline throughput before the queue, either by limiting the demuxer to real-time demuxing, or finding/making a gstreamer filter that will let through 1 second of data per 1 second of real time.
Do you have any hints on how to do this?
So it seems that while a leaky queue and a limiter can be made to work, they don't help much here, because the GStreamer RTSP implementation has its own queue for outgoing TCP data. What appears to work is keeping the pipeline unchanged and patching the gst-rtsp-server module to limit its send backlog (to 1 MB in this case; recent versions also limit the message count to 100):
--- gst-rtsp-server-1.4.5/gst/rtsp-server/rtsp-client.c 2014-11-06 11:20:28.000000000 +0100
+++ gst-rtsp-server-1.4.5-r1/gst/rtsp-server/rtsp-client.c 2015-04-28 14:25:14.207888281 +0200
@@ -3435,11 +3435,11 @@
   gst_rtsp_client_set_send_func (client, do_send_message, priv->watch,
       (GDestroyNotify) gst_rtsp_watch_unref);
 
   /* FIXME make this configurable. We don't want to do this yet because it will
    * be superceeded by a cache object later */
-  gst_rtsp_watch_set_send_backlog (priv->watch, 0, 100);
+  gst_rtsp_watch_set_send_backlog (priv->watch, 1000000, 100);
 
   GST_INFO ("client %p: attaching to context %p", client, context);
   res = gst_rtsp_watch_attach (priv->watch, context);
 
   return res;
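For reference, the in-pipeline throttling mentioned above (a leaky queue plus a rate limiter) would look roughly like the sketch below, with identity sync=true acting as the limiter that paces buffers against the clock. This is only a sketch, and as noted it was not sufficient on its own because of the server-side backlog:
gst_rtsp_media_factory_set_launch(factory, "( "
    "filesrc location=stream.mkv ! matroskademux name=demuxer "
    "demuxer. ! identity sync=true ! queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream ! rtph264pay name=pay0 pt=96 "
    "demuxer. ! identity sync=true ! queue max-size-bytes=1000000 max-size-time=1000000000 leaky=upstream ! rtpmp4gpay name=pay1 pt=97 "
    ")");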

Make MPlayer show all playback state change message in output

I'm currently using MPlayer in slave mode for a video player I'm making.
Currently the media player shows ==== PAUSED ==== when it is paused, and I can read the output for this status to know when the video is paused.
The command-line argument I am using at the moment is -msglevel identify=6:statusline=-1 (I disabled the status line because it produced A: 0.7 V: 0.6 A-V: 0.068 ... and other unnecessary output).
What do I need to set the msglevel (or anything else) to so that it will also show ==== PLAYING ==== or any indication that it started playing, stopped, the media ended, is loading, etc.?
I found out how to get whether the video is paused.
By sending the command pausing_keep_force get_property pause to MPlayer, it responds with ANS_pause=no if not paused and ANS_pause=yes if paused. Problem solved.
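For anyone wanting to script this, here is a rough sketch of that query using slave mode with a FIFO for commands and a log file for output (the file paths and video.mp4 are arbitrary examples):
mkfifo /tmp/mplayer-cmd
mplayer -slave -input file=/tmp/mplayer-cmd -msglevel identify=6:statusline=-1 video.mp4 > /tmp/mplayer-out 2>&1 &
echo "pausing_keep_force get_property pause" > /tmp/mplayer-cmd
grep ANS_pause /tmp/mplayer-out | tail -n 1   # prints ANS_pause=yes or ANS_pause=no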
Based on what I can decipher from the OP's answer to his/her own question, he/she was looking for a way to determine whether mplayer was paused or playing. I've written a little bash script that can handle this task with some simple function calls.
You can actually inspect the last couple of lines of mplayer's output to see whether it is paused. I put together a little bash library that can be used to query some status information from mplayer. Take a look at my GitHub; there are instructions for integrating my script with other bash scripts.
If you implement my script, you will need to play your media file using the playMediaFile function. Then you can simply call the isPaused function as a condition in bash like this:
if isPaused; then
# do something
fi
# or
if ! isPaused; then
# do something
fi
# or
isPaused && do_something    # where do_something is your command

gstreamer pipeline that was working now requiring a bunch of queue components, why?

I have a C program that records video and audio from a v4l2 source into FLV format. I noticed that the program did not work on newer versions of Ubuntu. I decided to run the problematic pipeline in gst-launch and try to find the simplest pipeline that would reproduce the problem. Focusing just on the video side, I have reduced it to what you see below.
So I have a gstreamer pipeline that was working:
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! xvimagesink
Now it will only work if I add a bunch of queues one after another before the xvimagesink:
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! queue ! queue ! queue ! queue ! xvimagesink
Although the second pipeline above works, there is a pause before the pipeline starts running and I get the following message (I don't think this system is too slow; it's a Core i7 with tons of RAM):
Additional debug info:
gstbasesink.c(2692): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.
Can any one explain what is happening here? What am I doing wrong?
You claim that the first pipeline stopped working, but you don't explain what happened. Things stop working because something else changed:
- version of GStreamer and submodules ?
- version of OS ?
- version of camera ?
It shouldn't be necessary to add a bunch of queues in a row. In practice each one creates a thread boundary, separating the parts before and after it into different threads, and that adds the delay you see, which affects latency and sync.
An old thread, but the problem is still not fixed. It appeared somewhere between Ubuntu 9.10 and 11.10 (I upgraded through a few versions before noticing). I got around it by avoiding x264enc and using ffenc_mpeg4 instead.
I just noticed this note from the GStreamer Cheat Sheet:
Note: We can replace theoraenc+oggmux with x264enc+someothermuxer but then the pipeline will freeze unless we make the queue element in front of the xvimagesink leaky, i.e. "queue leaky=1".
That doesn't work for me, so I'll stick with ffenc_mpeg4.
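For reference, applying that cheat-sheet note to the original pipeline just means making the queue in front of the xvimagesink leaky, i.e. something like the line below (again, this did not help in my case):
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue leaky=1 ! xvimagesink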