GStreamer GstMeta is lost when using video scaling - metadata

I am trying to carry over some custom SEI messages that I parse in gsth264parse.c, using the closed-caption metadata API.
This is my pipeline
filesrc location=test_sei.mp4 ! qtdemux name=demux demux.video_0 ! h264parse ! vaapih264dec ! vaapipostproc width=320 height=200 format=yv12 ! appsink sync=false name=sink
So I call
gst_buffer_add_video_caption_meta(buffer,GST_VIDEO_CAPTION_TYPE_CEA608_RAW, "test", 5);
to add metadata to a GstBuffer.
Then, later on in my appsink code I call
gst_buffer_get_video_caption_meta(buffer)
to get my SEI message.
If the pipeline includes video resizing, I receive NULL. If I remove width=320 height=200, it works fine. I also tried decodebin ! videoscale ! video/x-raw,width=320,height=240, and it is the same thing: with resizing the meta is gone; without it everything works. How can I make GStreamer preserve the GstMeta of a GstBuffer during video scaling?
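As far as I can tell, transform elements such as videoscale and vaapipostproc create a new output buffer and only carry over metas they know how to handle: GstBaseTransform's default transform_meta only copies metas whose API was registered without tags, and GstVideoCaptionMeta is tagged, so it gets dropped unless the element handles it explicitly. One workaround is to attach your SEI payload as a custom meta whose API has no tags. This is a minimal sketch under that assumption; the my_sei_meta_* names are hypothetical, and the free function is elided:

```c
#include <gst/gst.h>

typedef struct {
  GstMeta meta;
  guint8 *payload;
  gsize size;
} MySeiMeta;

const GstMetaInfo *my_sei_meta_get_info (void);

static gboolean
my_sei_meta_init (GstMeta *meta, gpointer params, GstBuffer *buffer)
{
  MySeiMeta *m = (MySeiMeta *) meta;
  m->payload = NULL;
  m->size = 0;
  return TRUE;
}

static gboolean
my_sei_meta_transform (GstBuffer *dest, GstMeta *meta,
                       GstBuffer *src, GQuark type, gpointer data)
{
  /* Copy the meta onto the new buffer for every transform type,
   * not only GST_META_TRANSFORM_IS_COPY, so scaling keeps it. */
  MySeiMeta *src_meta = (MySeiMeta *) meta;
  MySeiMeta *dst_meta =
      (MySeiMeta *) gst_buffer_add_meta (dest, my_sei_meta_get_info (), NULL);

  dst_meta->payload = g_memdup2 (src_meta->payload, src_meta->size);
  dst_meta->size = src_meta->size;
  return TRUE;
}

static GType
my_sei_meta_api_get_type (void)
{
  static GType type = 0;
  /* No tags: GstBaseTransform's default transform_meta copies
   * untagged metas onto the output buffer. */
  static const gchar *tags[] = { NULL };

  if (g_once_init_enter (&type)) {
    GType t = gst_meta_api_type_register ("MySeiMetaAPI", tags);
    g_once_init_leave (&type, t);
  }
  return type;
}

const GstMetaInfo *
my_sei_meta_get_info (void)
{
  static const GstMetaInfo *info = NULL;

  if (g_once_init_enter (&info)) {
    const GstMetaInfo *mi = gst_meta_register (my_sei_meta_api_get_type (),
        "MySeiMeta", sizeof (MySeiMeta),
        my_sei_meta_init, NULL /* free func elided */,
        my_sei_meta_transform);
    g_once_init_leave (&info, mi);
  }
  return info;
}
```

Whether a given element (vaapipostproc in particular) honors the default transform_meta path is something you would need to verify with your GStreamer version; the alternative is a pad probe that re-attaches the meta after the scaler.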


Broadcast visualization alongside live audio: lower CPU usage?

I need to broadcast a live audio signal together with its wavescope visualization and a background image. I have built a working gst-launch-1.0 command; it's just that it's taking up a lot of CPU and I want to lower the resource demands. I'm testing my signal with nginx's RTMP module on localhost and playing back with ffplay.
I'm aware that this question comes quite close, but I believe my problem is more concrete: I'm looking for a way to send fewer wavescope frames, in the expectation that this will require fewer CPU cycles.
My inputs are a sample PNG image and an alsasrc, which is a microphone.
gst-launch-1.0 compositor name=comp sink_1::alpha=0.5 ! videoconvert ! \
x264enc quantizer=35 tune=zerolatency ! h264parse ! 'video/x-h264,width=854,height=480' ! queue ! \
flvmux name=muxer streamable=true ! rtmpsink location='rtmp://localhost/streamer/test' \
filesrc location='sample.png' ! pngdec ! videoconvert ! imagefreeze is-live=true ! queue ! comp.sink_0 \
alsasrc ! audioconvert ! tee name=audioTee \
audioTee. ! audioresample ! muxer. \
audioTee. ! wavescope ! 'video/x-raw,width=854,height=480' ! queue2 ! comp.sink_1
I tried quantizer values up to 50, and adding framerate=(fraction)5/1 to the caps after h264parse. The former made no difference, and the latter caused clock issues, with the muxer reporting "unable to configure latency". I assumed that if h264parse asked for fewer frames, wavescope would render them on demand; now I'm not sure. Anyway, I tried specifying a framerate on wavescope's output caps, and the issue is the same. I'm afraid it is rather the alsasrc that is dictating the framerate to everybody.
I'm sorry my example command still has an RTMP sink; I wasn't able to reproduce the problem with playbin. You'll also obviously need a valid PNG image as ./sample.png.
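One thing the question doesn't show being tried: combining a framerate capsfilter with videorate on the visualization branch only, so frames are actually dropped before the compositor rather than just renegotiated. A sketch under that assumption (5/1 is an arbitrary example value, and this is not verified against this exact pipeline):

```shell
gst-launch-1.0 compositor name=comp sink_1::alpha=0.5 ! videoconvert ! \
x264enc quantizer=35 tune=zerolatency ! h264parse ! 'video/x-h264,width=854,height=480' ! queue ! \
flvmux name=muxer streamable=true ! rtmpsink location='rtmp://localhost/streamer/test' \
filesrc location='sample.png' ! pngdec ! videoconvert ! imagefreeze is-live=true ! queue ! comp.sink_0 \
alsasrc ! audioconvert ! tee name=audioTee \
audioTee. ! audioresample ! muxer. \
audioTee. ! wavescope ! 'video/x-raw,width=854,height=480' ! \
videorate drop-only=true ! 'video/x-raw,framerate=5/1' ! queue2 ! comp.sink_1
```

videorate with drop-only=true never duplicates frames, it only discards them to satisfy the downstream caps, so wavescope's rendering work per second should drop accordingly. Note that the compositor may still negotiate its own output framerate from the other branch.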

Q: Gstreamer mp4mux issue with encodebin and concat pipeline

I've been trying out the fancy encodebin GStreamer element lately. Simple examples work pretty well, but I have some issues with more complex pipelines. I'm using gst-launch-1.0 version 1.18.4 on MSYS. My workflow is as follows:
First I create an mp4 file from scratch using encodebin (it chooses the best encoder; in my case it uses the NVIDIA GPU):
gst-launch-1.0.exe videotestsrc num-buffers=100 ! encodebin profile="video/quicktime,variant=iso:video/x-h264,tune=zerolatency,profile=baseline" ! filesink location="input.mp4"
This part works well, it uses hardware encoding, everything is fine here.
Then I want to append a realtime stream to this file, preserving timestamps and so on. This is the pipeline I created for that purpose:
GST_DEBUG=3 gst-launch-1.0.exe concat name=c ! m.video_0 mp4mux name=m ! filesink location=out.mp4 filesrc location=input.mp4 ! parsebin ! h264parse ! c. videotestsrc num-buffers=100 ! encodebin profile="video/x-h264,tune=zerolatency,profile=baseline" ! c.
Apparently it does not work for me, I get:
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0: Internal data
stream error.
streaming stopped, reason not-negotiated (-4)
Interestingly, if I switch from mp4mux to mpegtsmux it works well:
gst-launch-1.0.exe concat name=c ! mpegtsmux ! filesink location=out.mp4 filesrc location=input.mp4 ! parsebin ! h264parse ! c. videotestsrc num-buffers=100 ! encodebin profile="video/x-h264,tune=zerolatency,profile=baseline" ! c.
So I've started wondering: is it something with the mp4mux pads? Does anyone have an idea why it does not work with mp4mux?
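One plausible explanation worth testing: mp4mux's video pad only accepts H.264 in stream-format=avc (or avc3) with alignment=au, while mpegtsmux also takes byte-stream, and the two concat branches may negotiate different H.264 caps. A sketch that normalizes the concat output with h264parse and an explicit capsfilter before the muxer (the caps values here are an assumption, not a verified fix):

```shell
GST_DEBUG=3 gst-launch-1.0.exe concat name=c ! h264parse ! \
"video/x-h264,stream-format=avc,alignment=au" ! m.video_0 \
mp4mux name=m ! filesink location=out.mp4 \
filesrc location=input.mp4 ! parsebin ! h264parse ! c. \
videotestsrc num-buffers=100 ! \
encodebin profile="video/x-h264,tune=zerolatency,profile=baseline" ! c.
```

If this still fails with not-negotiated, comparing the caps of both branches with GST_DEBUG=GST_CAPS:5 should show which side mp4mux rejects.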

AVB streaming of a stored file from server to client: timestamping issue

I am working on an AVB application. We have created GStreamer plugins for the talker side and the listener side, and we use those plugins to transfer stored media.
I am using the pipelines below.
Talker side :
gst-launch-1.0 filesrc location=/home/input.mp4 ! queue ! avbsink interface=eth0 fd=0 (here avbsink is the custom sink element we created to transmit AVB packets)
Listener side :
gst-launch-1.0 avbsrc interface=eth0 dataSync=1 mediaType=0 fd=0 ! queue ! qtdemux name=mux mux.video_0 ! queue ! avdec_h264 ! autovideosink mux.audio_0 ! queue ! decodebin ! autoaudiosink
(I tried vaapidecode and vaapisink instead of avdec_h264 and autovideosink for hardware acceleration.)
The warning on the listener side is:
"WARNING: from element /GstPipeline:pipeline0/GstVaapisink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2683) : gst_base_sink_is_too_late(): /GstPipeline:pipeline0/GstVaapiSink:vaapisink0;
There may be a timestamping problem, or this computer is too slow. "
I have seen one suggested solution, sync=false. After adding sync=false to vaapisink the warning is gone, but the video still does not play smoothly; it continuously stops and starts again.
Is there any solution to play the video continuously? (Only high-quality video, 720p or more, fails to play; the application works for low-quality video.)
It looks like the buffer size is not large enough, since a frame of HD video has more pixels. The other point I can propose is to apply some sort of compression before sending the frame to the listener, but I am not sure whether compression contradicts any of the AVB protocols.
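Along the same lines, it may be worth ruling out the default queue limits as the bottleneck before changing the protocol. A sketch of the listener pipeline with unbounded queues on the video path; the property values are experiments, not a verified fix:

```shell
gst-launch-1.0 avbsrc interface=eth0 dataSync=1 mediaType=0 fd=0 ! \
queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! \
qtdemux name=demux \
demux.video_0 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! \
avdec_h264 ! autovideosink sync=false \
demux.audio_0 ! queue ! decodebin ! autoaudiosink
```

Setting all three max-size-* properties to 0 makes the queue unbounded, so a burst of large HD frames is buffered instead of causing upstream blocking; watch memory usage if you try this.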

GStreamer UDP streaming is slow

I'm working on a videochat application and am having trouble with UDP streaming vs TCP.
When I use the pipelines below, the video streams acceptably. (The application itself is in python, but the pipelines are essentially as below)
sender:
gst-launch-0.10 v4l2src ! video/x-raw-yuv,width=320,height=240 ! theoraenc ! oggmux ! tcpclientsink host=nnn.nnn.nnn.nnn port=5000
receiver:
gst-launch-0.10 tcpserversrc host=nnn.nnn.nnn.nnn port=5000 ! decodebin ! xvimagesink
However, since this app is to perform across/through NAT, I require UDP streaming.
When I switch the tcpserversrc to "udpsrc port=5000" and the tcpclientsink to "udpsink host=nnn.nnn.nnn.nnn port=5000", performance plummets to the point where the receiving computer gets a single frame every 5 seconds or so. (This occurs even when both pipelines run on the same machine.)
The sending pipeline generates the following (once):
WARNING: from element /GstPipeline:pipeline0/GstUDPSink:udpsink0:
Internal data flow problem.
Additional debug info:
gstbasesink.c(3492): gst_base_sink_chain_unlocked (): /GstPipeline:pipeline0
/GstUDPSink:udpsink0:
Received buffer without a new-segment. Assuming timestamps start from 0.
...and the receiving pipeline generates (every 20 seconds or so):
WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2739): gst_base_sink_is_too_late (): /GstPipeline:pipeline0
/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.
I've read docs and manpages, fiddled with various parameters to the udpsink, all to no good effect.
Can anyone direct me to the (no doubt obvious) thing that I'm completely not getting?
Thanks in advance :)
I had the same problem.
Try setting
sync=false
on tcpclientsink and xvimagesink
I had a similar problem. I managed to solve it by changing two things: (1) sync=false, as Fuxi mentioned, and (2) adding caps on the decoding side to match the encoding pipeline. In your case something like gst-launch-0.10 tcpserversrc host=127.0.0.1 port=5000 ! decodebin ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink sync=false should work (it works for me). I would also recommend setting the frame rate in both the server and client pipelines. I start the decoding pipeline (server) first and then the encoding pipeline (client); otherwise, of course, it fails.
Update:
Adding a queue between the appropriate decoding elements has saved my tail numerous times, e.g. gst-launch-0.10 tcpserversrc host=127.0.0.1 port=5000 ! queue ! decodebin ! queue ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink sync=false. Similarly, videorate has helped me in some situations.
I am using these commands and they work like a charm.
server side:
gst-launch v4l2src device=/dev/video1 ! ffenc_mpeg4 ! rtpmp4vpay send-config=true ! udpsink host=127.0.0.1 port=5000
Client side:
gst-launch udpsrc uri=udp://127.0.0.1:5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)1, config=(string)000001b001000001b58913000001000000012000c48d88007d0a041e1463000001b24c61766335322e3132332e30, payload=(int)96, ssrc=(uint)298758266, clock-base=(uint)3097828288, seqnum-base=(uint)63478" ! rtpmp4vdepay ! ffdec_mpeg4 ! autovideosink
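The likely reason the RTP variant works while the original does not: sending muxed Ogg pages directly over udpsink gives the receiver no packetization or timing information, so any lost or reordered datagram can stall the decoder, whereas RTP payloaders frame and timestamp each packet. A sketch of the same idea applied to the question's own Theora pipeline (GStreamer 0.10 syntax to match; the receiver caps, including the codec configuration, must be copied from the sender's verbose output and are left elided here):

```shell
# sender
gst-launch-0.10 -v v4l2src ! video/x-raw-yuv,width=320,height=240 ! \
theoraenc ! rtptheorapay ! udpsink host=nnn.nnn.nnn.nnn port=5000
# receiver: paste the udpsink caps printed by the sender's -v run
gst-launch-0.10 udpsrc port=5000 caps="application/x-rtp, ..." ! \
rtptheoradepay ! theoradec ! xvimagesink sync=false
```

For lossy networks, placing an rtpjitterbuffer between udpsrc and the depayloader usually smooths playback further.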

GStreamer pipeline that was working now requires a bunch of queue elements, why?

I have a C program that records video and audio from a v4l2 source into FLV format. I noticed that the program did not work on newer versions of Ubuntu. I decided to run the problematic pipeline in gst-launch and find the simplest pipeline that would reproduce the problem. Focusing just on the video side, I have reduced it to what you see below.
So I have a gstreamer pipeline that was working:
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! xvimagesink
Now it will only work if I add a bunch of queues, one after another, before the xvimagesink:
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! queue ! queue ! queue ! queue ! xvimagesink
Although the second pipeline above works, there is a roughly 2 second pause before it starts running, and I get the message below (I don't think this system is too slow; it's a Core i7 with tons of RAM):
Additional debug info:
gstbasesink.c(2692): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.
Can any one explain what is happening here? What am I doing wrong?
You claim that the first pipeline stopped working but you don't explain what happened. Things stop working because something else changed:
- version of GStreamer and submodules ?
- version of OS ?
- version of camera ?
It shouldn't be necessary to add a bunch of queues in a row. Each queue creates a thread boundary, separating the elements before and after it into different threads, and a chain of them adds the delay you see, which affects latency and sync.
An old message, but the problem is still not fixed. It appeared somewhere between Ubuntu 9.10 and 11.10 (I upgraded a few versions before noticing). I got around it by avoiding x264enc and using ffenc_mpeg4 instead.
I just noticed this note from the GStreamer Cheat Sheet:
Note: We can replace theoraenc+oggmux with x264enc+someothermuxer, but then the pipeline will freeze unless we make the queue element in front of the xvimagesink leaky, i.e. "queue leaky=1".
Which doesn't work for me so I'll stick with ffenc_mpeg4.
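A possible explanation for why x264enc in particular stalls the tee: by default it buffers a sizable window of frames for lookahead, so the display branch's queue fills up while the encoder holds data, and chaining five queues "works" only by adding buffer space (and the 2 second delay). A sketch worth trying instead of leaky or chained queues; tune=zerolatency is a real x264enc option, but whether it fully fixes this setup on your versions is not verified:

```shell
gst-launch v4l2src ! tee name=vtee \
vtee. ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! \
x264enc tune=zerolatency ! flvmux name=mux ! filesink location=vid.flv \
vtee. ! queue ! xvimagesink
```

tune=zerolatency disables the encoder's frame lookahead, so buffers flow through the encoding branch with minimal delay and the display branch's single queue no longer overflows.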