Video is much faster than audio when muxed in GStreamer - command-line

I am trying to learn to record the contents of X11 windows to make game screencasts for YouTube. This should be a fairly trivial task, but it has already eaten a full evening. I have learned a bit about muxing and queueing (using gst-launch), but the problem remains: when I mux audio and video into an avi, the video plays several times faster than the audio in the resulting file. This means the video ends early and comes to a standstill, while the audio continues to babble in the background.
This is my filter chain that causes the issue:
gst-launch-1.0 ximagesrc xid=$XID ! video/x-raw,framerate=30/1 ! videoconvert !
x264enc ! queue ! avimux name=mux ! queue ! filesink location=out.avi
pulsesrc device=$DEV ! queue ! audioconvert !
lamemp3enc bitrate=192 ! queue ! mux.
However, the issue vanishes when I have video only; it plays at perfectly normal speed:
ximagesrc xid=0x0820000b ! video/x-raw,framerate=30/1 ! videoconvert !
x264enc ! avimux ! filesink location=out.avi
I would also appreciate it if you could correct me on the usage of ! queue !. Where is it needed? In the current setup I almost never get warnings that samples were dropped.
Update: I would prefer to use the mp4 muxer, but it produces unplayable files lacking the moov atom. YouTube recommends putting it at the beginning of the file; is there any chance I can force that with the mp4 muxer?

gst-launch-1.0 ximagesrc xid=$XID ! video/x-raw,framerate=30/1 ! queue !
videoconvert ! videorate ! queue ! x264enc ! queue ! avimux name=mux !
queue ! filesink location=out.avi pulsesrc device=$DEV ! queue !
audioconvert ! queue ! lamemp3enc bitrate=192 ! queue ! mux.
The above pipeline should play the audio and video at the proper speed. The key additions are videorate, which duplicates or drops frames so the stream really is a constant 30 fps even when capture stalls, and the queues, which decouple capture from the slow software encoder.
I would also appreciate it if you could correct me on the usage of ! queue !.
Where is it needed? In the current setup I almost never get warnings
that samples were dropped.
queue elements are just buffers; they need to be used in places where one element is slower than the other. For example, generating video (ximagesrc) is much faster than encoding it with x264enc (software encoding), so you would add a queue between them so that buffers aren't dropped.
gst-launch-1.0 ximagesrc ! video/x-raw,framerate=30/1 ! queue !
videoconvert ! queue ! x264enc key-int-max=5 ! queue ! mp4mux
name=mux reserved-bytes-per-sec=100
reserved-max-duration=20184000000000
reserved-moov-update-period=100000000 ! queue ! filesink
location=out.mp4 audiotestsrc ! queue ! audioconvert ! queue !
lamemp3enc bitrate=192 ! queue ! mux.
The above pipeline will create an mp4 file with mp4mux, but the moov atom will still be at the end. Also note: make sure you adjust the mp4mux properties to your needs.
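If the goal is simply to get the moov atom to the front of the file for YouTube, another option worth trying (a sketch, assuming your GStreamer build exposes qtmux's faststart property on mp4mux) is:
gst-launch-1.0 -e ximagesrc xid=$XID ! video/x-raw,framerate=30/1 ! queue !
videoconvert ! x264enc ! queue ! mp4mux name=mux faststart=true !
filesink location=out.mp4 pulsesrc device=$DEV ! queue ! audioconvert !
lamemp3enc bitrate=192 ! queue ! mux.
With faststart the muxer rewrites the headers to the beginning of the file when it is finalized, so run gst-launch-1.0 with -e so that Ctrl+C sends EOS and the muxer gets the chance to finish the file.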

Related

broadcast visualization along live audio: lower CPU usage?

I need to broadcast a live audio signal with its wavescope visualization and a background image. I have built a working gst-launch-1.0 command; it's just that it's taking up a lot of CPU and I wish to lower the resource demands. I'm testing my signal with nginx's rtmp module on localhost, and playing back with ffplay.
I'm aware that this question comes quite close, but I believe the problem at hand is more concrete: I'm looking for a way to send fewer wavescope frames, in the expectation that it will require fewer CPU cycles.
My inputs are a sample png image and an alsasrc that is a microphone.
gst-launch-1.0 compositor name=comp sink_1::alpha=0.5 ! videoconvert ! \
x264enc quantizer=35 tune=zerolatency ! h264parse ! 'video/x-h264,width=854,height=480' ! queue ! \
flvmux name=muxer streamable=true ! rtmpsink location='rtmp://localhost/streamer/test' \
filesrc location='sample.png' ! pngdec ! videoconvert ! imagefreeze is-live=true ! queue ! comp.sink_0 \
alsasrc ! audioconvert ! tee name=audioTee \
audioTee. ! audioresample ! muxer. \
audioTee. ! wavescope ! 'video/x-raw,width=854,height=480' ! queue2 ! comp.sink_1
I tried quantizer up to 50, and adding framerate=(fraction)5/1 to the caps after h264parse. The former didn't make a difference, and the latter reintroduced clock issues, with the muxer reporting "unable to configure latency". I assumed that if h264parse asked for fewer frames, then wavescope would render them on demand; now I'm not sure. Anyway, I tried to specify a framerate on wavescope's output caps, and the issue is the same. Or, I'm afraid, it's rather the alsasrc that is dictating the framerate to everybody.
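For reference, forcing a lower rate on the visualization branch (a sketch of the idea above, untested; the 5/1 value is just an example) would change the last branch to:
audioTee. ! wavescope ! videorate ! 'video/x-raw,width=854,height=480,framerate=5/1' ! queue2 ! comp.sink_1
At minimum this reduces the number of frames the compositor and encoder have to process, even if wavescope itself still renders at its negotiated rate.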
I'm sorry my example command still has an RTMP sink; I wasn't able to reproduce this with a playbin. You'll also obviously need a valid png image as ./sample.png.

Gstreamer mp4mux issue with encodebin and concat pipeline

I've been trying out this fancy encodebin GStreamer element lately. Simple examples work pretty well, but I have some issues with more complex pipelines. I'm using gst-launch-1.0 version 1.18.4 on msys. My workflow is as follows:
Firstly, I create an mp4 file from scratch using encodebin (it chooses the best encoder; in my case it uses the nvidia gpu):
gst-launch-1.0.exe videotestsrc num-buffers=100 ! encodebin profile="video/quicktime,variant=iso:video/x-h264,tune=zerolatency,profile=baseline" ! filesink location="input.mp4"
This part works well; it uses hardware encoding, and everything is fine here.
Then I want to append a realtime stream to this file, preserving time and so on. The pipeline I created for that purpose:
GST_DEBUG=3 gst-launch-1.0.exe concat name=c ! m.video_0 mp4mux name=m ! filesink location=out.mp4 filesrc location=input.mp4 ! parsebin ! h264parse ! c. videotestsrc num-buffers=100 ! encodebin profile="video/x-h264,tune=zerolatency,profile=baseline" ! c.
However, it does not work for me; I get:
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0: Internal data
stream error.
streaming stopped, reason not-negotiated (-4)
Interestingly, if we switch from mp4mux to mpegtsmux it works well (though the output is then an MPEG-TS stream despite the .mp4 extension):
gst-launch-1.0.exe concat name=c ! mpegtsmux ! filesink location=out.mp4 filesrc location=input.mp4 ! parsebin ! h264parse ! c. videotestsrc num-buffers=100 ! encodebin profile="video/x-h264,tune=zerolatency,profile=baseline" ! c.
So I've started wondering: is it something with mp4mux pads? Does anyone have an idea why it does not work with mp4mux?
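A hedged guess at the cause: mp4mux only accepts video/x-h264 in stream-format=avc with alignment=au, while mpegtsmux also accepts byte-stream, which would explain why only the mpegtsmux variant negotiates. A sketch of a possible workaround (untested) is to re-parse after concat and force the caps mp4mux wants:
GST_DEBUG=3 gst-launch-1.0.exe concat name=c ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! m.video_0 mp4mux name=m ! filesink location=out.mp4 filesrc location=input.mp4 ! parsebin ! h264parse ! c. videotestsrc num-buffers=100 ! encodebin profile="video/x-h264,tune=zerolatency,profile=baseline" ! c.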

Gstreamer. Multiple pcap to avi

I have multiple .pcap files (01.pcap, 02.pcap, ..., N.pcap) that include two streams: audio (G.711) and video (H.264). Every pcap has ~1 min of streaming, and I need to make one .avi from them.
I use mergecap.exe to concatenate pcaps to one big pcap.
mergecap.exe -F pcap 01.pcap 02.pcap ....N.pcap -w out.pcap
After that I use GStreamer to make the .avi file:
gst-launch-1.0 filesrc location=out.pcap ! tee name=t ! pcapparse dst-ip=192.168.2.55 dst-port=5010 ^
! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ^
! rtpjitterbuffer ^
! rtph264depay ^
! h264parse ^
! queue ^
! mux. t. ! pcapparse dst-ip=192.168.2.55 dst-port=4010 ^
! application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMA, channels=(int)1, payload=(int)8 ^
! rtpjitterbuffer ^
! rtppcmadepay ^
! queue ^
! mux. avimux name=mux ! filesink location=test.avi
This pipeline works well for one pcap. When I concatenate two .pcaps, it works too. But if there are more than 2 pcaps, rtpjitterbuffer drops almost every video packet:
...
0:00:03.856698538 12812 08E3FD28 WARN rtpjitterbuffer gstrtpjitterbuffer.c:2163:gst_rtp_jitter_buffer_chain:<rtpjitterbuffer0> Packet #41238 too late as #57525 was already popped, dropping
0:00:03.861442222 12812 08E3FD28 WARN rtpjitterbuffer gstrtpjitterbuffer.c:2163:gst_rtp_jitter_buffer_chain:<rtpjitterbuffer0> Packet #41239 too late as #57525 was already popped, dropping
0:00:03.870865810 12812 08E3FD28 WARN rtpjitterbuffer gstrtpjitterbuffer.c:2163:gst_rtp_jitter_buffer_chain:<rtpjitterbuffer0> Packet #41240 too late as #57525 was already popped, dropping
0:00:03.876392403 12812 08E3FD28 WARN rtpjitterbuffer gstrtpjitterbuffer.c:2163:gst_rtp_jitter_buffer_chain:<rtpjitterbuffer0> Packet #41241 too late as #57525 was already popped, dropping
and continues...
and continues...
and continues...
...
I have tried:
- changing the latency in rtpjitterbuffer
- removing rtpjitterbuffer
- not using tee
Any suggestions as to why this is happening?
I remind you that everything works with up to two pcaps, no matter which ones (1 with 2, or 5 with 6, and so on).
UPD: I tried playing with queues as otopolsky described, but it still does not work. I put a queue after the tee element, but I get the same error. I think that's because the rtpjitterbuffers in two different threads are using the same variable (from the main thread?).
Maybe there is another way to synchronize the audio and video from the pcaps using the RTP timestamps?
I am about 80% sure that the problem is that you do not put a queue before the processing in each tee branch. When all the rtpjitterbuffers are in one thread, they may lock each other. So my best guess is to put a queue right after pcapparse, or maybe before it to be completely sure:
gst-launch-1.0 filesrc ! tee name=t
avimux name=mux ! filesink location=test.avi
t. ! pcapparse ! x-rtp caps ! queue ! rtpjitterbuffer ! rtph264depay ! h264parse ! mux.
t. ! pcapparse ! x-rtp caps ! queue ! rtpjitterbuffer ! rtppcmadepay ! mux.
t. ! pcapparse ! x-rtp caps ! queue ! rtpjitterbuffer ! rtpwhateverelse .. ! mux.
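A filled-in version of that sketch (an untested guess, reusing the caps, addresses, and ports from the question):
gst-launch-1.0 filesrc location=out.pcap ! tee name=t ^
avimux name=mux ! filesink location=test.avi ^
t. ! queue ! pcapparse dst-ip=192.168.2.55 dst-port=5010 ^
! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96 ^
! rtpjitterbuffer ! rtph264depay ! h264parse ! queue ! mux. ^
t. ! queue ! pcapparse dst-ip=192.168.2.55 dst-port=4010 ^
! application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMA, channels=(int)1, payload=(int)8 ^
! rtpjitterbuffer ! rtppcmadepay ! queue ! mux.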
You may play with the position of the queue or add more queues.
Just remember that queue is used not only for buffering but mainly to separate processing into different threads - it's nicely described here; check the picture at the beginning depicting the threads.
HTH - I hope it's the answer. If not, update the question or ask in a comment.

gstreamer udp Streaming is slow

I'm working on a videochat application and am having trouble with UDP streaming vs TCP.
When I use the pipelines below, the video streams acceptably. (The application itself is in python, but the pipelines are essentially as below)
sender:
gst-launch-0.10 v4l2src ! video/x-raw-yuv,width=320,height=240 !
theoraenc ! oggmux ! tcpclientsink host=nnn.nnn.nnn.nnn port = 5000
receiver:
gst-launch-0.10 tcpserversrc host=nnn.nnn.nnn.nnn port=5000
! decodebin ! xvimagesink
However, since this app is to perform across/through NAT, I require UDP streaming.
When I switch the tcpserversrc to "udpsrc port=5000" and the tcpclientsink to "udpsink host=nnn.nnn.nnn.nnn port=5000", performance plummets to the point where the receiving computer gets a single frame every 5 seconds or so. (This occurs even when both pipelines run on the same machine.)
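For reference, the UDP variants (reconstructed from the substitutions just described) are:
sender:
gst-launch-0.10 v4l2src ! video/x-raw-yuv,width=320,height=240 ! theoraenc ! oggmux ! udpsink host=nnn.nnn.nnn.nnn port=5000
receiver:
gst-launch-0.10 udpsrc port=5000 ! decodebin ! xvimagesink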
The sending pipeline generates the following (once):
WARNING: from element /GstPipeline:pipeline0/GstUDPSink:udpsink0:
Internal data flow problem.
Additional debug info:
gstbasesink.c(3492): gst_base_sink_chain_unlocked (): /GstPipeline:pipeline0
/GstUDPSink:udpsink0:
Received buffer without a new-segment. Assuming timestamps start from 0.
...and the receiving pipeline generates (every 20 seconds or so):
WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2739): gst_base_sink_is_too_late (): /GstPipeline:pipeline0
/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.
I've read docs and manpages, fiddled with various parameters to the udpsink, all to no good effect.
Can anyone direct me to the (no doubt obvious) thing that I'm completely not getting?
Thanks in advance :)
I had the same problem.
Try setting
sync=false
on tcpclientsink and xvimagesink
I had a similar problem. I managed to solve it by changing two things: (1) as Fuxi mentioned, sync=false, and (2) adding caps on the decoding side to match the encoding pipeline. For example, in your case something like
gst-launch-0.10 tcpserversrc host=127.0.0.1 port=5000 ! decodebin ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink sync=false
should work (it works for me). I would also recommend setting the frame rate in both the server and client pipelines. I start the decoding pipeline (server) first and then the encoding pipeline (client); otherwise, of course, it fails.
Update:
Adding a queue between the appropriate decoding elements has saved my tail numerous times, e.g.
gst-launch-0.10 tcpserversrc host=127.0.0.1 port=5000 ! queue ! decodebin ! queue ! video/x-raw-yuv,width=320,height=240 ! ffmpegcolorspace ! xvimagesink sync=false
Similarly, videorate has helped me in some situations.
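A sketch of that videorate variant (untested; the (fraction)15/1 rate is an arbitrary example value):
gst-launch-0.10 tcpserversrc host=127.0.0.1 port=5000 ! queue ! decodebin ! videorate ! video/x-raw-yuv,width=320,height=240,framerate=(fraction)15/1 ! ffmpegcolorspace ! xvimagesink sync=false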
I am using these commands and they work like a charm.
Server side:
gst-launch v4l2src device=/dev/video1 ! ffenc_mpeg4 ! rtpmp4vpay send-config=true ! udpsink host=127.0.0.1 port=5000
Client side:
gst-launch udpsrc uri=udp://127.0.0.1:5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)1, config=(string)000001b001000001b58913000001000000012000c48d88007d0a041e1463000001b24c61766335322e3132332e30, payload=(int)96, ssrc=(uint)298758266, clock-base=(uint)3097828288, seqnum-base=(uint)63478" ! rtpmp4vdepay ! ffdec_mpeg4 ! autovideosink

gstreamer pipeline that was working now requiring a bunch of queue components, why?

I have a C program that records video and audio from a v4l2 source into flv format. I noticed that the program did not work on newer versions of Ubuntu. I decided to run the problematic pipeline in gst-launch and find the simplest pipeline that would reproduce the problem. Focusing just on the video side, I have reduced it to what you see below.
So I have a gstreamer pipeline that was working:
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! xvimagesink
Now it will only work if I add a bunch of queues one after another before the xvimagesink:
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue ! queue ! queue ! queue ! queue ! xvimagesink
Although the second pipeline above works, there is a pause of a couple of seconds before the pipeline starts running, and I get the following message (I don't think this system is too slow; it's a Core i7 with tons of RAM):
Additional debug info:
gstbasesink.c(2692): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.
Can anyone explain what is happening here? What am I doing wrong?
You claim that the first pipeline stopped working, but you don't explain what happened. Things stop working because something else changed:
- version of GStreamer and submodules?
- version of OS?
- version of camera?
It shouldn't be necessary to add a bunch of queues in a row. In practice each queue creates a thread boundary, separating the parts before and after it into different threads, and the extra buffering adds the delay you see, which affects latency and sync.
An old message, but the problem is still not fixed. It broke somewhere between Ubuntu 9.10 and 11.10 (I upgraded a few versions before noticing). I got around it by avoiding x264enc and using ffenc_mpeg4 instead.
I just noticed this note from the GStreamer Cheat Sheet:
Note: We can replace theoraenc+oggmux with x264enc+someothermuxer, but then the pipeline will freeze unless we make the queue element in front of the xvimagesink leaky, i.e. "queue leaky=1".
That doesn't work for me, so I'll stick with ffenc_mpeg4.
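For completeness, applying that note to the original pipeline would look something like this (a sketch; as noted above it didn't work in my case, and my understanding is that leaky=1 makes the queue drop buffers instead of blocking when it is full):
gst-launch v4l2src ! tee name="vtee" ! queue ! videorate ! ffmpegcolorspace ! ffdeinterlace ! x264enc ! flvmux name="mux" ! filesink location=vid.flv vtee. ! queue leaky=1 ! xvimagesink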