I am building a pipeline where I need to mux multiple videos (2 in this case) into a muxer (multistreamrtpscimux). When I build the pipeline it throws "erroneous pipeline: unexpected reference".
The PNG below was generated from the pipeline that muxes a single video. I am trying to create a branch starting from rtpbin that goes into multistreamrtpscimux.rtpsrc_2, but I think I may be making a mistake in the way I specify the mux source and sink pads. I have tried the variants suggested there but could not resolve it.
Any help is appreciated.
Here is the actual pipeline that I am trying to build.
gst-launch-1.0 -v rtpbin name=rtpbin_0 videotestsrc pattern=ball is-live=true \
name=vidsource_0 ! video/x-raw, framerate=30/1, width=180, height=90 ! textoverlay \
text="" valignment=4 ! x264enc aud=false name=videoenc_0 ! video/x-h264, \
profile=baseline, stream-format=byte-stream,alignment=au ! rtph264pay mtu=1256 \
pt=109 ! multistreamrtpmux name=multirtpmux_0 csis-string="22446601" \
vid-headerext-id=1 vid-header-extension-string="04" frame-marking-headerext-id=2 \
frame-marking-header-extension-string="48" ! msrtpscimux.rtpsink_0 \
multistreamrtpscimux name=msrtpscimux ! rtpbin name=rtpbin_1 videotestsrc pattern=ball \
is-live=true name=vidsource_1 ! video/x-raw, framerate=30/1, width=180, height=90 \
! textoverlay text="" valignment=4 ! x264enc aud=false name=videoenc_1 ! \
video/x-h264, profile=baseline, stream-format=byte-stream,alignment=au ! \
rtph264pay mtu=1256 pt=109 ! multistreamrtpmux name=multirtpmux_1 \
csis-string="22446601" vid-headerext-id=1 vid-header-extension-string="04" \
frame-marking-headerext-id=2 frame-marking-header-extension-string="48" ! \
multirtpmux_1.rtpsrc msrtpscimux.rtpsink_1 msrtpscimux. msrtpscimux.rtpsrc ! \
netsim drop-probability=0.0 delay-probability=0.0 ! \
application/x-rtp ! rtpbin_0.send_rtp_sink_0 rtpbin_0.send_rtp_src_0 ! \
multisocketudpsink name=videosink rtpbin_0.send_rtcp_src_0 ! \
multisocketudpsink name=rtcpsink sync=false async=false
I was able to fix this issue by specifying the sink pads of the mux element I want to connect to at the end of each substream, and only then declaring the mux element itself with its parameters.
A brief example is below, for connecting:
[videotestsrc] -> [multirtpmux] -- [sinkpad_0] --+
                                                 | msrtpscimux
[videotestsrc] -> [multirtpmux] -- [sinkpad_1] --+
For the above I used
videotestsrc pattern=ball ! multistreamrtpmux name=multirtpmux_0 ! \
msrtpscimux.rtpsink_0 videotestsrc pattern=red ! \
multistreamrtpmux name=multirtpmux_1 ! msrtpscimux.rtpsink_3 \
multistreamrtpscimux name=msrtpscimux
Note: there is no pipe (!) between msrtpscimux.rtpsink_3 and multistreamrtpscimux name=msrtpscimux; the absence of a link there indicates that a new, separate substream description begins.
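The same pad-referencing syntax can be illustrated with stock GStreamer elements. A minimal sketch, using compositor as a stand-in for the custom msrtpscimux (the element choice here is purely illustrative): each branch ends at a named sink pad, and the named element is declared once, on its own.
# Each branch ends at a named request pad (mix.sink_0 / mix.sink_1);
# "compositor name=mix" is declared once, without a pipe in front of it.
gst-launch-1.0 videotestsrc pattern=ball ! mix.sink_0 \
    videotestsrc pattern=snow ! mix.sink_1 \
    compositor name=mix ! videoconvert ! autovideosink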
I'm new to this and am trying to build a Xilinx image with Yocto/Poky.
Following the guide, I cloned the repositories (branch thud), sourced oe-..., changed MACHINE="zedboard-zynq7", and ran bitbake petalinux-image-minimal, but I get the following error:
ERROR: tcf-agent-1.7.0+gitAUTOINC+dad3a6f568-r0 do_fetch: Fetcher
failure: Fetch command ...
https://git.eclipse.org/gitroot/tcf/org.eclipse.tcf.agent.git
refs/:refs/ failed with exit code 128, output: fatal: repository
'https://git.eclipse.org/gitroot/tcf/org.eclipse.tcf.agent.git/' not
found ... ERROR: Task
(~/poky/meta/recipes-devtools/tcf-agent/tcf-agent_git.bb:do_fetch)
failed with exit code '1'
The issue is this statement in tcf-agent_git.bb:
SRC_URI = "git://git.eclipse.org/gitroot/tcf/org.eclipse.tcf.agent \
The address is NOT wrong; in fact, I can clone successfully with this address. On the other hand, no modification I make to this variable has any effect either.
I have already run grep -rn "eclipse.org", but it only finds this file.
Any recommendation is welcome.
Thanks a lot.
——————————————————————————
In the end, I could not resolve this issue.
I found that the builder does NOT fetch from the address that SRC_URI gives at all; instead, it fetches from a mirror configured somewhere.
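To see which mirror settings actually apply to the recipe, you can dump its expanded configuration; a diagnostic sketch:
# Print the expanded mirror and source settings for the tcf-agent recipe.
bitbake -e tcf-agent | grep -E '^(PREMIRRORS|MIRRORS|SRC_URI)='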
As a test, I edited the .bb file, added PREMIRRORS="" and MIRRORS="", and added a protocol=git statement to the SRC_URI. The statements really do take effect; the builder fetches from the SRC_URI address, but the protocol is still HTTPS, and the fetch still fails.
My solution was to clone the source manually and put it in the corresponding directory. To let the builder know about it, I also touched a package_name.done file and chmod 777'd it in the same directory; then I could continue.
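A sketch of that manual workaround, assuming the default downloads layout under the build directory (the mirror directory name is derived from the SRC_URI and may differ between releases, so treat these paths as illustrative):
# Clone into the directory the git fetcher expects, then mark it done.
cd ~/poky/build/downloads/git2
git clone --mirror <working-clone-url> git.eclipse.org.gitroot.tcf.org.eclipse.tcf.agent
touch git.eclipse.org.gitroot.tcf.org.eclipse.tcf.agent.done
chmod 777 git.eclipse.org.gitroot.tcf.org.eclipse.tcf.agent.done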
I've run into the exact same issue using the Xilinx Yocto stack (rel-v2018.3 branch). For me, the problem wasn't in the tcf-agent_git.bb recipe in core/meta/recipes-devtools/tcf-agent, but in the tcf-agent_%.bbappend file in meta-petalinux/recipes-devtools/tcf-agent. In there, I replaced
SRC_URI = " \
git://git.eclipse.org/gitroot/tcf/org.eclipse.tcf.agent.git;branch=master;protocol=https \
file://fix_ranlib.patch;striplevel=2 \
file://ldflags.patch \
file://tcf-agent.init \
file://tcf-agent.service \
"
with
SRC_URI = " \
git://git.eclipse.org/gitroot/tcf/org.eclipse.tcf.agent.git;branch=master \
file://fix_ranlib.patch;striplevel=2 \
file://ldflags.patch \
file://tcf-agent.init \
file://tcf-agent.service \
"
and it finishes building correctly.
The former used to work fine last time I built the image (a few months ago) but for some reason the protocol=https option makes it fail now.
Your SRC_URI seems wrong.
It should be
SRC_URI = "git://git.eclipse.org/gitroot/tcf/org.eclipse.tcf.agent.git \
This one works perfectly for me.
Note: the backslash (\) at the end means you have a multi-line SRC_URI; correct it if you have only a single line.
In December 2021, using branch rel-v2020.1, I needed to change the line to:
SRC_URI = "git://git.eclipse.org/r/tcf/org.eclipse.tcf.agent.git;protocol=https \
I am trying to record the output of an MJPEG IP camera to an Ogg file using GStreamer.
I am trying the following, but no dice:
gst-launch-1.0 souphttpsrc location='http://192.168.2.124:8081/' is-live=true do-timestamp=true ! multipartdemux ! image/jpeg, width=1280, height=720, framerate=20/1 ! jpegdec ! theoraenc bitrate=2200 ! oggmux ! filesink location=output.ogg
I mean, the pipeline runs OK for a while, but then, out of the blue, it gets an EOS signal:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:39.423928252
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
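For diagnosing which element posts the unexpected EOS, the same pipeline can be run with a higher debug level; a sketch (GST_DEBUG=3 raises verbosity across all categories):
# Rerun with warning/fixme-level logging to see where the EOS originates.
GST_DEBUG=3 gst-launch-1.0 souphttpsrc location='http://192.168.2.124:8081/' is-live=true \
    do-timestamp=true ! multipartdemux ! image/jpeg, width=1280, height=720, framerate=20/1 ! \
    jpegdec ! theoraenc bitrate=2200 ! oggmux ! filesink location=output.ogg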
Also, the video output of the recording is not quite right; it seems sluggish. :/
The weird part is that this works OK:
gst-launch-1.0 souphttpsrc location='http://192.168.2.124:8081/' is-live=true do-timestamp=true ! multipartdemux ! image/jpeg, width=1280, height=720, framerate=20/1 ! jpegdec ! jpegenc ! avimux ! filesink location=output.avi
and then I do this transcode:
gst-launch-1.0 -e filesrc location=output.avi ! avidemux ! jpegdec ! videoconvert ! theoraenc bitrate=2200 ! oggmux ! filesink location=output.ogg
I mean, I get the desired result, but I first need to capture the stream to AVI and then transcode it to Ogg.
Good day fellow programmers,
I am trying to play a .ts file with GStreamer directly on an RPi.
Gstreamer-1.0 as well as gst-omx have been successfully installed and this example pipeline runs like a charm:
gst-launch-1.0 -v filesrc location=h264_720p_hp_5.1_6mbps_ac3_planet.mp4 ! qtdemux ! h264parse ! omxh264dec ! autovideosink
It actually even works using gst-launch-1.0 playbin uri=file:/root/h264_720p_hp_5.1_6mbps_ac3_planet.mp4
However, if I try to use playbin to play a .ts file, it does play, but at a very poor frame rate, which makes this approach unusable.
If I try to build a custom pipeline similar to the one shown above, I am stuck with "tsparse" apparently being incompatible with "omxmpeg2videodec".
This is what I run:
gst-launch-1.0 -v filesrc location=parkrun1920_12mbps.ts ! tsdemux ! tsparse ! omxmpeg2videodec ! autovideosink
Which outputs this error:
erroneous pipeline: could not link mpegtsparse2-0 to omxmpeg2videodec-omxmpeg2videodec0
Does anyone have an idea how I could get GStreamer to play MPEG2-TS files smoothly?
My goal is to play http unicast mpeg2-ts streams provided by mumudvb on the same RPi.
Thanks for your help, it would be greatly appreciated!
Edit: omxplayer plays the .ts file perfectly smoothly, so I don't think my problem has anything to do with the hardware or the file.
The problem is that I used tsparse. After demuxing, the stream is no longer a TS container, and one therefore has to use mpegvideoparse or a similar parser element.
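Concretely, a corrected pipeline along those lines would be (an untested sketch using the elements already named in the question):
# Parse the demuxed MPEG-2 elementary stream, not the TS container.
gst-launch-1.0 -v filesrc location=parkrun1920_12mbps.ts ! tsdemux ! \
    mpegvideoparse ! omxmpeg2videodec ! autovideosink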
I use something like this to play a TS stream on Ubuntu:
gst-launch-1.0 souphttpsrc location=http://xxx.xxx.x.xx/location/test.ts ! tsdemux name=d d.video_0324 ! queue ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! ximagesink
FYI, if playbin is working, you can generate a dot graph file by setting the GST_DEBUG_DUMP_DOT_DIR variable, then analyse the graph with xdot and find the solution.
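For example (the URI is a placeholder, and the generated .dot file names encode the state transition, so they vary per run):
# Dump pipeline graphs to /tmp on each state change, then view one.
GST_DEBUG_DUMP_DOT_DIR=/tmp gst-launch-1.0 playbin uri=file:///root/test.ts
xdot /tmp/*.PAUSED_PLAYING.dot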
For my project, I am trying to use a gumstix overo, with gstreamer and the TI plugin for making use of the DSP in order to stream video via RTP. I found these two tutorials and have even been able to follow them successfully:
http://jumpnowtek.com/index.php?option=com_content&view=article&id=81:gumstix-dsp-gstreamer&catid=35:gumstix&Itemid=67
^^In this one I am able to compile an embedded Linux OS with GStreamer and the GstTIPlugin element. After doing so, I am able to stream the video test source to a remote PC successfully.
However, that tutorial is meant for a Caspa video camera; I am using the Logitech Pro C920 used in this tutorial:
http://www.oz9aec.net/index.php/gstreamer/473-using-the-logitech-c920-webcam-with-gstreamer
^^In this one we make use of a C920 camera in H264 mode. Since the V4L2 drivers do not support this, we use a small C program to capture from the camera frame by frame and stream it to standard out. From there we tell GStreamer to capture from a file source, in this case standard in (/dev/fd/0). Again, I am able to complete this successfully and stream from the C920 camera, but without using the TI plugin for the DSP.
Now on to the problem:
./capture -c 10000 -o | gst-launch -v -e filesrc location=/dev/fd/0 ! h264parse ! rtph264pay ! udpsink host=192.168.1.100 port=4000
^^This command runs the capture program, and GStreamer grabs and streams the video, using h264parse in the pipeline (to encode, I believe?).
When I replace h264parse with the TI plugin from the first tutorial, like this:
./capture -c 10000 -o | gst-launch -v -e filesrc location=/proc/self/fd/0 ! TIVidenc1 codecName=h264enc engineName=codecServer ! rtph264pay ! udpsink host=192.168.1.100 port=4000
I get this error:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstTIVidenc1:tividenc10: failed to create video encoder: h264enc
Additional debug info:
gsttividenc1.c(1584): gst_tividenc1_codec_start (): /GstPipeline:pipeline0/GstTIVidenc1:tividenc10
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
I also tried keeping both elements in, and then the error says it cannot link h264parse0 to tividenc10.
Has anyone had any experience with the GstTIPlugin and know what I'm doing wrong?
Thanks
What problem are you trying to solve, exactly? Are you trying to encode H.264 using TI's encoding element? Because if I'm reading this all correctly, the './capture' utility already receives frames in H.264; there is no need to encode.
Assuming we have this golden example (this works for you, right?):
./capture -c 10000 -o | gst-launch -v -e filesrc location=/dev/fd/0 !
h264parse ! rtph264pay ! udpsink host=192.168.1.100 port=4000
The 'h264parse' parses an H.264 stream into H.264 NAL units for the benefit of the RTP sink. If that's working, then the h264parse element is happy because it is getting H.264 data from the capture program.
If you're trying to replace h264parse with a TI H.264 encoder element, well, that's just confusing. Again, I don't know exactly what problem you're trying to solve so I might not have the whole picture.
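For what it's worth, if the goal really were to encode with the TI element, it would need raw video upstream rather than an already-encoded H.264 stream; a hedged sketch in the spirit of the first tutorial (the device and caps are assumptions, untested):
# Feed raw frames to the TI encoder, then payload and send as before.
gst-launch -v v4l2src device=/dev/video0 ! video/x-raw-yuv, width=640, height=480, framerate=30/1 ! \
    TIVidenc1 codecName=h264enc engineName=codecServer ! rtph264pay ! udpsink host=192.168.1.100 port=4000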
If you're not already familiar with it, get to know the 'gst-inspect' command, e.g. 'gst-inspect h264parse'. This will give you insight into what type of data an element can consume or produce.
I am working with GStreamer, mainly playing around with music playback features.
I am currently trying to use RTP to send MP3 streams over our LAN, but so far without success.
On sender side I use the following pipeline:
gst-launch -v filesrc location=./my_music_file.mp3 ! ffdemux_mp3 ! rtpmpapay ! udpsink port=6969 host=192.168.0.200
On receiver side I use the following pipeline:
gst-launch -v udpsrc port=6969 caps="application/x-rtp, media=(string)audio, clock-rate=(int)90000, encoding-name=(string)MPA, payload=(int)96, ssrc=(guint)1951256090, clock-base=(guint)1711290778, seqnum-base=(guint)24773" ! rtpmpadepay ! flump3dec ! pulsesink
There is apparently no error, as the output on the receiver side is:
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
...But the sound is strange: it is just as if it were being played too fast.
I have verified that audio works by playing MP3 files locally. I have also tested RTP by streaming WAV/µ-law files. All of this works well.
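For reference, a µ-law RTP test of the kind mentioned could look like this (a sketch; the host, port, and element choices are assumptions for illustration):
# Sender: µ-law encode a test tone and payload it as RTP PCMU.
gst-launch -v audiotestsrc ! mulawenc ! rtppcmupay ! udpsink host=192.168.0.200 port=6969
# Receiver: depayload and decode with the matching PCMU caps.
gst-launch -v udpsrc port=6969 caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! rtppcmudepay ! mulawdec ! pulsesink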
I have also tried to approach the problem in other ways; for instance, I have used the following pipeline, which works well with audiotestsrc and the AMR-NB codec:
gst-launch gstrtpbin name=rtpbin audiotestsrc ! amrnbenc ! rtpamrpay ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! udpsink host=192.168.0.200 port=5002 rtpbin.send_rtcp_src_0 ! udpsink port=5003 host=192.168.0.200 sync=false async=false udpsrc port=5005 ! rtpbin.recv_rtcp_sink_1
But when using the same pipeline with LAME, there is again no error on the receiver side, yet the output is again "too fast":
Sender:
gst-launch gstrtpbin name=rtpbin audiotestsrc ! lamemp3enc ! rtpmpapay ! rtpbin.send_rtp_sink_0 rtpbin.send_rtp_src_0 ! udpsink host=192.168.0.200 port=5002 rtpbin.send_rtcp_src_0 ! udpsink port=5003 host=192.168.0.200 sync=false async=false udpsrc port=5005 ! rtpbin.recv_rtcp_sink_1
Receiver:
gst-launch -v udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)90000, encoding-name=(string)MPA, payload=(int)96" ! rtpmpadepay ! flump3dec ! pulsesink
Does anyone have an idea of what's wrong with my pipelines?
Thank you very much for your support,
Jorge
For those who are interested in this topic, I have a partial answer to the problem.
In fact, it is the Fluendo decoder that drops good MP3 frames coming from the RTP depayloader.
When I use the mad decoder instead, I can receive and hear the whole stream.
Here are the pipelines I use to do mp3 streaming over RTP:
Sender:
gst-launch -v filesrc location=./my_file.mp3 ! ffdemux_mp3 ! rtpmpapay ! udpsink port=6969 host=192.168.0.200
Receiver:
gst-launch -v udpsrc port=6969 caps="application/x-rtp, media=(string)audio, clock-rate=(int)90000, encoding-name=(string)MPA, payload=(int)96" ! rtpmpadepay ! mad ! pulsesink
The problem has been reported to the Fluendo team.
Hope this helps.