I am trying to composite three streams coming from three Raspberry Pis.
As soon as I join two streams together using the videomixer plugin, I get a warning ending with:
/GstPipeline:pipeline0/GstOSXVideoSink:osxvideosink0:
There may be a timestamping problem, or this computer is too slow.
Strangely, my task monitor indicates only about 15% CPU usage for gst-launch.
With the three streams, the framerate becomes unusable. I would expect my i7 MacBook to handle this without a problem.
Here is the command I am using for the mixing (each udpsrc branch feeds one sink pad of the mixer).
Can anyone tell me whether there is an obvious mistake, or where I should look for the bottleneck?
Thanks!
gst-launch-1.0 videomixer name=m sink_1::xpos=400 sink_2::ypos=300 ! autovideosink \
-v udpsrc port=9000 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! video/x-h264,width=400,height=300,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! m. \
-v udpsrc port=9001 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! video/x-h264,width=400,height=300,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! m. \
-v udpsrc port=9002 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! video/x-h264,width=400,height=300,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! m.
Here is the command I use to send the stream from the RPi camera.
raspivid -n -w 640 -h 480 -t 0 -o - \
| gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay \
config-interval=10 pt=96 ! udpsink host=192.168.1.3 port=9000
Try adding a queue element after each video decoder, and sync=false on the video sink. (The example below uses ximagesink, which is X11-specific; on OS X you would keep autovideosink.)
gst-launch-1.0 videomixer name=m sink_1::xpos=400 sink_2::ypos=300 ! videoconvert ! ximagesink sync=false \
udpsrc port=9000 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! rtph264depay ! video/x-h264,width=400,height=300 ! h264parse ! avdec_h264 ! queue ! videoconvert ! m. \
udpsrc port=9001 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! rtph264depay ! video/x-h264,width=400,height=300 ! h264parse ! avdec_h264 ! queue ! videoconvert ! m. \
udpsrc port=9002 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! rtph264depay ! video/x-h264,width=400,height=300 ! h264parse ! avdec_h264 ! queue ! videoconvert ! m.
My disclaimer would be that I'm unsure whether the video will be properly smooth and in sync, but it seems to look pretty good.
Also, on the raspivid side, you'll probably want to add the config-interval property to the rtph264pay element.
raspivid -n -w 640 -h 480 -t 0 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 ! multiudpsink clients=192.168.1.3:9000,192.168.1.3:9001,192.168.1.3:9002
Related
I am trying to send a video source to three outputs: multicast, filesystem, and (resized video) display with gst-launch-1.0.
This is the command:
gst-launch-1.0 videotestsrc ! x264enc ! tee name=t \
t. ! queue ! rtph264pay ! udpsink host=224.1.1.1 port=20000 auto-multicast=true \
t. ! queue ! h264parse ! splitmuxsink location=./vid%02d.mkv max-size-time=10000000000 \
t. ! queue ! videoconvert ! videoscale ! video/x-raw,width=100 ! autovideosink
and this is the error:
WARNING: erroneous pipeline: could not link queue2 to videoconvert0
Your problem is that you are sending an H.264 stream to videoconvert, which expects raw video. So you would just add decoding:
gst-launch-1.0 -e videotestsrc ! video/x-raw,width=640,height=480,framerate=30/1 ! queue ! x264enc ! tee name=t \
t. ! queue ! rtph264pay ! udpsink host=224.1.1.1 port=20000 auto-multicast=true \
t. ! queue ! h264parse ! splitmuxsink location=./vid%02d.mkv max-size-time=10000000000 \
t. ! queue ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=100 ! autovideosink
I am developing a video chat application and I need real-time streaming with audio and video in sync. This is what I did:
video encoding with x264 encoder and decoding
audio encoding with lamemp3 encoder and decoding
mpegts muxing and demuxing
my sender command:
gst-launch-1.0 -e mpegtsmux name="muxer" ! udpsink host=172.20.4.19 port=5000 \
v4l2src ! video/x-raw,width=640,height=480 ! x264enc tune=zerolatency byte-stream=true ! muxer. \
pulsesrc ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! muxer.
my receiver command:
gst-launch-1.0 udpsrc port=5000 ! decodebin name=dec ! queue ! autovideosink dec. ! queue ! audioconvert ! audioresample ! autoaudiosink
However, I am getting a delay of more than one second. What is causing this delay (assuming that I did something inherently wrong), and how would I minimize it?
I'm looking for an explanation of how to use named elements with respect to muxing two inputs into one element, for instance muxing audio and video into a single mpegtsmux element.
gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts
The above pipeline produces the plugin interconnection shown below (graph image not reproduced here).
It shows that the audio does not connect to mpegtsmux.
How do I modify the command line so that audio and video are both muxed by mpegtsmux?
Thanks!
I'll try to give the basic idea, though I'm not that proficient and could be plain wrong.
A pipeline can consist of several sub-pipelines. If some element (bin) ends not with a pipe (!) but with the start of another element, a new sub-pipeline begins: filesrc location=a.mp4 ! qtdemux name=demp4 demp4. ! something
A named bin (often a muxer or demuxer), or one of its pads such as somedemux.audio_00, can serve as a source and/or a sink in other sub-pipelines: demp4. ! queue ! decodebin ! x264enc ! mux.
Usually a sub-pipeline ends with a named bin/muxer, either declared (mpegtsmux name=mux) or referenced by name (mux.); the trailing dot is the reference syntax.
The named muxer can then be piped to a sink in yet another sub-pipeline: mux. ! filesink location=out.ts
If you are using only a single audio or video stream from a source, you don't have to specify a pad like muxname.audio_00; muxname. is a shortcut for "a suitable audio/video pad of muxname".
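To make the syntax concrete, here is a minimal self-contained line using test sources (an illustrative sketch, not from the original question; the element choices are my own):
gst-launch-1.0 videotestsrc num-buffers=300 ! x264enc ! h264parse ! mux. \
audiotestsrc num-buffers=300 ! audioconvert ! lamemp3enc ! mux. \
mpegtsmux name=mux ! filesink location=demo.ts
Here mpegtsmux name=mux declares the named muxer, and each source branch ends in mux. to request a suitable pad from it; the result should be a short .ts file containing a test pattern with a tone.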
The example
That said, I assume that your mp4 file has both audio and video. In that case you need to demux it into two streams first, then decode, re-encode, and mux them back together.
Indeed, your audio is not connected to mpegtsmux.
If you really need to decode the streams, this is what I would try (it didn't work for me as-is, though):
gst-launch-1.0 filesrc location=surround.mp4 ! \
qtdemux name=demp4 \
demp4. ! queue ! decodebin ! audioconvert ! lamemp3enc ! mpegtsmux name=mux \
demp4. ! queue ! decodebin ! x264enc ! mux. \
mux. ! filesink location=out.ts
Or let decodebin magically decode both streams:
gst-launch-1.0 filesrc location=surround.mp4 ! \
decodebin name=demp4 \
demp4. ! queue ! audioconvert ! lamemp3enc ! mpegtsmux name=mux \
demp4. ! queue ! x264enc ! mux. \
mux. ! filesink location=out.ts
It is not linked because your launch line doesn't link it. Notice how the lamemp3enc element is not linked downstream.
Update your launch line to:
gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc ! mux. dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts
The only change is " ! mux." after the lamemp3enc, which tells it to link to the mpegtsmux.
While you are updating things, please note that you are using GStreamer 0.10, which is years obsolete and unmaintained; please upgrade to the 1.x series to get the latest improvements and bugfixes.
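For reference, a 1.x translation of that corrected line could look like the sketch below (untested here; videoconvert is added before x264enc because the 1.x encoders are stricter about raw video formats):
gst-launch-1.0 filesrc location=surround.mp4 ! decodebin name=dmux \
dmux. ! queue ! audioconvert ! lamemp3enc ! mux. \
dmux. ! queue ! videoconvert ! x264enc ! mux. \
mpegtsmux name=mux ! queue ! filesink location=out.ts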
I'm currently trying to stream two side-by-side webcams from my Raspberry Pi.
I found a pipeline for GStreamer:
gst-launch v4l2src device=/dev/video1 ! videoscale ! ffmpegcolorspace ! \
video/x-raw-yuv, width=640, height=480 ! videobox border-alpha=0 left=-640 ! \
videomixer name=mix ! ffmpegcolorspace ! jpegenc ! tcpserversink \
host=192.168.1.108 port=8080 sync=false \
v4l2src ! videoscale ! ffmpegcolorspace ! \
video/x-raw-yuv, width=640, height=480 ! \
videobox right=-640 ! mix.
Both webcams indicate by their lights that they are active, but I can only see the right side.
Could someone please help me with this?
Regards,
Carsten
I ran the line fine on my Linux box, but just as a wild guess, try adding a queue element before every videomixer input pad.
Also, I see device=/dev/video1 but no /dev/video0 (or /dev/video2) for the second camera; you might want to specify that in your second v4l2src.
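Putting both suggestions together, a sketch of your pipeline with a queue in front of each mixer pad and an explicit device on the second v4l2src (assuming the second camera is /dev/video0) would be:
gst-launch v4l2src device=/dev/video1 ! videoscale ! ffmpegcolorspace ! \
video/x-raw-yuv, width=640, height=480 ! videobox border-alpha=0 left=-640 ! queue ! \
videomixer name=mix ! ffmpegcolorspace ! jpegenc ! tcpserversink \
host=192.168.1.108 port=8080 sync=false \
v4l2src device=/dev/video0 ! videoscale ! ffmpegcolorspace ! \
video/x-raw-yuv, width=640, height=480 ! videobox right=-640 ! queue ! mix.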
Also I was having trouble with a pipeline similar to yours, this one worked for me:
gst-launch-0.10 v4l2src device=/dev/video1 ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv, width=320, height=240 ! videobox border-alpha=0 ! videomixer name=mixme ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=sbs-3d-video.mov v4l2src device=/dev/video0 ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv, width=320, height=240 ! videobox left=-320 ! mixme.
And here it is again for your version of GStreamer (plain gst-launch):
gst-launch v4l2src device=/dev/video1 ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv, width=320, height=240 ! videobox border-alpha=0 ! videomixer name=mixme ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=sbs-3d-video.mov v4l2src device=/dev/video0 ! videoscale ! ffmpegcolorspace ! video/x-raw-yuv, width=320, height=240 ! videobox left=-320 ! mixme.
This works:
gst-launch-0.10 \
videotestsrc ! ffmpegcolorspace ! 'video/x-raw-yuv' ! mux. \
audiotestsrc ! audioconvert ! 'audio/x-raw-int,rate=44100,channels=1' ! mux. \
avimux name=mux ! filesink location=gst.avi
I can let it run for a while, kill it, and then totem gst.avi displays a nice test card with tone.
However, trying to do something more useful like
gst-launch-0.10 \
filesrc location=MVI_2034.AVI ! decodebin name=dec \
dec. ! ffmpegcolorspace ! 'video/x-raw-yuv' ! mux. \
dec. ! audioconvert ! 'audio/x-raw-int,rate=44100,channels=1' ! mux. \
avimux name=mux ! filesink location=gst.avi
it just displays
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
and then stalls indefinitely.
What's the trick to get the version with decodebin rolling?
Aha... this does what I want:
gst-launch-0.10 \
filesrc location=MVI_2034.AVI ! decodebin name=dec \
dec. ! queue ! ffmpegcolorspace ! 'video/x-raw-yuv' ! queue ! mux. \
dec. ! queue ! audioconvert ! 'audio/x-raw-int,channels=1' ! audioresample ! 'audio/x-raw-int,rate=44100' ! queue ! mux. \
avimux name=mux ! filesink location=gst.avi
The queue elements (both leading and trailing) do appear to be crucial: without them, the demuxer pushes a buffer to one avimux pad from its single streaming thread, avimux blocks that thread while waiting for data on its other pad, and the pipeline deadlocks in PREROLLING. The queues give each branch its own thread.
Further experiments adding things like videoflip or
videorate ! 'video/x-raw-yuv,framerate=25/1'
into the video part of the pipeline all work as expected.
Your pipeline seems to be correct. However, gst-launch is a limited tool; I would suggest coding the pipeline in Python or Ruby for better debugging.
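Before switching languages, you can also get considerably more detail out of gst-launch itself via the standard GST_DEBUG environment variable (generic GStreamer debugging, not specific to this pipeline); for example:
# log warnings and state changes from every element
GST_DEBUG=3 gst-launch-0.10 <your pipeline>
# or raise the level for selected debug categories only (wildcards are supported)
GST_DEBUG=2,avi*:5 gst-launch-0.10 <your pipeline>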