I am streaming an mp4 (MPEG-4) file from one device to another using gstreamer over an RTP stream. Basically I am splitting the mp4 file into its audio and video streams and then sending them to the other device, where they are streamed. Now I want to save the mp4 file to disk on the other device, but my problem is that I can only save the audio and video separately, and they cannot be played individually.
I am confused about how to combine the audio and video RTP streams back into an mp4 file and save it to disk on the other device.
Here are the command-line pipelines:
Sender (server)
gst-launch-0.10 -v filesrc location=/home/kuber/Desktop/sample.mp4 \
! qtdemux name=d \
! queue \
! rtpmp4vpay \
! udpsink port=5000 \
d. \
! queue \
! rtpmp4gpay \
! udpsink port=5002
Receiver (client)
gst-launch-0.10 udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)243, config=(string)000001b0f3000001b50ee040c0cf0000010000000120008440fa282fa0f0a21f, payload=(int)96, ssrc=(uint)4291479415, clock-base=(uint)4002140493, seqnum-base=(uint)57180" \
! rtpmp4vdepay \
! ffdec_mpeg4 \
! xvimagesink sync=false \
udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)32000, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, streamtype=(string)5, profile-level-id=(string)2, mode=(string)AAC-hbr, config=(string)1290, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, payload=(int)96, ssrc=(uint)501975200, clock-base=(uint)4248495069, seqnum-base=(uint)37039"\
! rtpmp4gdepay \
! faad \
! alsasink sync=false
You can try the following pipeline to mux the audio and video into a single file:
gst-launch-0.10 udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)243, config=(string)000001b0f3000001b50ee040c0cf0000010000000120008440fa282fa0f0a21f, payload=(int)96, ssrc=(uint)4291479415, clock-base=(uint)4002140493, seqnum-base=(uint)57180" \
! rtpmp4vdepay \
! ffdec_mpeg4 \
! mux. \
udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)32000, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, streamtype=(string)5, profile-level-id=(string)2, mode=(string)AAC-hbr, config=(string)1290, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, payload=(int)96, ssrc=(uint)501975200, clock-base=(uint)4248495069, seqnum-base=(uint)37039"\
! rtpmp4gdepay \
! faad \
! mux. \
matroskamux name=mux \
! filesink location=video.mp4
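If you need an actual MP4 container rather than Matroska data in a file named video.mp4, one possible alternative is to skip the decoding step and mux the depayloaded elementary streams directly. This is an untested sketch; it assumes the mp4mux, mpeg4videoparse and aacparse elements are available in your 0.10 install, and it uses -e so the file is finalized on shutdown:
gst-launch-0.10 -e udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)243, config=(string)000001b0f3000001b50ee040c0cf0000010000000120008440fa282fa0f0a21f, payload=(int)96, ssrc=(uint)4291479415, clock-base=(uint)4002140493, seqnum-base=(uint)57180" \
! rtpmp4vdepay \
! mpeg4videoparse \
! mux. \
udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)32000, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, streamtype=(string)5, profile-level-id=(string)2, mode=(string)AAC-hbr, config=(string)1290, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, payload=(int)96, ssrc=(uint)501975200, clock-base=(uint)4248495069, seqnum-base=(uint)37039" \
! rtpmp4gdepay \
! aacparse \
! mux. \
mp4mux name=mux \
! filesink location=video.mp4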
I currently have two command-line pipelines set up to stream video from a Raspberry Pi camera (ArduCam module) to a PC over ethernet; these work great:
gst-sender.sh
./video2stdout | gst-launch-1.0 -v fdsrc fd=0 ! \
video/x-h264, width=1280, height=800, framerate=60/1 ! \
h264parse ! rtph264pay ! \
udpsink host=xxx.xxx.xx.xxx port=xxxx
gst-reciever.sh
gst-launch-1.0 -v -e udpsrc port=xxxx \
caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, \
encoding-name=(string)H264, payload=(int)96" ! \
rtph264depay ! h264parse ! mp4mux ! filesink location=video.mp4
However, I will ultimately be running multiple cameras, synchronized via an external hardware trigger, and since I can't guarantee that the streams will begin at the same time, I need timestamps, either for the stream start time or for each frame.
By adding 'identity silent=false' between h264parse and rtph264pay in gst-sender.sh, I can access the stream's buffer data, and with the following command I can retrieve the frame timestamps:
./gst-sender.sh | grep -oP "(?<=dts: )(\d+:){2}\d+.\d+"
But these timestamps are relative to the start of the stream, so I can't use them to line up saved videos from multiple streams!
Start video encoding...
0:00:00.000000000
0:00:00.016666666
0:00:00.033333332
0:00:00.049999998
0:00:00.066666664
0:00:00.083333330
0:00:00.099999996
0:00:00.116666662
0:00:00.133333328
0:00:00.149999994
0:00:00.166666660
0:00:00.183333326
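For reference, here is gst-sender.sh with the identity element inserted between h264parse and rtph264pay (otherwise identical to the pipeline above):
./video2stdout | gst-launch-1.0 -v fdsrc fd=0 ! \
video/x-h264, width=1280, height=800, framerate=60/1 ! \
h264parse ! identity silent=false ! rtph264pay ! \
udpsink host=xxx.xxx.xx.xxx port=xxxx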
It looks like gstreamer has an "absolute" clock time that it uses for latency calculations [1], but I have been unable to find any way to access it from the command line.
Is there a way to access gstreamer's absolute/system clock from the command line? Or another way to get the stream start timestamp?
I am trying to composite three streams coming from three Raspberry Pis.
As soon as I join two streams together using the videomixer plugin, I get a message ending with:
Pipeline:pipeline0/GstOSXVideoSink:osxvideosink0:
There may be a timestamping problem, or this computer is too slow.
Strangely, my task monitor only indicates about 15% CPU usage for gst.
With the three streams, the framerate becomes unusable. I would expect my i7 MacBook to be able to handle this without a problem.
Here is the code I am using for the mixing, in this case with just one stream (sink?).
Can anyone tell me whether there is an obvious mistake? Or where I should look for the bottleneck and how to improve it?
Thanks!
gst-launch-1.0 videomixer name=m sink_1::xpos=400 sink_2::ypos=300 ! autovideosink \
-v udpsrc port=9000 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264'! rtph264depay ! video/x-h264,width=400,height=300,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! m. \
-v udpsrc port=9001 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! video/x-h264,width=400,height=300,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! m. \
-v udpsrc port=9002 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! video/x-h264,width=400,height=300,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! m.
Here is the code I use to send the streams from the RPI Camera.
raspivid -n -w 640 -h 480 -t 0 -o - \
| gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay \
config-interval=10 pt=96 ! udpsink host=192.168.1.3 port=9000
Try adding queue elements for each video decode and sync=false to the video sink.
gst-launch-1.0 videomixer name=m sink_1::xpos=400 sink_2::ypos=300 ! videoconvert ! ximagesink sync=false \
udpsrc port=9000 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! rtph264depay ! video/x-h264,width=400,height=300 ! h264parse ! avdec_h264 ! queue ! videoconvert ! m. \
udpsrc port=9001 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! rtph264depay ! video/x-h264,width=400,height=300 ! h264parse ! avdec_h264 ! queue ! videoconvert ! m. \
udpsrc port=9002 ! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! rtph264depay ! video/x-h264,width=400,height=300 ! h264parse ! avdec_h264 ! queue ! videoconvert ! m.
Now my disclaimer to this would be that I'm unsure if the video will be properly smooth and in sync, but it seems to look pretty good.
Also, on the raspivid side, you'll probably want to add the config-interval property to the rtph264pay element.
raspivid -n -w 640 -h 480 -t 0 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 ! multiudpsink clients=192.168.1.3:9000,192.168.1.3:9001,192.168.1.3:9002
I am developing a video chat application and I need real-time streaming with audio and video in sync. This is what I did:
video encoding and decoding with the x264 encoder
audio encoding and decoding with the lamemp3 encoder
MPEG-TS muxing and demuxing
my sender command:
gst-launch-1.0 -e mpegtsmux name="muxer" ! udpsink host=172.20.4.19 port=5000 v4l2src ! video/x-raw, width=640,height=480 ! x264enc tune=zerolatency byte-stream=true ! muxer. pulsesrc ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! muxer. rtph264pay
my receiver command:
gst-launch-1.0 udpsrc port=5000 ! decodebin name=dec ! queue ! autovideosink dec. ! queue ! audioconvert ! audioresample ! autoaudiosink
However, I am getting a delay of more than 1 second. What is causing this delay (assuming that I did something inherently wrong)? And how would I minimize it?
Is my code correct?
I am trying to convert .mp4 to .mkv (H.264) with gst-launch-1.0 on a Raspberry Pi:
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! omxmpeg4videodec ! omxh264enc ! matroskamux ! filesink location=out.mkv
Do you get any errors? Please remember to mention that in future questions, as it helps narrow down the problem.
That should not be right: .mp4 is usually the extension for the MP4 container format, not for the MPEG-4 video codec. You need something like:
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! qtdemux ! omxmpeg4videodec ! queue ! videoconvert ! omxh264enc ! matroskamux ! filesink location=out.mkv
This will only convert the video; the audio in the original media file will be lost. It might also be more practical to just use uridecodebin for the decoding part:
gst-launch-1.0 -v uridecodebin uri=file:///path/to/sample.mp4 ! queue ! videoconvert ! omxh264enc ! matroskamux ! filesink location=out.mkv
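If you also want to keep the audio, a rough sketch (untested; it assumes the file has an audio track and simply re-encodes it to Vorbis with vorbisenc, which matroskamux accepts) would be:
gst-launch-1.0 -v uridecodebin uri=file:///path/to/sample.mp4 name=dec \
dec. ! queue ! videoconvert ! omxh264enc ! mux. \
dec. ! queue ! audioconvert ! audioresample ! vorbisenc ! mux. \
matroskamux name=mux ! filesink location=out.mkv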
I am looking for an explanation of how to use named elements with respect to muxing two inputs into one element, for instance muxing audio and video into one mpegtsmux element.
gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts
The above pipeline gives the plugin interconnection shown below.
It shows that the audio doesn't connect to mpegtsmux.
How do I modify the command line to have the audio and video muxed together in mpegtsmux?
Thanks!
I'll try to give the basic idea though I'm not that proficient and could be plain wrong.
A pipeline can consist of several sub-pipelines. If a chain of elements ends not with a pipe (!) but with the start of another element, then it's a new sub-pipeline: filesrc location=a.mp4 ! qtdemux name=demp4 demp4. ! something
A named element (usually a muxer or demuxer), or one of its pads like somedemux.audio_00, can be a source and/or a sink in other sub-pipelines: demp4. ! queue ! decodebin ! x264enc ! mux.
Usually a sub-pipeline ends with a named element/muxer, either declared (mpegtsmux name=mux) or referenced by name (mux.). The dot at the end is the syntax for such a reference.
Then the named muxer can be piped to a sink in yet another sub-pipeline: mux. ! filesink location=out.ts
If you're only using the single audio or video stream from a source, you don't have to specify a pad like muxname.audio_00; muxname. is a shortcut for "a suitable audio/video pad from muxname".
The example
That said, I assume that your mp4 file has both audio and video. In this case, you need to demux it into 2 streams first, decode, re-encode and then mux them back.
Indeed, your audio is not connected to mpegtsmux.
If you really need to decode the streams, this is what I would do. It didn't work for me, though:
gst-launch-1.0 filesrc location=surround.mp4 ! \
qtdemux name=demp4 \
demp4. ! queue ! decodebin ! audioconvert ! lamemp3enc ! mpegtsmux name=mux \
demp4. ! queue ! decodebin ! x264enc ! mux. \
mux. ! filesink location=out.ts
Or let's use decodebin to magically decode both streams:
gst-launch-1.0 filesrc location=surround.mp4 ! \
decodebin name=demp4 \
demp4. ! queue ! audioconvert ! lamemp3enc ! mpegtsmux name=mux \
demp4. ! queue ! x264enc ! mux. \
mux. ! filesink location=out.ts
It is not linked because your launch line doesn't do it. Notice how the lamemp3enc element is not linked downstream.
Update your launch line to:
gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc ! mux. dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts
The only change is " ! mux." after the lamemp3enc to tell it to link to the mpegtsmux.
While you are updating things, please note that you are using GStreamer 0.10, which is years obsolete and unmaintained; please upgrade to the 1.x series to get the latest improvements and bugfixes.