How to get timestamps from GStreamer using the system clock? (command-line)

I currently have two command-line pipelines set up to stream video from a Raspberry Pi camera (ArduCam module) to a PC over ethernet; these work great:
gst-sender.sh
./video2stdout | gst-launch-1.0 -v fdsrc fd=0 ! \
video/x-h264, width=1280, height=800, framerate=60/1 ! \
h264parse ! rtph264pay ! \
udpsink host=xxx.xxx.xx.xxx port=xxxx
gst-receiver.sh
gst-launch-1.0 -v -e udpsrc port=xxxx \
caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, \
encoding-name=(string)H264, payload=(int)96" ! \
rtph264depay ! h264parse ! mp4mux ! filesink location=video.mp4
However, I will ultimately be running multiple cameras synchronized via an external hardware trigger, and since I can't guarantee that the streams will begin at the same time, I need timestamps: either for the stream start time or for each frame.
By adding 'identity silent=false' between h264parse and rtph264pay in gst-sender.sh, I can access the stream's buffer data, and with the following command I can retrieve the frame timestamps:
./gst-sender.sh | grep -oP "(?<=dts: )(\d+:){2}\d+\.\d+"
But these timestamps are relative to the start of the stream, so I can't use them to line up saved videos from multiple streams!
Start video encoding...
0:00:00.000000000
0:00:00.016666666
0:00:00.033333332
0:00:00.049999998
0:00:00.066666664
0:00:00.083333330
0:00:00.099999996
0:00:00.116666662
0:00:00.133333328
0:00:00.149999994
0:00:00.166666660
0:00:00.183333326
It looks like GStreamer has an "absolute" clock time that it uses for latency calculations [1], but I have been unable to find any way to access it from the command line.
Is there a way to access GStreamer's absolute/system clock from the command line? Or another way to get the stream start timestamp?
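One pragmatic workaround is to capture a wall-clock reference on each Pi at launch time and add it to the relative timestamps offline. This is only a sketch, not a built-in GStreamer facility: it assumes the hosts' clocks are synchronized (e.g. via NTP), it is accurate only to within pipeline startup latency, and stream_start.txt / frame_times.txt are placeholder names.
# Sketch: record an absolute start time next to the relative DTS values
date +%s.%N > stream_start.txt    # wall-clock start (seconds.nanoseconds)
./gst-sender.sh | grep -oP "(?<=dts: )(\d+:){2}\d+\.\d+" > frame_times.txt
# Offline: absolute frame time = stream_start + relative DTS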

Related

Using gstreamer with videomixer & 2 cameras streaming over UDP

I have a Raspberry Pi Compute Module with 2 cameras. I'm trying to use gstreamer with v4l2src, selecting /dev/video0 and /dev/video1, to run continually at about 20 FPS, use videomixer to combine the images side-by-side, and then output H264 over RTP to a UDP port (read by another host).
The default (current) RPi v4l2src driver does not support two cameras, but as of today a beta is available that does; however, it requires the beta 4.4.6 kernel.
The problem I'm having is in getting the mixer connected.
#!/bin/bash -x
#
# Script to start RPi Compute Module streaming over RTP (RFC3984)
# from both cameras
#
FPS=20 # Frames per second
WIDTH=640 # Image width
HEIGHT=480 # Image height
UPLINK_HOST=192.168.1.73 # Receiving host
PORT=5200 # UDP port
#
# TESTING WITH ONE CAMERA ONLY FOR THE MOMENT
#
function start_streaming
{
    gst-launch-1.0 -ve videomixer name=mixer \
        ! x264enc \
        ! h264parse \
        ! rtph264pay config-interval=10 pt=96 \
        ! udpsink host=$UPLINK_HOST port=$PORT \
        v4l2src device=/dev/video0 \
        ! video/x-raw,format=AYUV,width=$WIDTH,height=$HEIGHT,framerate=$FPS/1 \
        ! mixer.
}
# Start streaming on both cameras simultaneously
echo Image size: $WIDTH x $HEIGHT
echo Frame rate: $FPS
echo Starting cameras 0 and 1 streaming to $UPLINK_HOST:$PORT
start_streaming
# Wait until everything has finished
wait
exit 0
# end
What I'm getting is the rather useless message:
WARNING: erroneous pipeline: could not link v4l2src0 to mixer
I've fiddled about rather a lot and got nowhere; it's probably something trivial, but I'll be blowed if I can see it!
Many thanks
Nick
I think the problem is the chosen format: you are requesting AYUV, which your camera most likely does not support. Try replacing AYUV with I420.
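As a sketch of the fix (assuming the camera can actually produce I420 at the requested size; you can check with v4l2-ctl --list-formats-ext or gst-device-monitor-1.0), the gst-launch line in start_streaming becomes:
gst-launch-1.0 -ve videomixer name=mixer \
    ! x264enc \
    ! h264parse \
    ! rtph264pay config-interval=10 pt=96 \
    ! udpsink host=$UPLINK_HOST port=$PORT \
    v4l2src device=/dev/video0 \
    ! video/x-raw,format=I420,width=$WIDTH,height=$HEIGHT,framerate=$FPS/1 \
    ! mixer.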

How to convert mp4 to mkv H.264 with gst-launch-1.0 on Raspberry Pi

Is my code correct?
I am trying to convert .mp4 to .mkv (H.264) with gst-launch-1.0 on a Raspberry Pi:
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! omxmpeg4videodec ! omxh264enc ! matroskamux ! filesink location=out.mkv
Do you get any errors? Please remember to mention that in future questions, as it helps narrow down the problem.
It is probably not right: .mp4 is usually the extension for the MP4 container format, not for the MPEG-4 video codec, so the file needs to be demuxed first. You need something like:
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! qtdemux ! omxmpeg4videodec ! queue ! videoconvert ! omxh264enc ! matroskamux ! filesink location=out.mkv
This will only convert the video; any audio in the original file will be lost. It might also be more practical to just use uridecodebin for the decoding part:
gst-launch-1.0 -v uridecodebin uri=file:///path/to/sample.mp4 ! queue ! videoconvert ! omxh264enc ! matroskamux ! filesink location=out.mkv
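If you need to keep the audio too, a hedged sketch along the same lines (assuming the source's audio track is AAC, which matroskamux can accept after aacparse; h264parse is added so matroskamux gets the stream format it expects):
gst-launch-1.0 -v filesrc location=sample_mpeg4.mp4 ! qtdemux name=d \
    d. ! queue ! omxmpeg4videodec ! videoconvert ! omxh264enc ! h264parse ! mux. \
    d. ! queue ! aacparse ! mux. \
    matroskamux name=mux ! filesink location=out.mkv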

Gstreamer pipeline multiple sink to one src

I'm looking for an explanation of how to use named elements to mux two inputs into one element, for instance muxing audio and video in one mpegtsmux element:
gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts
The above pipeline produces an element graph in which the audio branch does not connect to mpegtsmux.
How do I modify the command line so that both audio and video end up muxed in mpegtsmux?
Thanks!
I'll try to give the basic idea though I'm not that proficient and could be plain wrong.
A pipeline can consist of several sub-pipelines. If an element is followed not by a pipe (!) but by the start of another element, a new sub-pipeline begins: filesrc location=a.mp4 ! qtdemux name=demp4 demp4. ! something
A named element (typically a demuxer or muxer), or one of its pads like somedemux.audio_00, can be a source and/or a sink in other sub-pipelines: demp4. ! queue ! decodebin ! x264enc ! mux.
Usually a sub-pipeline ends with a named element/muxer, either declared (mpegtsmux name=mux) or referenced by name (mux.); the trailing dot is the reference syntax.
The named muxer can then be piped to a sink in yet another sub-pipeline: mux. ! filesink location=out.ts
If you're using the only audio or video stream from a source, you don't have to specify a pad like muxname.audio_00; muxname. is a shortcut for "a suitable audio/video pad from muxname". Explicit pad references are sketched below.
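As a hedged illustration of explicit pad references (pad names follow gst-launch-0.10's qtdemux convention; fakesink stands in for real branches):
gst-launch-0.10 filesrc location=a.mp4 ! qtdemux name=demp4 \
    demp4.video_00 ! queue ! fakesink \
    demp4.audio_00 ! queue ! fakesink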
The example
That said, I assume that your mp4 file has both audio and video. In this case, you need to demux it into 2 streams first, decode, re-encode and then mux them back.
Indeed, your audio is not connected to mpegtsmux.
If you really need to decode the streams, here's what I would do (though this exact line didn't work for me):
gst-launch-1.0 filesrc location=surround.mp4 ! \
qtdemux name=demp4 \
demp4. ! queue ! decodebin ! audioconvert ! lamemp3enc ! mpegtsmux name=mux \
demp4. ! queue ! decodebin ! x264enc ! mux. \
mux. ! filesink location=out.ts
or let's use decodebin to magically decode both streams:
gst-launch-1.0 filesrc location=surround.mp4 ! \
decodebin name=demp4 \
demp4. ! queue ! audioconvert ! lamemp3enc ! mpegtsmux name=mux \
demp4. ! queue ! x264enc ! mux. \
mux. ! filesink location=out.ts
It is not linked because your launch line doesn't do it. Notice how the lamemp3enc element is not linked downstream.
Update your launch line to:
gst-launch filesrc location=surround.mp4 ! decodebin name=dmux ! queue ! audioconvert ! lamemp3enc ! mux. dmux. ! queue ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts
The only change is " ! mux." after the lamemp3enc to tell it to link to the mpegtsmux.
While you are updating things, please note that you are using GStreamer 0.10, which is years obsolete and unmaintained; please upgrade to the 1.x series to get the latest improvements and bugfixes.
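For reference, a rough gst-launch-1.0 translation of the corrected line, offered as an untested sketch (videoconvert is added because x264enc may not accept decodebin's output format directly):
gst-launch-1.0 filesrc location=surround.mp4 ! decodebin name=dmux \
    dmux. ! queue ! audioconvert ! lamemp3enc ! mux. \
    dmux. ! queue ! videoconvert ! x264enc ! mpegtsmux name=mux ! queue ! filesink location=out.ts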

Raspbian. gstreamer-1.0 flips video when I play it using GPU. Videoflip gives error

I've read https://stackoverflow.com/a/23869705/4073836 and it was very useful to me.
At least I am able to play HD video from my filesystem. But there are problems.
When I use the software decoder
$ gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! avdec_h264 ! eglglessink
I get a normal picture on my screen, but playback is very slow.
Using omxplayer gives me a brilliant picture. It is fast and correct.
And my actual goal
$ gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! omxh264dec ! eglglessink
also plays smoothly, but it flips the picture upside-down! :'(
I've tried omxh263dec and omxmjpegdec with the same result.
decodebin and playbin didn't help either.
I could use videoflip, but it crashes my pipeline as reliably as an AK-74 would:
*** glibc detected *** gst-launch-1.0: free(): invalid pointer: 0x004aaf50 ***
Aborted
My gpu_mem in config.txt is set to 256
$ gst-launch-1.0 --version
gst-launch-1.0 version 1.2.0
GStreamer 1.2.0
http://packages.qa.debian.org/gstreamer1.0
I've installed it via apt-get install.
Thanks in advance!
The video is actually decoded "correctly"; it is the OpenGL coordinate system that is flipped.
I have had success working around this issue by adding an explicit format to the caps:
gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! avdec_h264 ! "video/x-raw, format=(string)I420" ! eglglessink
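If you need the hardware decoder for speed, the same capsfilter trick may be worth trying with omxh264dec as well; this is an untested sketch, since whether the OMX decoder honors the requested format (and whether that avoids the flip) depends on the driver:
gst-launch-1.0 filesrc location=./test720p3kbps.mp4 ! qtdemux ! h264parse ! omxh264dec ! "video/x-raw, format=(string)I420" ! eglglessink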

Combining an audio and video stream using gstreamer [closed]

I am streaming an mp4 (MPEG-4) file from one device to another using GStreamer over an RTP stream. Basically I am splitting the mp4 file into its audio and video streams and sending both to the other device, where they are played. Now I want to save the mp4 file back to disk on the other device, but my problem is that although I can save the audio and video streams separately, they cannot be played individually.
I am confused about how to combine the audio and video RTP streams back into an mp4 file and save it to disk on the other device.
Here are the command lines:
Sender (server)
gst-launch-0.10 -v filesrc location=/home/kuber/Desktop/sample.mp4 \
! qtdemux name=d \
! queue \
! rtpmp4vpay \
! udpsink port=5000 \
d. \
! queue \
! rtpmp4gpay \
! udpsink port=5002
Receiver (client)
gst-launch-0.10 udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)243, config=(string)000001b0f3000001b50ee040c0cf0000010000000120008440fa282fa0f0a21f, payload=(int)96, ssrc=(uint)4291479415, clock-base=(uint)4002140493, seqnum-base=(uint)57180" \
! rtpmp4vdepay \
! ffdec_mpeg4 \
! xvimagesink sync=false \
udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)32000, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, streamtype=(string)5, profile-level-id=(string)2, mode=(string)AAC-hbr, config=(string)1290, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, payload=(int)96, ssrc=(uint)501975200, clock-base=(uint)4248495069, seqnum-base=(uint)37039"\
! rtpmp4gdepay \
! faad \
! alsasink sync=false
You can try the following pipeline to mux the audio and video into a single file. Note that matroskamux produces a Matroska container, so a .mkv extension would be more accurate than the video.mp4 used below:
gst-launch-0.10 udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)243, config=(string)000001b0f3000001b50ee040c0cf0000010000000120008440fa282fa0f0a21f, payload=(int)96, ssrc=(uint)4291479415, clock-base=(uint)4002140493, seqnum-base=(uint)57180" \
! rtpmp4vdepay \
! ffdec_mpeg4 \
! mux. \
udpsrc port=5002 caps="application/x-rtp, media=(string)audio, clock-rate=(int)32000, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, streamtype=(string)5, profile-level-id=(string)2, mode=(string)AAC-hbr, config=(string)1290, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, payload=(int)96, ssrc=(uint)501975200, clock-base=(uint)4248495069, seqnum-base=(uint)37039"\
! rtpmp4gdepay \
! faad \
! mux. \
matroskamux name=mux \
! filesink location=video.mp4
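If you specifically want an MP4 container back rather than Matroska, a hedged alternative (untested sketch) is to skip decoding entirely and mux the depayloaded elementary streams straight into mp4mux; reuse the same udpsrc caps as above, and keep the -e flag so mp4mux can finalize the file on shutdown:
gst-launch-0.10 -e udpsrc port=5000 caps="<same video caps as above>" \
! rtpmp4vdepay \
! mux. \
udpsrc port=5002 caps="<same audio caps as above>" \
! rtpmp4gdepay \
! mux. \
mp4mux name=mux \
! filesink location=video.mp4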