I want to do live video streaming and encoding. I am using a Leopardboard DM365. I can capture and encode live video into H.264 and then stream it using GStreamer plugins, but how do I capture the RTP packets on Windows? I can view the stream in VLC using an SDP file, but I do not want to just view it in VLC. I need to capture the buffer and then pass it on to my application. How can I do this?
I am using the following GStreamer pipeline on the server side:
gst-launch -v -e v4l2src always-copy=FALSE input-src=composite
chain-ipipe=true ! video/x-raw-yuv,format=(fourcc)NV12, width=640,
height=480 ! queue ! dmaiaccel ! dmaienc_h264 encodingpreset=2
ratecontrol=2 intraframeinterval=23 idrinterval=46
targetbitrate=3000000 ! rtph264pay ! udpsink port=3000
host=192.168.1.102 sync=false enable-last-buffer=false
Thank you,
Maz
If your application knows the exact parameters that it is going to receive, why do you need the SDP file?
The SDP file is needed to get the streaming parameters. The RTSP protocol allows the exchange of SDP because the receiver does not know what the sender will send.
If your application knows what the sender will send, you just need to capture the data and start decoding it. You may want to configure rtph264pay with config-interval=1 so that the SPS/PPS are sent every second, allowing your application to decode the content that is coming in. Feel free to change config-interval to match your intraframeinterval.
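If the receiving application knows the stream parameters in advance, it can bind a UDP socket to the udpsink port and strip the 12-byte RTP header itself before handing the H.264 payload to a decoder. A minimal sketch in Python (the commented socket loop and feed_to_decoder are hypothetical placeholders; header extensions are not handled):

```python
import struct

def parse_rtp(packet: bytes):
    """Parse the fixed RTP header (RFC 3550) and return header fields plus payload."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    version = b0 >> 6
    cc = b0 & 0x0F                      # number of CSRC identifiers
    payload_type = b1 & 0x7F
    marker = bool(b1 & 0x80)
    offset = 12 + 4 * cc                # skip the CSRC list if present
    return {
        "version": version,
        "payload_type": payload_type,
        "marker": marker,
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
        "payload": packet[offset:],
    }

# Hypothetical receive loop for the sender pipeline above (udpsink port=3000):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(("0.0.0.0", 3000))
# while True:
#     data, _ = sock.recvfrom(65535)
#     hdr = parse_rtp(data)
#     feed_to_decoder(hdr["payload"])   # your application's decoder
```

Note that the payloads still carry the H.264 packetization layer (RFC 6184), so they are not yet a raw Annex-B byte stream.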
I need to stream video data to a server from an ESP32-cam. So I need to set the ESP32-cam up as a client, but I could not find any example code or any resource on how to stream video data to a server. There are examples that show how to set the ESP32-cam up as a video streaming server, but not as a client. Is this possible at all?
Or, as an alternative solution, would it be possible to connect the esp32-cam server to another server?
I would appreciate it if you could give any resources.
Thanks in advance!
I think you mean you want the camera to act like an IP camera and send its stream to a server, so you can then stream the video from that server.
Many servers for IP cameras are set up to receive RTSP streams; there are example libraries for sending an RTSP stream from your esp32-cam which you can use for this. One popular example: https://github.com/geeksville/Micro-RTSP
As a note, you could also have your server act as a client while your esp32-cam acts as a streaming server. The server could then re-stream the video to wherever you want to send it.
Recently I had a task to convert a file to MP4 and stream it. I used ffmpeg as the transcoding tool. The MP4 file doesn't get streamed over HTTP (I used a PHP CGI wrapper), but when the output format is changed to MPEG-TS, streaming works fine. A quick search on the net (http://wiki.videolan.org/MPEG) advises using MPEG-TS for streaming. I need more insight into these two formats, their advantages and differences.
Thanks,
Peter
MPEG-TS is designed for live streaming of events over DVB and UDP multicast, but also over HTTP.
It divides the stream into elementary streams, which are segmented into small chunks. System information is sent at regular intervals, so the receiver can start playing the stream at any time.
MPEG-TS isn't good for streaming files, because it doesn't provide info about the duration of the movie or song, or about the points you can seek to.
There are newer protocols that can use MPEG-TS for streaming over HTTP and that put additional metadata in files, fixing the disadvantage mentioned above. These are HTTP Live Streaming and DASH (Dynamic Adaptive Streaming over HTTP).
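The "small chunks" above are concrete: an MPEG-TS stream is a sequence of fixed 188-byte packets, each starting with the sync byte 0x47 and carrying a 13-bit PID. A minimal sketch of how a receiver can walk an aligned buffer, which is exactly what makes joining a live stream mid-flight possible:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def ts_packets(buf: bytes):
    """Yield (pid, payload_unit_start, packet) for each 188-byte TS packet in buf."""
    for i in range(0, len(buf) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = buf[i:i + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            raise ValueError("lost sync at offset %d" % i)
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
        pusi = bool(pkt[1] & 0x40)              # payload_unit_start_indicator
        yield pid, pusi, pkt
```

A real receiver would additionally resynchronize on the sync byte after corruption rather than raising, but the fixed packet grid is the core idea.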
On the other hand, MP4 has that info in a part of the stream called the moov atom.
The point is that the moov must be placed before the media content and downloaded from the server first. That way the video player knows the duration and can seek to any point without downloading the whole file (this is called HTTP pseudo-streaming).
Sadly, ffmpeg places the moov at the end of the file by default. You can fix that with software like Xmoov-PHP.
Here you can find more info about pseudostreaming.
You can reorder your MP4 file, putting the moov section at the start of it using the following FFMPEG command:
ffmpeg -i your.mp4 -vcodec copy -acodec copy -movflags +faststart reordered.mp4
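As a sketch of what "moov before mdat" means on disk, the following walks the top-level MP4 boxes and checks their order. This is a simplified reader for illustration only: it handles 32-bit box sizes and leaves out 64-bit largesize and size-0 (to end of file) boxes:

```python
import struct

def top_level_boxes(data: bytes):
    """Yield (box_type, offset) for each top-level MP4 box (32-bit sizes only)."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack_from(">I", data, pos)
        box_type = data[pos + 4:pos + 8].decode("ascii", "replace")
        yield box_type, pos
        if size < 8:                    # size 1 (64-bit) and size 0 not handled here
            break
        pos += size

def is_faststart(data: bytes) -> bool:
    """True if the moov atom precedes the mdat atom, i.e. the file can pseudo-stream."""
    order = [t for t, _ in top_level_boxes(data)]
    return "moov" in order and "mdat" in order and order.index("moov") < order.index("mdat")
```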
.mp4 is a file extension, while MPEG-TS is used for transport streams. MPEG-TS is a standard used in digital video broadcasting to carry MPEG video and MPEG audio. There are basically two types of TS:
SPTS and MPTS
An SPTS contains a single program only, whereas an MPTS contains multiple programs.
TSReader and VLC media player can be used to play MPEG-TS.
If you want to know more about it, follow:
MPEG TS OR TRANSPORT STREAM MPTS SPTS
The extension for transport stream files is .ts
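The SPTS/MPTS distinction shows up in the Program Association Table carried on PID 0: an SPTS lists one program, an MPTS lists several. A simplified sketch that reads the program loop from a TS packet carrying a complete PAT (single-section PAT assumed, no adaptation field, CRC not verified):

```python
def pat_programs(ts_packet: bytes):
    """Return [(program_number, pmt_pid), ...] from a TS packet carrying a full PAT."""
    assert len(ts_packet) == 188 and ts_packet[0] == 0x47
    pid = ((ts_packet[1] & 0x1F) << 8) | ts_packet[2]
    assert pid == 0, "the PAT is carried on PID 0"
    pos = 4
    pos += 1 + ts_packet[pos]           # skip pointer_field and any bytes before the section
    section_length = ((ts_packet[pos + 1] & 0x0F) << 8) | ts_packet[pos + 2]
    # program loop sits between the 8-byte section header and the 4-byte CRC_32
    loop = ts_packet[pos + 8:pos + 3 + section_length - 4]
    programs = []
    for i in range(0, len(loop), 4):
        prog = (loop[i] << 8) | loop[i + 1]
        pmt_pid = ((loop[i + 2] & 0x1F) << 8) | loop[i + 3]
        if prog != 0:                   # program_number 0 maps to the network PID, not a program
            programs.append((prog, pmt_pid))
    return programs
```

With this, len(pat_programs(pkt)) == 1 means SPTS and more than one means MPTS.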
I'm trying to send a sequence of DTMF tones during a SIP call from linphone, compiled for the iPhone, in order to do some call management at a local exchange I've set up. I see from the code that the individual digits send DTMF (without audio on the line), but I can't seem to send a string of digits manually.
When I try, I just get a single digit sent. I could put in a delay and timer, but that just doesn't seem the way to go about it - and a long string of tones would take a long time to send with the necessary acknowledgements.
I've read that you can send DTMF as part of a SIP INFO message, but can't find the facility in linphone to construct a SIP INFO message.
Has anyone been able to do this or have any suggestions as to what I could try?
For me, changing the audio codec to Speex @ 32000 Hz solved the problem. I'm not sure exactly why it solved it, but beforehand the DTMF signals were not being recognized by the server, whereas now they are.
For reference, I'm using the recent Linphone 3.8.1 build.
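For reference, the SIP INFO method mentioned in the question typically carries each digit in an application/dtmf-relay body (a de facto convention rather than an IETF standard; RFC 4733 in-band events are the alternative that the codec change above affects). A sketch of what those bodies look like, with one INFO request per digit, which is why a long string inherently takes several round trips:

```python
def dtmf_info_body(digit: str, duration_ms: int = 160) -> str:
    """Build the application/dtmf-relay body carried by a SIP INFO request."""
    if digit not in "0123456789*#ABCD":
        raise ValueError("not a DTMF digit: %r" % digit)
    return "Signal=%s\r\nDuration=%d\r\n" % (digit, duration_ms)

def dtmf_sequence_bodies(digits: str, duration_ms: int = 160):
    """Each digit needs its own INFO request, so a string becomes a list of bodies."""
    return [dtmf_info_body(d, duration_ms) for d in digits]
```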
Dear all,
I want to ask some questions about streaming protocols.
1. If I get a video stream over RTSP, the stream is sent with an RTP header; but why do some video streams also get an RTP header when fetched via a CGI command?
2. I can distinguish "RTP over TCP" from "RTP over UDP", but what is the difference between "RTP over TCP" and "RTP over HTTP"?
I'm confused ~
I think you're confusing RTP over HTTP with RTSP over HTTP (and therefore RTP too): see http://developer.apple.com/quicktime/icefloe/dispatch028.html for more info.
Is it possible to recreate the media file from captured Wireshark logs? Is there any doc which explains how this needs to be done?
I am doing RTSP-based streaming from my Darwin test server, and I want to compare the quality of the original and the streamed file.
I'm not familiar with Darwin Streaming Server, but generally RTSP is only for establishing the RTP stream. The RTP packets normally flow in one direction (ignoring the ACK packets for TCP).
For comparing the files I would use a tool suggested by the other users.
But to answer your question for Wireshark:
filter your stream for the destination IP by using 'ip.addr eq '
look for the RTP or UDP packets from the RTSP server
in case you see UDP packets: right-click on a packet -> 'Decode As' and choose 'RTP' in the Transport tab
choose 'Follow UDP Stream' from the context menu
Now you have the whole RTP stream without RTP headers.
But keep in mind that in H.264 you have packetization, which gives you extra bytes in the displayed stream. You cannot compare this with the original file!
Look in chapter 5.4 here for a further description.
Better to use the tools mentioned by the others!
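The extra bytes come from RFC 6184 packetization: each RTP payload begins with a NAL-unit header, single-NAL packets need an Annex-B start code prepended, and FU-A fragments must be reassembled before the stream resembles the original file. A simplified sketch covering just those two cases (STAP-A aggregation and interleaved modes are omitted):

```python
START_CODE = b"\x00\x00\x00\x01"

def depacketize_h264(payloads):
    """Rebuild an Annex-B H.264 byte stream from RTP payloads (single NAL and FU-A only)."""
    out = bytearray()
    for p in payloads:
        nal_type = p[0] & 0x1F
        if 1 <= nal_type <= 23:                     # single NAL unit packet
            out += START_CODE + p
        elif nal_type == 28:                        # FU-A fragment
            fu_header = p[1]
            if fu_header & 0x80:                    # start bit: rebuild the NAL header
                nal_hdr = (p[0] & 0xE0) | (fu_header & 0x1F)
                out += START_CODE + bytes([nal_hdr])
            out += p[2:]                            # append the fragment body
        # STAP-A (type 24) and other aggregation packets omitted in this sketch
    return bytes(out)
```

Even after depacketization, a byte-for-byte comparison against the source file only works if the stream was not transcoded on the way.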
I don't think it is possible the way you hope, as RTSP is a sort of conversation between a client and a server (or servers). To recreate the RTSP session you would have to recreate all of this two-way traffic - it is not really comparable to opening a file in a video player.
I think you will find it easier to use VLC to stream the rtsp:// link and save it to a file. The stream will be transcoded while saving, so if you need a "true" comparison to the original file, you will want to use a lossless video codec for transcoding, and the output file could be very large.
Using Ostinato, you should be able to replay the file and capture it using VLC.