RTP/AVP stream from VLC player: "SDP required" error - mp4

I ran an RTP/AVP streaming test using VLC player on two PCs,
but on the client side an error pops up.
The error message is:
SDP required:
A description in SDP format is required to receive the RTP stream. Note that rtp:// URIs cannot work with dynamic RTP payload format (96).
I hope you can help me solve the problem.
server
vlc player: Stream
add file (mp4)
Destination Settings: RTP Audio/Video Profile
address: destination IP, port: 8554
profile: Video - H.264 + MP3 (MP4)
Stream
client
vlc: Open Network Stream
network address: rtp://#:8554
Play
result
SDP required:
A description in SDP format is required to receive the RTP stream. Note that rtp:// URIs cannot work with dynamic RTP payload format (96).
I want the video to play on the client.
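The error itself hints at the fix: with a dynamic payload type (96), the receiver needs an out-of-band SDP description of the session. One way to provide it (a sketch only; the address, port, and paths are placeholders, and `--sout` option spelling can vary between VLC versions) is to have the streaming side export an SDP file and open that file on the client instead of an rtp:// URL:

```shell
# Server: stream the mp4 over RTP and write the session description to a file.
vlc input.mp4 --sout '#rtp{dst=192.168.0.20,port=8554,sdp=file:///tmp/stream.sdp}'

# Client: copy stream.sdp over, then open the SDP file rather than rtp://...
vlc stream.sdp
```

Alternatively, `sdp=sap://` announces the session on the local network so clients can discover it, and streaming over RTSP sidesteps the problem entirely because the SDP travels in-band.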

Related

I can't stream video from an http path in the Flutter video player

I'm trying to create a new video player app that loads a VAST XML and extracts the data from it.
One item in the data is the video file's URL path; it can be http or https and I can't know which in advance.
I can't just replace http with https, because the domain name might then be wrong.
Is there a way to stream the video file over http?
Thanks.
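The question doesn't say which platform fails, but if it is Android, one common cause (an assumption here, not something stated above) is that Android 9+ blocks cleartext http traffic by default, so the player fails only for http URLs. The manifest opt-in looks like this:

```xml
<!-- AndroidManifest.xml: opt in to cleartext http, which Android 9+ (API 28)
     blocks by default. iOS has a similar App Transport Security exception. -->
<application
    android:usesCleartextTraffic="true">
    <!-- existing activities etc. unchanged -->
</application>
```

A narrower alternative is a network security config that whitelists only specific domains for cleartext.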

How to initialize HLS stream?

My IP camera pushes a stream only when somebody requests it, so there is no TS segment or m3u8 file at the moment a client requests the stream. Is there a way to tell the client to retry the same request a second later, so that the server has time to send a command to the camera, wait for the video, and generate an m3u8 file?
I don't want to hold the connection to the client while waiting for the first m3u8 file because it may cause too many connections which the server can't handle.
Edit:
The server, running Nginx (actually OpenResty), receives audio/video data from an IP camera, transcodes it with the ffmpeg library, and finally publishes it via the HLS protocol.
The problem:
The IP camera does not push the stream all the time; it pushes only when a client requests it. It takes several seconds for the server to receive the media stream and generate the first m3u8 file after the client (an HLS player such as ffplay, or video.js with the videojs-contrib-hls plugin) makes its request. So the client gets a 404 and fails to open the HLS stream.
What I tried:
As said above, holding the connection open until the first m3u8 file exists is not an option because of the connection load.
The client can check whether the stream is ready with custom code, but I want to know if there is a better way first.
Question:
How can the server initialize the HLS stream and avoid this problem?
Is there a way to generate a placeholder m3u8 file that tells standard HLS players to retry automatically?
Or is there a way to configure the HLS player so that it retries automatically when it gets a 404 from the server? Any player that can run in a browser is acceptable, such as flowplayer with flashls, or video.js with videojs-contrib-hls.
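As far as I know, neither flashls nor videojs-contrib-hls retries a 404 on the initial playlist out of the box, so one option is a little glue code on the page that polls the playlist URL until it exists and only then hands it to the player. A sketch (the URL and player id match the video.js snippet in this question; `getStatus` is injectable so the retry loop can be tested without a real server):

```javascript
// Poll `url` until the server stops answering 404, then report readiness.
// `getStatus(url)` must return a Promise of the HTTP status code; it is a
// parameter so the retry loop can be exercised without a real server.
async function waitForPlaylist(url, getStatus, retries = 10, delayMs = 1000) {
  for (let i = 0; i < retries; i++) {
    if (await getStatus(url) === 200) return true; // playlist exists now
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return false; // gave up -- surface an error to the user instead
}

// Page usage (assumes the video.js player shown below in this question):
// waitForPlaylist('http://xxx.xxx.xxx.xxx/camera.m3u8',
//                 u => fetch(u, { method: 'HEAD' }).then(r => r.status))
//   .then(ready => { if (ready) videojs('my_video_1').play(); });
```

The first request still triggers the camera via your control module; the loop just delays playback until the server has produced the first playlist.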
Edit 2:
How does the camera start:
My system has a control module that communicates with the camera and tells it to start or stop pushing media. I'm sure the camera starts only after a client request, so how the camera starts doesn't matter here.
About Nginx in my system:
The camera POSTs raw media (A-law audio and H.264 video) to the OpenResty server (which is based on Nginx), with a timestamp in a custom HTTP header. Each request posts two seconds of media. The server takes the raw media and transcodes it into MPEG-TS segments plus an m3u8 playlist for the HLS protocol. The transcoding is done with the ffmpeg library.
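The question does this with the ffmpeg library in-process; purely as an illustration of the same pipeline (sample rate, file names, and output path are assumptions), the equivalent CLI invocation is roughly:

```shell
# CLI sketch of the transcoding step described above: raw A-law audio plus raw
# H.264 video in, HLS (MPEG-TS segments + m3u8 playlist) out. Segment length
# matches the two-second POST interval.
ffmpeg -f alaw -ar 8000 -i audio.alaw \
       -f h264 -i video.h264 \
       -c:v copy -c:a aac \
       -f hls -hls_time 2 -hls_list_size 5 \
       /var/www/hls/camera.m3u8
```

Copying the video stream (`-c:v copy`) avoids re-encoding; only the A-law audio is transcoded, to AAC, which browsers expect inside HLS.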
Player:
<!DOCTYPE html>
<html>
<head>
<meta charset=utf-8 />
<title>videojs-contrib-hls embed</title>
<link href="https://cdnjs.cloudflare.com/ajax/libs/video.js/5.10.2/alt/video-js-cdn.css" rel="stylesheet">
<script src="https://cdnjs.cloudflare.com/ajax/libs/video.js/5.10.2/video.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/videojs-contrib-hls/3.0.2/videojs-contrib-hls.js"></script>
</head>
<body>
<video id="my_video_1" class="video-js vjs-default-skin" controls preload="auto" width="640" height="268"
data-setup='{}'>
<source src="http://xxx.xxx.xxx.xxx/camera.m3u8" type="application/x-mpegURL">
</video>
</body>
</html>

How to intercept multipart/form-data in Fiddler and access a binary file which is part of the request

I am trying to intercept requests sent to a server from my mobile device. There is a POST request that uploads a payload to the server, and the request includes a file of type .pb, which I can't read in Fiddler. Is there a way to get hold of the file?
It's not clear what "can't read in Fiddler" means.
Use Fiddler's HexView request inspector to inspect the POST body. You can select the bytes of the file upload and choose Save bytes to save the file out to your desktop.

XMPP: Why do we send multiple requests to open a stream for one connection?

I've been following a number of tutorials, the best being Yandex's.
In step 6, it says I need to open the stream again after authorization. Is there any reason why?
Did the stream close automatically? If I just authenticated the stream, why does it need to be closed and reopened? Do I have to start from step 1 again, how often do I need to reopen it, and do I need to authenticate the new stream?
As an XMPP beginner, what's the point of: new stream -> authorize it -> new stream -> not sure what now, maybe authorize again?
As an XMPP beginner, you may just want to pick an XMPP library that does these things right :)
When you're ready to go deeper, you should read the official XMPP specifications: Core, Instant Messaging and Presence, and Address Format.
Initial stream negotiation is described here: https://www.rfc-editor.org/rfc/rfc6120#section-4.3
In short: no. You need to authenticate only when the other party advertises an <auth ... /> element at the beginning of a new stream (in <stream:features>); most of the time this is done once.
The stream features change after authorization, just as they change after the TLS handshake is performed. A change of stream features always means that a "new" stream is going on. That's why the stream is "reopened".
If you look at RFC 6120 § 9.1, you see that a server could first announce only starttls as a stream feature (9.1.1, step 3):
S: <stream:features>
<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'>
<required/>
</starttls>
</stream:features>
Then, after the server and client have performed the TLS negotiation and established TLS, a new stream is initiated by the client.
Now the server sends different stream features (9.1.2, step 8):
S: <stream:features>
<mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'>
<mechanism>SCRAM-SHA-1-PLUS</mechanism>
<mechanism>SCRAM-SHA-1</mechanism>
<mechanism>PLAIN</mechanism>
</mechanisms>
</stream:features>
Notice how the starttls stream feature disappeared; now only the SASL mechanisms are reported as stream features.
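The reopening the question asks about happens between these two feature sets. In RFC 6120 § 9.1.3 the SASL exchange ends with a success element, and the client then sends a fresh stream header over the same TCP connection, mirroring the RFC's example (steps 12-13):

```xml
S: <success xmlns='urn:ietf:params:xml:ns:xmpp-sasl'/>

C: <stream:stream from='juliet@im.example.com'
       to='im.example.com'
       version='1.0'
       xml:lang='en'
       xmlns='jabber:client'
       xmlns:stream='http://etherx.jabber.org/streams'>
```

No <auth/> is sent on this new stream; the server answers it with the next set of features.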
After the client is authenticated, yet another new stream is created and again new stream features are sent (9.1.3, step 14):
<stream:features>
<bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'/>
</stream:features>

spray-client: log the actual HTTP sent / received over the wire

In Apache HTTP Client there's the concept of a "wire log" that can be turned on, printing out the actual HTTP text generated by the client code and sent to the server.
How can I do the same using spray-client? I'm of course able to add a RequestTransformer and a ResponseTransformer and print the objects with .toString, but that doesn't show me what's actually being serialized to HTTP at the TCP level.
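As far as I know, spray-client has no built-in wire log (raising `akka.loglevel` to DEBUG surfaces spray-can's connection-level events, but not the raw bytes), so one practical workaround is a dumping TCP proxy between client and server. The host and ports below are placeholders:

```shell
# Listen on a local port, forward to the real server, and echo every byte of
# both directions to stderr (-v). Point spray-client at localhost:8080 instead
# of the real host to capture the exact HTTP text on the wire.
socat -v TCP-LISTEN:8080,fork,reuseaddr TCP:api.example.com:80
```

This works for plain HTTP; for HTTPS you would need a man-in-the-middle proxy such as mitmproxy, since socat only sees the encrypted stream.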