These applications stream video from the client app to their own server. I am interested in knowing what type of protocol they use. I am planning on building a similar application, but I don't know how to go about the video streaming. Once I get the stream to my server I will use OpenCV to do some processing and return the result to the client.
I recommend sending only a minimum of data and doing as much of the processing as possible on the client, since sending the whole video stream is a huge waste of traffic (and probably cannot be done in real time).
I would use a TCP connection to send an intermediate result to the server, which the server can then process further. The design of that communication depends on what you are sending and what you want to do with it.
You can wrap it in XML, for instance, or serialize an object, and so on.
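A minimal sketch of that idea in Python, assuming the intermediate result is something like a set of detected keypoints (the host, port, and payload shape are just placeholders):

import json
import socket
import struct

def send_result(host, port, result):
    # Serialize the intermediate result (e.g. detected keypoints) as JSON
    payload = json.dumps(result).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        # Length-prefix the message so the server knows where it ends
        sock.sendall(struct.pack("!I", len(payload)) + payload)

send_result("example-server.local", 9000, {"frame": 42, "keypoints": [[10, 20], [35, 80]]})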
I need to stream video data to a server from an ESP32-CAM. So I need to set the ESP32-CAM up as a client, but I could not find any example code or other resources on how to stream video data to a server. There are examples that show how to set the ESP32-CAM up as a video streaming server, but not as a client. Is this possible at all?
Or as an alternative solution, would it be possible to connect the esp32-cam server to another server?
I would appreciate it if you could point me to any resources.
Thanks in advance!
I think you mean you want the camera to act like an IP camera and send its stream to a server, so you can then stream the video from that server.
Many servers for IP cameras will be set up to receive RTSP streams - there are example libraries for sending an RTSP stream from your esp32-cam which you can use for this. One popular example: https://github.com/geeksville/Micro-RTSP
As a note, you could also have your server act as a client while your esp32-cam acts as a streaming server. It could then re-stream the video to wherever you want to send it.
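As a rough sketch of that last option, a Python process on the server could pull frames from the camera's stream endpoint and then process or forward them (the URL below is a placeholder for whatever endpoint your camera exposes):

import cv2

# Pull frames from the ESP32-CAM's own streaming endpoint (assumed MJPEG URL)
cap = cv2.VideoCapture("http://192.168.1.50:81/stream")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... process or re-stream the frame here, e.g. convert to grayscale ...
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cap.release()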
I've been working through a bunch of WebRTC examples, and they all require a custom WebSocket server for exchanging the signalling data. OTOH, every WebRTC doc states that you can use anything for signalling, including carrier pigeons.
So I've been wondering, just out of curiosity: why isn't signalling usually done using a boring old REST API (or similar)? It's not as if the setup process has real-time requirements, for which using WebSockets would make sense...
Because you want the setup process to be as quick as possible (usually), and there can be quite a few messages to exchange, especially if you use ICE trickling. Using AJAX you'd have to use repeated polling, which is certainly slower. If that's good enough for you and you see some advantage in doing it that way vs. WebSockets, more power to you. But typically you'd want to forward messages to the other peer as soon as you get them, not whenever the other peer happens to poll the server next. And the most practical option for pushing data from the server to the client is WebSockets.
You could use server-sent events for the server-to-client push and AJAX for the client-to-server sending… but why, when WebSockets already provide duplex communication?
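To make that concrete, a bare-bones signalling relay is only a few lines of Python. This is just a sketch using the third-party websockets package (recent versions accept a single-argument handler); it forwards every message to all other connected peers, with no rooms or authentication:

import asyncio
import websockets

peers = set()

async def relay(websocket):
    peers.add(websocket)
    try:
        async for message in websocket:
            # Forward the SDP/ICE message to every other peer immediately
            for peer in peers:
                if peer is not websocket:
                    await peer.send(message)
    finally:
        peers.discard(websocket)

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())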
Hello, I'm working on a mobile game which needs real-time communication from client to server.
Usually I would implement a TCP socket server and use a private binary protocol for bidirectional communication, but now I am also looking into XMPP servers like ejabberd, which are based on a standard. However, XML is in some ways really redundant and inefficient; for a mobile app that can mean more traffic and memory consumption.
Is it a MUST that XMPP use XML?
Is there any XMPP implementation that uses binary as the low-level data format instead of XML? (Or should I not choose XMPP at all and start with another standard or technology?)
Any strategy to reduce the overhead of sending complex data objects (not big file objects) using XMPP?
XML is required by the XMPP specification, so there are no binary implementations. It does indeed contain much more overhead, but you have to keep in mind the problem XMPP is designed to solve - an active chat connection can be expected to transmit maybe one message per second.
As for the Google Talk API: they use a non-XML protocol for client-to-Google-server connections. When I send a message in the Gmail client, the request body just contains a bunch of POST data:
count=1&ofs=16&req0_type=m&req0_to=my.friend%40gmail.com&req0_id=6A8466CBC59CBB0C_0&req0_text=test&req0_chatstate=active&req0_iconset=classic&req0__sc=c
That part is not XMPP. The server which accepts this request then does the job of creating and sending out the XMPP requests. The XMPP is still XML; they just use a different protocol between the client and the Google server.
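To give a rough feel for the overhead being discussed, here is the same chat message as (roughly) the equivalent XMPP stanza versus the compact form-encoded fields above; the exact stanza shape is an assumption for illustration:

from urllib.parse import urlencode

stanza = (
    '<message to="my.friend@gmail.com" type="chat" id="6A8466CBC59CBB0C_0">'
    '<body>test</body><active xmlns="http://jabber.org/protocol/chatstates"/>'
    '</message>'
)
form = urlencode({
    "req0_to": "my.friend@gmail.com",
    "req0_id": "6A8466CBC59CBB0C_0",
    "req0_text": "test",
    "req0_chatstate": "active",
})
print(len(stanza), len(form))  # the XML stanza comes out noticeably larger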
I am looking for advice/guidance on how to achieve the following:
I have a circuit mounted and connected to an Arduino and I am able to easily retrieve data from it, using Python and the pySerial module. It allows me to determine the value of an analog input over time.
At the moment I am storing that data in a file, with a timestamp and the corresponding value, and I would love to hear opinions and thoughts on how I could 'share' this data to a web server and 'play' it live.
Is it possible to 'stream' the values into the dump file and retrieve data from it at the same time through an AJAX request, or should I look into event-driven web servers like Tornado or Twisted...
I am a bit lost here. Just for the record, I am comfortable with PHP and JavaScript for the final output, I just don't have a clue on how to constantly 'stream' the data I need.
Thanks in advance.
If you don't plan to update the Arduino device too much, then it would make sense to have the Python component continue to collect the data over the serial port and publish it in a way that can easily be consumed by a service which can distribute the information in a more efficient, and probably more flexible, manner.
e.g.
1. Read the data from the serial port and publish messages onto a message queue. The message queue can then be read by any other component and the data can then be distributed to other applications/clients.
2. Make a web call to a server that can process each update and distribute to other applications/clients.
You could use something like Pusher (who I work for) and make a call to the REST API to deliver each message to any connected clients. Whilst this is a good way of distributing your data, you will be publishing your data even if no clients are listening, so I think you are best off getting the data to a component like a web server first.
Assuming you go with 1 or 2, you can then use a realtime web solution to distribute the data to any number of clients. You could use Pusher here, or you could use a self-hosted solution.
So, the data flow as I see it would be:
Arduino -> small Python app -> Queue (or HTTP request to Web server) -> Realtime Web Technology -> Many clients
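A sketch of the "small Python app" piece might look like this, using Redis pub/sub as one possible queue (the serial port name, baud rate, and channel name are assumptions):

import json
import time

import redis
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600)   # adjust to your Arduino's port
queue = redis.Redis(host="localhost", port=6379)

while True:
    # Each serial line is assumed to carry one analog reading
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if line:
        message = {"timestamp": time.time(), "value": line}
        # Anything downstream (web server, realtime push service) can subscribe
        queue.publish("sensor-readings", json.dumps(message))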
I unsuccessfully searched Google for a good definition and understanding of streaming data and its characteristics. My questions are:
What is streaming data?
How can it be detected?
Correction:
"How can it be detected" is not an appropriate question. Instead my question is:
How is it different from buffered data and other data transfer mechanisms?
It depends on what context you mean, but basically streaming data is analogous to asynchronous data. Take the Web as an example. The Web (or HTTP specifically) is (basically) a request-response mechanism, in that a client makes a request and receives a response (typically a Web page of some kind).
HTTP doesn't natively support the ability for servers to push content to clients. There are a number of ways this can be faked, including:
Polling: forcing the client to make repeated requests, typically inconspicuously (as far as the client is concerned);
Long-lived connections: this is where the client makes a normal HTTP request but, instead of returning immediately, the server hangs on to the request until there's something to send back. When the request times out or a response is sent, the client sends another request. In this way you can fake server push (see the sketch after this list);
Plug-ins: Java applets, Flash, Silverlight and others can be used to achieve this.
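As a rough sketch of the long-lived connection idea from the client's side, in Python (the URL and timeout are placeholders):

import requests

def handle(data):
    print("received:", data)

def long_poll(url):
    while True:
        try:
            # The server holds this request open until it has new data
            response = requests.get(url, timeout=60)
            if response.status_code == 200:
                handle(response.json())
        except requests.exceptions.Timeout:
            # Nothing arrived before the timeout; just reconnect and wait again
            continue

long_poll("http://example.com/updates")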
Anything where the server effectively sends data to the client (rather than the client asking for it)--regardless of the mechanism and whether or not the client is polling for that data--can be characterised as streaming data.
With non-HTTP transports (eg vanilla TCP), server push is typically easier (but can still run afoul of firewalls and the like). An example of this might be a share-trading application that receives market information from a provider. That's streaming data.
How do you detect it? Bit of a vague question. I'm not really sure what you're getting at.
When you say streaming data I think of the following, although I'm not sure if this is what you're getting at. To me it's playing a video/audio file while it's downloading. That's what happens when you go to YouTube and watch a video and it starts playing even though you haven't downloaded the whole video yet. But you can see the video downloading - I'm sure you're familiar with the seek bar filling up as the file downloads. It doesn't necessarily have to be a video or audio file but that's the most common.