Is there a benchmark for real-time communication (WebSocket, AJAX polling, etc.)?

So I figure the definition of real-time updates/communication is that updates made by one user are relayed to other users subscribed to the object as soon as they are made.
But this is not instantaneous (data takes a finite time to travel), so I suppose it really means within a very short time.
If you use AJAX polling every 5 seconds, the worst-case time for user A to see something user B did is 5 s + t1 + t2, where t1 is the time taken for the data (the HTTP request) to travel from user B's PC to the server, and t2 is the time taken for the data to travel from the server to user A's PC.
t1 + t2 is the minimum delay that cannot be taken out of the picture (sure, sockets reduce the total time, but those factors are still present, however small).
So with sockets you can have a delay of t1 + t2 + d, where d is the time taken for the server to notice internally that the event happened and to propagate it (this depends on CPU power).
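To make that arithmetic concrete, here is a small sketch (the t1/t2 values are made-up examples):

```typescript
// Delay experienced by user A in the polling case (illustrative values only).
const pollIntervalSec = 5;
const t1Sec = 0.05; // user B's PC -> server
const t2Sec = 0.05; // server -> user A's PC

// Worst case: the event happens just after a poll, so A waits a full interval.
const worstCaseDelaySec = pollIntervalSec + t1Sec + t2Sec;
// On average the event lands halfway through the polling interval.
const averageDelaySec = pollIntervalSec / 2 + t1Sec + t2Sec;

console.log({ worstCaseDelaySec, averageDelaySec }); // worst ~5.1 s, average ~2.6 s
```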
My question is: is there any established benchmark/standard that defines how small d must be for the communication to count as real time?
Or is "real time" just a general term we throw around daily?
This is out of sheer curiosity rather than for any application. I am just curious whether there are any established standards for real-time data.

"is there any established benchmark/standard that defines how small d
should be for the communication to be realtime?"
Your question is a valid one. An application is always defined by a characteristic latency time t. In different contexts, "realtime" can have an entirely different meaning with respect to t.
I would say the accepted "standard" for defining realtime event processing in the context of web applications with human users is that (multiple) users should be able to interact with the application without feeling an impeding delay; the application must "feel responsive". In numbers, this could mean that the overall latency between request and response (in general terms) should be no higher than on the order of ~100 ms. The human response time to real-world events is on this order of magnitude. Online games requiring extremely fast reaction times are absolutely playable with an overall (round-trip) latency somewhere between 10 and 60 ms.
In other contexts, such as in a lab or for controlling machines in industry, realtime event processing sometimes means guaranteed event processing within milliseconds, microseconds or even faster. This is an entirely different situation.
Coming back to web applications, I think modern realtime web services display one or more of the following characteristics:
the user interface is extremely responsive, partly realized by local execution in e.g. JavaScript. Any communication between code running on the user's side (e.g. in the browser) and the remote web application is executed asynchronously (hidden from the user).
the back-end implementation is based on efficient event processing techniques rather than periodic polls.
a persistent TCP/IP connection is used between user(s) and back-end in order to get rid of latencies and overhead due to connection opening/closing (this is where e.g. WebSockets come into play; a rough sketch follows below).
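A minimal sketch of that push model, assuming the Node "ws" package (the port and message shape are arbitrary choices, not from the answer above):

```typescript
import { WebSocketServer } from "ws";

// One persistent connection per client; the server pushes an event the moment
// it happens instead of waiting for the next poll.
const wss = new WebSocketServer({ port: 8080 });

function broadcast(event: object): void {
  const payload = JSON.stringify(event);
  for (const client of wss.clients) {
    client.send(payload); // observed delay is now just t1 + t2 + d
  }
}

// Called from wherever the back-end detects the change (the "d" in the question).
broadcast({ type: "update", madeBy: "userB" });
```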
I hope this answers your question in general terms. If you want to know something more specifically, feel free to write a comment.

Related

CAN Communication: Good Practices

I am preparing to write some code for a master controller that communicates (via CAN bus) with multiple nodes in a product. Each node monitors its own sensors (e.g. voltages, currents, fault flags, etc.) and can be started/stopped by the master controller. The master controller also sends the data to a display.
I am using an STM32H7B3I-EVAL board and the STM32CubeIDE environment to write the code. I am trying to determine some good practices for writing this code, but I am inexperienced in CAN communication. I wanted to get people's opinions on the following high-level questions:
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
What are the pros/cons of using an RXBUFFER vs an RXFIFO?
First of all, you need to invent an application-tier CAN protocol unless you have one already. This isn't entirely trivial and requires some experience with CAN. Here you first of all need to take bus load into account, which in turn depends on the number of nodes and the amount of data allowed, as well as the baudrate. How to design this also depends on whether it's a control system (hard realtime, milliseconds) or just some industrial sensor network (hundreds of milliseconds or seconds).
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Probably not. Regarding RX, depending on what CAN controller you have, there will at least be some manner of RX FIFO. Modern controllers also support dedicated "mailbox" slots for individual messages, which is more powerful and easier to work with. Your only requirement for never losing data is that you empty the FIFO at least once every (FIFO size multiplied by the time it takes to send the shortest possible frame, DLC=0). Unless your program is very busy, this is usually not a tough realtime deadline to meet.
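To put a rough number on that deadline, a back-of-the-envelope sketch (the bit rate, frame length, and FIFO depth below are assumptions, not from the question):

```typescript
// Classic CAN at 500 kbit/s, roughly 50 bits on the wire for a DLC=0 frame
// (including stuffing and interframe space), and a 3-deep hardware RX FIFO.
const bitRate = 500_000;   // bits per second
const minFrameBits = 50;   // approximate shortest frame on the wire
const fifoDepth = 3;       // messages the FIFO can hold

const minFrameTimeSec = minFrameBits / bitRate;        // ~100 microseconds
const drainDeadlineSec = fifoDepth * minFrameTimeSec;  // ~300 microseconds

console.log(`Empty the RX FIFO at least every ${(drainDeadlineSec * 1e6).toFixed(0)} us`);
```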
Regarding TX, again it depends on the controller, but here it is usually sufficient to check that the previously sent message has actually gone out before attempting a new one. And unless you are really competing hard for bus access during a period of heavy bus load, this shouldn't be an issue. Sensible CAN application protocols have some simple scheduling requirements such as "this gets sent x ms after that". Re-sending messages lost due to errors is handled by the controller hardware.
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
TX and RX buffers work independently of each other. Also, what you are describing doesn't really make sense, since CAN is half duplex and one node's TX is another node's RX.
What are the pros/cons in using an RXBUFFER vs RXFIFO?
Those terms are pretty much synonymous. I suppose they may have some special meaning for a specific CAN controller, but you don't mention one. (STM32 has several: one old and really bad "bxCAN", and a newer one which I don't know much about. And some stubbornly insist on the horrible solution of using external controllers, particularly the Arduino kids.)
Anyway, it is better to have neither; using a CAN controller with mailboxes is the best option. Unless the number of expected identifiers is larger than the number of mailbox slots you have - in that case you have to direct low-priority messages to an RX FIFO and use mailbox slots for high-priority messages.

What are the possible use cases of the OrientDb Live Query feature?

I apologise if the question is naive. I wanted to understand what could be a few possible use cases of the live query feature.
Let's say my database state changes, but it doesn't change every minute (or even every hour). If I execute a live query against my database/class/cluster, I'm not really expecting the callback to be called anytime soon. But, hey, I would still want to be notified when there's a state change.
My need with OrientDB is more along the lines of Elasticsearch's percolator bundled with a publish-subscribe system.
Is live query meant to cater to such use cases too? Or is my understanding of live query very limited? What could be a few possible use cases for the live query feature?
Thanks!
Whether or not Live Queries will be appropriate for your use case depends on a few things. There are several reasons why live queries make sense. A few questions to ask are:
How frequently does the data change?
How soon after the data changes do you need to know about it?
How many different groups of data (e.g. classes, clusters) do you need to deal with?
How many clients are connected to the server?
If the data does not change very often, or if you can wait a set period of time before an update, or if you don't have many clients (hitting the DB directly), or if you only have one thing feeding the database, then you might want to just do polling. There is a balance between holding a connection open that you send a message on very infrequently (live queries) and polling too often.
For example: it's possible that you have an application server (Tomcat, Node, etc.) and that your clients connect to it via web sockets. Now let's say your app server makes one (or a few pooled) live queries to the database, and the database has an update. The update goes from the database to the app server (e.g. Node), and Node is then responsible for fanning out that message across 100 web sockets (one for each connected client). In this case, the fact that Node is connected to the database in a persistent way with a live query open is not that big of a deal.
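A rough sketch of that fan-out using the Node "ws" package; the subscribeToLiveQuery function is a hypothetical placeholder for whatever your OrientDB driver exposes:

```typescript
import { WebSocketServer, WebSocket } from "ws";

// Hypothetical placeholder: wire this up to your OrientDB driver's live query API.
function subscribeToLiveQuery(query: string, onChange: (record: unknown) => void): void {
  // driver-specific; intentionally left as a stub here
}

const wss = new WebSocketServer({ port: 8080 });

// One live query held open by the app server...
subscribeToLiveQuery("LIVE SELECT FROM SomeClass", (record) => {
  const payload = JSON.stringify(record);
  // ...fanned out to every connected browser client.
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload);
    }
  }
});
```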
The question is: if you have thousands of clients connected, do they all need an immediate update? If so, are you planning on having them poll at a short interval? If so, you probably could benefit from a live query. Lots of clients polling at a short interval will generate a lot of unnecessary traffic and queries.
Unfortunately, at the end of the day, the answer is: it depends. You probably need to prototype and then instrument under load to see what your tradeoffs are. But in principle, it is less about how frequently updates come, and more about how often you would have clients poll and how many clients you have. If the answer is "short intervals and a lot of clients", give live queries a try.

How can I compare the time between an iPhone and a (web) server?

I have an application made up of a server which sends occasional messages to iPhones. The latency between the two devices is important to the problem domain - if it takes less than a second for the message to arrive, everything's fine; if it takes more than 5 seconds, there's almost certainly a problem. The server-side messages are time-stamped with the server time.
Using the cellular data connection, we see occasional delays, but we can't quantify them, because there's no guarantee that the iPhone's clock is synchronized with the server; on our test phones, we see different times for different carriers.
Is there a simple way to synchronize time between the iPhone and the server? I've looked at (S)NTP, which seems to be the right way to go.
Any alternatives? We only need to be accurate to within seconds, not milliseconds.
I'm not sure what the exact situation is, so this may be a non-solution, but:
Presuming that you want to figure out the latency between the phone and the server (and only this) at set intervals (decided by the server), and presuming also that the error checking is done server-side, then instead of synchronizing clocks you might go with a "ping" approach (sketched below):
Server pings client iPhone, and starts a stopwatch.
Client immediately pings server.
As soon as client ping reaches server, server stops the stopwatch and checks the time.
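A minimal server-side sketch of that stopwatch idea, using a WebSocket purely for illustration (the actual transport in your app may differ):

```typescript
import { WebSocketServer } from "ws";

// The server starts a stopwatch, "pings" the client, and stops the stopwatch
// when the client's reply comes back - only the server's clock is involved.
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  const sentAt = Date.now(); // stopwatch starts
  socket.send("ping");

  socket.on("message", (data) => {
    if (data.toString() === "pong") {
      const roundTripMs = Date.now() - sentAt;  // stopwatch stops
      const oneWayEstimateMs = roundTripMs / 2; // rough; assumes a symmetric path
      console.log(`RTT ${roundTripMs} ms, ~${oneWayEstimateMs} ms one way`);
    }
  });
});
```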
If I misunderstood your problem, apologies.
Well, a somewhat simplistic solution is that you could have the phones tell the server what time they have at various times and keep a database table of the deltas. Then adjust your reported timestamps to the server's time +/- the delta. iPhones are synced to the carrier's time server, to the best of my knowledge. The other possibility is to have both the phone and the server query a common time source on a daily basis. It's unlikely that the time would vary much over a single day.
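A quick sketch of that delta idea (the in-memory map stands in for the database table mentioned above):

```typescript
// deviceId -> (device clock - server clock), in milliseconds
const clockDeltasMs = new Map<string, number>();

// Call this whenever a phone reports its current local time.
function recordDeviceTime(deviceId: string, deviceTimeMs: number): void {
  clockDeltasMs.set(deviceId, deviceTimeMs - Date.now());
}

// Convert a timestamp reported by the phone into the server's time frame.
function toServerTime(deviceId: string, reportedDeviceTimeMs: number): number {
  const delta = clockDeltasMs.get(deviceId) ?? 0;
  return reportedDeviceTimeMs - delta;
}
```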

How can I measure the breakdown of network time spent in iOS?

Uploads from my app are too slow, and I'd like to gather some real data as to where the time is being spent.
By way of example, here are a few stages a request goes through:
Initial radio connection (significant source of latency in EDGE)
DNS lookup (if not cached)
SSL/TLS handshake.
HTTP request upload, including data.
Server processing time.
HTTP response download.
I can address most of these (e.g. by powering up the radio earlier via a dummy request, establishing a dummy HTTP 1.1 connection, etc.), but I'd like to know which ones are actually contributing to network slowness, on actual devices, with my actual data, using actual cell towers.
If I were using WiFi, I could track a bunch of these with Wireshark and some synchronized clocks, but I need cellular data.
Is there any good way to get this detailed breakdown, short of having to (gak!) use very low level socket functions to reproduce my vanilla http request?
OK, the method I would use is not easy, but it does work. Maybe you've already tried this, but bear with me.
I get a time-stamped log of the sending time of each message, the time each message is received, and the time it is acted upon. If this involves multiple processes or threads, I have each one generate a log, and then merge them into a common timeline.
Then I plot out the timeline. (A tool would be nice, but I did it by hand.)
What I look for is things like 1) messages re-transmitted due to timeouts, 2) delays between the time a message is received and the time it's acted upon.
Usually, this identifies problems that I can fix in the code I can control. This improves things, but then I do it all over again, because chances are pretty good that I missed something the last time.
The result was that a system of asynchronous message-passing can be made to run quite fast, once preventable sources of delay have been eliminated.
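A small sketch of that merge-and-inspect step (the log entries shown are illustrative only):

```typescript
interface LogEntry {
  timestampMs: number; // when the entry was recorded
  source: string;      // which process or thread produced it
  event: string;       // e.g. "sent msg 42", "received msg 42", "handled msg 42"
}

// Merge per-process logs into a single timeline, ordered by time.
function mergeTimelines(...logs: LogEntry[][]): LogEntry[] {
  return logs.flat().sort((a, b) => a.timestampMs - b.timestampMs);
}

// Illustrative entries only.
const clientLog: LogEntry[] = [{ timestampMs: 1000, source: "client", event: "sent msg 42" }];
const serverLog: LogEntry[] = [
  { timestampMs: 1180, source: "server", event: "received msg 42" },
  { timestampMs: 1900, source: "server", event: "handled msg 42" }, // suspicious gap
];

for (const entry of mergeTimelines(clientLog, serverLog)) {
  console.log(`${entry.timestampMs}  ${entry.source}  ${entry.event}`);
}
```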
There is a tendency in posting questions about performance to look for magic fixes to improve the situation. But, the real magic fix is to refine your diagnostic technique so it tells you what to fix, because it will be different from anyone else's.
An easy solution would be to open a long-polling connection with the server once the application starts (you can choose beforehand when this connection needs to be established and when to disconnect). It is a bit of a hack, but it avoids all the packet sniffing, given the limited API exposure iOS provides.

I'm writing an application that implements a questionnaire. Does it qualify as a real-time application?

Keeping it simple, I have a server and a client. The server sends questions one by one, and the client sends the answers as soon as they are given.
So, would you say this application is real time?
Based on this quote from Wikipedia, which summarizes my understanding of what a real-time application is:
"A system is said to be real-time if the total correctness of an operation depends not
only upon its logical correctness, but also upon the time in which it is performed. The classical conception is that in a hard real-time or immediate real-time system, the completion of an operation after its deadline is considered useless - ultimately, this may cause a critical failure of the complete system. A soft real-time system on the other hand will tolerate such lateness, and may respond with decreased service quality (e.g., omitting frames while displaying a video)."
I would say no, it is not real-time.
No. Real-time systems are ones where the OS/application has to respond to the environment within a known period - for example, an embedded flight control system on a fighter jet.
Wikipedia has a fairly good article on Real-time computing.
If you are using a protocol like TCP/IP for the communication, it isn't a real-time system, because such communication links are not by nature deterministic in their response time. The only sure thing is that the message will arrive - when? Who knows...