CAN Communication: Good Practices (STM32)

I am preparing to write some code for a master controller that communicates (via CANbus) with multiple nodes in a product. Each node monitors its own sensors (e.g. voltages, currents, fault flags) and can be started/stopped by the master controller. The master controller also sends the data to a display.
I am using an STM32H7B3I-EVAL board and using the STM32CubeIDE environment to write the code. I am trying to determine some good practices for writing this code, but I am inexperienced in CAN communication. I wanted to get people's opinions on the following high-level questions:
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
What are the pros/cons in using an RXBUFFER vs RXFIFO?

First of all, you need to invent an application-tier CAN protocol, unless you have one already. This isn't entirely trivial and requires some experience with CAN. You need to take bus load into account, which in turn depends on the number of nodes and the amount of data allowed, as well as the baudrate. How to design this also depends on whether it's a control system (hard realtime, milliseconds) or just some industrial sensor network (hundreds of milliseconds or seconds).
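To make "bus load" concrete, here is a small sketch (plain C, runnable on a desktop) that estimates worst-case load for classic CAN with standard 11-bit IDs, using the commonly cited worst-case frame length including bit stuffing. The node count, send period, and baudrate are made-up example figures, not recommendations:

```c
/* Rough worst-case bus-load estimate for classic CAN, standard (11-bit) IDs.
 * Uses the commonly cited worst case (44 overhead bits + worst-case stuff
 * bits + 3-bit interframe space), so treat the result as an upper bound. */
#include <stdio.h>

/* Worst-case frame length in bits for a classic CAN frame with `dlc` data bytes. */
static unsigned frame_bits_worst_case(unsigned dlc) {
    unsigned data_bits = 8u * dlc;
    unsigned stuffed   = (34u + data_bits - 1u) / 4u; /* worst-case stuff bits */
    return 44u + data_bits + stuffed + 3u;            /* + interframe space */
}

int main(void) {
    /* Hypothetical network: 10 nodes, each sending one 8-byte frame every
     * 100 ms on a 500 kbit/s bus. */
    const double baud = 500000.0;
    const double frames_per_sec = 10 * (1000.0 / 100.0);
    double frame_time = frame_bits_worst_case(8) / baud;
    double load = frames_per_sec * frame_time;
    printf("worst-case frame time: %.1f us, bus load: %.1f %%\n",
           frame_time * 1e6, load * 100.0);
    return 0;
}
```

With those example numbers the load comes out under 3%, which is comfortably low; as a rule of thumb you want headroom well below 100% so low-priority messages still win arbitration in bounded time.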
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Probably not. Regarding RX, depending on what CAN controller you have, there will at least be some manner of RX FIFO. Modern controllers also support dedicated "mailbox" slots for individual messages, which is more powerful and easier to work with. Your only requirement for never losing data is that you empty the FIFO faster than it can fill: the deadline is the FIFO depth times the time it takes to send the minimum-size frame (DLC=0). Unless your program is very busy, this is usually not a tough realtime deadline to meet.
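Since you mention an STM32H7B3 board (FDCAN peripheral), here is a minimal STM32 HAL sketch of the interrupt-driven RX approach rather than a polling while-loop. `hfdcan1` and the message handling are placeholders; it assumes CubeMX-style init/start code exists elsewhere:

```c
/* Minimal STM32 HAL (FDCAN) sketch: drain RX FIFO 0 from the interrupt
 * rather than polling in the main loop. Assumes hfdcan1 has already been
 * initialized and started elsewhere (e.g. by CubeMX-generated code). */
#include "stm32h7xx_hal.h"

extern FDCAN_HandleTypeDef hfdcan1;

void can_rx_setup(void) {
    /* Fire an interrupt on every new message in RX FIFO 0. */
    HAL_FDCAN_ActivateNotification(&hfdcan1, FDCAN_IT_RX_FIFO0_NEW_MESSAGE, 0);
}

void HAL_FDCAN_RxFifo0Callback(FDCAN_HandleTypeDef *hfdcan, uint32_t RxFifo0ITs) {
    if (RxFifo0ITs & FDCAN_IT_RX_FIFO0_NEW_MESSAGE) {
        FDCAN_RxHeaderTypeDef hdr;
        uint8_t data[8];
        /* Empty the FIFO completely so the deadline above is easy to meet. */
        while (HAL_FDCAN_GetRxFifoFillLevel(hfdcan, FDCAN_RX_FIFO0) > 0) {
            if (HAL_FDCAN_GetRxMessage(hfdcan, FDCAN_RX_FIFO0, &hdr, data) != HAL_OK)
                break;
            /* handle_message(&hdr, data);  -- application-specific */
        }
    }
}
```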
Regarding TX, again it depends on the controller, but here it is usually sufficient to check that the previously queued message has been sent before attempting a new one. And unless you are competing hard for bus access during a time of heavy bus load, a message shouldn't sit pending for long. Sensible CAN application protocols have some simple scheduling requirements, such as "this gets sent x ms after that". Re-sending messages lost due to errors is handled by the controller hardware.
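A matching TX sketch under the same assumptions: check the free level of the hardware TX FIFO/queue instead of blocking in a loop. The identifier and payload layout are hypothetical:

```c
/* Only queue a new frame if the hardware TX FIFO/queue has room,
 * instead of busy-waiting. The message ID is a made-up example. */
#include "stm32h7xx_hal.h"

extern FDCAN_HandleTypeDef hfdcan1;

HAL_StatusTypeDef can_send_status(const uint8_t payload[8]) {
    if (HAL_FDCAN_GetTxFifoFreeLevel(&hfdcan1) == 0)
        return HAL_BUSY;                    /* try again on the next tick */

    FDCAN_TxHeaderTypeDef hdr = {0};
    hdr.Identifier          = 0x123;        /* hypothetical status message ID */
    hdr.IdType              = FDCAN_STANDARD_ID;
    hdr.TxFrameType         = FDCAN_DATA_FRAME;
    hdr.DataLength          = FDCAN_DLC_BYTES_8;
    hdr.ErrorStateIndicator = FDCAN_ESI_ACTIVE;
    hdr.BitRateSwitch       = FDCAN_BRS_OFF;
    hdr.FDFormat            = FDCAN_CLASSIC_CAN;
    hdr.TxEventFifoControl  = FDCAN_NO_TX_EVENTS;

    return HAL_FDCAN_AddMessageToTxFifoQ(&hfdcan1, &hdr, (uint8_t *)payload);
}
```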
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
TX and RX buffers work independently of each other. Also, what you are saying doesn't really make sense, since CAN is half-duplex and one node's TX is another node's RX.
What are the pros/cons in using an RXBUFFER vs RXFIFO?
Those terms are often used interchangeably, but their exact meaning depends on the specific CAN controller. STM32s have several: the old and really bad "bxCAN", and the newer FDCAN found on your H7, where an "RX buffer" is a dedicated per-identifier buffer (essentially a mailbox slot) while the RX FIFO is a shared queue. (And some stubbornly insist on the horrible solution of using external controllers, particularly the Arduino kids.)
Anyway, mailboxes (dedicated slots) are the best option, unless the number of expected identifiers is greater than the number of mailbox slots you have; in that case, direct the low-priority messages to an RX FIFO and use the mailbox slots for the high-priority messages.
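As a sketch of that mailbox approach on the FDCAN peripheral (same HAL setup assumed as above; the ID and buffer index are arbitrary examples):

```c
/* Route one high-priority ID to a dedicated RX buffer (the FDCAN "mailbox"
 * equivalent), while other filters can still point at the shared FIFO. */
#include "stm32h7xx_hal.h"

extern FDCAN_HandleTypeDef hfdcan1;

void can_filter_setup(void) {
    FDCAN_FilterTypeDef f = {0};
    f.IdType        = FDCAN_STANDARD_ID;
    f.FilterIndex   = 0;
    f.FilterConfig  = FDCAN_FILTER_TO_RXBUFFER; /* dedicated buffer, not FIFO */
    f.FilterID1     = 0x100;                    /* hypothetical high-priority ID */
    f.RxBufferIndex = 0;                        /* store it in RX buffer 0 */
    HAL_FDCAN_ConfigFilter(&hfdcan1, &f);
}
```

You can then check that slot with HAL_FDCAN_IsRxBufferMessageAvailable() and read it with HAL_FDCAN_GetRxMessage() using the buffer location, so the high-priority message is never queued behind low-priority traffic.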

Related

Sending custom data to socket using eBPF

I'm trying, finally, to understand eBPF and maybe use it in an upcoming project.
For the sake of simplicity, I started by reading the bcc documentation.
In my project I'll need to send some data over network upon some kernel function calls.
Can that be done without sending the data to userspace first?
I see that I can redirect skbs from one socket to another etc., and I see that I can submit custom data to user space. Is there a way to get the best of both worlds?
EDIT: I'm trying to log some file system events to another server that'll collect this data from multiple machines. Those machines can be fairly busy in some situations. It should be real time and with low latency.
I'd love to avoid going through userspace, to prevent copying the data back and forth and to reduce software overhead as much as possible.
Thank you all!
It seems this question can be summarized as: is it possible to send data over the network from a BPF tracing program (kprobes, tracepoints, etc.)?
The answer to that question is no. As far as I know, there is currently no way to craft and send packets over the network from BPF programs. You can resend a received packet to the network with some helpers, but those are only available to networking BPF programs.
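The usual workaround is therefore to hand the event to user space as cheaply as possible (a perf buffer or ring buffer) and send it over the network from there. A minimal bcc-style sketch of the BPF (C) side; the event layout and the traced function are illustrative only:

```c
// bcc-style BPF program (C side): push a small event to user space through
// a perf buffer; a user-space daemon then forwards it over the network.
// The event struct and the probed function are made-up examples.
#include <uapi/linux/ptrace.h>

struct event_t {
    u32 pid;
    u64 ts_ns;
};

BPF_PERF_OUTPUT(events);

int trace_vfs_write(struct pt_regs *ctx) {
    struct event_t ev = {};
    ev.pid   = bpf_get_current_pid_tgid() >> 32;
    ev.ts_ns = bpf_ktime_get_ns();
    events.perf_submit(ctx, &ev, sizeof(ev));  // copied to user space, not sent
    return 0;
}
```

The user-space side (in bcc, typically Python) attaches the kprobe and polls the perf buffer; one copy to user space is hard to avoid with tracing programs, but batching events keeps the overhead small.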

Is there a benchmark for real-time communication (WebSocket, AJAX polling, etc.)?

So I figure the definition of real-time updates/communication is that updates made by one user are relayed to other users subscribed to the object as soon as they are made.
But this is not instantaneous (data takes finite time to travel), so I suppose that means a very short time.
If you use AJAX polling every 5 seconds, the time taken for user A to see something user B did is up to 5 s + t1 + t2, where t1 is the time taken for the data (the HTTP request) to travel from user B's PC to the server, and t2 is the time taken for the data to travel from the server to user A's PC.
t1 + t2 is the minimum delay that cannot be taken out of the picture (sure, sockets reduce this time, but those factors are still present, however small).
So with sockets you can have a delay of t1 + t2 + d, where d is the time taken for the server to notice that the event happened and to propagate it internally (depends on CPU power).
My question is: is there any established benchmark/standard that defines how small d should be for the communication to be realtime.
Or is realtime just a general term we throw around daily?
This is out of sheer curiosity rather than any application. I am just curious if there are any established standards for realtime data.
"is there any established benchmark/standard that defines how small d
should be for the communication to be realtime?"
Your question is a valid one. An application is always defined by a characteristic latency time t. In different contexts, "realtime" can have an entirely different meaning with respect to t.
I would say the accepted "standard" for defining realtime event processing in the context of applications involving the web and human users is that (multiple) users should be able to interact with the application without "feeling" an impeding delay. The application must "feel" responsive. In numbers, this could mean that the overall latency between request and response (in general terms) should be no higher than on the order of ~100 ms. The human response time to real-world events is on this order of magnitude. Online games requiring extremely fast reaction times are absolutely playable with an overall (round-trip) latency somewhere between 10 and 60 ms.
In other contexts, such as in a lab or for controlling machines in industry, realtime event processing sometimes means guaranteed event processing within milliseconds, microseconds or even faster. This is an entirely different situation.
Coming back to web applications, I think modern realtime web services display one or multiple of the following characteristics:
the user interface is extremely responsive, partly realized by local execution in e.g. JavaScript. Any communication between code running on the user's side (e.g. in the browser) and the remote web application is executed asynchronously (hidden from the user).
the back-end implementation is based on efficient event processing techniques rather than periodic polls.
a persistent TCP/IP connection is used between user(s) and back-end in order to get rid of latencies and overhead due to connection opening/closing (this is where e.g. WebSockets come into play)
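To illustrate that last point, here is a minimal sketch (plain C sockets rather than WebSockets) measuring round-trip latency over one persistent TCP connection; the loopback address, port, and a peer that echoes data back are all assumptions:

```c
/* Minimal round-trip latency probe over one persistent TCP connection.
 * Assumes a hypothetical echo server on 127.0.0.1:7777; it illustrates
 * reusing a single connection to avoid repeated setup overhead. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(7777);                 /* hypothetical echo port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }
    for (int i = 0; i < 10; i++) {              /* 10 pings, same connection */
        char buf[8] = "ping";
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (write(fd, buf, 4) != 4)
            break;
        if (read(fd, buf, sizeof buf) <= 0)     /* wait for the echo */
            break;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("round trip %d: %.3f ms\n", i, ms);
    }
    close(fd);
    return 0;
}
```

After the first iteration, every round trip reflects only t1 + t2 + d, with no connection-establishment cost, which is exactly the benefit persistent connections give you.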
I hope this answers your question in general terms. If you want to know something more specifically, feel free to write a comment.

design high volume MSMQ

We have many communication servers sending data packets. We would like to store the data packets coming from these server programs in MSMQ until an updater processes them. Data loss is a concern: we would like not to lose any data packet coming from these server programs, and we want an efficient, performant solution.
What will be the best design approach?
Well, there are two basic things you need to do to get started. First, you'll want to modify the default installation to move the storage location to a drive that is mirrored and/or is not the same as the one that the operating system boots from on that server. Also you'll want to ensure there is enough space there to hold messages as they are queued, depending on the volume you're contemplating. This article covers that.
Second, you'll want to use transactions and journaling to ensure reliability. This is both a programming and infrastructure issue, so you can start by looking at this article, and then following up with a general guide on how to program against MSMQ correctly. This for example is a good starting point if you've never used MSMQ, although it's fairly basic. If you're going to use MSMQ as a binding/transport for WCF then you have the plumbing part pretty much covered; it's just a matter of configuring your services to handle the volume and traffic you think you're going to see.
We have many communication servers sending data packets.
When storing 'data packets', I would recommend writing [Serializable] .NET objects and sending them over WCF, mainly because WCF can read/write them transparently to MSMQ. This will be easier to work with, but if your data packets are, say, TCP/IP or binary packets, you will need to turn on 'Ordering' to ensure they go into the queue in the exact order they were placed.
MSMQ also has sessions, so if you want to group items together this is possible. WCF does not make this guarantee. You will need to write custom code for this, but it is only a case of assigning a unique ID to each message in a particular session.
Data loss has been a concern and we would like to not lose any data packet coming from these server programs
MSMQ can persist the data to disk, so if a server goes down, its queue is preserved. It can also hold the queue in memory, which is more efficient, but then a crash or restart will lose the queue contents.
and want an efficient (good performance) solution
MSMQ is fairly performant. The persistence to disk has a small overhead, but only due to the disk write. If performance includes multi-threading, MSMQ does not offer this feature, as the queue is sequential and so must be processed in order. But this is typical of queue technologies.
MSMQ also has a 4 MB maximum message size, so keep in mind what you want to send across the network.
The only other thing is that MSMQ is not massively scalable. Its primary goal is guaranteed delivery. If you post millions of packets, they will get to their destination, but MSMQ has a finite ability to push messages to other machines. It operates a ThreadPool-like system, so it will not scale if that is also a requirement.
I have also added info to the #msmq-wcf wiki with a basic example of writing data.

How can I measure the breakdown of network time spent in iOS?

Uploads from my app are too slow, and I'd like to gather some real data as to where the time is being spent.
By way of example, here are a few stages a request goes through:
Initial radio connection (significant source of latency in EDGE)
DNS lookup (if not cached)
SSL/TLS handshake.
HTTP request upload, including data.
Server processing time.
HTTP response download.
I can address most of these (e.g. by powering up the radio earlier via a dummy request, establishing a dummy HTTP 1.1 connection, etc.), but I'd like to know which ones are actually contributing to network slowness, on actual devices, with my actual data, using actual cell towers.
If I were using WiFi, I could track a bunch of these with Wireshark and some synchronized clocks, but I need cellular data.
Is there any good way to get this detailed breakdown, short of having to (gak!) use very low level socket functions to reproduce my vanilla http request?
OK, the method I would use is not easy, but it does work. Maybe you've already tried this, but bear with me.
I get a time-stamped log of the sending time of each message, the time each message is received, and the time it is acted upon. If this involves multiple processes or threads, I have each one generate a log, and then merge them into a common timeline.
Then I plot out the timeline. (A tool would be nice, but I did it by hand.)
What I look for are things like 1) messages re-transmitted due to timeouts, and 2) delays between the time a message is received and the time it's acted upon.
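A minimal sketch of that time-stamped logging (plain C; `log_event` and the stage names are hypothetical helpers you would call from your own networking code):

```c
/* Minimal sketch of the time-stamped logging described above: each process
 * appends one line per event, and the logs are merged offline by timestamp. */
#include <stdio.h>
#include <time.h>

/* Hypothetical helper: log one event with a monotonic timestamp. */
static void log_event(FILE *log, const char *stage, int msg_id) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    /* One line per event: timestamp, stage name, message id. */
    fprintf(log, "%ld.%09ld %s %d\n",
            (long)ts.tv_sec, ts.tv_nsec, stage, msg_id);
}

int main(void) {
    FILE *log = fopen("events.log", "a");
    if (!log) return 1;
    log_event(log, "send", 42);   /* stamp just before the request goes out */
    /* ... perform the upload ... */
    log_event(log, "recv", 42);   /* stamp when the response arrives */
    fclose(log);
    return 0;
}
```

Sorting the merged lines gives you the timeline; gaps between adjacent stages for the same message id are your per-stage breakdown.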
Usually, this identifies problems that I can fix in the code I can control. This improves things, but then I do it all over again, because chances are pretty good that I missed something the last time.
The result is that a system of asynchronous message-passing can be made to run quite fast, once the preventable sources of delay have been eliminated.
There is a tendency in posting questions about performance to look for magic fixes to improve the situation. But, the real magic fix is to refine your diagnostic technique so it tells you what to fix, because it will be different from anyone else's.
An easy solution would be to open a long-polling connection with the server once the application is launched (you can choose beforehand when this connection needs to be established, and when to disconnect). It is something of a hack, but it lets you avoid sniffing packets, given the limited API exposure iOS provides.

I'm writing an application that implements a questionnaire. Does it qualify as a real-time application?

Keeping it simple, I have a server and a client. The server sends questions one by one, and the client sends the answers as soon as they are given.
So, would you say this application is real time?
Based on this quote from Wikipedia, which summarizes my understanding of what a real-time application is:
"A system is said to be real-time if the total correctness of an operation depends not
only upon its logical correctness, but also upon the time in which it is performed. The classical conception is that in a hard real-time or immediate real-time system, the completion of an operation after its deadline is considered useless - ultimately, this may cause a critical failure of the complete system. A soft real-time system on the other hand will tolerate such lateness, and may respond with decreased service quality (e.g., omitting frames while displaying a video)."
I would say no, it is not real-time.
No. Real-time systems are ones where the OS/application has to respond to the environment within a known period, for example an embedded flight control system on a fighter jet.
Wikipedia has a fairly good article on Real-time computing.
If you are using a protocol like TCP/IP for the communication, that isn't a real-time system, because such communication links are not deterministic in their response time by nature. The only sure thing is that the message will arrive; when? Who knows...