Why is there no TTL field in an Ethernet frame?

As far as I have seen, there is nothing like a TTL field in an Ethernet frame. An IP datagram, for instance, provides a TTL field to prevent packets from being forwarded forever. Of course Ethernet provides a mechanism to prevent endless forwarding too: it relies on the spanning tree protocol to eliminate any loops that could lead to frames circulating forever. But that's a lot of work compared to a simple TTL field that would provide the same (or similar) protection.

When Ethernet was first invented, it started out as a simple communication solution without any smart hub or sophisticated router on the medium: frames were just dumped onto the cable, with senders backing off and retrying when collisions occurred. The design kept things simple, with no modifications to frames in transit.
In a way, the absence of a TTL is inherited from that original design, and keeping it that way preserves backward compatibility.

Related

CAN Communication: Good Practices

I am preparing to write some code for a master controller that communicates (via CANbus) with multiple nodes in a product. Each node monitors its own sensors (e.g. voltages, currents, fault flags) and can be started/stopped by the master controller. The master controller also sends the data to a display.
I am using an STM32H7B3I-EVAL board and using the STM32CubeIDE environment to write the code. I am trying to determine some good practices for writing this code, but I am inexperienced in CAN communication. I wanted to get people's opinions on the following high-level questions:
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
What are the pros/cons of using an RXBUFFER vs an RXFIFO?
First of all, you need to design an application-tier CAN protocol, unless you have one already. This isn't entirely trivial and requires some experience with CAN. Here you first need to take bus load into account, which in turn depends on the number of nodes and the amount of data allowed, as well as the baudrate. How to design this also depends on whether it's a control system (hard real-time, milliseconds) or just some industrial sensor network (hundreds of milliseconds or seconds).
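To get a feel for the numbers: a worst-case classic CAN frame with a 29-bit identifier and 8 data bytes occupies roughly 160 bits on the wire once bit stuffing is included (roughly 135 bits with an 11-bit identifier). A back-of-the-envelope load estimate, with a made-up message set, might look like this:

    #include <stdio.h>

    /* Back-of-the-envelope CAN bus load estimate. The message set below
       (10 nodes, one status frame each per 10 ms) is made up. */
    int main(void)
    {
        const double baud       = 500000.0; /* 500 kbit/s bus                 */
        const double frame_bits = 160.0;    /* worst case, 29-bit ID, 8 bytes */
        const double frame_time = frame_bits / baud;

        const double frames_per_sec = 10 * (1.0 / 0.010);
        const double load           = frames_per_sec * frame_time;

        printf("frame time: %.0f us\n", frame_time * 1e6);
        printf("bus load:   %.1f %%\n", load * 100.0);
        /* Rule of thumb: keep the average load well below 50 % so that
           low-priority identifiers still meet their deadlines in bursts. */
        return 0;
    }

That hypothetical set works out to about 32 % load at 500 kbit/s; whether that is acceptable depends on which real-time class, per the above, you are in.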
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Probably not. Regarding RX, depending on which CAN controller you have, there will at least be some manner of RX FIFO. Modern controllers also support dedicated "mailbox" slots for individual messages, which is more powerful and easier to work with. Your only requirement for never losing data is that you empty the FIFO at least once every (FIFO depth × the time it takes to send a minimum-size frame, DLC=0). Unless your program is very busy, this is usually not a tough real-time deadline to meet.
Regarding TX, again it depends on the controller, but here it is usually sufficient to check that the previously sent message has actually gone out before attempting a new one. And unless you are competing hard for bus access during a time of heavy bus load, that shouldn't be happening. Sensible CAN application protocols have some simple scheduling requirements such as "this gets sent x ms after that". Re-sending messages lost due to errors is handled by the controller hardware.
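As a rough sketch of the polling variant, assuming the H7's FDCAN peripheral with the Cube HAL (hfdcan1 is taken to be initialized elsewhere, e.g. by CubeMX, and the identifier is made up):

    #include "stm32h7xx_hal.h"  /* Cube HAL; hfdcan1 is initialized elsewhere */

    extern FDCAN_HandleTypeDef hfdcan1;

    /* Drain whatever is currently in RX FIFO 0. Call this often enough
       that the FIFO can never fill up between two calls (see the
       deadline rule above). */
    static void drain_rx(void)
    {
        FDCAN_RxHeaderTypeDef hdr;
        uint8_t data[8];

        while (HAL_FDCAN_GetRxFifoFillLevel(&hfdcan1, FDCAN_RX_FIFO0) > 0) {
            if (HAL_FDCAN_GetRxMessage(&hfdcan1, FDCAN_RX_FIFO0, &hdr, data) == HAL_OK) {
                /* dispatch on hdr.Identifier here */
            }
        }
    }

    /* Queue a frame only when the controller has room, i.e. the previous
       transmissions have already gone out or are on their way. */
    static void send_status(const uint8_t payload[8])
    {
        FDCAN_TxHeaderTypeDef hdr = {
            .Identifier          = 0x123,            /* made-up ID */
            .IdType              = FDCAN_STANDARD_ID,
            .TxFrameType         = FDCAN_DATA_FRAME,
            .DataLength          = FDCAN_DLC_BYTES_8,
            .ErrorStateIndicator = FDCAN_ESI_ACTIVE,
            .BitRateSwitch       = FDCAN_BRS_OFF,
            .FDFormat            = FDCAN_CLASSIC_CAN,
            .TxEventFifoControl  = FDCAN_NO_TX_EVENTS,
            .MessageMarker       = 0,
        };

        if (HAL_FDCAN_GetTxFifoFreeLevel(&hfdcan1) > 0)
            HAL_FDCAN_AddMessageToTxFifoQ(&hfdcan1, &hdr, (uint8_t *)payload);
    }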
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
TX and RX buffers work independently of each other. Also, the question doesn't quite make sense as stated, since CAN is half-duplex and one node's TX is another node's RX.
What are the pros/cons of using an RXBUFFER vs an RXFIFO?
Those terms are pretty much synonymous. I suppose they may have some special meaning for a specific CAN controller, but you don't mention one (STM32s have several: one old and really bad one, bxCAN, and one newer, which I don't know much about. And some stubbornly insist on the horrible solution of using external controllers, particularly the Arduino kids).
Anyway, it is better to have neither; using a CAN controller with mailboxes is the best option. Unless the number of expected identifiers exceeds the number of mailbox slots you have - in that case you have to direct low-priority messages to an RX FIFO and use the mailbox slots for high-priority messages.

Should I use RTP or WebRTC for local network audio communication

I have a set of Raspberry Pi Zeros that I would like to use as a home intercom. I initially set them up to send audio to each other using golang with gRPC and bidirectional streaming. This works for short calls, but the lag builds up over time, so I think I need to switch to a real-time protocol like RTP or WebRTC. Since I already know the IP address of each device, the hardware/supported codecs are the same for each, and they are all on the same network, is there any advantage to using WebRTC over plain RTP? My understanding is that WebRTC mainly adds security and connection orchestration such as ICE and SDP, which I wouldn't necessarily need. I am trying to minimize resource usage, since these devices are not as powerful as a phone or desktop. If I do use WebRTC, I can do the SDP signaling with gRPC or some other direct delivery method. Since there are more than 2 devices, I'm also curious about multicast functionality, which seems to be pure-RTP territory: WebRTC (which uses RTP underneath) doesn't necessarily support multicasting, and would require a full mesh of n(n-1)/2 p2p connections. I'm very unclear/unsure about this point.
Also, does either support mixing audio channels natively, or would that need to be handled in the custom software?
You could use WebRTC, but you'd need to rig a signalling server, and a STUN / TURN server. These can be super simple and low capacity because everything is on a private network, but you still need 'em. The signalling server handles the necessary SDP interchange. Going full WebRTC might be overengineering this. (But of course learning to get WebRTC working can be useful.)
You've built out a golang infrastructure. Seeing as how you're on a private network, you could change up that program to send multicast UDP packets or RTP packets. Then you can rig your listeners to listen to them.
No matter what you do, you'll need to deal with the lag. A good way to do it in the packet world: don't build a queue of buffers ready to play. Instead, always put each received packet as the next-to-play packet, even if you have to overwrite a previously received packet. (That is, skip ahead.) You may get a pop once in a while, but with reasonably short packets, under 50ms, it shouldn't affect the user experience significantly. And the lag won't build up.
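To make both suggestions concrete, here is a minimal C sketch of a multicast listener with such a single-slot "skip ahead" buffer (the group address and port are invented; the same socket options are available to a Go program):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #define GROUP "239.0.0.1"  /* invented multicast group */
    #define PORT  5004         /* invented port            */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int yes = 1;
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(PORT);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        /* join the multicast group */
        struct ip_mreq mreq = {0};
        mreq.imr_multiaddr.s_addr = inet_addr(GROUP);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

        /* Single-slot "jitter buffer": the newest packet always wins, so
           lag can never accumulate; an occasional overwrite is just a pop. */
        unsigned char next_to_play[1500];
        ssize_t next_len = 0;

        for (;;) {
            unsigned char pkt[1500];
            ssize_t n = recv(s, pkt, sizeof(pkt), 0);
            if (n <= 0)
                continue;
            memcpy(next_to_play, pkt, (size_t)n);  /* skip ahead */
            next_len = n;
            /* a real player would hand next_to_play/next_len to the audio
               device from a callback running at the playout rate */
        }
    }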
The oldtimey phone system ran on a continent-wide 8 kHz synchronous clock, so lag was not an issue. But it's always a problem when the audio analog-to-digital and digital-to-analog clocks aren't synchronized, and that's true whenever they are on different devices. The slightest drift builds up over time. (RPis don't have fifty-dollar low-drift clock parts in them.)
If all your audio sources run at the same sample rate, you can average them to mix them. That should get you started. (If you're using WebRTC in a browser, it will mix multiple sources for you.)
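A sketch of that averaging mix for signed 16-bit PCM (the function name is mine):

    #include <stddef.h>
    #include <stdint.h>

    /* Mix n_src buffers of signed 16-bit PCM (all at the same sample rate)
       into dst by plain averaging. The average of int16_t values always
       fits in int16_t, so no clipping step is needed; it does lower each
       source's level by a factor of n_src. */
    static void mix_average(int16_t *dst, const int16_t *const *src,
                            size_t n_src, size_t n_samples)
    {
        for (size_t i = 0; i < n_samples; i++) {
            int32_t acc = 0;
            for (size_t j = 0; j < n_src; j++)
                acc += src[j][i];
            dst[i] = (int16_t)(acc / (int32_t)n_src);
        }
    }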
Since you are using Go, check out offline-browser-communication. This removes the need for signaling and STUN/TURN; it uses mDNS and pre-generated certificates. It is also being discussed in the WICG Discourse, though I have no idea if/when it will land.
'Lag' is a pretty common problem when doing media over TCP: you have lots of queues and congestion control to deal with. WebRTC (and RTP in general) is great at solving this. You have the following standardized tools to solve it (a minimal header parse is sketched after this list):
RTP packets carry a relative timestamp.
RTP Sender Reports carry a mapping from the relative timestamp to an NTP timestamp. Use this for sync/timing.
RTP Receiver Reports give you packet loss/jitter. Use these to assess your network health.
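The timestamp and the sequence number travel in the fixed 12-byte RTP header from RFC 3550 (the Sender/Receiver Reports are separate RTCP packets); a minimal parse of that header, for reference:

    #include <stddef.h>
    #include <stdint.h>

    /* Fixed RTP header fields (RFC 3550, section 5.1). */
    struct rtp_header {
        uint8_t  version;         /* should be 2           */
        uint8_t  payload_type;
        uint16_t sequence_number; /* detects loss/reorder  */
        uint32_t timestamp;       /* relative media clock  */
        uint32_t ssrc;            /* identifies the sender */
    };

    /* Returns 0 on success, -1 if the buffer is too short. */
    static int rtp_parse(const uint8_t *buf, size_t len, struct rtp_header *h)
    {
        if (len < 12)
            return -1;
        h->version         = buf[0] >> 6;
        h->payload_type    = buf[1] & 0x7f;
        h->sequence_number = (uint16_t)((buf[2] << 8) | buf[3]);
        h->timestamp       = ((uint32_t)buf[4] << 24) | ((uint32_t)buf[5] << 16)
                           | ((uint32_t)buf[6] << 8)  |  (uint32_t)buf[7];
        h->ssrc            = ((uint32_t)buf[8] << 24) | ((uint32_t)buf[9] << 16)
                           | ((uint32_t)buf[10] << 8) |  (uint32_t)buf[11];
        return 0;
    }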
Multicast is a fantastic suggestion as well. You reduce the complexity of having to signal all those 1:1 connections, and reduce the amount of bandwidth required. It does make security a little bit more delicate/roll your own though.
With Pion we decoupled all the RTP/RTCP handling into Pion Interceptor, so you don't have to use the full WebRTC stack to get the media-transport features mentioned above.

How to intercept J1939 CAN messages?

I'm building a HIL/SIL test with Simulink, which tests the Vehicle Control Unit (VCU) from a vehicle. This VCU talks with a Power Distribution Module (PDM) over a J1939 CAN network. The PDM handles the in- and outputs from switches and to actuators and puts the information on the CAN bus. The VCU then knows what the PDM is seeing from connected sensors. In turn, the VCU puts info on the CAN bus on how the PDM should control the connected actuators.
My laptop is hooked to the same CAN bus with a Vector adapter and Simulink.
To test the VCU, I need to mimic the PDM and send messages to the VCU as if I were the PDM. The VCU then has to take the correct actions and control the real PDM accordingly.
Obviously, if I just mimic the PDM, my messages will interfere with those sent from the real PDM. So basically, I need the PDM to shut up and only listen. I do the talking for the PDM. However, the PDM is not configurable in a listen-only mode, so I have to intercept all messages it sends so they never arrive at the VCU.
My idea was to detect (by observing the arbitration field of all messages) when the PDM starts sending, and pull a bit down in the arbitration field. The PDM would recognise the priority of my 'message' over its own and stop transmitting. It would be as if the CAN bus were always too busy to give room to the PDM. This would silence the PDM without it throwing errors. But other suggestions are welcome.
So (how) is it possible to intercept J1939 CAN messages in MATLAB/Simulink, or with a separate CAN controller?
Here is an idea for how to realize what you are looking for. You need some extra hardware, however.
This is the rough outline:
Set up a CAN-gateway device, which has two independent CAN-interfaces can0 and can1.
Disconnect the PDM from the CAN-bus and connect it to one of the interfaces of your CAN-gateway, e.g. can0
Connect the second interface of the CAN-gateway, can1, to the original CAN-bus, which also includes your laptop and the VCU
Program your CAN-gateway to forward all incoming CAN-frames on can1 to the can0 interface
As you want to ignore all messages from the PDM, simply ignore the CAN-frames coming in on interface can0 and not forward them to can1
An example of how to realize such a CAN-gateway:
Hardware: Use a Raspberry Pi and a CAN extension board with two can-interfaces, such as the PiCAN2 duo board.
Software: Write a small program to forward traffic between the interfaces can0 and can1, using socketcan, which is already included in the Linux kernel.
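A minimal sketch of such a forwarding program with raw SocketCAN sockets (classic CAN frames only; error handling mostly omitted):

    #include <linux/can.h>
    #include <linux/can/raw.h>
    #include <net/if.h>
    #include <poll.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int open_can(const char *ifname)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr = {0};
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ioctl(s, SIOCGIFINDEX, &ifr);

        struct sockaddr_can addr = {0};
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        return s;
    }

    int main(void)
    {
        int pdm = open_can("can0");  /* PDM hangs off this interface   */
        int bus = open_can("can1");  /* VCU + laptop on this interface */

        struct pollfd fds[2] = {
            { .fd = bus, .events = POLLIN },
            { .fd = pdm, .events = POLLIN },
        };

        for (;;) {
            poll(fds, 2, -1);
            struct can_frame f;

            /* bus -> PDM: forward everything */
            if (fds[0].revents & POLLIN) {
                if (read(bus, &f, sizeof(f)) == sizeof(f))
                    write(pdm, &f, sizeof(f));
            }
            /* PDM -> bus: read and drop, silencing the PDM */
            if (fds[1].revents & POLLIN)
                read(pdm, &f, sizeof(f));
        }
    }

Note that the gateway's CAN controller still acknowledges the PDM's frames on can0 at the hardware level, so the PDM sees its transmissions succeed and should not go error-passive.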
In case your devices are communicating via the higher-layer J1939 transport protocol, you might also need to get the J1939 transport protocol running on the Raspberry Pi. If you are simply using 29-bit identifiers with a maximum payload of 8 bytes of data, this should not be necessary.
Alternatively, you could use a more expensive commercial solution, this CAN-Router for example.
Your original idea:
I think what you are envisioning is technically feasible, but might have some other drawbacks.
As the drivers of CAN controllers typically don't expose interfaces for interactively manipulating CAN-frames while their transmission is still ongoing, you could drive a CAN transceiver directly from a microcontroller.
A few researchers realized a CAN denial-of-service attack by turning the first recessive bit after the arbitration ID of a CAN-frame into a dominant bit, for certain selected CAN-IDs. They used an Arduino Uno and a Microchip MCP2551 E/P CAN transceiver, and the code they used is also available online. As this interactive manipulation of CAN-frames during transmission is related to what you are looking for, it could be a good starting point for you.
Still I see some drawbacks, when you silence the PDM this way:
You will not only silence the PDM this way, but also (at least) delay the transmission of other nodes on the CAN-bus with arbitration IDs that have lower priority than the messages from the PDM.
It is very likely that the PDM will go into some error state when it is not able to successfully send its CAN-frames to the bus after a certain number of retries.
Yet another idea:
In case you are able to adapt the software of the VCU, change it so that it does not consume the CAN-frames from the PDM, but instead the CAN-frames from your laptop, by using different CAN-IDs for the same messages. You will have to change the dbc-file for that purpose.

TURN Server WebRTC Hardware / Network Requirements

I am currently being challenged (mentally) with the idea of scaling out TURN servers from a novelty to something that scales with call volume.
Subsequently, I am trying to understand the requirements from a hardware, network, and application perspective, and their associated cost. Some specific questions come to mind that I would love the community's help wrapping my brain around.
1) Are ports reused for multiple destinations simultaneously? Conceptually, since it's UDP and the source IP/port plus destination IP/port 4-tuple is enough for uniqueness, I could see it being theoretically possible, but I've never seen documentation around this.
2) What is the time (if any) before ports are reused? If the TURN server has allocated ports 1234 and 1235 for a given session, when one or both of those sockets close, how long will it be before the TURN server re-allocates those ports in response to another request?
3) How should I think about the hardware requirements (specifically CPU and memory) of my TURN server(s) as a function of number of concurrent calls?

Is UDP always unreliable?

I'm about to re-architect a real-time system that has been prototyped on a single node and specify how it should be scaled up to multiple nodes (probably never more than 20 of them in any one LAN). Some of the functionality will multiply on a per-node basis, and some of it will remain centralised on a one-per-system basis. There is going to be a need for communication between each node and that central unit (possibly a master node), but not between individual nodes.
Due to the real-time demands of the system, UDP is something that should be considered for that communication. But... it is almost always described as unreliable. Is this always the case? Does it not depend on the scale of the network, the data load on the network and the way the protocol is used?
For example, suppose I have a central unit which regularly polls each node by addressing a UDP message to it, and each node immediately responds with its data via UDP. There is no other communication on the (isolated) network. Suppose there is also some mechanism to ensure there are never any collisions (e.g. all nodes have a maximum transmission length for their responses to a poll message, and the latencies are nailed down to known levels). Is there any (hidden) reason, in a simple and structured network like this, why you would ever fail to transmit/receive every last UDP packet, and so achieve near-100% reliability?
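For illustration, the central unit's poll loop could be as simple as the following sketch (addresses, port, opcode, and timeout are all invented):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #define NODE_PORT 9000  /* invented */

    /* Poll one node: send a one-byte request, wait up to 10 ms for the
       reply. Returns bytes received, or 0 on timeout. */
    static ssize_t poll_node(int s, const char *node_ip,
                             unsigned char *buf, size_t buflen)
    {
        struct sockaddr_in node = {0};
        node.sin_family = AF_INET;
        node.sin_port   = htons(NODE_PORT);
        inet_pton(AF_INET, node_ip, &node.sin_addr);

        unsigned char req = 0x01;  /* invented "send your data" opcode */
        sendto(s, &req, 1, 0, (struct sockaddr *)&node, sizeof(node));

        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(s, &rd);
        struct timeval tv = { .tv_sec = 0, .tv_usec = 10000 };  /* 10 ms */
        if (select(s + 1, &rd, NULL, NULL, &tv) <= 0)
            return 0;  /* node missed its slot: the "unreliable" case */
        return recvfrom(s, buf, buflen, 0, NULL, NULL);
    }

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        const char *nodes[] = { "192.168.1.11", "192.168.1.12" };  /* invented */
        unsigned char buf[512];

        for (;;) {
            for (size_t i = 0; i < sizeof nodes / sizeof nodes[0]; i++) {
                if (poll_node(s, nodes[i], buf, sizeof buf) == 0)
                    printf("node %s: no reply\n", nodes[i]);
            }
        }
    }

Polling the nodes one at a time like this is itself the collision-avoidance mechanism: only one station is ever invited to transmit.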
EDIT: the detail of this question suffers from confusion around what "unreliable" means, and whether it is intended to apply only to UDP, or to the system in which UDP is employed. I have chosen to leave this confusion in the question, because looking back over a lot of material on UDP, I can see that this confusion might be very common, and that answers which highlight that confusion and overcome it might be valuable.
The key is, UDP does not make any guarantees. There are many reasons why datagrams might go undelivered:
Sender host buffers fill up
Cosmic rays flip bits somewhere along the way, causing a checksum mismatch and the datagram to be discarded
Electromagnetic interference corrupts the signal momentarily
A network cable gets unplugged for a moment
A hub or switch loses power for a moment
A switch's buffers fill up
Receiving host buffers fill up
If any of these things (or many others) occurs, a datagram may go undelivered. UDP will make no attempt to detect this or to re-deliver it.
Yes. Every layer is potentially unreliable, starting with the electrical signalling across your Ethernet cable. (Ever jostled one of those plugs? You can see it in Wireshark logs.) On a shared medium, collisions are virtually impossible to avoid. And in case of congestion, your protocol stack may decide to drop UDP packets.
But all that's rather beside the point. UDP is unreliable, but that doesn't mean it can't be relied on. Plenty of mission-critical applications run over UDP. You just need to understand the unreliability and account for it.
Unreliable does not mean it will definitely fail. It only means that UDP does not care about transport problems and thus makes no guarantees that transmission will be successful. Let's compare some aspects of UDP against TCP.
UDP is packet based, TCP stream based. This does not have much to do with reliability.
Packets may arrive in a different order than they were sent. UDP does not care and will deliver the packets to the application in whatever order they arrive. In TCP, data carry a sequence number, so the receiver's operating system will detect reordering and forward the data to the application in the correct order. Reordering usually does not happen when you have a direct connection between client and server, but it can happen in wide networks like the internet.
Packets may get lost due to router or switch congestion, or overload of the sending or receiving system, among other causes. This might also happen in local networks with heavy traffic, or if the receiving system is unable to cope with the amount of data, even for a short time. With UDP, the data are simply lost. TCP instead will detect lost packets and retransmit them, and will even slow down the traffic to adapt to what the network and endpoints can handle, and thus lose fewer packets in the future.
Packets might get duplicated. Again, TCP will detect this thanks to the sequence number, but UDP will not, and will deliver the duplicate packet to the application.
Packets might get corrupted. Both TCP and UDP have the same kind of checksum, which detects small errors but will not detect larger ones.
Applications using UDP usually do not need the reliability of TCP, or at least not all of it. For instance, with real-time audio and video, packet loss is acceptable but duplicates and reordering are not. Thus the RTP protocol carries its own sequence number and timestamp to detect those cases. Also, RTP is often accompanied by the RTCP protocol for sending statistics about packet loss back to the peer, making adaptation of the connection speed possible.
If you want reliable UDP, try looking at ENet library.
http://enet.bespin.org/
Unreliability with regard to UDP is different from unreliability in general. Also, UDP and alternatives to it (e.g. TCP) are only ever components or single layers in a wider system. This can lead to some confusion about what "unreliable" means.
UDP is a transport layer network protocol. The transport layer is responsible for getting data from one point on the network to another specific point on the network. In that context, UDP is described as an "unreliable" protocol because it makes no guarantees about whether the data sent will actually arrive. In contrast, TCP is a "reliable" transport layer protocol because if data goes missing or is corrupted the first time it is sent, the protocol itself has mechanisms to resend the data and ensure it arrives... eventually.
But UDP is not some sloppy "maybe, maybe not - let me think about it and screw you around" protocol. It does what it is specified to do, and is reliable (general sense) at doing it... as well as reliable (general sense) in failing in predictable ways. If you take these failure modes into account elsewhere, UDP can be a component of an overall very reliable system.
For example, by restricting network topology and using UDP to transport higher level protocols, the GigE Vision standard specifies a highly reliable system with high data transfer rates and real-time response whose transport level communications is dominated by UDP traffic.
Historically, the major source of unreliable packet transport was packet collisions due to two sources attempting to transmit simultaneously on a single channel. In modern networks, each node is typically connected on a full duplex link to a network switch, making collisions impossible on that link, and consequently making modern networks much more reliable (in all senses) than was the case when UDP was first designed.
No networking technology currently available can be made 100% reliable... but let's be practical rather than pedantic, because potential unreliability and actual unreliability are a lot like shark attacks - they tend to occur far more in people's minds than in reality.
Some material on UDP makes it sound almost like the people who designed UDP did it just to annoy people - that unreliability was deliberately engineered in. This is not the case, and it is unhelpful to think of it in these terms. It is far better to focus on what UDP does and does not do in comparison to alternatives (e.g. see this comparison between TCP and UDP... which nonetheless lists "unreliability" as a key feature of UDP).
In reality, when there is data to be transmitted, that can be transmitted, it is transmitted; when there is data that can be received, it is received. Likewise, if you transmit packets 1, 2 then 3 directly to an endpoint, they will almost certainly be received as packets 1, 2 and 3 in order (assuming no failures in lower network layers, and that incoming data is buffered in a FIFO as is customary, but not mandatory). You can get a lot of reliability out of this, depending on how you use it.
However, if you transmit packets via multiple routes, all bets are off - "unreliability" of packet order can occur. And if you flood the available buffers, unreliability via dropping packets will occur. And if you allow nodes to transmit at any time (asynchronous), then you will get unreliability through packet collisions. But in the "simple and structured" (and also small and synchronous) LAN described, you may be able to either avoid this, or detect its occurrence (e.g. by sending an incrementing counter value in each packet), which will let you compensate in an application-specific way.
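The incrementing-counter detection mentioned above takes only a few lines; a sketch, assuming each datagram begins with a 32-bit sequence counter (layout invented for this sketch, wraparound handling omitted):

    #include <stdint.h>
    #include <stdio.h>

    /* Next sequence number we expect to see. */
    static uint32_t expected = 0;

    /* Classify each arriving datagram's counter: in order, a gap
       (packets lost), or something old (duplicate/reordered). */
    static void check_seq(uint32_t seq)
    {
        if (seq == expected) {
            expected = seq + 1;  /* the normal case */
        } else if (seq > expected) {
            printf("lost %u packet(s)\n", (unsigned)(seq - expected));
            expected = seq + 1;  /* resynchronize */
        } else {
            printf("duplicate or reordered packet %u\n", (unsigned)seq);
        }
    }

What the application does with a detected gap (interpolate, re-request, ignore) is the application-specific compensation referred to above.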
In cases where the power goes off (perhaps momentarily), or cosmic rays strike, or people trip on loose cables causing an unacceptable level of "unreliability"... then don't blame UDP - blame the engineer(s) whose design left the system susceptible to these things.
All things considered, in the LAN described, you might reasonably expect to be able to engineer a system based on UDP so as to never lose more than one packet in every few million, or billion, or even astronomically better than this - but it will depend on specifics, and only you can know if your application can tolerate the quantity and quality of unreliable comms that results in your case.