I'm building a HIL/SIL test with Simulink, which tests the Vehicle Control Unit (VCU) from a vehicle. This VCU talks to a Power Distribution Module (PDM) over a J1939 CAN network. The PDM handles the inputs from switches and the outputs to actuators, and puts that information on the CAN bus. The VCU then knows what the PDM is seeing from the connected sensors. In turn, the VCU puts info on the CAN bus on how the PDM should control the connected actuators.
My laptop is hooked to the same CAN bus with a Vector adapter and Simulink.
To test the VCU, I need to mimic the PDM and send messages to the VCU as if I were the PDM. The VCU then has to take the correct actions and control the real PDM accordingly.
Obviously, if I just mimic the PDM, my messages will interfere with those sent from the real PDM. So basically, I need the PDM to shut up and only listen. I do the talking for the PDM. However, the PDM is not configurable in a listen-only mode, so I have to intercept all messages it sends so they never arrive at the VCU.
My idea was that I'd detect (by observing the arbitration field of all messages) when the PDM starts sending, and pull a bit down in the arbitration field. The PDM would see that my 'message' has priority over its own and stop transmitting: it would be as if the CAN bus is always too busy to give room to the PDM. This would shut up the PDM without it throwing errors. But other suggestions are welcome.
So (how) is it possible to intercept J1939 CAN messages in MATLAB/Simulink, or with a separate CAN controller?
Here is an idea for how to realize what you are looking for. You will need some extra hardware, however.
This is the rough outline:
Set up a CAN-gateway device, which has two independent CAN-interfaces, can0 and can1.
Disconnect the PDM from the CAN-bus and connect it to one of the interfaces of your CAN-gateway, e.g. can0
Connect the second interface of the CAN-gateway, can1, to the original CAN-bus, which also includes your laptop and the VCU
Program your CAN-gateway to forward all incoming CAN-frames on can1 to the can0 interface
As you want to ignore all messages from the PDM, simply drop the CAN-frames coming in on interface can0 instead of forwarding them to can1
An example of how to realize such a CAN-gateway:
Hardware: Use a Raspberry Pi and a CAN extension board with two CAN-interfaces, such as the PiCAN2 Duo board.
Software: Write a small program to forward traffic between the interfaces can0 and can1, using socketcan, which is already included in the Linux kernel.
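Here is a minimal sketch of such a forwarder in C, assuming Linux socketcan with can0 wired to the PDM and can1 to the original bus; error handling is trimmed for brevity:

    /* One-way CAN gateway sketch: every frame arriving on can1 (bus/VCU
     * side) is copied to can0 (PDM side); reception on the PDM-side
     * socket is disabled, so nothing the PDM sends ever reaches the VCU.
     * Assumes Linux socketcan; error handling trimmed for brevity. */
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    static int open_can(const char *ifname)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr = {0};
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ioctl(s, SIOCGIFINDEX, &ifr);        /* resolve interface index */

        struct sockaddr_can addr = {0};
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        return s;
    }

    int main(void)
    {
        int bus = open_can("can1");          /* original bus with the VCU */
        int pdm = open_can("can0");          /* isolated PDM */

        /* A raw CAN socket with zero filters receives nothing, so
         * frames from the PDM are dropped right here at the gateway. */
        setsockopt(pdm, SOL_CAN_RAW, CAN_RAW_FILTER, NULL, 0);

        struct can_frame frame;
        for (;;) {
            if (read(bus, &frame, sizeof(frame)) == sizeof(frame))
                write(pdm, &frame, sizeof(frame));  /* VCU -> PDM only */
        }
    }

If you later need frames to flow in both directions with selective filtering, poll() on both sockets and decide per identifier what to forward.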
In case your devices are communicating via the higher-layer J1939 transport protocol, you might also need to get that transport protocol running on the Raspberry Pi. If you are simply using 29-bit identifiers with a maximum payload of 8 bytes of data, this should not be necessary.
Alternatively, you could use a more expensive commercial solution, a dedicated CAN router for example.
Your original idea:
I think what you are envisioning is technically feasible, but might have some other drawbacks.
As the drivers of CAN controllers typically don't expose interfaces to interactively manipulate CAN-frames while their transmission is still ongoing, you would have to address a CAN transceiver directly from a microcontroller.
A few researchers demonstrated a CAN denial-of-service attack by turning the first recessive bit after the arbitration ID into a dominant bit for certain selected CAN-IDs. They used an Arduino Uno and a Microchip MCP2551 E/P CAN transceiver, and the code they used is also available online. As this interactive manipulation of CAN-frames during transmission is close to what you are looking for, it could be a good starting point for you.
Still, I see some drawbacks to silencing the PDM this way:
You will not only silence the PDM, but also (at least) delay the transmission of other nodes on the CAN-bus whose arbitration IDs have lower priority than the messages from the PDM
It is very likely that the PDM will go into some error state (error-passive or even bus-off) when it is unable to successfully send its CAN-frames to the bus after a certain number of retries
Yet another idea:
In case you are able to adapt the software of the VCU, change it so that it does not consume the CAN-frames from the PDM, but instead the CAN-frames from your laptop, by using different CAN-IDs for the same messages. You will have to change the dbc-file for that purpose.
I am preparing to write some code for a master controller that communicates (via CANbus) with multiple nodes in a product. Each node monitors its own sensors (i.e. voltages, currents, fault flags, etc.) and can be started/stopped by the master controller. The master controller also sends the data to a display.
I am using an STM32H7B3I-EVAL board and using the STM32CubeIDE environment to write the code. I am trying to determine some good practices for writing this code, but I am inexperienced in CAN communication. I wanted to get people's opinions on the following high-level questions:
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
What are the pros/cons in using an RXBUFFER vs RXFIFO?
First of all, you need to invent an application-tier CAN protocol, unless you have one already. This isn't entirely trivial and requires some experience with CAN. You first of all need to take bus load into account, which in turn depends on the number of nodes and the amount of data allowed, as well as the baudrate. How to design this also depends on whether it's a control system (hard realtime, milliseconds) or just some industrial sensor network (hundreds of milliseconds or seconds).
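To make the bus-load point concrete, here is a back-of-the-envelope estimate in C. The ~135-bit worst-case length of a classic 11-bit-ID frame with 8 data bytes (including stuff bits and interframe space) is a common rule of thumb; the node count and cycle time are made-up example numbers, not from the question:

    /* Back-of-the-envelope CAN bus load estimate (sketch). */
    #include <stdio.h>

    int main(void)
    {
        const double bitrate        = 250e3;  /* bit/s, example          */
        const double bits_per_frame = 135.0;  /* worst case, 11-bit ID,
                                                 DLC=8, incl. stuffing   */
        const int    nodes          = 10;     /* hypothetical node count */
        const double period_s       = 0.010;  /* one frame per node
                                                 every 10 ms             */

        double frames_per_s = nodes / period_s;
        double load = frames_per_s * bits_per_frame / bitrate;
        printf("bus load: %.1f %%\n", load * 100.0);   /* -> 54.0 % */
        return 0;
    }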
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Probably not. Regarding RX, depending on what CAN controller you have, there will at least be some manner of RX FIFO. Modern controllers also support dedicated "mailbox" slots for individual messages, which is more powerful and easier to work with. Your only requirement for never losing data is to empty the FIFO at least once per (FIFO size times the time it takes to send the minimum-size frame, DLC=0). Unless your program is very busy, this is usually not a tough realtime deadline to meet.
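Putting illustrative numbers on that deadline (a classic CAN frame with DLC=0 is roughly 50 bits on the wire including interframe space and stuffing; bxCAN's RX FIFOs happen to be 3 messages deep):

    /* Rough RX FIFO service deadline (sketch, illustrative numbers). */
    #include <stdio.h>

    int main(void)
    {
        const double bitrate    = 500e3; /* bit/s, example          */
        const double min_bits   = 50.0;  /* ~smallest frame, DLC=0  */
        const int    fifo_depth = 3;     /* e.g. bxCAN RX FIFO depth */

        double min_frame_s = min_bits / bitrate;        /* ~100 us */
        double deadline_s  = fifo_depth * min_frame_s;  /* ~300 us */
        printf("empty FIFO at least every %.0f us\n", deadline_s * 1e6);
        return 0;
    }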
Regarding TX, again it depends on the controller, but here it is usually sufficient to check that the previously sent message has actually been sent before attempting a new one. And unless you are really competing hard for bus access during a time of heavy bus load, that shouldn't be happening. Sensible CAN application protocols have some simple scheduling requirements, such as "this gets sent x ms after that". Re-sending messages lost due to errors is handled by the controller hardware.
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
TX and RX buffers work independently of each other. Also, what you are asking doesn't quite make sense, since CAN is half-duplex and one node's TX is another node's RX.
What are the pros/cons in using an RXBUFFER vs RXFIFO?
Those terms are pretty much synonymous. I suppose they may have some special meaning for a specific CAN controller, but you don't mention one (STM32 has several: one old and really bad "bxCAN", and a newer one, FDCAN (which is what the H7 series uses), that I don't know much about. And some stubbornly insist on the horrible solution of using external controllers, particularly the Arduino kids).
Anyway, it is better to have neither; using a CAN controller with mailboxes is the best option, unless the number of expected identifiers is larger than the number of mailbox slots you have. In that case you have to direct low-priority messages to an RX FIFO and use the mailbox slots for high-priority messages.
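As a sketch of that split on the STM32H7's FDCAN peripheral using the ST HAL: the identifier value is an arbitrary example, and the field and constant names should be checked against your HAL version:

    /* Sketch: route one high-priority ID to a dedicated RX buffer
     * ("mailbox") and everything else to RX FIFO 0, using the STM32H7
     * HAL FDCAN driver. Assumes an already-initialized hfdcan1; call
     * this before HAL_FDCAN_Start(). Verify names against your HAL. */
    #include "stm32h7xx_hal.h"

    extern FDCAN_HandleTypeDef hfdcan1;

    void rx_filter_setup(void)
    {
        FDCAN_FilterTypeDef f = {0};

        /* High-priority message -> dedicated RX buffer 0.
         * FilterType is ignored for the TO_RXBUFFER configuration. */
        f.IdType        = FDCAN_STANDARD_ID;
        f.FilterIndex   = 0;
        f.FilterConfig  = FDCAN_FILTER_TO_RXBUFFER;
        f.FilterID1     = 0x100;          /* example identifier */
        f.RxBufferIndex = 0;
        HAL_FDCAN_ConfigFilter(&hfdcan1, &f);

        /* Everything else -> RX FIFO 0 via the global filter. */
        HAL_FDCAN_ConfigGlobalFilter(&hfdcan1,
                                     FDCAN_ACCEPT_IN_RX_FIFO0, /* std */
                                     FDCAN_ACCEPT_IN_RX_FIFO0, /* ext */
                                     FDCAN_REJECT_REMOTE,
                                     FDCAN_REJECT_REMOTE);
    }

    void poll_high_prio(void)
    {
        FDCAN_RxHeaderTypeDef hdr;
        uint8_t data[8];
        if (HAL_FDCAN_IsRxBufferMessageAvailable(&hfdcan1, FDCAN_RX_BUFFER0))
            HAL_FDCAN_GetRxMessage(&hfdcan1, FDCAN_RX_BUFFER0, &hdr, data);
    }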
I am planning to simulate a vehicle network in CANoe. How do I simulate two nodes communicating with each other and sending acknowledgement messages to each other? I do not want to use a Y-cable because I need the other channel on the CANcase reserved.
So, I would like to use just a single channel of the CANcase and make this simulation work without acknowledgement errors.
Kindly share your expert views on this scenario. Thank you.
Go to Network Hardware, choose your channel and enable TX Self ACK.
Enabling this will make your VN hardware interface acknowledge its own messages, and thus there will not be errors even if there are no real ECUs on the bus.
Or you could use the Simulated Bus mode in CANoe, which allows you to run your simulation nodes without any hardware, in real time or with a speed factor.
I am just curious about LoRa technology, and while exploring it I got stuck where the LoRaWAN classes (A, B, and C) are defined. My doubt is: if I want to design a LoRa node with any of the LoRa-enabled modules available on the market (from vendors like Ai-Thinker, Heltec, Pycom, etc.), do I need to care about the class while programming the node for transmission and reception? Is it handled by the LoRa transceivers, or do we need to handle it by writing the code?
You should consider which LoRaWAN class you want to use for the applications you want to develop. The three classes all have different behaviour:
A: only accepts downlink messages in two receive windows after an uplink message. The rest of the time the node is unreachable for the network.
B: offers all the Class A functionality, but also allows receiving downlink messages at scheduled moments.
C: this class can always receive downlink messages. No waiting for a timeslot or uplink is needed to communicate with the node.
Different transceivers/MCUs need different levels of care.
If I take the example of the RN2483, this module handles all the LoRaWAN interactions internally; you only need to configure what you want. (AFAIK it doesn't support class B/C at the moment, but plans are made to support them.)
If I take the CMWX1ZZABZ, this module is programmed directly and you need to make sure the code works for the class you want to use (A/B/C). The CMWX1ZZABZ comes with a LoRaWAN stack, but you need to make sure it actually works as needed, whereas the RN2483 handles everything for you.
In the Internet of Things, one of the important factors is battery life, that is, how long a device can be left in production without maintenance.
For a low-power device, the most important aspect is optimizing battery usage. Every communication device requires energy to transmit or receive data, and if the MCU and the peripherals of the hardware are always awake, the battery will drain very fast.
Therefore, to increase device life and support various use cases, there are three device classes.
An explanation of each class is given here: https://www.thethingsnetwork.org/docs/lorawan/classes/
The answers to your actual questions are below.
Do I need to care about the class while programming the node for transmission and reception? Is it handled by the LoRa transceivers, or do we need to handle it by writing the code?
You usually don't need to care about the class when your application layer code is using the LoRaWAN protocol stack through its API.
However, when you define what kind of application-layer messages your application server and your end device exchange, you need to be aware of the actual LoRaWAN device class and know what latency downlink messages may have.
For example, if your device is operating in Class A mode (which accepts downlink messages only as responses to uplink messages), you may write your application code so that the device sends regular heartbeat messages, allowing the application server to send downlinks as responses to those heartbeats.
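A sketch of that heartbeat pattern follows; the lorawan_* functions below are hypothetical placeholders (stubbed so the sketch compiles), not a real stack API, and real stacks typically deliver downlinks through events or callbacks instead:

    /* Class A heartbeat pattern (sketch). The lorawan_* functions are
     * hypothetical placeholders for your stack's API, stubbed here. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>
    #include <unistd.h>

    static bool lorawan_send_uplink(const uint8_t *buf, size_t len)
    { (void)buf; (void)len; return true; }             /* stub */
    static bool lorawan_get_downlink(uint8_t *buf, size_t *len)
    { (void)buf; *len = 0; return false; }             /* stub */

    int main(void)
    {
        uint8_t hb[1] = {0xA5};   /* minimal heartbeat payload */
        uint8_t down[64];
        size_t  down_len;

        for (;;) {
            /* Each uplink opens the two Class A receive windows... */
            lorawan_send_uplink(hb, sizeof(hb));

            /* ...so the server can only answer right after a heartbeat. */
            down_len = sizeof(down);
            if (lorawan_get_downlink(down, &down_len)) {
                /* handle server command here */
            }

            sleep(60);  /* heartbeat period = worst-case downlink
                           latency seen by the application server */
        }
    }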
I have a set of Raspberry Pi Zeros that I would like to use as a home intercom. I initially set them up to send audio to each other using golang with gRPC and bidirectional streaming, which works for short calls, but the lag builds up over time, so I think I need to switch to a real-time protocol like RTP or WebRTC. Since I already know the IP address of each device, the hardware/supported codecs are the same for each, and they are all on the same network, is there any advantage to using WebRTC over plain RTP? My understanding is that WebRTC mainly provides some additional security and connection orchestration like ICE and SDP, which I wouldn't necessarily need. I am trying to minimize resource usage since these devices are not as powerful as a phone or desktop. If I do use WebRTC, I can do the SDP signaling with gRPC or some other direct delivery method.

Since there are more than 2 devices, I'm also curious about multicast functionality, which seems pure-RTP specific, while WebRTC (which uses RTP) doesn't necessarily support multicasting and would require a full mesh of n(n-1)/2 p2p connections. I'm very unclear/unsure about this point.
Also, does either support mixing audio channels natively, or would that need to be handled in the custom software?
You could use WebRTC, but you'd need to rig a signalling server, and a STUN / TURN server. These can be super simple and low capacity because everything is on a private network, but you still need 'em. The signalling server handles the necessary SDP interchange. Going full WebRTC might be overengineering this. (But of course learning to get WebRTC working can be useful.)
You've already built out a golang infrastructure. Seeing as how you're on a private network, you could change that program to send multicast UDP packets or RTP packets, then rig your listeners to listen to them.
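For illustration, here is a minimal multicast send loop in C (the group address 239.0.0.1 and port 5004 are arbitrary example values; the Go version is analogous). Receivers join the group with an IP_ADD_MEMBERSHIP socket option and then just read from the same port:

    /* Minimal multicast audio sender sketch: blast each audio packet
     * to a multicast group; every intercom unit on the LAN that has
     * joined the group receives it. Group/port are example values. */
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in grp = {0};
        grp.sin_family = AF_INET;
        grp.sin_port   = htons(5004);                   /* example port  */
        inet_pton(AF_INET, "239.0.0.1", &grp.sin_addr); /* example group */

        unsigned char ttl = 1;                          /* stay on the LAN */
        setsockopt(s, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

        char pkt[320] = {0};   /* 20 ms of 8 kHz 16-bit mono audio */
        for (;;) {
            /* fill pkt[] from the sound card here ... */
            sendto(s, pkt, sizeof(pkt), 0,
                   (struct sockaddr *)&grp, sizeof(grp));
            usleep(20 * 1000);                          /* 20 ms pacing */
        }
    }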
No matter what you do, you'll need to deal with the lag. A good way to do it in the packet world: don't build a queue of buffers ready to play. Instead, always put each received packet as the next-to-play packet, even if you have to overwrite a previously received packet. (That is, skip ahead.) You may get a pop once in a while, but with reasonably short packets, under 50ms, it shouldn't affect the user experience significantly. And the lag won't build up.
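A sketch of that skip-ahead playout: a single packet slot that the network side overwrites and the audio side drains, so latency cannot accumulate (synchronization between the two threads is elided for brevity):

    /* 'Skip ahead' playout sketch: keep exactly one pending packet.
     * The network thread overwrites it on every arrival; the audio
     * callback always consumes the newest one, so lag cannot build.
     * Mutex/atomic protection elided for brevity. */
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define PKT_SAMPLES 160            /* 20 ms at 8 kHz */

    static int16_t pending[PKT_SAMPLES];
    static bool    pending_fresh = false;

    /* called by the network thread for every received packet */
    void on_packet(const int16_t *samples)
    {
        memcpy(pending, samples, sizeof(pending)); /* overwrite, never queue */
        pending_fresh = true;
    }

    /* called by the audio output when it needs the next 20 ms */
    void fill_playout(int16_t *out)
    {
        if (pending_fresh) {
            memcpy(out, pending, sizeof(pending));
            pending_fresh = false;
        } else {
            memset(out, 0, sizeof(pending));       /* underrun: silence */
        }
    }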
The old-timey phone system ran on a continent-wide 8 kHz synchronous clock, so lag was not an issue. But it's always a problem when the audio analog-to-digital and digital-to-analog clocks aren't synchronized, which is true whenever they are on different devices. The slightest drift builds up over time. (RPis don't have fifty-dollar clock parts in them with guaranteed low drift.)
If all your audio sources run at the same sample rate, you can average them to mix them; that should get you started. (If you're using WebRTC in a browser, it will mix multiple sources for you.)
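Mixing by averaging is only a few lines once the sources share a sample rate. A sketch (note that averaging cannot clip, but it attenuates each talker by 1/N, so summing with saturation is a common alternative):

    /* Mix N equally-weighted 16-bit PCM sources by averaging (sketch). */
    #include <stdint.h>
    #include <stddef.h>

    void mix_average(const int16_t **src, size_t nsrc,
                     int16_t *dst, size_t nsamples)
    {
        for (size_t i = 0; i < nsamples; i++) {
            int32_t acc = 0;                   /* wide accumulator */
            for (size_t k = 0; k < nsrc; k++)
                acc += src[k][i];
            dst[i] = (int16_t)(acc / (int32_t)nsrc);
        }
    }

    int main(void)
    {
        const int16_t a[4] = {1000, -1000,  2000, 0};
        const int16_t b[4] = {3000,  1000, -2000, 0};
        const int16_t *src[2] = {a, b};
        int16_t out[4];
        mix_average(src, 2, out, 4);           /* out = {2000, 0, 0, 0} */
        return 0;
    }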
Since you are using Go, check out offline-browser-communication. This removes the need for signaling and STUN/TURN; it uses mDNS and pre-generated certificates. It is also being discussed in the WICG Discourse, though I have no idea if/when it will land.
'Lag' is a pretty common problem to have when doing media over TCP: you have lots of queues and congestion control to deal with. WebRTC (and RTP in general) is great at solving this, giving you the following standardized tools:
RTP packets carry a relative timestamp.
RTP sender reports carry a mapping of the relative timestamp to an NTP timestamp. Use this for sync/timing.
RTP receiver reports give you packet loss/jitter. Use this to assess your network health.
Multicast is a fantastic suggestion as well. You reduce the complexity of having to signal all those 1:1 connections and reduce the amount of bandwidth required. It does make security a little more delicate (roll your own), though.
With Pion we decoupled all the RTP/RTCP handling into Pion Interceptor, so you don't have to use the full WebRTC stack to get the media-transport features mentioned above.
Is it possible to read the bits directly off the physical Ethernet interface of a standard computer?
e.g., suppose I want to use the Ethernet jack of a laptop as a differential logic probe (using a standard Ethernet cable). Could I just write a driver to get at the bits, or is there a limit to how low a driver can go?
Essentially, does the physical layer just send the bit stream to the device driver, or does it do any decoding that will affect the interpretation of the bits or cause the device to malfunction (such as using a different encoding scheme)?
I guess what it boils down to is: can we use the Ethernet port as a generic digital differential communications link by writing a suitable driver, or are we limited to the IEEE spec (8b/10b, etc.)?
The short answer is: probably not.
Here are some of the reasons why:
At the hardware link layer, there is actually no direct electrical connection between the computer and the Ethernet cable: it is electrically isolated by a small transformer, and the signal is current-driven rather than voltage-driven. This is the first problem to overcome, as you would have to send a fairly precise current over two lines rather than a voltage on a single line.
Ethernet transformers
PHY hardware interface: The next problem is that the interface is simply not controlled by the CPU where your code executes but by a dedicated Ethernet PHY chip, and you have no (easy) way of flashing or controlling it. Different PHY chips allow different levels of access, but I doubt you would find any that allows direct control over the transmission interface, and even if one did, that access would have to be implemented in the driver, which is just as unlikely.
Ethernet PHY Controller
Perhaps some other solutions:
As the comments above suggest, if you want direct I/O control on a computer, the best solution is a serial or parallel port; perhaps you can find an Ethernet-to-serial or USB-to-serial converter and play with that, but these are digital signals.
Another thing you may want to use is the microphone input, as it accepts analog signals and you have direct control over it, though be careful not to burn out your computer. (I've seen some bank-card magnetic stripe readers use that on cellphones.)
You can use libpcap/WinPcap to do this. Nevertheless, you are not completely free in the choice of what you write/read on the wire; e.g., the preamble and SFD must still be there. This is so fundamental (because of noise resistance) that typical hardware just does not support anything different.
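For illustration, a minimal injection sketch with libpcap; the interface name eth0 is an example, and the preamble/SFD (and normally the FCS) are added by the NIC, outside your control:

    /* Raw Ethernet frame injection sketch using libpcap. You control
     * the frame from the destination MAC onward; the NIC handles the
     * preamble/SFD and usually the FCS. Needs root / CAP_NET_RAW. */
    #include <stdio.h>
    #include <pcap/pcap.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
        if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

        unsigned char frame[60] = {
            0xff,0xff,0xff,0xff,0xff,0xff, /* dst MAC: broadcast        */
            0x02,0x00,0x00,0x00,0x00,0x01, /* src MAC: locally admin.   */
            0x88,0xb5,                     /* EtherType: local/experim. */
            'h','e','l','l','o'            /* payload, zero-padded to
                                              the 60-byte minimum       */
        };

        if (pcap_inject(p, frame, sizeof(frame)) < 0)
            pcap_perror(p, "pcap_inject");

        pcap_close(p);
        return 0;
    }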
If you want to control absolutely everything, go to embedded hardware: find a board that uses a PHY that can give you that information, and a processor capable of handling the data rates.