How can I simulate a single-channel CAN network in CANoe without real ECUs connected to the bus?

I am planning to simulate a vehicle network in CANoe. How do I simulate two nodes so that they communicate with each other and acknowledge each other's messages? I do not want to use a Y-cable because I need to keep the other channel of the CANcase reserved.
So I would like to use just a single channel of the CANcase and make this simulation work without acknowledgement errors.
Kindly share your expert views on this scenario. Thank you.

Go to Network Hardware, choose your channel and enable TX Self ACK.
Enabling this will make your hardware VN interface acknowledge its own messages, so there will be no acknowledgement errors even if no real ECUs are on the bus.
Alternatively, you could use Simulated Bus mode in CANoe, which runs your simulation nodes entirely without hardware, either in real time or with a speed factor.


CAN Communication: Good Practices

I am preparing to write some code for a master controller that communicates (via CAN bus) with multiple nodes in a product. Each node monitors its own sensors (e.g. voltages, currents, fault flags) and can be started/stopped by the master controller. The master controller also sends the data to a display.
I am using an STM32H7B3I-EVAL board and using the STM32CubeIDE environment to write the code. I am trying to determine some good practices for writing this code, but I am inexperienced in CAN communication. I wanted to get people's opinions on the following high-level questions:
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
What are the pros/cons in using an RXBUFFER vs RXFIFO?
First of all, you need to invent an application-layer CAN protocol, unless you already have one. This isn't entirely trivial and requires some experience with CAN. Here you first of all need to take bus load into account, which in turn depends on the number of nodes and the amount of data allowed, as well as on the baudrate. How to design this also depends on whether it's a control system (hard real-time, milliseconds) or just some industrial sensor network (hundreds of milliseconds or seconds).
If we want to be constantly monitoring, should all the code for transmitting and receiving data be in a never-ending while loop?
Probably not. Regarding RX: depending on which CAN controller you have, there will at least be some manner of RX FIFO. Modern controllers also support dedicated "mailbox" slots for individual messages, which is more powerful and easier to work with. Your only requirement for never losing data is to empty the FIFO at least once within the time it takes the bus to deliver a full FIFO's worth of the shortest possible frames (DLC = 0). Unless your program is very busy, this is usually not a tough real-time deadline to meet.
Regarding TX: again it depends on the controller, but here it is usually sufficient to check that the previously queued message has actually been sent before attempting a new one. Unless you are competing hard for bus access during heavy bus load, that shouldn't be an issue. Sensible CAN application protocols have simple scheduling requirements such as "this gets sent x ms after that". Re-sending messages lost due to errors is handled by the controller hardware.
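To make that concrete, here is a minimal polling sketch in C. It assumes the STM32 HAL FDCAN driver (the STM32H7B3 has an FDCAN peripheral); hfdcan1, handle_frame() and the TX header/data passed in are placeholders, not part of any real project.

```c
#include "stm32h7xx_hal.h"          /* assumes the HAL FDCAN module is enabled */

extern FDCAN_HandleTypeDef hfdcan1; /* placeholder: configured elsewhere (e.g. by CubeMX) */

void handle_frame(const FDCAN_RxHeaderTypeDef *hdr, const uint8_t *data); /* your application logic */

/* Call this periodically from the main loop or a timer task. */
void can_task(FDCAN_TxHeaderTypeDef *tx_header, uint8_t *tx_data)
{
    FDCAN_RxHeaderTypeDef rx_header;
    uint8_t rx_data[8];

    /* Drain the RX FIFO completely on every pass; as long as this runs more
     * often than the FIFO can fill up, no frame is lost. */
    while (HAL_FDCAN_GetRxFifoFillLevel(&hfdcan1, FDCAN_RX_FIFO0) > 0) {
        if (HAL_FDCAN_GetRxMessage(&hfdcan1, FDCAN_RX_FIFO0, &rx_header, rx_data) == HAL_OK)
            handle_frame(&rx_header, rx_data);
    }

    /* Only queue a new transmission when the hardware TX FIFO has room,
     * i.e. previously queued frames have gone out or are still pending in hardware. */
    if (HAL_FDCAN_GetTxFifoFreeLevel(&hfdcan1) > 0)
        HAL_FDCAN_AddMessageToTxFifoQ(&hfdcan1, tx_header, tx_data);
}
```

Run from a periodic task, this replaces the "never-ending while loop" with a short, bounded chore that meets the FIFO deadline discussed above.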
Is it better to transmit all data then receive all data, or transmit data when needed and have an interrupt for received messages?
TX and RX buffers work independently of each other. Also, the question doesn't quite make sense as posed: CAN is half-duplex, and one node's TX is another node's RX.
What are the pros/cons in using an RXBUFFER vs RXFIFO?
Those terms are pretty much synonymous. I suppose they may have a specific meaning for a given CAN controller, but you don't name one (STM32 has several: the old and really bad "bxCAN", and a newer one which I don't know much about. And some stubbornly insist on the horrible solution of using external controllers, particularly the Arduino crowd).
Anyway, it is better to have neither: using a CAN controller with mailboxes is the best option, unless the number of expected identifiers exceeds the number of mailbox slots; in that case, direct the low-priority messages to an RX FIFO and keep the mailbox slots for the high-priority messages.
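As an illustration of that split on the FDCAN peripheral mentioned above, here is a sketch of the acceptance-filter setup, where a dedicated RX buffer plays the role of a mailbox. It assumes the STM32 HAL (HAL_FDCAN_ConfigFilter); the IDs are placeholders and field names may differ slightly between HAL versions.

```c
#include "stm32h7xx_hal.h"

extern FDCAN_HandleTypeDef hfdcan1;   /* placeholder: configured elsewhere */

void setup_rx_routing(void)
{
    FDCAN_FilterTypeDef flt = {0};

    /* Filter 0: pin the high-priority ID 0x100 to dedicated RX buffer 0
     * (the FDCAN equivalent of a mailbox slot). The filter type is ignored
     * when routing to a dedicated buffer; the ID is matched exactly. */
    flt.IdType        = FDCAN_STANDARD_ID;
    flt.FilterIndex   = 0;
    flt.FilterConfig  = FDCAN_FILTER_TO_RXBUFFER;
    flt.FilterID1     = 0x100;          /* placeholder identifier */
    flt.RxBufferIndex = 0;              /* read later via HAL_FDCAN_GetRxMessage(..., FDCAN_RX_BUFFER0, ...) */
    HAL_FDCAN_ConfigFilter(&hfdcan1, &flt);

    /* Filter 1: send every remaining standard ID to RX FIFO 0. */
    flt.FilterIndex   = 1;
    flt.FilterType    = FDCAN_FILTER_MASK;
    flt.FilterConfig  = FDCAN_FILTER_TO_RXFIFO0;
    flt.FilterID1     = 0x000;
    flt.FilterID2     = 0x000;          /* mask 0x000: accept everything else */
    HAL_FDCAN_ConfigFilter(&hfdcan1, &flt);
}
```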

Significance of LoRaWAN classes in developing a LoRa Node with sensors

I am just curious about LoRa technology, and while exploring it I got stuck where the LoRaWAN classes (A, B and C) are defined. My question is: if I want to design a LoRa node with any of the LoRa-enabled modules available on the market (from vendors like Ai-Thinker, Heltec, Pycom, etc.), do I need to care about the class while programming the node for transmission and reception? Is it handled by the LoRa transceivers, or do we need to handle it by writing the code?
You should consider what LoRaWAN class you want to use for the applications you want to develop. These three classes all have different behaviour:
A: accepts downlink messages only in two receive windows that open after an uplink; the rest of the time the node is unreachable from the network.
B: provides all of the Class A functionality, but additionally opens scheduled receive windows for downlink messages.
C: can always receive downlink messages; no waiting for a receive window or an uplink is needed to communicate with the node.
Different transceivers/MCUs need different levels of care.
Take the RN2483 as an example: this module handles all of the LoRaWAN interactions internally; you only need to configure what you want. (AFAIK it doesn't support Class B/C at the moment, but there are plans to support them.)
Take the CMWX1ZZABZ instead: this processor is programmed directly, and you need to make sure the code works for the class you want to use (A/B/C). The CMWX1ZZABZ comes with a LoRaWAN stack, but you have to make sure it actually behaves as needed, whereas the RN2483 handles everything for you.
In the Internet of Things, one of the important factors is battery life, that is, how long a device can be left in production without maintenance.
For a low-power device the most important aspect is optimizing battery usage. Every communication device needs energy to transmit or receive data, and if the MCU and the peripherals are always awake, the battery drains very quickly.
Therefore, to increase device lifetime and support various use cases, there are three device classes.
An explanation of each class is given here: https://www.thethingsnetwork.org/docs/lorawan/classes/
The answers to your actual questions are below.
Do I need to care about the class while programming the node for transmission and reception? Is it handled by the LoRa transceivers, or do we need to handle it by writing the code?
You usually don't need to care about the class when your application layer code is using the LoRaWAN protocol stack through its API.
However,
when you define what application-layer messages your application server and your end device exchange, you need to be aware of the actual LoRaWAN device class and of the latency that downlink messages may have.
For example, if your device operates in Class A mode (which accepts downlink messages only as responses to uplink messages), you might write your application code so that the device sends regular heartbeat messages, allowing the application server to send a downlink as a response to one of those heartbeats.
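Purely as an illustration of that heartbeat pattern, here is a sketch in C. Every function in it (lorawan_send_uplink, lorawan_receive_downlink, apply_server_command, sleep_low_power) is a hypothetical placeholder for whatever your module's stack or AT-command interface actually provides.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical placeholders for your stack's API: */
void lorawan_send_uplink(const uint8_t *buf, size_t len);
int  lorawan_receive_downlink(uint8_t *buf, size_t max_len);
void apply_server_command(const uint8_t *cmd, size_t len);
void sleep_low_power(uint32_t seconds);

#define HEARTBEAT_PERIOD_S 300          /* placeholder: one uplink every 5 minutes */

void heartbeat_loop(void)
{
    for (;;) {
        /* Class A: sending an uplink is what opens the RX1/RX2 windows. */
        uint8_t ping = 0x00;
        lorawan_send_uplink(&ping, sizeof ping);

        /* Any downlink the application server queued rides on the receive
         * windows that follow the heartbeat. */
        uint8_t cmd[64];
        int n = lorawan_receive_downlink(cmd, sizeof cmd);
        if (n > 0)
            apply_server_command(cmd, (size_t)n);

        /* Radio and MCU can sleep in between; the worst-case downlink
         * latency is therefore roughly one heartbeat period. */
        sleep_low_power(HEARTBEAT_PERIOD_S);
    }
}
```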

Should I use RTP or WebRTC for local network audio communication?

I have a set of Raspberry Pi Zeros that I would like to use as a home intercom. I initially set them up to send audio to each other using golang with gRPC and bidirectional streaming, which works for short calls, but the lag builds up over time, so I think I need to switch to a real-time protocol like RTP or WebRTC. Since I already know the IP address of each device, the hardware/supported codecs are the same on each, and they are all on the same network, is there any advantage to using WebRTC over plain RTP? My understanding is that WebRTC mainly provides some additional security and connection orchestration such as ICE and SDP, which I wouldn't necessarily need. I am trying to minimize resource usage, since these devices are not as powerful as a phone or desktop. If I do use WebRTC, I can do the SDP signaling with gRPC or some other direct delivery method.
Since there are more than 2 devices, I'm also curious about multicast functionality, which seems to be pure-RTP specific, while WebRTC (which uses RTP underneath) doesn't necessarily support multicast and would require a full mesh of n(n-1)/2 peer-to-peer connections. I'm very unclear/unsure about this point.
Also, does either support mixing audio channels natively, or would that need to be handled in the custom software?
You could use WebRTC, but you'd need to rig a signalling server, and a STUN / TURN server. These can be super simple and low capacity because everything is on a private network, but you still need 'em. The signalling server handles the necessary SDP interchange. Going full WebRTC might be overengineering this. (But of course learning to get WebRTC working can be useful.)
You've already built out a golang infrastructure. Seeing as you're on a private network, you could change that program to send multicast UDP or RTP packets, and then set up your listeners to receive them.
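To make the multicast idea concrete, here is a minimal sketch of the send and receive sides using plain POSIX sockets in C (Go's net package exposes the same functionality, e.g. net.ListenMulticastUDP). The group address 239.0.0.1 and port 5004 are arbitrary placeholders.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

#define GROUP "239.0.0.1"   /* placeholder multicast group */
#define PORT  5004          /* placeholder port */

/* Sender: every intercom unit pushes its audio frames to the group. */
int open_sender(struct sockaddr_in *group)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    unsigned char ttl = 1;                              /* stay on the local network */
    setsockopt(fd, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof ttl);

    memset(group, 0, sizeof *group);
    group->sin_family = AF_INET;
    group->sin_port   = htons(PORT);
    inet_pton(AF_INET, GROUP, &group->sin_addr);
    return fd;   /* then: sendto(fd, frame, len, 0, (struct sockaddr *)group, sizeof *group) */
}

/* Receiver: every unit joins the group once and then just reads frames. */
int open_receiver(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local;
    memset(&local, 0, sizeof local);
    local.sin_family      = AF_INET;
    local.sin_port        = htons(PORT);
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&local, sizeof local);

    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof mreq);
    inet_pton(AF_INET, GROUP, &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);
    return fd;   /* then: recvfrom(fd, buf, buf_len, 0, NULL, NULL) */
}
```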
No matter what you do, you'll need to deal with the lag. A good way to do it in the packet world: don't build up a queue of buffers ready to play. Instead, always put each received packet into the next-to-play slot, even if you have to overwrite a previously received packet (that is, skip ahead). You may get a pop once in a while, but with reasonably short packets, under 50 ms, it shouldn't affect the user experience significantly, and the lag won't build up.
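A sketch of that "always play the newest packet" idea in C; the packet size, the 16-bit mono format and the two callback names are placeholders rather than any particular audio API.

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define SAMPLES_PER_PACKET 240   /* e.g. 30 ms at 8 kHz, placeholder */

static int16_t next_to_play[SAMPLES_PER_PACKET];
static int     packet_ready = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Network thread: overwrite whatever is pending, never queue. */
void on_packet_received(const int16_t *samples)
{
    pthread_mutex_lock(&lock);
    memcpy(next_to_play, samples, sizeof next_to_play);
    packet_ready = 1;              /* an unplayed packet is simply replaced */
    pthread_mutex_unlock(&lock);
}

/* Audio callback: play the newest packet, or silence if nothing arrived. */
void on_playout_needed(int16_t *out)
{
    pthread_mutex_lock(&lock);
    if (packet_ready) {
        memcpy(out, next_to_play, sizeof next_to_play);
        packet_ready = 0;
    } else {
        memset(out, 0, sizeof next_to_play);   /* underrun: insert silence */
    }
    pthread_mutex_unlock(&lock);
}
```

Because at most one packet is ever buffered, the end-to-end delay stays bounded by one packet duration instead of growing over the call.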
The oldtimey phone system ran on a continent-wide 8 kHz synchronous clock, so lag was not an issue. But lag is always a problem when the audio analog-to-digital and digital-to-analog clocks aren't synchronized, which is the case whenever they are on different devices. The slightest drift builds up over time. (RPis don't have fifty-dollar clock parts in them with guaranteed low drift.)
If all your audio sources run at the same sample rate, you can average them to mix them. That should get you started. (If you're using WebRTC in a browser, it will mix multiple sources for you.)
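Mixing by averaging really is that simple; here is a sketch for 16-bit PCM sources running at a common sample rate.

```c
#include <stdint.h>

/* Mix n_sources equally-long 16-bit buffers into out by averaging. */
void mix_average(const int16_t *const *sources, int n_sources,
                 int n_samples, int16_t *out)
{
    for (int i = 0; i < n_samples; i++) {
        int32_t acc = 0;
        for (int s = 0; s < n_sources; s++)
            acc += sources[s][i];          /* 32-bit accumulator avoids overflow */
        out[i] = (int16_t)(acc / n_sources);
    }
}
```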
Since you are using Go, check out offline-browser-communication. This removes the need for signaling and STUN/TURN; it uses mDNS and pre-generated certificates. It is also being discussed in the WICG Discourse, though I have no idea if/when it will land.
'Lag' is a pretty common problem when doing media over TCP: you are fighting lots of queues and congestion control. WebRTC (and RTP in general) is great at solving this. You have the following standardized tools to work with:
RTP packets carry a relative timestamp (see the parsing sketch after this list).
RTP Sender Reports carry a mapping from that relative timestamp to an NTP timestamp; use this for sync/timing.
RTP Receiver Reports give you packet loss and jitter; use these to assess your network health.
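Those per-packet fields live in the 12-byte fixed RTP header defined by RFC 3550, so they are cheap to read even without a full stack; a small parsing sketch in C:

```c
#include <stddef.h>
#include <stdint.h>

struct rtp_info {
    uint8_t  payload_type;
    uint16_t seq;        /* sequence number: detect loss and reordering */
    uint32_t timestamp;  /* relative media clock (e.g. 48 kHz ticks for Opus) */
    uint32_t ssrc;       /* identifies the sending source */
};

/* Extract the fixed-header fields from a raw RTP packet (RFC 3550). */
int parse_rtp_header(const uint8_t *pkt, size_t len, struct rtp_info *out)
{
    if (len < 12 || (pkt[0] >> 6) != 2)   /* need the fixed header, version 2 */
        return -1;
    out->payload_type = pkt[1] & 0x7F;
    out->seq       = (uint16_t)((pkt[2] << 8) | pkt[3]);
    out->timestamp = ((uint32_t)pkt[4] << 24) | ((uint32_t)pkt[5] << 16)
                   | ((uint32_t)pkt[6] << 8)  |  (uint32_t)pkt[7];
    out->ssrc      = ((uint32_t)pkt[8] << 24) | ((uint32_t)pkt[9] << 16)
                   | ((uint32_t)pkt[10] << 8) |  (uint32_t)pkt[11];
    return 0;
}
```

The NTP mapping itself arrives in the RTCP Sender Reports, which a library such as Pion's interceptor package (mentioned below) can generate and consume for you.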
Multicast is a fantastic suggestion as well. You avoid the complexity of signalling all those 1:1 connections and reduce the amount of bandwidth required. It does make security a little more delicate/roll-your-own, though.
With Pion, we decoupled all the RTP/RTCP handling into Pion Interceptor, so you don't have to use the full WebRTC stack to get the media-transport features mentioned above.

How to intercept J1939 CAN messages?

I'm building a HIL/SIL test with Simulink which tests the Vehicle Control Unit (VCU) of a vehicle. This VCU talks with a Power Distribution Module (PDM) over a J1939 CAN network. The PDM handles the inputs from switches and the outputs to actuators and puts the information on the CAN bus. The VCU then knows what the PDM is seeing from connected sensors. In turn, the VCU puts information on the CAN bus about how the PDM should control the connected actuators.
My laptop is hooked to the same CAN bus with a Vector adapter and Simulink.
To test the VCU, I need to mimic the PDM and send messages to the VCU as if I were the PDM. The VCU then has to take the correct actions and control the real PDM accordingly.
Obviously, if I just mimic the PDM, my messages will interfere with those sent from the real PDM. So basically, I need the PDM to shut up and only listen. I do the talking for the PDM. However, the PDM is not configurable in a listen-only mode, so I have to intercept all messages it sends so they never arrive at the VCU.
My idea was that I'd detect (by observing the arbitration field of all messages) when the PDM starts sending, and pull a bit dominant in the arbitration field. The PDM would see my 'message' winning arbitration over its own and stop transmitting, as if the CAN bus were always too busy to give the PDM room. This would silence the PDM without it throwing errors. But other suggestions are welcome.
So (how) is it possible to intercept J1939 CAN messages in MATLAB/Simulink, or with a separate CAN controller?
Here is an idea for how to realize what you are looking for; you need some extra hardware, however.
This is the rough outline:
Set up a CAN gateway device which has two independent CAN interfaces, can0 and can1.
Disconnect the PDM from the CAN bus and connect it to one of the interfaces of your CAN gateway, e.g. can0.
Connect the second interface of the CAN gateway, can1, to the original CAN bus, which also includes your laptop and the VCU.
Program your CAN gateway to forward all CAN frames incoming on can1 to the can0 interface.
As you want to ignore all messages from the PDM, simply drop the CAN frames coming in on interface can0 and do not forward them to can1.
Example of how to realize such a CAN gateway:
Hardware: use a Raspberry Pi and a CAN extension board with two CAN interfaces, such as the PiCAN2 Duo board.
Software: write a small program that forwards traffic between the interfaces can0 and can1 using SocketCAN, which is already included in the Linux kernel (a minimal sketch follows below).
In case your devices communicate via the higher-layer J1939 transport protocol, you might also need to get the J1939 transport protocol running on the Raspberry Pi. If you are simply using 29-bit identifiers with a maximum payload of 8 bytes of data, this should not be necessary.
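Here is a minimal sketch of that forwarding logic with SocketCAN in C (error handling omitted; the interface names match the outline above):

```c
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <poll.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static int open_can(const char *ifname)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr;
    memset(&addr, 0, sizeof addr);
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof addr);
    return s;
}

int main(void)
{
    int bus = open_can("can1");   /* VCU + laptop side */
    int pdm = open_can("can0");   /* PDM side */
    struct pollfd fds[2] = { { .fd = bus, .events = POLLIN },
                             { .fd = pdm, .events = POLLIN } };
    struct can_frame frame;

    for (;;) {
        poll(fds, 2, -1);
        if (fds[0].revents & POLLIN) {            /* bus -> PDM: forward */
            read(bus, &frame, sizeof frame);
            write(pdm, &frame, sizeof frame);
        }
        if (fds[1].revents & POLLIN)              /* PDM -> bus: read and drop */
            read(pdm, &frame, sizeof frame);
    }
}
```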
Alternatively, you could use a more expensive commercial solution, a dedicated CAN router for example.
Your original idea:
I think what you are envisioning is technically feasible, but might have some other drawbacks.
As the drivers of CAN controllers typically don't expose interfaces to interactively manipulate CAN frames while their transmission is still ongoing, you would have to drive a CAN transceiver directly from a microcontroller.
A few researchers realized a CAN denial-of-service attack by turning the first recessive bit after the arbitration ID into a dominant bit for certain selected CAN IDs. They used an Arduino Uno and a Microchip MCP2551 E/P CAN transceiver, and the code they used is available online. As this interactive manipulation of CAN frames during transmission is close to what you are looking for, it could be a good starting point.
Still, I see some drawbacks to silencing the PDM this way:
You will not only silence the PDM, but also (at least) delay the transmissions of other nodes on the CAN bus whose arbitration IDs have lower priority than the messages from the PDM.
It is very likely that the PDM will go into some error state when it is unable to successfully send its CAN frames to the bus after a certain number of retries.
Yet another idea:
If you are able to adapt the software of the VCU, change it so that it does not consume the CAN frames from the PDM but instead the CAN frames from your laptop, by using different CAN IDs for the same messages. You will have to change the DBC file for that purpose.

How to sync application state across multiple iPhones on the same network?

I am developing an iPhone application that basically lets you click through a series of actions. These series are predefined and synced with a common configuration server.
The app might be running on multiple devices at the same time. All devices are assumed to have the same series of actions defined on them. All devices are considered equal; there is no server with multiple clients or anything like that.
Only one of these devices is used by a person at any given time; it is, however, possible that the person switches to a different device at any moment. All "passive" devices need to stay synchronized with the active one, so that they display the same action.
The whole thing should happen as automatically as possible: no selecting devices, no configuration; all devices in the same network take part in the same series of actions.
One additional requirement is that a device could join during a presentation (a series of actions) and needs to jump to the currently active action.
Right now, I see two options to implement the networking/communication part of that:
Bonjour. I have implemented a working prototype that can automatically connect with one (1) other device in the network and communicate with it. I am not sure at this point how much additional work the "multiple devices" requirement will be. Would I have to open a set of connections for every device and manually send the sync events to all of them? Is there a better way, or does Bonjour provide anything to help me with that? What does Bonjour provide, given that I want to communicate with every device in the network anyway?
Multicast with AsyncUdpSocket. Simply define a port and send multicast sync events out to that port. I guess the main issue compared to using Bonjour with TCP would be that the delivery is not reliable and packets could be lost. However, this is a private, protected WLAN with low traffic, so I'm not sure that would really be an issue. Are there other disadvantages that I'm not seeing? Because this sounds like the relatively easy option at this point...
Which one would you suggest? Or is there another, better alternative that I'm not thinking of?
You should check out GameKit (built into iOS); it has a lot of the machinery you need in a convenient package. You can easily discover peers on the network and send data back and forth between clients (broadcast or peer-to-peer).
In my experience Bonjour is perfect for what you want. There's an excellent tutorial with associated source code, Chatty, that can easily be modified to suit your purposes.
I cobbled together a distributed message bus for the iPhone (no centralized server) that would work great for this. It should be noted that the UI guy made a mess of the code, so thar be dragons: https://code.google.com/p/iphonebusmiddleware/
The basic idea is to use Bonjour to form a network with leader election. The leader becomes the hub through which all the slaves subscribe to topics of interest. Any message sent to a given topic is then delivered to every node subscribed to that topic. A master disconnection simply means restarting the leader election process.