CANopen over EtherCAT (CoE) - canopen

CANopen is point-to-point communication, while EtherCAT is bus based. Point-to-point means there will be a node address, but this is redundant in EtherCAT. So I was wondering how these node address bytes are handled in CANopen over EtherCAT. I tried searching for information but couldn't find anything specific on this.
Also, I assume both cyclic and acyclic data of the CANopen device is sent only cyclically over EtherCAT, because it is a master-triggered cyclic transmission protocol. This basically means I cannot send asynchronous, event-triggered information at the moment the event occurs on EtherCAT (which is counter-intuitive compared to CAN's priority-based arbitration, because here everything gets the same priority). Please correct me if I am wrong about this. Also, please tell me how I can make a higher-priority byte arrive sooner than a lower-priority one (assuming both occurred at the same time and assuming there is bandwidth to send both at the start of a new frame).

CANopen provides the Process Data Object (PDO) and the Service Data Object (SDO). PDOs are sent cyclically over EtherCAT and SDOs are sent acyclically. Therefore, if you use an SDO, you can send asynchronous, event-triggered information at the moment the event occurs.
Additionally, CANopen is commonly used in servo control, and most servo controllers support both PDOs and SDOs.
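For illustration, here is a hedged sketch of the difference using the open-source SOEM EtherCAT master; the function names follow SOEM's CoE API as I understand it, and the CiA 402 object 0x6060 ("Modes of operation") is just an example target, not something taken from the question:

```cpp
// Sketch only: assumes the SOEM EtherCAT master library and an already
// initialized/configured network (ec_init(), ec_config_init(), ...).
#include "ethercat.h"

// Acyclic, event-triggered access goes through an SDO (the CoE mailbox);
// it can be issued at any time, independently of the cyclic process data.
static bool set_operation_mode(uint16 slave, int8 mode)
{
    int wkc = ec_SDOwrite(slave, 0x6060, 0x00, FALSE,
                          sizeof(mode), &mode, EC_TIMEOUTRXM);
    return wkc > 0;   // working counter > 0: the slave acknowledged the write
}

// Cyclic data (PDOs) is exchanged once per cycle via the mapped process image.
static void exchange_process_data(void)
{
    ec_send_processdata();
    ec_receive_processdata(EC_TIMEOUTRET);
}
```

Note that the slave parameter here is the EtherCAT slave address/position; the classic CAN identifier does not appear in the call.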

Related

Omnetpp application sends multiple streams

Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is something like 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in the omnetpp module's handleMessage method I have two sendTo calls, each scheduled "as soon as possible", i.e., with no delay; that corresponds to the idea of multiple parallel streams. How does omnetpp handle situations where it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, thereby really allowing only one packet to be sent per handleMessage call, or is that wrong? I want to optimize data transmission and play with packet sizes and maybe with sending intervals, so I want to know how omnetpp handles multiple streams at the same time, because if it actually buffers, maybe then it makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). Any call of the sendTo() method just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() call, they will be queued in that order, and each packet will be delivered to the requested destination module when its requested simulation time is reached. You can therefore send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method.

But beware! Even though the different packets are delivered one by one, sequentially, in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That is why OMNeT++, a single-threaded application that executes events sequentially, can still simulate any number of parallel running systems.
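As a minimal sketch of that point (not the asker's actual model; the module, gate, and packet names here are invented), a simple module can hand two packets to the kernel in one handleMessage() call, and both will leave at the same simulation time:

```cpp
#include <omnetpp.h>
using namespace omnetpp;

class SensorStreamer : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        // kick off the first "sampling instant" with a self-message
        scheduleAt(simTime(), new cMessage("tick"));
    }
    virtual void handleMessage(cMessage *msg) override {
        // two packets handed to the kernel in the same event: both leave at the
        // same simulation time, one after the other in code order
        send(new cPacket("cameraChunk", 0, 1400 * 8), "out");
        send(new cPacket("lidarChunk",  0, 1240 * 8), "out");

        // schedule the next sampling instant (the period is an arbitrary example)
        scheduleAt(simTime() + 0.01, msg);
    }
};

Define_Module(SensorStreamer);
```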
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model library created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be transmitted.
This is actually the core of the problem for Time-Sensitive Networking (TSN): given a lot of pre-defined data streams in a network, how the various packets interfere with and affect each other, how they change the delay and jitter statistics of the various streams at the destination, and how you can configure the source and network gate scheduling to achieve desired upper bounds on those statistics.
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need, but you still should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

Does HAL_SPI_Transmit() discard received data?

Suppose I have two STM boards with a full duplex SPI connection (one is master, one is slave), and suppose I use HAL_SPI_Transmit() and HAL_SPI_Receive() on each end for the communication.
Suppose further that I want the communication to consist of a series of single-byte command-and-response transactions: master sends command A, slave receives it and then sends response A; master sends command B, slave receives it and then sends response B, and so on.
When the master calls HAL_SPI_Transmit(), the nature of SPI means that while it clocks out the first byte over the MOSI line, it is simultaneously clocking in a byte over the MISO line. The master would then call HAL_SPI_Receive() to furnish clocks so the slave can transmit its response. My question: what is the result of the master's HAL_SPI_Receive() call? Is it the byte that was simultaneously clocked in during the master's transmit, or is it what the slave transmitted afterwards?
In other words, does the data that is implicitly clocked in during HAL_SPI_Transmit() get "discarded"? I'm thinking it must, because otherwise we should always use the HAL_SPI_TransmitReceive() call and ignore the received part.
(Likewise, when HAL_SPI_Receive() is called, what is clocked OUT, which will be seen on the other end?)
Addendum: Please don't say "Don't use HAL". I'm trying to understand how this works. I can move away from HAL later--for now, I'm a beginner and want to keep it simple. I fully recognize the shortcomings of HAL. Nonetheless, HAL exists and is commonly used.
Yes, if you only use HAL_SPI_Transmit() to send data, the data received on those same clock events gets discarded.
As an alternative, use HAL_SPI_TransmitReceive() to send and receive data on the same clock events. You need to provide two arrays: one containing the data to be sent, and the other to be populated with the bytes received on those same clock events.
E.g. if your STM32 SPI slave wishes to send data to a master that plans to clock out 4 bytes (the master sends 0xFF bytes to retrieve bytes from the slave), HAL_SPI_TransmitReceive() lets you put the data you wish to send in one array and receive all the clocked 0xFF bytes in the other.
I have never used HAL_SPI_Receive() on its own, but the microcontroller that calls it may clock out arbitrary data; only the clock signals need to be valid. If you use this function, the other microcontroller should assume that the data it receives during that phase must be ignored. You could also use a logic analyzer to trace the SPI data exchange between the two microcontrollers when using HAL_SPI_Transmit() and HAL_SPI_Receive().
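For completeness, here is a hedged sketch of the master side of the 4-byte exchange described above; hspi1, the device header, and the buffer sizes are assumptions, not part of the original question:

```cpp
#include "stm32f4xx_hal.h"   // adjust to your device family

extern SPI_HandleTypeDef hspi1;   // initialized elsewhere (e.g. by CubeMX)

void master_exchange_four_bytes(void)
{
    uint8_t tx[4] = {0xFF, 0xFF, 0xFF, 0xFF};   // dummy bytes, only to generate clocks
    uint8_t rx[4] = {0};                        // filled with the slave's reply

    // Full-duplex transfer: nothing is discarded, rx holds whatever the slave
    // shifted out on the very same clock edges that shifted tx out.
    if (HAL_SPI_TransmitReceive(&hspi1, tx, rx, sizeof(tx), HAL_MAX_DELAY) == HAL_OK) {
        // use rx[0..3] here
    }
}
```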

What is General Call Address and what is the purpose of it in I2C?

I wonder what the General Call Address (0x00) in I2C is. If we have a master and some slaves, can we communicate with these slaves through our master using this address?
Section 3.2.10 of the I2C specification v6 (https://www.i2c-bus.org/specification/) clearly describes the purpose of the general call:
3.2.10 General call address
The general call address is for addressing every device connected to the I2C-bus at the same time. However, if a device does not need any of the data supplied within the general call structure, it can ignore this address. If a device does require data from a general call address, it behaves as a slave-receiver. The master does not actually know how many devices are responsive to the general call. The second and following bytes are received by every slave-receiver capable of handling this data. A slave that cannot process one of these bytes must ignore it. The meaning of the general call address is always specified in the second byte (see Figure 30).
You can use it to communicate with your slaves, but three restrictions apply.
A general call can only write data to slaves, not read from them.
Every slave receives the general call; you cannot address a specific device with it unless you encode the device address in the general call message body and decode it in the slave.
There are standard general call message formats; you should not use the standard codes for your own functions.
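As an illustration (a hedged sketch using the STM32 HAL as an example master; hi2c1 and the device header are assumptions), a general call is just a write transaction to address 0x00 with the command in the second byte, e.g. the standard code 0x06 ("reset and write programmable part of slave address by hardware"):

```cpp
#include "stm32f4xx_hal.h"   // adjust to your device family

extern I2C_HandleTypeDef hi2c1;   // initialized elsewhere

HAL_StatusTypeDef send_general_call_reset(void)
{
    uint8_t cmd = 0x06;   // standard general call command byte (second byte on the bus)
    // DevAddress is already left-aligned for HAL; the general call address 0x00 stays 0x00
    return HAL_I2C_Master_Transmit(&hi2c1, 0x00, &cmd, 1, HAL_MAX_DELAY);
}
```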

how to set timer for physical process in Castalia?

The usual practice in Castalia is that the application module requests a sensor reading using the requestSensorReading() function, which is handled by the Sensor Manager. The Sensor Manager forwards the request to the physical process, and the physical process replies with its value.
What I want to do is have the physical process broadcast its value at set intervals of time. The sensor device will have a sensitivity > 0 and a few nodes will receive the value. How can I accomplish this? Is it possible to use the timerFiredCallback() function and BROADCAST_NETWORK_ADDRESS inside the physical process?
You seem to be confused about the basic models of Castalia. The physical process is not a sensor node that sends network broadcast messages; it is a module that models the physical process that the sensors in our sensor nodes are sampling. Moreover, a physical process does not have one value. Values change depending on space and time, and on the specific model you have defined (the manual has plenty of information on how to define physical processes). You could define a physical process that returns the same value for every point in space and every point in time, but I am not sure why you would want to use such a process in a simulation.
A physical process does not "broadcast its value". Sensor nodes sample the physical process, and based on space, time, and the specific model of the process they get a value back. Different sensor nodes might get different values back. To achieve what you want, you simply make all sensor nodes periodically sample the physical process. There are some examples of Applications that do that.
So to recap: you define how your physical process needs to behave, and then you make the sensor nodes sample it (from the Application module, using the requestSensorReading() method, as you already know).
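To make that concrete, here is a hedged sketch of an application module that samples the physical process periodically; it is based on Castalia's VirtualApplication API as I understand it (setTimer(), timerFiredCallback(), requestSensorReading(), handleSensorReading(), toNetworkLayer()), and the timer index, the 5 s period, and the packet helper are illustrative only:

```cpp
#include "VirtualApplication.h"

enum AppTimers { SAMPLE_TIMER = 1 };

class PeriodicSampler : public VirtualApplication
{
  private:
    unsigned int sampleSeq = 0;          // sequence number for outgoing packets

  protected:
    void startup() override {
        setTimer(SAMPLE_TIMER, 5.0);     // first sample after 5 s (arbitrary period)
    }

    void timerFiredCallback(int timerIndex) override {
        if (timerIndex == SAMPLE_TIMER) {
            requestSensorReading();      // ask the Sensor Manager, which queries the physical process
            setTimer(SAMPLE_TIMER, 5.0); // re-arm for the next sampling instant
        }
    }

    void handleSensorReading(SensorReadingMessage *reading) override {
        double value = reading->getSensedValue();
        // broadcast the sampled value to neighbouring nodes
        toNetworkLayer(createGenericDataPacket(value, sampleSeq++), BROADCAST_NETWORK_ADDRESS);
    }
};

Define_Module(PeriodicSampler);
```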

Simulink: Introduce delay with UDP Send/Receive

I'm building a client/server-type subsystem in a control system application using UDP Send/Receive blocks in Simulink. Data x is sent to the server via the UDP Send block and is then processed at the server, which returns output y.
Currently, both the client (a Simulink model) and the server (processing logic written in Java) reside on localhost. Therefore, the packet exchanges take essentially near-zero time. I'd like to introduce network delay so that the packet exchanges take a varying amount of time (say, due to changes in bandwidth availability), effectively simulating a scenario where the server node is located in a different geographical location.
Could someone guide me on how to achieve this? Thanks.
As a general (Simulink-independent) solution in a Windows environment, you should have a look at the following tool, which "makes your network condition significantly worse, but in a managed and interactive manner."
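If you prefer to build the delay yourself rather than use a third-party tool, another Simulink-independent option is a small UDP relay on localhost that holds each datagram for an artificial delay before forwarding it to the real server. The sketch below uses POSIX sockets (on Windows you would need the Winsock equivalents); the ports and the fixed 50 ms delay are placeholders, and since the delay is applied in the receive loop, back-to-back packets are also serialized behind it:

```cpp
// Minimal sketch of a delaying UDP relay (not the tool referenced above).
// The Simulink UDP Send/Receive blocks talk to RELAY_PORT instead of the real
// server; the relay holds every datagram for ARTIFICIAL_DELAY and forwards it:
// client -> server, and server replies -> back to the last known client.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdint>
#include <thread>
#include <vector>

int main() {
    const uint16_t RELAY_PORT  = 20001;   // placeholder: Simulink sends here
    const uint16_t SERVER_PORT = 20002;   // placeholder: real (Java) server port
    const auto ARTIFICIAL_DELAY = std::chrono::milliseconds(50);

    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in relayAddr{};
    relayAddr.sin_family = AF_INET;
    relayAddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    relayAddr.sin_port = htons(RELAY_PORT);
    bind(sock, reinterpret_cast<sockaddr*>(&relayAddr), sizeof(relayAddr));

    sockaddr_in serverAddr{};
    serverAddr.sin_family = AF_INET;
    serverAddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    serverAddr.sin_port = htons(SERVER_PORT);

    sockaddr_in clientAddr{};             // learned from the first client packet
    bool haveClient = false;

    std::vector<char> buf(2048);
    while (true) {
        sockaddr_in from{};
        socklen_t fromLen = sizeof(from);
        ssize_t n = recvfrom(sock, buf.data(), buf.size(), 0,
                             reinterpret_cast<sockaddr*>(&from), &fromLen);
        if (n <= 0)
            continue;

        std::this_thread::sleep_for(ARTIFICIAL_DELAY);   // the simulated network latency

        bool fromServer = (from.sin_port == serverAddr.sin_port);
        if (fromServer && haveClient) {
            sendto(sock, buf.data(), static_cast<size_t>(n), 0,
                   reinterpret_cast<sockaddr*>(&clientAddr), sizeof(clientAddr));
        } else if (!fromServer) {
            clientAddr = from;            // remember where to send server replies
            haveClient = true;
            sendto(sock, buf.data(), static_cast<size_t>(n), 0,
                   reinterpret_cast<sockaddr*>(&serverAddr), sizeof(serverAddr));
        }
    }
}
```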