Using socket communications for CAN bus access, if an SDO is awaiting a reply and the node in question sends a heartbeat before sending the SDO reply, I get a duplicate read of the SDO reply when it comes in. The duplicate read arrives after about 2 peeks, whereas regular SDO replies take 30 to 300 peeks before the data is ready.
Has anyone else seen this, or know whether a bug report is already pending on this?
Environment: Beaglebone black, kernel 4.19.94-ti-r42
CANBUS device: MAXON robotic arm
A CAN bus sniffer shows the Maxon sending the expected heartbeat and reply sequence, with no duplication.
This is a hard error - 100% reproducible.
Related
I am trying to use CANopenNode on an STM32L476 device, using libohiboard as the HAL library. In the network I have: (i) my board, which operates as the master, and (ii) a commercial node. At startup, the node sends a heartbeat message and a SYNC message. When my board calls
CO_NMT_sendCommand(CO->NMT,CO_NMT_ENTER_OPERATIONAL, 0x0A);
the master starts sending the same message continually, without stopping!
With a logic analyzer I see this:
Channel 0 is the TX pin of the microcontroller, and Channel 1 is the RX pin.
I can't understand why the message immediately comes back on the RX pin! I checked the microcontroller configuration and loopback mode is OFF.
Thanks
Looks like normal CAN operation - all messages are immediately echoed back while they are sent, or else bus arbitration wouldn't work. The only difference is the ACK bit, which you can see is set on the RX line but not on TX. This bit is filled in by the other CAN node on the bus.
The reason why your node keeps sending the same message doesn't seem related to this.
I don't know how it works on your controller, but usually you should send the NMT start command only when your slave node doesn't return any heartbeat, or when the heartbeat reports a state different from the one expected (pre-operational or operational, for example).
If the slave doesn't return anything, there might be multiple reasons:
1. Heartbeat production is not activated, so you first have to set the heartbeat producer time using the right SDO.
2. The slave uses node guarding instead of heartbeat, so you first have to query it with a remote (RTR) frame: message ID 0x700 + Node ID, DLC 0 (see the sketch below).
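To make the order of operations concrete, here is a minimal sketch of that logic on a Linux host using Python's built-in SocketCAN support. It only illustrates the bus traffic; CANopenNode wraps the same frames on the STM32 side. The interface name "can0" and the 2-second timeout are assumptions, and the node ID 0x0A is taken from the CO_NMT_sendCommand call above:

import socket
import struct

CAN_FRAME_FMT = "=IB3x8s"          # can_id, dlc, 3 pad bytes, 8 data bytes
CAN_RTR_FLAG = 0x40000000          # remote transmission request bit (linux/can.h)
NODE_ID = 0x0A                     # node ID from the question's NMT command
HB_COB_ID = 0x700 + NODE_ID        # heartbeat / node guarding COB-ID

s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(("can0",))
s.settimeout(2.0)                  # assumed wait time for the first heartbeat

def send_frame(can_id, data=b""):
    s.send(struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00")))

try:
    # Wait for a heartbeat frame from the slave (COB-ID 0x700 + node ID).
    while True:
        can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, s.recv(16))
        if can_id == HB_COB_ID:
            break
    if data[0] == 0x7F:            # slave reports pre-operational
        # NMT start: COB-ID 0x000, data = [command 0x01, target node ID]
        send_frame(0x000, bytes([0x01, NODE_ID]))
except socket.timeout:
    # No heartbeat at all: the slave may use node guarding instead,
    # so poll it with an RTR frame on 0x700 + node ID, DLC 0.
    send_frame(HB_COB_ID | CAN_RTR_FLAG)

If the node answers the RTR poll, it is alive but configured for node guarding rather than heartbeat.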
Please let me know if it is not clear or doesn't help
The above two pictures show my measurement and simulation setups, respectively. The Replay block plays a 6-minute BLF file containing a total of 2,413,161 CAN frames from two CAN channels.
The above picture explains the bench setup. CANoe reads the BLF file and transmits the CAN frames on two CAN channels. The microcontroller (MuC) receives the CAN frames, converts them into Ethernet IPv4 UDP packets, and transmits them back to CANoe.
When I run this configuration, I am getting the errors below.
1. System - CAN driver: Reception overrun - messages are lost
2. System CAN X : Message with ID = XXX could not be sent. Driver error 11 in TransmitCANFrame, "XL_ERR_QUEUE_IS_FULL"
3. System Warning: replay loading delay(s)
System ReplayBlock 1(blf_file.blf): 15 times, 7347.46 ms total
I assumed this was due to a CANoe performance issue or a CAN driver issue, so I did the steps below.
1. Modified the CANCaseXL Receive latency->Very fast under Vector hardware Config.
2. Increased the Transmit queue settings->32768 (maximum) under Vector hardware config -> Global settings.
3. Disabled all logging blocks except one (BLF), as you can see in the measurement setup.
But I still experience the same errors. What could be the problem? Are there any other ways to resolve this?
You need to have termination on both CAN channels (120 Ohm).
Such errors indicate a lack of termination.
I am using the modbus-tk library for a Modbus serial server. All the communication is up and working. There is one case where the master writes one register and the next request is a read, but modbus-tk merges the two requests, and hence I get a CRC error:
2019-01-31 17:19:59,881 DEBUG modbus._handle Thread-2 -->2-16-0-11-0-1-2-0-128-178-123-2-3-0-4-0-1-197-248
2019-01-31 17:19:59,881 ERROR modbus.handle_request Thread-2 invalid request: Invalid CRC in request
The actual request should be 2-16-0-11-0-1-2-0-128-178-123 and the other request is 2-3-0-4-0-1-197-248.
Any ideas why I am having this issue?
For the setup: the Modbus slave is connected via RS-232, and two slave instances are running on a single server.
You must make the reads and writes thread-safe. You can't do them from uncontrolled threads; you need to hold a lock while you read or write. I can't explain exactly why, but the last time I was working with Modbus I had a similar problem: Modbus simply can't handle access from multiple threads very well. A lock helped a lot, but the safest approach is still to do it single-threaded.
Idea:
import threading

lock = threading.Lock()

def read():
    with lock:
        # do the actual Modbus read here (e.g. the modbus-tk call)
        ...

def write():
    with lock:
        # do the actual Modbus write here
        ...
I have already read this question about socket synchronization, but I still don't get it.
Recently I was working on a relatively simple client/server app where the communication happens over a TCP socket. The client is written in PHP using the C-like functions (especially fsockopen and fgetc) PHP provides to interact with sockets; the server is written in node.js using a Stream for outputting data.
The protocol is quite simple: a message is just a string that ends with a 0-byte character.
Basically it works like this:
SERVER: Message 1
CLIENT: Ack 1
SERVER: Message 2
CLIENT: Ack 2
....
This worked fine, as my client processed one message at a time, reading char by char from the socket until a 0-byte was encountered, which designates the end of the message. Then the client writes back to the server that it has successfully received the message (that's the Ack <message id> part).
Now this happened:
SERVER: Message 1
CLIENT: Ack 1
SERVER: Message 2
CLIENT: Ack 2
SERVER: Message 3
Message 4
Message 5
Message 6
CLIENT: <DOH!>
....
Meaning the server unexpectedly sent multiple messages in one "batch" to the client, although every message is a single stream.write(...) operation on the server. It seemed like the messages were buffered somewhere and then sent to the client at once. My client code couldn't cope with multiple messages in the socket WITHOUT an Ack response in between, so it cut off the remaining messages after id 3.
So my question is:
How synchronized are sockets in their reads and writes? From the question above I understand that a socket is basically two uni-directional pipes, which would mean they are not synchronized at all?
How can it happen that some messages were sent to my client in a simple "one message, one ack" manner and then suddenly multiple messages are written to the stream?
Does it actually change the picture if the socket is opened in a blocking/non-blocking manner?
I tested this on a Ubuntu VM (so no load or anything that could provoke strange behaviour) using PHP 5.4 and node 0.6.x.
TCP is an abstraction of a bi-directional stream, and as such has no concept of messages and cannot preserve message boundaries. There is no guarantee how multiple send() or recv() calls will map to TCP packets. You should treat send() as if calling it multiple times is equivalent to calling it once with the concatenation of all the data. More importantly, when receiving, you should make sure that your code interprets the incoming data exactly the same way, no matter how it was split over individual recv() calls.
To receive properly, you can use a buffer where you store incomplete messages. But be careful that when you have an incomplete message in a buffer, the next recv() call may complete the current message, as well as provide zero or more complete messages, and possibly part of another incomplete message.
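For the 0-byte-terminated protocol in the question, such a receive buffer might look like this minimal Python sketch (the question's client is PHP, but the buffering logic is the same in any language):

def recv_messages(sock):
    """Yield complete 0-byte-terminated messages, however recv() chunks them."""
    buffer = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:                      # peer closed the connection
            return
        buffer += chunk
        # One recv() may complete the current message, contain more whole
        # messages, and end with the start of the next incomplete one.
        while b"\x00" in buffer:
            message, buffer = buffer.split(b"\x00", 1)
            yield message.decode()

Each iteration appends whatever arrived and then drains every complete message, leaving any trailing partial message in the buffer for the next recv().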
The blocking or non-blocking mode doesn't change anything here - it's only about the way your application interfaces with the OS.
There are two synchronization concepts to deal with:
The (generally) synchronous operation of send() or recv().
The asynchronous way that one process sends a message and the way the other process handles the message.
If you can, try to avoid a design that keeps a client and server in process-synchronized "lock step" with each other. That's asking for trouble. What if one of the processes closes unexpectedly? The other process/thread might hang on a recv() that will never come. It's one thing for your design to expect each message to be acknowledged eventually, but it's quite another for your design to expect that only one message can be sent, then it must be acknowledged, before you may send another.
Consider this:
Server: send 1
Client: ack 1
Server: send 2
Server: send 3
Client: ack 2
Server: send 4
Client: ack 3
Client: ack 4
A design that can accommodate this situation is better than one that expects:
Server: send 1
Client: ack 1
Server: send 2
Client: ack 2
Server: send 3
Client: ack 3
Server: send 4
Client: ack 4
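As a rough Python sketch of the more tolerant design, the sender can track which message IDs are still unacknowledged instead of blocking until each individual ack arrives (the ID-prefixed, 0-byte-terminated wire format here is an assumption for illustration only):

import itertools

class PipelinedSender:
    """Send messages without waiting for each ack; track what is outstanding."""
    def __init__(self, sock):
        self.sock = sock
        self.pending = {}                  # message id -> payload, awaiting ack
        self.ids = itertools.count(1)

    def send(self, payload: bytes) -> int:
        msg_id = next(self.ids)
        self.pending[msg_id] = payload
        # assumed wire format: "<id> <payload>\0"
        self.sock.sendall(b"%d %s\x00" % (msg_id, payload))
        return msg_id

    def handle_ack(self, msg_id: int) -> None:
        # Acks may arrive late and out of order; just clear the entry.
        self.pending.pop(msg_id, None)

The receiver acknowledges whenever it finishes processing a message, and the sender never stalls just because an ack has not arrived yet.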
A while back I had a question about why my socket sometimes received only 653 octets (for example) when I sent 1024 octets, and thanks to Rakis I understood: the OS allows reception to occur in arbitrarily sized chunks.
This time I need a confirmation :)
On any OS (well, GNU/Linux and Windows at least), in any language (I'm using Python here), if I send a packet of an arbitrary number of bytes, say X, which could be 2 bytes or 12000 bytes, when I write socket.send(X), am I absolutely guaranteed that X will be FULLY received (regardless of the chunks the receiving OS divides it into) on the other end of the socket BEFORE I do another socket.send(any string)?
Or in other words, if I have the code:
socket.send(X)
socket.send(Y)
Even if X > MTU, so that it has to be sent as multiple packets, does it wait until every packet is sent and acknowledged by the endpoint of the socket before sending Y? Writing that out makes me believe the answer is yes, it is guaranteed, and that this is exactly the purpose of putting a socket in blocking mode, but I want to be sure :D
Thanks in advance,
Nolhian
You are guaranteed that X will be received (at the application level) before Y, if it's a stream socket. If it's a datagram socket, no guarantees.
Depending on the networking implementation, it's possible that at a lower level, X will be sent, lost in transmission, then Y will be sent, then X will be re-sent because no acknowledgement was received.
Even in blocking mode, the socket.send(Y) can execute before X even makes it "onto the wire", because the OS will buffer network traffic.
No, you can't.
All you know is that the client will receive the data in order, assuming it does receive it all. There's no way of knowing (at the application level) whether the client has received all the data without having some sort of "ACK" in the application-level protocol.
am I absolutely guaranteed that X will be FULLY received (regardless of the chunks the receiving OS divides it into) on the other end of the socket BEFORE I do another socket.send(any string)?
No. In general, more data may be sent without waiting for the receiving side, within certain limits:
on the sending side, you will have a maximum amount of data you can enqueue for transmission until the client has acknowledged some receipt (but typically the client's OS will acknowledge and buffer quite a lot before it refuses further data until the application has processed some), after which the sending socket may start blocking. This limit:
forces the application design to consider how to enqueue and buffer excessive amounts of data, rather than having naively written applications utilise excessive amounts of Operating System-provided buffer memory
reduces retransmission rates when the receiving side is flooded with data too fast to process it
avoids sending huge amounts of data despite the network connection having been lost
So, strictly speaking and for large transmissions, the sender should be designed to handle sockets blocked from further sends (either knowing it is ok to block in the attempt (perhaps due to a dedicated sending thread) or waiting until it is possible to send more via non-blocking sockets or select/poll).
Whatever retransmission and buffering may be required, what you CAN be sure of is that the receiving side will have to read all of "X" before it starts being given the subsequently sent data "Y" (unless it specifically asks to have it otherwise, e.g. Out Of Band data).
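To illustrate the point above about handling a sender that gets blocked, one option is a non-blocking socket combined with select(); a minimal Python sketch, not a complete sender:

import select

def send_all_nonblocking(sock, data: bytes) -> None:
    """Push a large buffer through a non-blocking socket, waiting for writability."""
    sock.setblocking(False)
    view = memoryview(data)
    while view:
        # Block here (in select, not in send) until the OS send buffer has room.
        select.select([], [sock], [])
        try:
            sent = sock.send(view)
        except BlockingIOError:            # buffer filled up again in the meantime
            continue
        view = view[sent:]                 # drop what was accepted, keep the rest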
Depending on the type of socket you use, you can in some cases have a guarantee that the data will be received, but no feedback or confirmation of when it actually was.
Back to your question:
does it wait until every packet is sent and acknowledged by the endpoint of the socket before sending Y
So, you could say:
YES it does wait until it is sent, and
NO it does not wait for acknowledgment
A suggestion:
Since there are no auto-magic/built-in confirmations that your data was received, you could fairly easily implement your own logic for acknowledging that the package was received, which basically comes down to defining your own custom communication protocol.
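As an example, a very small application-level acknowledgement could look like the following Python sketch; the 0-byte terminator and the literal "ACK" reply are assumptions, i.e. parts of a protocol you would define yourself:

ACK = b"ACK\x00"

def send_with_ack(sock, payload: bytes, timeout: float = 5.0) -> None:
    """Send one message and block until the peer confirms it processed it."""
    sock.sendall(payload + b"\x00")        # 0-byte marks the end of the message
    sock.settimeout(timeout)
    reply = b""
    while not reply.endswith(b"\x00"):     # read until the reply is complete
        chunk = sock.recv(64)
        if not chunk:
            raise ConnectionError("peer closed the connection before acknowledging")
        reply += chunk
    if reply != ACK:
        raise ValueError("unexpected reply: %r" % reply)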