TComPort and Modbus-RTU?

Is it possible to read and send data with TComPort using the Modbus RTU protocol?
I have read the Modbus Wikipedia article (http://en.wikipedia.org/wiki/Modbus), but what does it mean that a frame starts and ends with 3.5 characters of idle time?
I use C++Builder 2009.

Of course it's possible.
In Modbus ASCII it is easy to determine the end of a message, since each byte sent over the line is transmitted as two bytes (its ASCII hexadecimal representation). In Modbus RTU each byte is transmitted as a single byte, so the receiver has to detect the end of a message some other way. Bytes are appended to the current message as long as the pause between them is shorter than 3.5 character times; once the pause exceeds 3.5 character times, the message has ended, and you can parse it, process it, and get ready for a new one. This idle time is measured in characters because that is the only constant: the time to transmit one character over 9600 baud is not the same as over 115200, and it also differs between 9600-8N1 and 9600-8E2, so you have to derive the timeout from the COM port settings.
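As an illustration, here is a minimal sketch (plain C; the function names are mine, not part of TComPort or any Modbus library) of deriving that timeout from the port settings:

#include <stdio.h>

/* Bits per character on the wire: 1 start bit + data bits + optional
   parity bit + stop bits. For 8N1 that is 1 + 8 + 0 + 1 = 10 bits. */
static double char_time_us(unsigned baud, unsigned data_bits,
                           unsigned parity_bits, unsigned stop_bits)
{
    unsigned bits_per_char = 1u + data_bits + parity_bits + stop_bits;
    return 1e6 * bits_per_char / baud;
}

/* Modbus-over-serial-line spec: 3.5 character times of silence end a
   frame; above 19200 baud a fixed 1750 us is recommended instead. */
static double frame_timeout_us(unsigned baud, unsigned data_bits,
                               unsigned parity_bits, unsigned stop_bits)
{
    if (baud > 19200)
        return 1750.0;
    return 3.5 * char_time_us(baud, data_bits, parity_bits, stop_bits);
}

int main(void)
{
    /* 9600-8N1: 10 bits/char, ~1042 us/char, so roughly a 3.6 ms gap */
    printf("%.0f us\n", frame_timeout_us(9600, 8, 0, 1));
    return 0;
}

In practice you would restart a timer of this length on every byte received and treat its expiry as end-of-frame.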

Yes, it's possible to send data over a COM port using the Modbus protocol.
There are various packages for that, such as RXTXcomm.jar and comm.jar (for Java), which provide functions to communicate with a slave device over a COM port.

Related

TCP/IP using Ada Sockets: How to correctly finish a packet? [duplicate]

This question already has answers here:
TCP Connection Seems to Receive Incomplete Data
I'm attempting to implement the Remote Frame Buffer protocol using Ada's Sockets library and I'm having trouble controlling the length of the packets that I'm sending.
I'm following the RFC 6143 specification (https://tools.ietf.org/pdf/rfc6143.pdf); see the comments in the code for section numbers.
-- Section 7.1.1
String'Write (Comms, Protocol_Version);
Put_Line ("Server version: '"
          & Protocol_Version (1 .. 11) & "'");
String'Read (Comms, Client_Version);
Put_Line ("Client version: '"
          & Client_Version (1 .. 11) & "'");

-- Section 7.1.2
-- Server sends security types
U8'Write (Comms, Number_Of_Security_Types);
U8'Write (Comms, Security_Type_None);
-- client replies by selecting a security type
U8'Read (Comms, Client_Requested_Security_Type);
Put_Line ("Client requested security type: "
          & Client_Requested_Security_Type'Image);

-- Section 7.1.3
U32'Write (Comms, Byte_Reverse (Security_Result));

-- Section 7.3.1
U8'Read (Comms, Client_Requested_Shared_Flag);
Put_Line ("Client requested shared flag: "
          & Client_Requested_Shared_Flag'Image);
Server_Init'Write (Comms, Server_Init_Rec);
The problem seems to be (according to wireshark) that my calls to the various 'Write procedures are causing bytes to queue up on the socket without getting sent.
Consequently two or more packets' worth of data are being sent as one and causing malformed packets. Sections 7.1.2 and 7.1.3 are being sent consecutively in one packet instead of being broken into two.
I had wrongly assumed that 'Reading from the socket would cause the outgoing data to be flushed out, but that does not appear to be the case.
How do I tell Ada's Sockets library "this packet is finished, send it right now"?
To emphasize user207421's comment (https://stackoverflow.com/users/207421/user207421):
I'm not a protocols guru, but from my own experience, the usage of TCP (see RFC 793) is often misunderstood.
The problem seems to be (according to wireshark) that my calls to the various 'Write procedures are causing bytes to queue up on the socket without getting sent.
Consequently two or more packets' worth of data are being sent as one and causing malformed packets. Sections 7.1.2 and 7.1.3 are being sent consecutively in one packet instead of being broken into two.
In short, TCP is not message-oriented.
Using TCP, sending/writing to the socket only appends data to the TCP stream. The stack is free to send it in one exchange or several, so if you have lengthy data to send and a message-oriented protocol to implement on top of TCP, you may need to handle message reconstruction yourself. Usually a special end-of-message sequence of characters is appended to the end of each message.
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module to transmit each segment to the destination TCP. The receiving TCP places the data from a segment into the receiving user's buffer and notifies the receiving user. The TCPs include control information in the segments which they use to ensure reliable ordered data transmission.
See also https://stackoverflow.com/a/11237634/7237062, quoting:
TCP is a stream-oriented connection, not message-oriented. It has no concept of a message. When you write out your serialized string, it only sees a meaningless sequence of bytes. TCP is free to break that stream up into multiple fragments and they will be received at the client in those fragment-sized chunks. It is up to you to reconstruct the entire message on the other end.
In your scenario, one would typically send a message length prefix. This way, the client first reads the length prefix so it can then know how large the incoming message is supposed to be.
or TCP Connection Seems to Receive Incomplete Data, quoting:
The recv function can receive as little as 1 byte, you may have to call it multiple times to get your entire payload. Because of this, you need to know how much data you're expecting. Although you can signal completion by closing the connection, that's not really a good idea.
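Putting those two quotes together, here is a sketch in C of a receive loop with a length prefix (the helper names recv_exact and recv_message are mine, purely illustrative; RFB itself derives message lengths from the message type rather than an explicit prefix):

#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly n bytes; recv() may return fewer, so loop. */
static int recv_exact(int fd, void *buf, size_t n)
{
    char *p = buf;
    while (n > 0) {
        ssize_t r = recv(fd, p, n, 0);
        if (r <= 0)
            return -1;       /* error or peer closed the connection */
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Read one length-prefixed message: 4-byte big-endian length, then body.
   Returns the body length, or -1 on error/overflow. */
static int recv_message(int fd, char *body, size_t cap)
{
    uint32_t len_be;
    if (recv_exact(fd, &len_be, sizeof len_be) < 0)
        return -1;
    uint32_t len = ntohl(len_be);
    if (len > cap)
        return -1;           /* message larger than caller's buffer */
    if (recv_exact(fd, body, len) < 0)
        return -1;
    return (int)len;
}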
Update:
I should also mention that the send function has the same conventions as recv: you have to call it in a loop because you cannot assume that it will send all your data. While it might always work in your development environment, that's the kind of assumption that will bite you later.
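And the matching send side, under the same convention that send() can make partial progress:

#include <sys/types.h>
#include <sys/socket.h>

/* send() may accept fewer bytes than requested; keep calling until
   the whole buffer has been handed to the kernel. */
static int send_all(int fd, const void *buf, size_t n)
{
    const char *p = buf;
    while (n > 0) {
        ssize_t w = send(fd, p, n, 0);
        if (w < 0)
            return -1;
        p += w;
        n -= (size_t)w;
    }
    return 0;
}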

Single channel gateway only detect first message

My gateway uses a Raspberry Pi and RFM95 configuration and operates at 915 MHz. I am using the single channel packet forwarder code by tftelkamp (https://github.com/tftelkamp/single_chan_pkt_fwd).
My gateway only detects the first message it receives and ignores all messages afterwards. It stays connected to the TTN server but does not receive any more messages.
Can anyone explain what might be the cause of this? Might it be because the RFM95 is sleeping, or because the code is no longer forwarding messages from the transceiver?
Thanks
I experienced a similar issue. Note that your sender uses different channels but starts with channel 0: that is the first successful message you receive, and your single channel receiver can only receive on channel 0. There is a workaround for this issue on the sender side, explained here.
This sounds like your transmitter sends the messages using frequency-hopping, while your receiver does not handle it correctly (or the other way around).
Definition of frequency-hopping found in chapter 4.1.1.8 of Semtech's SX1272 datasheet:
Frequency hopping spread spectrum (FHSS) is typically employed when the duration of a single packet could exceed regulatory requirements relating to the maximum permissible channel dwell time. This is most notably the case in US operation, where the 902 to 928 MHz ISM band makes provision for frequency hopping operation. [...]
If you're using the LMIC-Arduino library for your node, then yes: by default it transmits over a range of frequencies, while the single_chan_pkt_fwd gateway only receives on the frequency you specify in global_conf.json or the .cpp source (depending on your chosen library).
With the assumption that you're using the arduino-lmic library, make the changes/additions mentioned in the TTN forum post linked by Rainer, which covers the same issue I ran into.
Also, you'll find this further down the thread: in src > lmic > lmic.c, edit the following:
void LMIC_disableChannel (u1_t channel) {
    if( channel < 72+MAX_XCHANNELS )
        //LMIC.channelMap[channel>>4] &= ~(1<<(channel&0xF)); // comment this one
        LMIC.channelMap[channel/16] &= ~(1<<(channel&0xF));   // add this one
}
Then pick a frequency on channel 0 and set that for both node and packet forwarder. Here's a table snip from this page. I went with 902300000 and it's working fine.
"freq": 902300000,
"spread_factor": 7,

Sending data from STM32F401 MCU to ESP8266 and getting response from ESP8266 to MCU

I am working on an STM32F401 Nucleo board and an ESP8266 wifi module. I am using the Eclipse GCC ARM toolchain and CubeMX to generate code. I can transmit and receive data perfectly with USART/UART DMA.
Now I am stuck with the ESP8266. I cannot send data from the MCU to the ESP, and I am not getting a response from the ESP to the MCU. I have already tested the ESP module's communication: I can connect to the wifi with AT commands over USB and can also receive data on the web via a socket connection.
I configured USART1_TX/USART1_RX on PA9/PA10.
Thanks in advance.
I'm not an expert, but I'll try to help you.
Which baud rate are you using? Is it consistent with the ESP8266 documentation?
Check the power supply and the connections.
Also remember that AT commands are case sensitive (they must be written in capital letters only) and must be terminated with a carriage return and line feed, i.e. "\r\n".
That's right: first check that the baud rates match.
Then, do you use DMA for both the TX and RX directions?
For DMA RX, note that the "completion" callback is only called when the full buffer has been filled.
If you need to break reception on a terminating "\r\n", you might use the interrupt RX method one character at a time and inspect each byte as it arrives in the callback, which keeps asking for one more byte until the line is complete.
Alternatively, with DMA, keep polling the DMA count and analyze the current RX buffer for "\r\n"; abort/stop the DMA when done. A sketch of the interrupt-driven variant follows.
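A minimal sketch of that interrupt-driven approach, assuming the STM32 HAL as generated by CubeMX (huart1, the buffer size, and the variable names are illustrative assumptions):

#include "stm32f4xx_hal.h"

extern UART_HandleTypeDef huart1;      /* generated by CubeMX (assumption) */

static uint8_t rx_byte;                /* one-byte interrupt RX target */
static uint8_t rx_line[128];           /* accumulates one AT-command reply */
static volatile uint16_t rx_len;
static volatile uint8_t line_ready;    /* set when a full "\r\n" line arrived */

void start_rx(void)
{
    HAL_UART_Receive_IT(&huart1, &rx_byte, 1);   /* arm reception of the first byte */
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1) {
        if (rx_len < sizeof rx_line)
            rx_line[rx_len++] = rx_byte;
        /* ESP8266 AT replies are terminated by "\r\n" */
        if (rx_len >= 2 && rx_line[rx_len - 2] == '\r' && rx_line[rx_len - 1] == '\n') {
            line_ready = 1;            /* hand the complete line to the main loop */
            rx_len = 0;
        }
        HAL_UART_Receive_IT(&huart1, &rx_byte, 1);   /* re-arm for the next byte */
    }
}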

Packet Size ,Window Size and Socket Buffer In TCP

After studying the "window size" concept, what I understood is that it holds packets before they are sent over the wire, until the acknowledgement for the earliest packet arrives; once it fills up, subsequent packets are dropped. I have also read that TCP is a streaming protocol and that "packet" is a concept belonging to the IP protocol at the network layer.
What I had assumed until now is that I declare a buffer (in code), fill it with some data, and send that buffer using a socket. I declared a buffer of 10000 bytes and sent it repeatedly over a socket on a 10 Gbps link.
I have the following assumptions and questions. Please verify and help.
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in code and send it over the socket. Does each execution of the send() call transmit one packet of that size?
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in code?
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF? Google says it's buffer space for the socket. Is it the same as the TCP window size, or something different? Which parameter is more suitable to vary in order to increase throughput?
Also, there are three parameters for the socket buffer: min, default, and max. Which one should I vary in my experiment to get the most relevant results?
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in code and send it over the socket. Does each execution of the send() call transmit one packet of that size?
Only if you disable the Nagle algorithm and the size is less than the path MTU. You mustn't rely on this.
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in code?
No. Vary SO_RCVBUF at the receiver. This is the single biggest determinant of throughput, as it determines the maximum receive window.
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF?
Send buffer size at the sender, and receive buffer size at the receiver. In the kernel.
Is it the same as the TCP window size?
See above.
Or something different? Which parameter is more suitable to vary to increase throughput?
See above.
Also, there are three parameters for the socket buffer: min, default, and max. Which one should I vary in my experiment to get the most relevant results?
None of them. These are the system-wide parameters. Just play with SO_SNDBUF and SO_RCVBUF for the specific sockets in your application.
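For instance, a minimal sketch of setting the per-socket buffers (plain C sockets; the sizes below are illustrative, and on many stacks SO_RCVBUF should be set before connect()/listen() so that the receive window and window scaling can take the new size into account):

#include <sys/socket.h>

/* Set per-socket buffer sizes on socket fd; call before connect()/listen(). */
static void set_socket_buffers(int fd, int snd_bytes, int rcv_bytes)
{
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &snd_bytes, sizeof snd_bytes);
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcv_bytes, sizeof rcv_bytes);
}

/* Example: 1 MiB in each direction. */
/* set_socket_buffers(fd, 1 << 20, 1 << 20); */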
TCP does not directly expose a way to control how packets are sent, since it is a stream protocol. But you can make the TCP stack send packets out immediately by disabling the Nagle algorithm; that way all data you send will be sent out right away instead of being buffered. Data will still be split into packets of MTU size, around ~1400 bytes depending on the link.
To answer (2): disable Nagling and invoke send() with buffers of < 1400 bytes. Use Wireshark to make sure you got what you wanted; a sketch follows below.
The buffer settings have nothing to do with any of this. I know of no valid reason to touch them.
In general this question is probably moot, since you seem to want to send a lot of data. Just leave Nagling enabled and send big buffers (such as 64 KB).
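If you do go the small-packets route, here is a sketch of disabling Nagle (again plain C sockets):

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <sys/socket.h>

/* Disable the Nagle algorithm: each send() is pushed out immediately,
   still subject to MSS-sized segmentation and the congestion window. */
static void disable_nagle(int fd)
{
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}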
I did some experiments on Windows 10 with:
code from https://docs.python.org/3/library/socketserver.html#asynchronous-mixins,
RawCap for loopback capture,
Wireshark for viewing the results.
The primary client code is:
import socket

def client(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 100000)
    sock.connect((ip, port))
    sock.sendall(bytes(message, 'ascii'))
    response = str(sock.recv(1024), 'ascii')
    print("Received: {}".format(response))
Here is the result (the server port is 11111): as the capture shows, the TCP receive window size is the same as SO_RCVBUF. This may be platform dependent; you can verify it on other platforms.
The MSDN documentation at https://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx confirms this:
The SO_RCVBUF socket option determines the size of a socket's receive buffer that is used by the underlying transport.
Also, when I set SO_SNDBUF = 100000, it had no effect on the TCP transmission between client and server, since the server can simply discard data if the client sends too much at once.
So, if you want to tune SO_RCVBUF for maximum throughput, see http://packetbomb.com/understanding-throughput-and-tcp-windows/; the OS may also offer a function to detect the ideal send backlog (ISB).

Examine data at in callout driver for FWPM_LAYER_EGRESS_VSWITCH_TRANSPORT_V4 layer in WFP

I am writing a callout driver for Hyper-V 2012 in which I need to filter packets sent from virtual machines.
I added a filter at the FWPM_LAYER_EGRESS_VSWITCH_TRANSPORT_V4 layer in WFP. The callout function receives a packet buffer, which I cast to NET_BUFFER_LIST. I do the following to get the data pointer:
/* first NET_BUFFER in the list */
pNetBuffer = NET_BUFFER_LIST_FIRST_NB((NET_BUFFER_LIST*)pClassifyData->pPacket);
/* Storage argument is NULL (the 0), so this returns NULL whenever the
   requested range is not stored contiguously */
pContiguousData = NdisGetDataBuffer(pNetBuffer, NET_BUFFER_DATA_LENGTH(pNetBuffer), 0, 1, 0);
I have a simple client-server application to test the packet data. The client is on a VM and the server is another machine. As I observed, data sent from client to server is truncated and some garbage value is appended at the end. There is no issue sending messages from server to client, and if I don't add this layer filter, client-server works without any issue.
The callout function receives metadata which includes ipHeaderSize and transportHeaderSize. Both of these values are zero. Are these correct values, or should they be non-zero?
Can somebody help me extract the data from the packet in the callout function and forward it safely to further layers?
Thank you.
These are TCP packets. I looked into the size and offset information; the problem is consistent across packets.
I checked the values below in (NET_BUFFER_LIST*)pClassifyData->pPacket:
NET_BUFFER_LIST->NetBufferListHeader->NetBufferListData->FirstNetBuffer->NetBufferHeader->NetBufferData->CurrentMdl->MappedSystemVa
Only the first 24 bytes are sent correctly; the rest is garbage.
For example, the total size of the packet is 0x36 + 0x18 = 0x4E. I don't know what is in the first 0x36 bytes, which are constant for all the packets. Is it a TCP/IP header? The second part, 0x18 bytes, is the actual data which I sent.
I even tried the NdisQueryMdl() API to retrieve data from the MDL chain.
So on the receiver side I get only 24 bytes correct and the rest is garbage. How do I read the full buffer from a NET_BUFFER_LIST?
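Not an authoritative answer, but note that when Storage is NULL, NdisGetDataBuffer returns NULL if the requested range is not contiguous, and the first MDL's MappedSystemVa only covers that MDL's bytes, which could explain seeing only the start of each packet. A minimal sketch of the documented pattern with a caller-supplied storage buffer ('storage' is an assumption, allocated elsewhere with at least 'len' bytes):

/* Copy the full data range of the first NET_BUFFER, even when it spans
   several MDLs. */
NET_BUFFER *pNetBuffer =
    NET_BUFFER_LIST_FIRST_NB((NET_BUFFER_LIST *)pClassifyData->pPacket);
ULONG len = NET_BUFFER_DATA_LENGTH(pNetBuffer);

/* Contiguous data: NDIS returns a direct pointer. Non-contiguous data:
   NDIS copies 'len' bytes into 'storage' and returns 'storage'. */
UCHAR *pData = (UCHAR *)NdisGetDataBuffer(pNetBuffer, len, storage, 1, 0);
if (pData == NULL) {
    /* the requested number of bytes was not available */
}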