Python modbus-tk Modbus server is merging two requests and hence getting a CRC error

I am using the modbus-tk library for a Modbus serial server. All of the communication is up and working. However, there is one case where the master writes one register and the next request is a read, but modbus-tk merges the two requests and therefore reports a CRC error:
2019-01-31 17:19:59,881 DEBUG modbus._handle Thread-2 -->2-16-0-11-0-1-2-0-128-178-123-2-3-0-4-0-1-197-248
2019-01-31 17:19:59,881 ERROR modbus.handle_request Thread-2 invalid request: Invalid CRC in request
The actual request should be 2-16-0-11-0-1-2-0-128-178-123 and the other request is 2-3-0-4-0-1-197-248.
Any ideas why I am having this issue?
For the setup, the Modbus slave is connected via RS-232 serial, and two slave instances are running on a single server.

You must make the reads and writes thread-safe. If you read or write, you can't do it from uncontrolled threads: you need to take a lock around every read and write. I can't explain exactly why, but the last time I worked with Modbus I had a similar problem. Modbus simply can't handle multiple threads very well. A lock helped a lot, but the safest approach is still to keep it single-threaded.
Idea:
import threading

lock = threading.Lock()

def read():
    with lock:
        ...  # do the actual Modbus read here

def write():
    with lock:
        ...  # do the actual Modbus write here
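As a slightly more concrete sketch (entirely hypothetical: it assumes the requests are issued from several threads through a modbus-tk RTU master, and the port name, slave IDs, and register addresses are placeholders), the lock would wrap each request/response cycle:
import threading

import serial
import modbus_tk.defines as cst
from modbus_tk import modbus_rtu

lock = threading.Lock()
master = modbus_rtu.RtuMaster(serial.Serial(port="/dev/ttyUSB0", baudrate=9600))
master.set_timeout(1.0)

def write_register(slave_id, address, value):
    with lock:  # only one request/response cycle on the wire at a time
        return master.execute(slave_id, cst.WRITE_SINGLE_REGISTER, address, output_value=value)

def read_registers(slave_id, address, count):
    with lock:
        return master.execute(slave_id, cst.READ_HOLDING_REGISTERS, address, count)
With this in place, a write from one thread can no longer be interleaved on the serial line with a read from another.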

Related

Getting Cloud SQL errors and the DB got restarted automatically

These are the info logs
"2022-08-18T07:47:14.333850Z 32263817 [Note] [MY-010914] [Server] Aborted connection 32263817 to db: 'unconnected' user: 'xyz' host: '10.x.x.x' (Got an error reading communication packets)."
[Note] [MY-010914] [Server] Got packets out of order"
[Server] Got an error reading communication packets
I don't understand why I am getting this continuously in the Cloud SQL logs. Also, is this the reason why my DB crashed?
There are a lot of reasons that could lead to MySQL connection packet issues. According to this thread, MySQL Error Reading Communication Packets:
MySQL network communication code was written under the assumption that queries are always reasonably short, and therefore can be sent to and processed by the server in one chunk, which is called a packet in MySQL terminology. The server allocates the memory for a temporary buffer to store the packet, and it requests enough to fit it entirely. This architecture requires a precaution to avoid having the server run out of memory: a cap on the size of the packet, which this option accomplishes.
The code of interest in relation to this option is found in sql/net_serv.cc. Take a look at my_net_read(), then follow the call to my_real_read() and pay particular attention to net_realloc().
This variable also limits the length of a result of many string functions. See sql/field.cc and sql/item_strfunc.cc for details.
According to the MySQL documentation:
You can also get these errors if you send a query to the server that is incorrect or too large. You can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1 MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section C.5.2.10, “Packet too large”.
An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.
Additionally, you may have a look at link1 and link2.
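As an illustrative sketch only (pymysql and the table and column names below are assumptions, not something taken from the question or the logs), one way to keep each statement below the packet limit is to check max_allowed_packet and batch large inserts:
import pymysql

# Placeholders: host, credentials, database, and table are hypothetical.
conn = pymysql.connect(host="<cloud-sql-ip>", user="xyz", password="<password>", database="mydb")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    print(cur.fetchone())  # e.g. ('max_allowed_packet', '67108864')

    rows = [(i, "value-%d" % i) for i in range(100000)]
    batch_size = 1000  # keep every INSERT comfortably under the packet limit
    for start in range(0, len(rows), batch_size):
        cur.executemany(
            "INSERT INTO my_table (id, payload) VALUES (%s, %s)",
            rows[start:start + batch_size],
        )
conn.commit()
conn.close()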

TCP messages - Durable and fast solution tips?

We use Spring Integration for TCP socket communication with the hardware.
The client would be sending a sequence number to uniquely identify a message.
My requirement is to store these sequence numbers, which are part of the socket message, and validate that they are not repeated.
I went through IdempotentReceiver, and it sounds like what I wanted.
But I need a durable and fast mechanism to store them, so that they survive an unexpected shutdown of the service, while still using an in-memory cache for retrieving the latest sequence number.
Thank you in advance!
You can use a PropertiesPersistingMetadataStore for the idempotent receiver:
The PropertiesPersistingMetadataStore is backed by a properties file and a PropertiesPersister.
By default, it only persists the state when the application context is closed normally. It implements Flushable, so you can persist the state at will by invoking flush().
See more in its JavaDocs.

Haskell database connections

Please look at this scotty app (it's taken directly from this old answer from 2014):
import Web.Scotty
import Database.MongoDB
import qualified Data.Text.Lazy as T
import Control.Monad.IO.Class

runQuery :: Pipe -> Query -> IO [Document]
runQuery pipe query = access pipe master "nutrition" (find query >>= rest)

main = do
  pipe <- connect $ host "127.0.0.1"
  scotty 3000 $ do
    get "/" $ do
      res <- liftIO $ runQuery pipe (select [] "stock_foods")
      text $ T.pack $ show res
You see how the database connection (pipe) is created only once, when the web app launches. Subsequently, thousands if not millions of visitors will hit the "/" route simultaneously and read from the database using the same connection (pipe).
I have questions about how to properly use Database.MongoDB:
Is this the proper way of setting things up? As opposed to creating a database connection for every visit to "/". In this latter case, we could have millions of connections at once. Is that discouraged? What are the advantages and drawbacks of such an approach?
In the app above, what happens if the database connection is lost for some reason and needs to be created again? How would you recover from that?
What about authentication with the auth function? Should the auth function only be called once after creating the pipe, or should it be called on every hit to "/"?
Some say that I'm supposed to use a pool (Data.Pool). It looks like that would only help limit the number of visitors using the same database connection simultaneously. But why would I want to do that? Doesn't the MongoDB connection have a built-in support for simultaneous usages?
Even if you create a connection per client, you won't be able to create too many of them: you will hit the ulimit, and once you do, the client that hit it will get a runtime error.
The reason this doesn't make sense is that the MongoDB server would spend too much time polling all those connections, while it only has as many meaningful workers as your DB server has CPUs.
One connection is not a bad idea, because MongoDB is designed to send several requests and wait for the responses. So it will utilize as many resources as your MongoDB can offer, with only one limitation: you have only one pipe for writing, and if it closes accidentally you will need to recreate it yourself.
So it makes more sense to have a pool of connections. It doesn't need to be big: I had an app which authenticated users and gave them tokens, and with 2500 concurrent users per second it only had 3-4 concurrent connections to the database.
Here are the benefits a connection pool gives you:
If you hit the pool's connection limit, you wait for the next available connection instead of getting a runtime error. So your app will wait a little bit instead of rejecting the client.
The pool will recreate connections for you. You can configure the pool to close excess connections and to create more, up to a certain limit, as you need them. If your connection breaks while you read from it or write to it, you just take another connection from the pool; if you don't return the broken connection to the pool, the pool will create another one for you.
If the database connection is closed, then the MongoDB listener on this connection will exit, printing an error message on your terminal, and your app will receive an IO error. To handle this error you will need to create another connection and try again. Once you start handling this situation, you will see that it is easier to use a DB pool, because eventually your solution will resemble a connection pool very closely.
I do auth once, as part of opening a connection. If you need to auth another user later, you can always do it.
Yes, MongoDB handles simultaneous usage, but like I said it gives you only one pipe to write to, and it soon becomes a bottleneck. If you create at least as many connections as your MongoDB server can afford threads to handle them (the CPU count), then they will run at full speed.
If I missed something, feel free to ask for clarification.
Thank you for your question.
What you really want is a database connection pool. Take a look at the code from this other answer.
Instead of auth, you can use withMongoDBPool if your MongoDB server is in secure mode.
Is this the proper way of setting things up? As opposed to creating a database connection for every visit to "/". In this latter case, we could have millions of connections at once. Is that discouraged? What are the advantages and drawbacks of such an approach?
You do not want to open one connection and then use it. The HTTP server you are using, which underpins Scotty, is called Warp. Warp has a multi-core, multi-green-thread design. You are allowed to share the same connection across all threads, since Database.MongoDB says outright that connections are thread-safe, but what will happen is that when one thread is blocked waiting for a response (the MongoDB protocol follows a simple request-response design) all threads in your web service will block. This is unfortunate.
We can instead create a connection on every request. This trivially solves the problem of one thread blocking another, but leads to its own share of problems. The overhead of setting up a TCP connection, while not substantial, is also not zero. Recall that every time we want to open or close a socket we have to jump from user space to the kernel, wait for the kernel to update its internal data structures, and then jump back (a context switch). We also have to deal with the TCP handshake and goodbyes. And, under high load, we would run out of file descriptors or memory.
It would be nice if we had a solution somewhere in between. The solution should be
Thread-safe
Let us max-bound the number of connections so we don't exhaust the finite resources of the operating system
Quick
Share connections across threads under normal load
Create new connections as we experience increased load
Allow us to clean up resources (like closing a handle) as connections are deleted under reduced load
Hopefully already written and battle-tested by other production systems
It is exactly this problem that resource-pool tackles.
Some say that I'm supposed to use a pool (Data.Pool). It looks like that would only help limit the number of visitors using the same database connection simultaneously. But why would I want to do that? Doesn't the MongoDB connection have a built-in support for simultaneous usages?
It is unclear what you mean by simultaneous usages. There is one interpretation I can guess at: you mean something like HTTP/2, which has pipelining built into the protocol.
(Standard picture of pipelining: http://research.worksap.com/wp-content/uploads/2015/08/pipeline.png)
Above we see the client making multiple requests to the server, without waiting for a response, and then the client can receive responses back in some order. (Time flows from the top to the bottom.) This MongoDB does not have. This is a fairly complicated protocol design that is not that much better than just asking your clients to use connection pools. And MongoDB is not alone here: the simple request-and-response design is something that Postgres, MySQL, SQL Server, and most other databases have settled on.
And: it is true that a connection pool limits the load you can take as a web service before all threads are blocked and your user just sees a loading bar. But this problem would exist in any of the three scenarios (connection pooling, one shared connection, one connection per request)! The computer has finite resources, and at some point something will collapse under sufficient load. Connection pooling's advantage is that it scales gracefully right up until the point where it cannot. The correct solution to handling more traffic is to increase the number of computers; we should not avoid pooling simply because of this problem.
In the app above, what happens if the database connection is lost for some reason and needs to be created again? How would you recover from that?
I believe these kinds of what-ifs are outside the scope of Stack Overflow and deserve no better answer than "try it and see." Buuuuuuut given that the server terminates the connection, I can take a stab at what might happen: assuming Warp forks a green thread for each request (which I think it does), each thread will experience an unchecked IOException as it tries to write to the closed TCP connection. Warp would catch this exception and serve it as an HTTP 500, hopefully writing something useful to the logs as well. Assuming a single-connection model like you have now, you could do something clever (but high in lines of code) where you "reboot" your main function and set up a second connection. Or do what I do for hobby projects: should anything odd occur, like a dropped connection, have a supervisor process (like systemd) watch the logs and restart the web service. Though clearly not a great solution for a production, money-makin' website, it works well enough for small apps.
What about authentication with the auth function? Should the auth function only be called once after creating the pipe, or should it be called on every hit to "/"?
It should be called once after creating the connection. MongoDB authentication is per-connection. You can see an example here of how the db.auth() command mutates the MongoDB server's data structures corresponding to the current client connection.

Socket data read wait time

I have an application where I am listening on multiple sockets using select. If I start processing a request that came in from socket A, and in the meanwhile another request arrives on socket B, I want to know how long the socket B request had to wait before I could get to it. Since this is a single-threaded application, I cannot spawn a new thread, go back to select to keep monitoring, and instantly start processing the request from socket B.
Is there a C API available to get me this metric, or is this just not possible to get?
There is no straightforward way to measure the interval between the 'data ready' time and the 'data read' time, because no timestamp is stored together with the data. Moreover, the situation is even more complex because a stream-oriented socket may receive several data segments before the select call returns, and then it is not obvious which interval should be measured.
If the application's data processing takes longer than the packet processing in the kernel, you can do a reasonable measurement in the following way:
Print the current time and some unique data ID, based on the application protocol, when select wakes up due to data availability on socket B (a small sketch follows below).
Log every packet received for socket B. You can use a network traffic capture tool like Wireshark or tcpdump, or you can configure an iptables firewall rule (if it is running on Linux) with the target -j LOG.
Write a simple script/program that correlates the captured packets with the application log and subtracts the packet-arrival time from the start-of-processing time.
Of course, the idea above ignores the kernel processing time. If you really need exact timing, you have to introduce a new thread into your application.
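The question is about C, but here is a minimal Python sketch of the first step, purely as an illustration of the logging idea (the socket variables are placeholders): record a wall-clock timestamp plus something that identifies the message whenever select reports socket B readable, so it can later be correlated with the packet capture.
import select
import time

def event_loop(sock_a, sock_b):
    while True:
        readable, _, _ = select.select([sock_a, sock_b], [], [])
        for sock in readable:
            data = sock.recv(4096)
            if sock is sock_b:
                # timestamp plus a crude message id (first bytes of the payload)
                print("%.6f sock_b readable, id=%r" % (time.time(), data[:8]))
            # ... process the request here ...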

Golang tcp socket read gives EOF eventually

I have a problem reading from a socket. There is an Asterisk instance running with plenty of calls (10-60 per minute), and I'm trying to read and process the CDR events related to those calls (connected via AMI).
Here is the library I'm using (not mine, but I was pushed to fork it because of bugs): https://github.com/warik/gami
It's pretty straightforward; the main action happens in readDispatcher in gami.go:
buf := make([]byte, _READ_BUF) // read buffer
for {
    rc, err := (*a.conn).Read(buf)
So there is a TCPConn (a.conn) and a buffer of size 1024 into which I'm reading messages from the socket. So far so good, but eventually, from time to time (the interval varies from 10 minutes to 5 hours, independently of the amount of data coming through the socket), the Read operation fails with an io.EOF error. I tried to reconnect and re-login immediately, but that is also impossible: the connection times out, so I was forced to wait for about 40-60 seconds, and this delay is very crucial to me; I'm losing a lot of data because of it. I have been googling, reading sources, and trying a lot of things, with no luck. The strangest thing is that a simple socket opened in Python or PHP does not fail.
Is it possible that the problem is a lack of file descriptors to represent the socket, on my machine or on the Asterisk server?
Is it possible that the problem is in the Asterisk configuration (I have another Asterisk instance on which this problem does not reproduce, but it also handles far fewer calls)?
Is it possible that the problem is in the way I deal with the socket connection, or with Go in general?
go version go1.2.1 linux/amd64
asterisk 1.8
Update to the latest Asterisk. There was a bug like that when AMI sends a lot of data.
To check the issue, send an AMI command like "COMMAND sip show peers" (or any other command with long output) and look at the result.
OK, the problem was an OS socket buffer overflow. As it turned out, there was too much data to handle.
So, there are three possible ways to fix this:
increase the socket buffer size
somehow increase the speed of the process which reads data from the socket
lower the data volume or frequency
The thing is that gami by default reads all data from Asterisk, and I was reading all of it and filtering it only after the actual read operation. Given that the AMI listening application was running on a pretty weak PC, it simply could not read all the data before the buffer capacity was exceeded. But it is possible to receive only particular events, by sending the "Events" action to AMI and specifying the desired "EventMask".
So my decision was to do that, and to create different connections for different event types.
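The "Events" action is just plain text on the AMI socket, so the filtering itself is language-agnostic. As a rough protocol-level sketch (in Python for brevity; the host, port, and credentials below are made up), restricting a connection to CDR events looks roughly like this:
import socket

def send_action(sock, **fields):
    # Serialize an AMI action as "Key: value" lines ended by a blank line.
    msg = "".join("%s: %s\r\n" % (k, v) for k, v in fields.items()) + "\r\n"
    sock.sendall(msg.encode("ascii"))

with socket.create_connection(("asterisk-host", 5038)) as sock:
    sock.recv(1024)  # banner, e.g. "Asterisk Call Manager/1.x"
    send_action(sock, Action="Login", Username="amiuser", Secret="amipass")
    send_action(sock, Action="Events", EventMask="cdr")  # only CDR events on this connection
    # ... read and process the much smaller event stream here ...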