Modbus Server with umodbus - modbus

I am creating a Modbus server using the umodbus Python module.
Two clients connect to the server: one reads the registers and the other writes the same registers every 5 seconds. The problem is that the two clients cannot read and write at the same time.
I later figured out that I need to close the connection after every read and write from both clients. But even so, sometimes one of the clients is not able to connect and its connection flag shows False.
How can I handle this on the server side so that it runs stably, the first client can write the registers, and the other can read them?
from socketserver import TCPServer
from collections import defaultdict
import logging

from umodbus import conf
from umodbus.server.tcp import RequestHandler, get_server
from umodbus.utils import log_to_stream

# Send umodbus' log output to stderr (needs the logging import above).
log_to_stream(level=logging.DEBUG)

data_store = defaultdict(int)
conf.SIGNED_VALUES = True

TCPServer.allow_reuse_address = True
app = get_server(TCPServer, ('0.0.0.0', 502), RequestHandler)

data_store[10] = 0
data_store[11] = 0
data_store[20] = 0
data_store[21] = 0

@app.route(slave_ids=[1], function_codes=[3, 4], addresses=list(range(10, 15)))
def read_data_store_power(slave_id, function_code, address):
    """ Return value of address. """
    print("Read Power: " + str(address))
    return data_store[address]

@app.route(slave_ids=[1], function_codes=[6, 16], addresses=list(range(10, 15)))
def write_data_store_power(slave_id, function_code, address, value):
    """ Set value for address. """
    print("Write Power: " + str(address) + " Value: " + str(value))
    data_store[address] = value

@app.route(slave_ids=[1], function_codes=[3, 4], addresses=list(range(20, 25)))
def read_data_store_energy(slave_id, function_code, address):
    """ Return value of address. """
    print("Read Request for Energy no: " + str(address))
    return data_store[address]

@app.route(slave_ids=[1], function_codes=[6, 16], addresses=list(range(20, 25)))
def write_data_store_power_energy(slave_id, function_code, address, value):
    """ Set value for address. """
    print("Write Request for: " + str(address) + " and Value: " + str(value))
    data_store[address] = value

if __name__ == '__main__':
    try:
        app.serve_forever()
    finally:
        app.shutdown()
        app.server_close()
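One way to let both clients stay connected at the same time, instead of forcing each one to reconnect for every transaction, is to serve each connection in its own thread. The following is only a sketch, assuming umodbus accepts any socketserver server class: it swaps TCPServer for ThreadingTCPServer and keeps the route handlers above unchanged.

from socketserver import ThreadingTCPServer

from umodbus.server.tcp import RequestHandler, get_server

# Each incoming connection is handled in its own thread, so the reading client
# and the writing client can both hold their connections open.
ThreadingTCPServer.allow_reuse_address = True
ThreadingTCPServer.daemon_threads = True
app = get_server(ThreadingTCPServer, ('0.0.0.0', 502), RequestHandler)

# ... register the same @app.route handlers as above ...

if __name__ == '__main__':
    try:
        app.serve_forever()
    finally:
        app.shutdown()
        app.server_close()

Since the handlers then run concurrently, access to data_store should be guarded with a threading.Lock if both clients touch the same addresses.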

Related

Unable to capture MQTT log callback

I am having trouble getting the on_log callback to trigger. I have used it in other programs without problems, but this one is being difficult. I have included the relevant code snippets (I hope!). All other callbacks are working fine. This program isn't threaded (except MQTT.start), so there aren't any other actions. Any suggestions on where to look would be appreciated. FWIW, the problem I'm trying to track down is that MQTT stops responding after a few hours. The MQTT broker is on a separate server, is used by numerous other processes, and has no known issues.
# Set up MQTT - wait until we have an ipaddr so we know the network has been started
logger.debug("Waiting for ip address to be assigned")
while True:
    ipaddr = get_local_IP()
    if ipaddr is not None:
        logger.info('IP address is {}'.format(ipaddr))
        break
    sleep(2.0)

logger.debug("Waiting for MQTT broker connection")
mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_publish = on_publish
mqttc.on_subscribe = on_subscribe
while True:
    try:
        mqttc.connect("192.168.0.18", 1884, 30)
    except IOError as e:
        if e.errno != errno.ENETUNREACH:
            raise
        logger.warning('Network error - retrying')
        sleep(15)
        continue
    logger.debug('Connect initiated without error')
    break
mqttc.loop_start()
mqttc.on_log = on_log

while not MQ_link:
    sleep(1)

def on_connect(mqttc, obj, flags, rc):
    global MQ_link
    logger.debug("Connected: rc = " + str(rc))
    if rc == 0:
        MQ_link = True

def on_log(mqttc, obj, level, string):
    verb = string.split('(')[0].strip()
    if verb not in ['Sending PINGREQ', 'Received PINGRESP']:
        logger.debug('LOG: ' + string)
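One thing worth trying, sketched below under the assumption that paho-mqtt is in use: assign on_log (or call the client's enable_logger(), which forwards its internal log to a standard logging.Logger) before connect() and loop_start(), so the callback is already in place when the first packets are generated. The broker address is the one from the question.

import logging
import paho.mqtt.client as mqtt

logger = logging.getLogger(__name__)

def on_log(client, userdata, level, buf):
    logger.debug('LOG: ' + buf)

mqttc = mqtt.Client()
mqttc.on_log = on_log        # assigned before connect(), so the CONNECT traffic is logged too
mqttc.enable_logger(logger)  # alternative: route the client's internal log through `logger`
mqttc.connect("192.168.0.18", 1884, 30)
mqttc.loop_start()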

How can I run my own method in every pymodbus server transaction that is processed

I would like to run my own method in a pymodbus server whenever a message is processed. Is that possible?
Thanks
While running through the examples, I came across one where they subclass pymodbus.datastore.ModbusSparseDataBlock: https://pymodbus.readthedocs.io/en/latest/source/example/callback_server.html
The example probably implements more than you'd need; at a minimum you should just override:
__init__: by passing a dict of values, it provides the server with the legal address range a client can request
setValues: this is where the magic happens: here you can add your own callbacks to any incoming values for a given address.
My minimal example looks like this:
import logging

from pymodbus.datastore import (
    ModbusServerContext,
    ModbusSlaveContext,
    ModbusSparseDataBlock,
)
from pymodbus.server.sync import StartSerialServer
from pymodbus.transaction import ModbusRtuFramer

logger = logging.getLogger(__name__)

class CallbackDataBlock(ModbusSparseDataBlock):
    """callbacks on operation"""

    def __init__(self):
        super().__init__({k: k for k in range(60)})

    def setValues(self, address, value):
        logger.info(f"Got {value} for {address}")
        super().setValues(address, value)

def run_server():
    block = CallbackDataBlock()
    store = ModbusSlaveContext(di=block, co=block, hr=block, ir=block)
    context = ModbusServerContext(slaves=store, single=True)
    StartSerialServer(
        context,
        framer=ModbusRtuFramer,
        port="/dev/ttyNS0",
        timeout=0.005,
        baudrate=19200,
    )

if __name__ == "__main__":
    run_server()
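To see the callback fire, a client only needs to write one of the registers in the block. A quick sketch using the pymodbus 2.x synchronous serial client; the port name is a placeholder for whatever device the client side of the RTU link is attached to:

from pymodbus.client.sync import ModbusSerialClient

# Placeholder serial device for the client side of the RTU link.
client = ModbusSerialClient(method="rtu", port="/dev/ttyUSB0", baudrate=19200, timeout=1)
client.connect()

# Writing holding register 10 ends up in CallbackDataBlock.setValues() on the server.
client.write_register(10, 42, unit=1)

client.close()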

Local variable referenced before assignment in class? with python, discordpy

I'm having some trouble making a cog with the discord.py rewrite branch in Python.
I'm trying to make a command that starts a connection to a database using mysql.connector and creates a simple table. The problem is that when I define a cursor variable as stated in the official MySQL docs, I get an error:
"local variable 'cnx' referenced before assignment"
Now this is the code:
import discord
from discord.ext import commands
import json
import asyncio
import mysql.connector
from mysql.connector import errorcode
with open("config.json") as configfile:
    config = json.load(configfile)

class testcog:
    def __init__(self, client):
        self.client = client

    @commands.command()
    async def dbconnect(self, ctx):
        await ctx.message.author.send('I\'m connecting to the database, please be patient.')
        try:
            cnx = mysql.connector.connect(user=config['sqlconfig']['user'], password=config['sqlconfig']['password'],
                                          host=config['sqlconfig']['host'],
                                          database=config['sqlconfig']['database'])
        except mysql.connector.Error as err:
            if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
                print("Something is wrong with your user name or password")
            elif err.errno == errorcode.ER_BAD_DB_ERROR:
                print("Database does not exist")
            else:
                print(err)
        else:
            cnx.close()

        cursor = cnx.cursor()

        TABLES = {}
        TABLES['employee'] = (
            "CREATE TABLE `employee` ("
            " `emp_no` int(11) NOT NULL AUTO_INCREMENT,"
            " `birth_date` date NOT NULL,"
            " `first_name` varchar(14) NOT NULL,"
            " `last_name` varchar(16) NOT NULL,"
            " `gender` enum('M','F') NOT NULL,"
            " `hire_date` date NOT NULL,"
            " PRIMARY KEY (`emp_no`)"
            ") ENGINE=InnoDB")

        for table_name in TABLES:
            table_description = TABLES[table_name]
            try:
                print("Creating table {}: ".format(table_name), end='')
                cursor.execute(table_description)
            except mysql.connector.Error as err:
                if err.errno == errorcode.ER_TABLE_EXISTS_ERROR:
                    print("already exists.")
                else:
                    print(err.msg)
            else:
                print("OK")

        cursor.close()
        cnx.close()

def setup(client):
    client.add_cog(testcog(client))
The table and the code to create it were copied directly from the official docs.
The piece of code that gives me the error is cursor = cnx.cursor(), just before the TABLES dictionary is created.
I don't understand what I'm doing wrong; help is much appreciated.
I think I can provide some help for you!
When working in a cog file, you need to inherit commands.Cog in your main class. In addition to this, you should be opening and closing your JSON file asynchronously.
We use async with discord.py so that if multiple people use your commands, the bot won't get backed up (it lets the bot do multiple things at one time). There is an async library for MySQL, and there are async libraries for opening JSON files, so let's look into using them.
You can check out the aiomysql documentation here: https://aiomysql.readthedocs.io/en/latest/
Let's work on setting up your problem. In order to do this, we need to make sure our bot is set up for our DB. We set up something called a "pool", which holds reusable connections to the DB.
I'm going to show the file structure I have in this example:
main.py
/cogs
    testcog.py
# When creating our bot, we want to setup our db (database) connection, so we can reference it later
from discord.ext import commands
import discord
import aiomysql
import asyncio
import aiofiles, json
loop = asyncio.get_event_loop()
bot = commands.Bot(command_prefix = "!", intents=discord.Intents.all())
@bot.event
async def on_ready():
    config = json.loads(await (await aiofiles.open("/home/pi/Desktop/Experimental/prestagingapi.json")).read())
    bot.pool = await aiomysql.create_pool(host=config['sqlconfig']['host'], port=0000, user=config['sqlconfig']['user'],
                                          password=config['sqlconfig']['password'],
                                          db=config['sqlconfig']['database'], loop=loop)
    print("Bot is online!")

# We need to load our cogs and set up our db loop to reference it later
initial_extension = (
    "cogs.testcog",
)

for extension in initial_extension:
    bot.load_extension(extension)

bot.run("YOUR_TOKEN", reconnect=True)
Now we can work inside our cog to set everything up. I named this cog's file testcog.py, inside the cogs folder.
import discord
from discord.ext import commands
class testCog(commands.Cog):  # I defined that our class inherits the cog for discord
    def __init__(self, bot):
        self.bot = bot

    @commands.command()
    async def create_table(self, ctx):
        await ctx.author.send('I\'m connecting to the database, please be patient.')  # ctx.message.author is ctx.author
        # now you can create your db connection here:
        # looking at the aiomysql documentation, we can create a connection and execute what we need
        async with self.bot.pool.acquire() as conn:
            async with conn.cursor() as cur:
                # in order to execute something (creating a table, for example), we can do this,
                # e.g. with a trimmed version of the CREATE TABLE statement from the question:
                await cur.execute("CREATE TABLE IF NOT EXISTS `employee` ("
                                  "`emp_no` int(11) NOT NULL AUTO_INCREMENT,"
                                  " PRIMARY KEY (`emp_no`)) ENGINE=InnoDB")

def setup(bot):  # every cog needs a setup function
    bot.add_cog(testCog(bot))

error 9 Bad file descriptor error using sockets in python

I am trying to implement a very basic client-server program in Python using non-blocking sockets. I have made two threads for reading and writing.
My client code is below.
import sys
import socket
from time import sleep
from _thread import *
import threading
global s
def writeThread():
    while True:
        data = str(input('Please input the data you want to send to client 2 ( to end connection type end ) : '))
        data = bytes(data, 'utf8')
        print('You are trying to send : ', data)
        s.sendall(data)

def readThread():
    while True:
        try:
            msg = s.recv(4096)
        except socket.timeout as e:
            sleep(1)
            print('recv timed out, retry later')
            continue
        except socket.error as e:
            # Something else happened, handle error, exit, etc.
            print(e)
            sys.exit(1)
        else:
            if len(msg) == 0:
                print('orderly shutdown on server end')
                sys.exit(0)
            else:
                # got a message do something :)
                print('Message is : ', msg)

if __name__ == '__main__':
    global s
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('', 6188))
    s.settimeout(2)
    wThread = threading.Thread(None, writeThread)
    rThread = threading.Thread(None, readThread)
    wThread.start()
    rThread.start()
    s.close()
Question:
I know this can be implemented with the select module too, but I would like to know how to do it this way.
Your main thread creates the socket, then starts the read and write threads, and then closes the socket (and exits, because the program ends after that). So when the threads try to use the socket, it is no longer open, hence EBADF (Bad file descriptor).
Your main thread should not close the socket while the other threads are still running. It could wait for them to end:
[...]
s.settimeout(2)
wThread = threading.Thread(None,writeThread)
rThread = threading.Thread(None,readThread)
wThread.start()
rThread.start()
wThread.join()
rThread.join()
s.close()
However, since the main thread has nothing better to do than wait, it might be better to create only one additional thread (say rThread), then have the main thread take over the task currently being performed by the other. I.e.
[...]
s.settimeout(2)
rThread = threading.Thread(None,readThread)
rThread.start()
writeThread()

python's asyncio and sharing socket among worker processes

Is it possible to share a socket amongst several worker processes using Python's asyncio module?
Below is example code that starts a server listening on port 2000. When a connection is established and the client sends the string "S", the server starts sending data to the client. But all of this happens on only one CPU core. How could I rewrite this example to take advantage of all the CPU cores? I took a look at the asyncio subprocess module, but I'm not sure if I can use it to share the socket so that the server can accept connections from multiple worker processes in parallel.
import asyncio
import datetime
clients = []
class MyServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.peername = transport.get_extra_info("peername")
        print("connection_made: {}".format(self.peername))
        clients.append(self)

    @asyncio.coroutine
    def send_data_stream(self):
        while True:
            yield from asyncio.sleep(3)
            if self in clients:
                self.transport.write("{} {}\r\n".format('Endless stream of information', str(datetime.datetime.now())).encode())
                print("sent data to: {}".format(self.peername))
            else:
                break

    def data_received(self, data):
        print("data_received: {}".format(data.decode()))
        received = data.decode()
        if received == "S":
            asyncio.Task(self.send_data_stream())

    def connection_lost(self, ex):
        print("connection_lost: {}".format(self.peername))
        clients.remove(self)

if __name__ == '__main__':
    print("starting up..")
    loop = asyncio.get_event_loop()
    asyncio.set_event_loop(loop)
    coro = loop.create_server(MyServerProtocol, port=2000)
    server = loop.run_until_complete(coro)
    for socket in server.sockets:
        print("serving on {}".format(socket.getsockname()))
    loop.run_forever()
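The usual way to spread this across cores is to create and bind the listening socket once, then start worker processes that each run their own event loop and pass the already-bound socket to loop.create_server(sock=...). The following is a rough sketch under those assumptions (Unix fork start method, reusing the MyServerProtocol class from the question):

import asyncio
import multiprocessing
import socket

def worker(sock):
    # Each worker process runs its own event loop but serves the same listening socket.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    # MyServerProtocol is the protocol class defined in the question above.
    loop.run_until_complete(loop.create_server(MyServerProtocol, sock=sock))
    loop.run_forever()

if __name__ == '__main__':
    # Create and bind the listening socket once in the parent process.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', 2000))
    sock.listen(100)

    # Fork one worker per CPU core; each inherits and accepts on the shared socket.
    for _ in range(multiprocessing.cpu_count()):
        multiprocessing.Process(target=worker, args=(sock,)).start()

Note that module-level state such as the clients list is then per process, not shared, so each worker only sees the connections it accepted itself.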