Python's asyncio and sharing a socket among worker processes

Is it possible to share a socket amongst several worker processes using Python's asyncio module?
Below is example code that starts a server listening on port 2000. When a connection is established and the client sends the string "S", the server starts sending data to the client. But all of this happens on only one CPU core. How could I rewrite this example to take advantage of all the CPU cores? I took a look at the asyncio subprocess module, but I am not sure whether I can use it to share the socket so that the server can accept connections from multiple worker processes in parallel.
import asyncio
import datetime

clients = []

class MyServerProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.peername = transport.get_extra_info("peername")
        print("connection_made: {}".format(self.peername))
        clients.append(self)

    @asyncio.coroutine
    def send_data_stream(self):
        while True:
            yield from asyncio.sleep(3)
            if self in clients:
                self.transport.write("{} {}\r\n".format('Endless stream of information', str(datetime.datetime.now())).encode())
                print("sent data to: {}".format(self.peername))
            else:
                break

    def data_received(self, data):
        print("data_received: {}".format(data.decode()))
        received = data.decode()
        if received == "S":
            asyncio.Task(self.send_data_stream())

    def connection_lost(self, ex):
        print("connection_lost: {}".format(self.peername))
        clients.remove(self)

if __name__ == '__main__':
    print("starting up..")
    loop = asyncio.get_event_loop()
    asyncio.set_event_loop(loop)
    coro = loop.create_server(MyServerProtocol, port=2000)
    server = loop.run_until_complete(coro)
    for socket in server.sockets:
        print("serving on {}".format(socket.getsockname()))
    loop.run_forever()
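One common approach (a sketch, not taken from the question) is a pre-fork pattern: bind the listening socket once in the parent process, then let every worker process run its own event loop and hand the already-bound socket to create_server(sock=...). The sketch below reuses the MyServerProtocol class from the question, assumes a Unix platform where multiprocessing uses the fork start method (so the children inherit the descriptor), and picks an arbitrary backlog of 100.
import asyncio
import multiprocessing
import socket

def serve_on_shared_socket(shared_sock):
    # every worker runs its own event loop but accepts on the same socket
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    coro = loop.create_server(MyServerProtocol, sock=shared_sock)  # protocol class from the question
    loop.run_until_complete(coro)
    loop.run_forever()

if __name__ == '__main__':
    # bind once in the parent; with the fork start method the children
    # inherit the file descriptor
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', 2000))
    sock.listen(100)
    workers = [multiprocessing.Process(target=serve_on_shared_socket, args=(sock,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
On Linux, an alternative is to give each worker its own socket created with the SO_REUSEPORT option, in which case the kernel distributes incoming connections across the workers.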

Related

Modbus Server with umodbus

I am creating a Modbus server using the umodbus Python module.
Two clients connect to the server: one reads the registers and the other writes the same registers every 5 seconds. The problem is that the two clients are not able to read and write at the same time.
I later figured out that I need to close the connection after every read and write from both clients. But even so, one of the clients sometimes cannot connect and its connection flag shows False.
How can I handle this on the server side so that it runs stably, with the first client able to write the registers while the other reads them?
import logging
from socketserver import TCPServer
from collections import defaultdict
from umodbus import conf
from umodbus.server.tcp import RequestHandler, get_server
from umodbus.utils import log_to_stream

log_to_stream(level=logging.DEBUG)
data_store = defaultdict(int)
conf.SIGNED_VALUES = True

TCPServer.allow_reuse_address = True
app = get_server(TCPServer, ('0.0.0.0', 502), RequestHandler)

data_store[10] = 0
data_store[11] = 0
data_store[20] = 0
data_store[21] = 0

@app.route(slave_ids=[1], function_codes=[3, 4], addresses=list(range(10, 15)))
def read_data_store_power(slave_id, function_code, address):
    """Return value of address."""
    print("Read Power: " + str(address))
    return data_store[address]

@app.route(slave_ids=[1], function_codes=[6, 16], addresses=list(range(10, 15)))
def write_data_store_power(slave_id, function_code, address, value):
    """Set value for address."""
    print("Write Power: " + str(address) + " Value: " + str(value))
    data_store[address] = value

@app.route(slave_ids=[1], function_codes=[3, 4], addresses=list(range(20, 25)))
def read_data_store_energy(slave_id, function_code, address):
    """Return value of address."""
    print("Read Request for Energy no: " + str(address))
    return data_store[address]

@app.route(slave_ids=[1], function_codes=[6, 16], addresses=list(range(20, 25)))
def write_data_store_power_energy(slave_id, function_code, address, value):
    """Set value for address."""
    print("Write Request for: " + str(address) + " and Value: " + str(value))
    data_store[address] = value

if __name__ == '__main__':
    try:
        app.serve_forever()
    finally:
        app.shutdown()
        app.server_close()
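One way this is often handled (a sketch, not from the post) is to serve every Modbus client in its own thread and guard the shared register map with a lock, so neither client has to disconnect between polls. socketserver.ThreadingTCPServer is passed to umodbus's get_server in place of TCPServer here, and collapsing the four routes into one read route and one write route over addresses 10-24 is my own simplification.
import logging
import threading
from collections import defaultdict
from socketserver import ThreadingTCPServer

from umodbus import conf
from umodbus.server.tcp import RequestHandler, get_server
from umodbus.utils import log_to_stream

log_to_stream(level=logging.DEBUG)
conf.SIGNED_VALUES = True

ThreadingTCPServer.allow_reuse_address = True
ThreadingTCPServer.daemon_threads = True

app = get_server(ThreadingTCPServer, ('0.0.0.0', 502), RequestHandler)

data_store = defaultdict(int)
data_lock = threading.Lock()  # serializes access from the per-client threads

@app.route(slave_ids=[1], function_codes=[3, 4], addresses=list(range(10, 25)))
def read_data_store(slave_id, function_code, address):
    """Return the value stored at address."""
    with data_lock:
        return data_store[address]

@app.route(slave_ids=[1], function_codes=[6, 16], addresses=list(range(10, 25)))
def write_data_store(slave_id, function_code, address, value):
    """Store value at address."""
    with data_lock:
        data_store[address] = value

if __name__ == '__main__':
    try:
        app.serve_forever()
    finally:
        app.shutdown()
        app.server_close()
With one thread per client connection, the close-after-every-request workaround should no longer be needed.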

error 9 Bad file descriptor error using sockets in python

I am trying to implement a very basic client/server program in Python using non-blocking sockets. I have made two threads, one for reading and one for writing.
My client code is below.
import sys
import socket
from time import sleep
from _thread import *
import threading

global s

def writeThread():
    while True:
        data = str(input('Please input the data you want to send to client 2 ( to end connection type end ) : '))
        data = bytes(data, 'utf8')
        print('You are trying to send : ', data)
        s.sendall(data)

def readThread():
    while True:
        try:
            msg = s.recv(4096)
        except socket.timeout as e:
            sleep(1)
            print('recv timed out, retry later')
            continue
        except socket.error as e:
            # Something else happened, handle error, exit, etc.
            print(e)
            sys.exit(1)
        else:
            if len(msg) == 0:
                print('orderly shutdown on server end')
                sys.exit(0)
            else:
                # got a message do something :)
                print('Message is : ', msg)

if __name__ == '__main__':
    global s
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('', 6188))
    s.settimeout(2)
    wThread = threading.Thread(None, writeThread)
    rThread = threading.Thread(None, readThread)
    wThread.start()
    rThread.start()
    s.close()
Question:
I know this could also be implemented with the select module, but I would like to know how to do it this way.
Your main thread creates the socket, then creates wThread and rThread. Then it closes the socket (and exits, because the program ends after that), so by the time the two threads try to use it, it is no longer open. Hence EBADF (Bad file descriptor).
Your main thread should not close the socket while the other threads are still running. It could wait for them to end:
[...]
s.settimeout(2)
wThread = threading.Thread(None,writeThread)
rThread = threading.Thread(None,readThread)
wThread.start()
rThread.start()
wThread.join()
rThread.join()
s.close()
However, since the main thread has nothing better to do than wait, it might be better to create only one additional thread (say rThread), then have the main thread take over the task currently being performed by the other. I.e.
[...]
s.settimeout(2)
rThread = threading.Thread(None,readThread)
rThread.start()
writeThread()
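Putting that second suggestion together, the whole main block might look like the sketch below; as in the original, writeThread only returns when input() or sendall() raises, at which point the reader is joined and the socket is closed.
if __name__ == '__main__':
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('', 6188))
    s.settimeout(2)
    rThread = threading.Thread(None, readThread)
    rThread.start()
    writeThread()      # main thread does the writing itself
    rThread.join()     # wait for the reader to finish
    s.close()          # only close once nobody is using the socket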

Tornado PeriodicCallback and socket operations inside the callback

I am trying to make a non-blocking web-application which uses Tornado.
That application uses PeriodicCallback as a scheduler for grabbing the data from news sites:
for nc_uuid in self.LIVE_NEWSCOLLECTORS.keys():
    self.LIVE_NEWSCOLLECTORS[nc_uuid].agreggator, ioloop = args
    period = int(self.LIVE_NEWSCOLLECTORS[nc_uuid].period) * 60
    if self.timer is not None: period = int(self.timer)
    #self.scheduler.add_job(func=self.LIVE_NEWSCOLLECTORS[nc_uuid].getNews,args=[self.source,i],trigger='interval',seconds=10,id=nc_uuid)
    task = tornado.ioloop.PeriodicCallback(lambda: self.LIVE_NEWSCOLLECTORS[nc_uuid].getNews(self.source, i), 1000*10, ioloop)
    task.start()
'getData', which is called as the callback, makes an async HTTP request, parses the result, and sends data to a TCPServer for analysis by calling the method process_response:
@gen.coroutine
def process_response(self, *args, **kwargs):
    buf = {'sentence': str('text here')}
    data_string = json.dumps(buf)
    s.send(data_string)
    while True:
        try:
            data = s.recv(100000)
            if not data:
                print "connection closed"
                s.close()
                break
            else:
                print "Received %d bytes: '%s'" % (len(data), data)
                # s.close()
                break
        except socket.error, e:
            if e.args[0] == errno.EWOULDBLOCK:
                print 'error', errno.EWOULDBLOCK
                time.sleep(1)  # short delay, no tight loops
            else:
                print e
                break
    i += 1
Inside process_response I use the basic pattern for non-blocking socket operations.
process_response prints something like this:
error 10035
error 10035
Received 75 bytes: '{"mode": 1, "keyword": "\u0435\u0432\u0440\u043e", "sentence": "text here"}'
That looks like normal behavior. But while the data is being received, the main IOLoop is blocked! If I make a request to the web server, it won't return any data until the PeriodicCallback task finishes...
Where is my mistake?
time.sleep() is a blocking function and must never be used in non-blocking code. Use yield gen.sleep() instead.
Also consider using tornado.iostream.IOStream instead of raw socket operations.
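As a rough sketch of that second point, process_response could be written against tornado.iostream.IOStream so that nothing blocks the IOLoop; analyzer_host/analyzer_port are placeholders for wherever the TCP analyzer listens, and a single partial read stands in for the original retry loop.
import json
import socket

from tornado import gen
from tornado.iostream import IOStream

@gen.coroutine
def process_response(self, *args, **kwargs):
    stream = IOStream(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    yield stream.connect((analyzer_host, analyzer_port))   # placeholder address
    buf = {'sentence': 'text here'}
    yield stream.write(json.dumps(buf).encode('utf-8'))
    # yields to the IOLoop until up to 100000 bytes arrive
    data = yield stream.read_bytes(100000, partial=True)
    print("Received %d bytes: %r" % (len(data), data))
    stream.close()
If a pause is still needed somewhere in the coroutine, yield gen.sleep(1) hands control back to the IOLoop for a second instead of freezing it the way time.sleep(1) does.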

Scala Remote Actors stop client from terminating

I am writing a simple chat server, and I want to keep it as simple as possible. My server, listed below, only accepts connections and stores them in the clients set. Incoming messages are then broadcast to all clients on that server. The server works with no problem, but on the client side the RemoteActor keeps my program from terminating. Is there a way to remove the actor on my client without terminating the actor on the server?
I don't want to use a "one actor per client" model yet.
import actors.{Actor, OutputChannel}
import actors.remote.RemoteActor

object Server extends Actor {
  val clients = new collection.mutable.HashSet[OutputChannel[Any]]

  def act {
    loop {
      react {
        case 'Connect =>
          clients += sender
        case 'Disconnect =>
          clients -= sender
        case message: String =>
          for (client <- clients)
            client ! message
      }
    }
  }

  def main(args: Array[String]) {
    start
    RemoteActor.alive(9999)
    RemoteActor.register('server, this)
  }
}
My client would then look like this:
val server = RemoteActor.select(Node("localhost",9999),'server)
server.send('Connect,messageHandler) //answers will be redirected to the messageHandler
/*do something until quit*/
server ! 'Disconnect
I would suggest placing the client-side code into an actor itself, i.e. not calling alive/register in the main thread
(implied by http://www.scala-lang.org/api/current/scala/actors/remote/RemoteActor$.html).
Something like:
//body of your main:
val client = actor {
  alive(..)
  register(...)
  loop {
    receive {
      case 'QUIT => exit()
    }
  }
}
client.start
//then to quit:
client ! 'QUIT
Or similar (sorry, I am not using 2.8 so I might have messed something up - feel free to edit if you make it actually work for you!).

How should I handle blocking operations when using scala actors?

I started learning the scala actors framework about two days ago. To make the ideas concrete in my mind, I decided to implement a TCP based echo server that could handle multiple simultaneous connections.
Here is the code for the echo server (error handling not included):
class EchoServer extends Actor {
  private var connections = 0

  def act() {
    val serverSocket = new ServerSocket(6789)
    val echoServer = self
    actor { while (true) echoServer ! ("Connected", serverSocket.accept) }
    while (true) {
      receive {
        case ("Connected", connectionSocket: Socket) =>
          connections += 1
          (new ConnectionHandler(this, connectionSocket)).start
        case "Disconnected" =>
          connections -= 1
      }
    }
  }
}
Basically, the server is an Actor that handles the "Connected" and "Disconnected" messages. It delegates the connection listening to an anonymous actor that invokes the accept() method (a blocking operation) on the serverSocket. When a connection arrives it informs the server via the "Connected" message and passes it the socket to use for communication with the newly connected client. An instance of the ConnectionHandler class handles the actual communication with the client.
Here is the code for the connection handler (some error handling included):
class ConnectionHandler(server: EchoServer, connectionSocket: Socket)
    extends Actor {

  def act() {
    for (input <- getInputStream; output <- getOutputStream) {
      val handler = self
      actor {
        var continue = true
        while (continue) {
          try {
            val req = input.readLine
            if (req != null) handler ! ("Request", req)
            else continue = false
          } catch {
            case e: IOException => continue = false
          }
        }
        handler ! "Disconnected"
      }
      var connected = true
      while (connected) {
        receive {
          case ("Request", req: String) =>
            try {
              output.writeBytes(req + "\n")
            } catch {
              case e: IOException => connected = false
            }
          case "Disconnected" =>
            connected = false
        }
      }
    }
    close()
    server ! "Disconnected"
  }

  // code for getInputStream(), getOutputStream() and close() methods
}
The connection handler uses an anonymous actor that waits for requests to be sent to the socket by calling the readLine() method (a blocking operation) on the input stream of the socket. When a request is received a "Request" message is sent to the handler which then simply echoes the request back to the client. If the handler or the anonymous actor experiences problems with the underlying socket then the socket is closed and a "Disconnect" message is sent to the echo server indicating that the client has been disconnected from the server.
So, I can fire up the echo server and let it wait for connections. Then I can open a new terminal and connect to the server via telnet. I can send it requests and it responds correctly. Now, if I open another terminal and connect to the server the server registers the connection but fails to start the connection handler for this new connection. When I send it messages via any of the existing connections I get no immediate response. Here's the interesting part. When I terminate all but one of the existing client connections and leave client X open, then all the responses to the request I sent via client X are returned. I've done some tests and concluded that the act() method is not being called on subsequent client connections even though I call the start() method on creating the connection handler.
I suppose I'm handling the blocking operations incorrectly in my connection handler. Since a previous connection is handled by a connection handler that has an anonymous actor blocked waiting for a request I'm thinking that this blocked actor is preventing the other actors (connection handlers) from starting up.
How should I handle blocking operations when using scala actors?
Any help would be greatly appreciated.
From the scaladoc for scala.actors.Actor:
Note: care must be taken when invoking thread-blocking methods other than those provided by the Actor trait or its companion object (such as receive). Blocking the underlying thread inside an actor may lead to starvation of other actors. This also applies to actors hogging their thread for a long time between invoking receive/react.
If actors use blocking operations (for example, methods for blocking I/O), there are several options:
The run-time system can be configured to use a larger thread pool size (for example, by setting the actors.corePoolSize JVM property).
The scheduler method of the Actor trait can be overridden to return a ResizableThreadPoolScheduler, which resizes its thread pool to avoid starvation caused by actors that invoke arbitrary blocking methods.
The actors.enableForkJoin JVM property can be set to false, in which case a ResizableThreadPoolScheduler is used by default to execute actors.