Tornado PeriodicCallback and socket operations inside a callback

I am trying to make a non-blocking web application with Tornado.
The application uses PeriodicCallback as a scheduler for grabbing data from news sites:
for nc_uuid in self.LIVE_NEWSCOLLECTORS.keys():
    self.LIVE_NEWSCOLLECTORS[nc_uuid].agreggator, ioloop = args
    period = int(self.LIVE_NEWSCOLLECTORS[nc_uuid].period) * 60
    if self.timer is not None: period = int(self.timer)
    #self.scheduler.add_job(func=self.LIVE_NEWSCOLLECTORS[nc_uuid].getNews,args=[self.source,i],trigger='interval',seconds=10,id=nc_uuid)
    task = tornado.ioloop.PeriodicCallback(lambda: self.LIVE_NEWSCOLLECTORS[nc_uuid].getNews(self.source, i), 1000 * 10, ioloop)
    task.start()
getData, which is invoked as the callback, makes an async HTTP request, parses the result, and sends the data to a TCPServer for analysis by calling the method process_response:
#gen.coroutine
def process_response(self, *args, **kwargs):
    buf = {'sentence': str('text here')}
    data_string = json.dumps(buf)
    s.send(data_string)
    while True:
        try:
            data = s.recv(100000)
            if not data:
                print "connection closed"
                s.close()
                break
            else:
                print "Received %d bytes: '%s'" % (len(data), data)
                # s.close()
                break
        except socket.error, e:
            if e.args[0] == errno.EWOULDBLOCK:
                print 'error', errno.EWOULDBLOCK
                time.sleep(1)  # short delay, no tight loops
            else:
                print e
                break
    i += 1
Inside process_response I use the basic pattern for non-blocking socket operations.
process_response prints something like this:
error 10035
error 10035
Received 75 bytes: '{"mode": 1, "keyword": "\u0435\u0432\u0440\u043e", "sentence": "text here"}'
That looks like normal behavior. But while the data is being received, the main IOLoop gets blocked! If I query the web server, it won't return any data until the PeriodicCallback task finishes...
Where is my mistake?

time.sleep() is a blocking function and must never be used in non-blocking code. Use yield gen.sleep() instead.
Also consider using tornado.iostream.IOStream instead of raw socket operations.
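For illustration, here is a minimal sketch of what process_response could look like as a coroutine built on IOStream; the host, port, and the Tornado 4.x calls used here (gen.sleep, read_bytes with partial=True) are assumptions, not the asker's actual setup:

from tornado import gen
from tornado.iostream import IOStream
import json
import socket

@gen.coroutine
def process_response(self, *args, **kwargs):
    # IOStream registers the socket with the IOLoop, so reads and writes
    # suspend this coroutine instead of blocking the whole process
    stream = IOStream(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    yield stream.connect(('127.0.0.1', 8888))  # hypothetical analyzer address

    buf = {'sentence': 'text here'}
    yield stream.write(json.dumps(buf))

    # returns as soon as some data arrives, up to 100000 bytes
    data = yield stream.read_bytes(100000, partial=True)
    print "Received %d bytes: '%s'" % (len(data), data)

    # if a delay is ever needed inside a coroutine, yield it instead of sleeping
    yield gen.sleep(1)

    stream.close()

Because every potentially blocking step is yielded, the IOLoop stays free to serve web requests while this callback waits on the network.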

Related

JMeter - Force close a socket / wait until message received

I am opening a socket in JMeter (using Groovy in a JSR223 Sampler) and storing the message in a JMeter variable. This is the code:
SocketAddress inetSocketAddress = new InetSocketAddress(InetAddress.getByName("localhost"), 4801);
def server = new ServerSocket()
server.bind(inetSocketAddress)
while (!vars.get("caseId")) {
    server.accept { socket ->
        log.info('Someone is connected')
        socket.withStreams { input, output ->
            InputStreamReader isReader = new InputStreamReader(input);
            BufferedReader reader = new BufferedReader(isReader);
            StringBuffer sb = new StringBuffer();
            String str;
            while ((str = reader.readLine()) != null) {
                sb.append(str);
            }
            String finalStr = sb.toString()
            String caseId = finalStr.split("<caseId>")[1].split("</caseId>")[0]
            vars.put("caseId", caseId)
        }
        log.info("Connection processed")
    }
}
if (vars.get("caseId"))
{
    try
    {
        server.close();
        vars.put("socketClose", true);
    }
    catch (Exception e)
    {
        log.info("Error in closing the socket: " + e.getMessage());
    }
}
Now, there is some delay between the first loop iteration and the message arriving on the port. The message doesn't arrive immediately, so the while loop runs again. Then the message is received and caseId is set. The code goes on to close the socket, because caseId is set, and that throws the error, because the socket is still waiting for the message. So is there a way to wait until the socket has received all the messages, so I can close it properly?
Or just force-close the socket, so JMeter won't throw any exception?
Or, when I execute the next component, say an If Controller in JMeter, can it wait until the variable socketClose is set to true? That way, instead of while loops inside the JSR223 Sampler, I could use multiple If Controllers in the JMeter thread.
This is how the ServerSocket.close() function works:
public void close()
throws IOException
Closes this socket. Any thread currently blocked in accept() will throw a SocketException.
I don't think there is a way "to wait until the socket has received all the messages" because the Socket is dumb as a rock: it can either listen for connections or shut down.
Maybe you would be interested in the setSoTimeout() function?
Also this line:
vars.put("socketClose",true)
is very suspicious, I think you need to change it either to:
vars.put("socketClose", "true")
or to
vars.putObject("socketClose",true)
as the JMeterVariables.put() function can accept only a String; see the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more details.

Enqueue Liquidsoap request from script instead of command

I'm trying to write my very first liquidsoap program. It goes something like this:
sounds_path = "../var/sounds"
# Log file
set("log.file.path","var/log/liquidsoap.log")
set("harbor.bind_addr", "127.0.0.1")
set("harbor.timeout", 5)
set("harbor.verbose", true)
set("harbor.reverse_dns", false)
silence = blank()
queue = request.queue()
def play(~protocol, ~data, ~headers, uri) =
  request.push("#{sounds_path}#{uri}")
  http_response(protocol=protocol, code=20000)
end
harbor.http.register(port=8080, method="POST", "^/(?!\0)+", play)
stream = fallback(track_sensitive=false, [queue, silence])
...output.whatever...
And I was wondering if there is any way to push to the queue from the harbor callback.
Otherwise, how should I go about making requests originate from HTTP calls? I really want to avoid telnet. My final objective is to have an endpoint that I can call to make my stream play a file on demand and be silent the rest of the time.
Give this a go. It's Liquidsoap, so it's tricky to understand, but it should do the trick:
########### functions ##############
def playnow(source, ~action="override", ~protocol, ~data, ~headers, uri) =
  queue_count = list.length(server.execute("playnow.primary_queue"))
  arr = of_json(default=[("key","value")], data)
  track = arr["track"];
  log("adding playnow track '#{track}'")
  if queue_count != 0 and action == "override" then
    server.execute("playnow.insert 0 #{track}")
    source.skip(source)
    print("skipping playnow queue")
  else
    server.execute("playnow.push #{track}")
    print("no skip required")
  end
  http_response(
    protocol=protocol,
    code=200,
    headers=[("Content-Type","application/json; charset=utf-8")],
    data='{"status":"success", "track": "#{track}", "action": "#{action}"}'
  )
end
######## live stuff below #######
playlist = playlist(reload=1, reload_mode="watch", "/etc/liquidsoap/playlist.xspf")
requested = crossfade(request.equeue(id="playnow"))
live = fallback(track_sensitive=false, transitions=[crossfade, crossfade], [requested, playlist])
output.harbor(%mp3, id="live", mount="live_radio", radio)
harbor.http.register(port=MY_HARBOR_PORT, method="POST", "/playnow", playnow(live))
To use the above, you need to send a POST request with JSON data like so:
{"track":"http://mydomain/mysong.mp3"}
This also assumes you have the harbor running, which you should be able to set up using the Liquidsoap docs.
There are multiple methods of sending into the queue: telnet, an HTTP input, or a metadata request to playnow via the harbor. Let me know which one you opt for and I can provide you with a code example.

Error 9 (Bad file descriptor) using sockets in Python

I am trying to implement a very basic client-server program in Python using non-blocking sockets. I have made two threads, one for reading and one for writing.
My client code is below.
import sys
import socket
from time import sleep
from _thread import *
import threading
global s
def writeThread():
    while True:
        data = str(input('Please input the data you want to send to client 2 ( to end connection type end ) : '))
        data = bytes(data, 'utf8')
        print('You are trying to send : ', data)
        s.sendall(data)

def readThread():
    while True:
        try:
            msg = s.recv(4096)
        except socket.timeout as e:
            sleep(1)
            print('recv timed out, retry later')
            continue
        except socket.error as e:
            # Something else happened, handle error, exit, etc.
            print(e)
            sys.exit(1)
        else:
            if len(msg) == 0:
                print('orderly shutdown on server end')
                sys.exit(0)
            else:
                # got a message do something :)
                print('Message is : ', msg)

if __name__ == '__main__':
    global s
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('', 6188))
    s.settimeout(2)
    wThread = threading.Thread(None, writeThread)
    rThread = threading.Thread(None, readThread)
    wThread.start()
    rThread.start()
    s.close()
Question:
I know this can be implemented with the select module too, but I would like to know how to do it this way.
Your main thread creates the socket, then creates thread1 and thread2. Then it closes the socket (and exits, because the program ends after that), so when thread1 and thread2 try to use it, it's no longer open. Hence EBADF (Bad file descriptor).
Your main thread should not close the socket while the other threads are still running. It could wait for them to end:
[...]
s.settimeout(2)
wThread = threading.Thread(None,writeThread)
rThread = threading.Thread(None,readThread)
wThread.start()
rThread.start()
wThread.join()
rThread.join()
s.close()
However, since the main thread has nothing better to do than wait, it might be better to create only one additional thread (say rThread), then have the main thread take over the task currently being performed by the other. I.e.
[...]
s.settimeout(2)
rThread = threading.Thread(None,readThread)
rThread.start()
writeThread()

Rust persistent TcpStream

I seem to be struggling with std::io::TcpStream. I'm actually trying to open a TCP connection to another system, but the code below reproduces the problem exactly.
I have a TCP server that simply writes "Hello World" to the TcpStream upon opening and then loops to keep the connection open.
fn main() {
    let listener = io::TcpListener::bind("127.0.0.1", 8080);
    let mut acceptor = listener.listen();
    for stream in acceptor.incoming() {
        match stream {
            Err(_) => { /* connection failed */ }
            Ok(stream) => spawn(proc() {
                handle(stream);
            })
        }
    }
    drop(acceptor);
}

fn handle(mut stream: io::TcpStream) {
    stream.write(b"Hello Connection");
    loop {}
}
All the client does is attempt to read a single byte from the connection and print it.
fn main() {
    let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();
    loop {
        match socket.read_byte() {
            Ok(i) => print!("{}", i),
            Err(e) => {
                println!("Error: {}", e);
                break
            }
        }
    }
}
Now the problem is my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want; I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with: I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O right now. There are some attempts to rectify the situation, but they are far from complete. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks that are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there is still a lot to do, and as far as I'm aware there's nothing really usable now.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];
loop {
    socket.set_read_timeout(Some(5000));
    match socket.recv_from(buf) {
        Ok((amt, src)) => { /* handle successful read */ }
        Err(ref e) if e.kind == TimedOut => {}  // continue
        Err(e) => fail!("error receiving data: {}", e)  // bail out
    }
    // do other work, check exit flags, for example
}
Here recv_from will return an IoError with kind set to TimedOut if no data becomes available on the socket within 5 seconds of the recv_from call. You need to reset the timeout at the start of each loop iteration, since it is more like a "deadline" than a timeout: when it expires, all subsequent calls will start failing with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it does its work.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It probably can be a good temporary (or even permanent, who knows) solution for asynchronous I/O.

Reading Data from a Socket with WSAAsyncSelect

Is it OK to invoke WSAAsyncSelect in the WM_CREATE message of a window procedure (WndProc), and then perform all recv actions inside the same WndProc (e.g. to recv and populate a control with the received byte data) under WM_SOCKET?
For example, I know that performing long tasks inside the WndProc can cause the window to become unresponsive (since it cannot handle other messages until this message is completed), but I've seen no examples that handle this recv I/O with a thread or event object. Is it completely unnecessary?
Here's the example case in the WndProc I've seen on the net; Petzold also handles the recv in a similar fashion:
case WM_SOCKET:
{
    if (WSAGETSELECTERROR(lParam))
    {
        MessageBox(hWnd,
                   "Connection to server failed",
                   "Error",
                   MB_OK | MB_ICONERROR);
        SendMessage(hWnd, WM_DESTROY, NULL, NULL);
        break;
    }
    switch (WSAGETSELECTEVENT(lParam))
    {
        case FD_READ:
        {
            char szIncoming[1024];
            ZeroMemory(szIncoming, sizeof(szIncoming));
            int inDataLength = recv(Socket,
                                    (char*)szIncoming,
                                    sizeof(szIncoming) / sizeof(szIncoming[0]),
                                    0);
            strncat(szHistory, szIncoming, inDataLength);
            strcat(szHistory, "\r\n");
            SendMessage(hEditIn,
                        WM_SETTEXT,
                        sizeof(szIncoming) - 1,
                        reinterpret_cast<LPARAM>(&szHistory));
        }
        break;
        case FD_CLOSE:
        {
            MessageBox(hWnd,
                       "Server closed connection",
                       "Connection closed!",
                       MB_ICONINFORMATION | MB_OK);
            closesocket(Socket);
            SendMessage(hWnd, WM_DESTROY, NULL, NULL);
        }
        break;
    }
}
Yes, this is perfectly acceptable, though typically you would wait until CreateWindow/Ex() returns before calling WSAAsyncSelect(). Either way works fine. Just be sure to handle the case where recv() fails or returns fewer bytes than you asked for.