Sending messages to handler function fibers in TCP server - sockets

If the handler function passed to tcp_server() from the socket module runs as a fiber, is it possible to communicate with each tcp_connection via fiber.channel?

Yes, it is.
#!/usr/bin/tarantool
local fiber = require('fiber')
local socket = require('socket')
local clients = {}
function rc_handle(s)
    -- You can save the socket reference in some table
    clients[s] = true
    -- You can create a channel.
    -- I recommend attaching it to the socket,
    -- so it'll be easier to collect garbage.
    s.channel = fiber.channel()
    -- You can also get a reference to the handling fiber.
    -- It'll help you tell alive clients from dead ones.
    s.fiber = fiber.self()
    s:write(string.format('Message for %s:%s: %s',
        s:peer().host, s:peer().port, s.channel:get()
    ))
    -- Don't forget to unref the client manually when it's done,
    -- or you could make the clients table a weak table.
    clients[s] = nil
end
server = socket.tcp_server('127.0.0.1', 3003, {
    name = 'srv',
    handler = rc_handle,
})
function greet_all(msg)
    -- So you can broadcast your message to all the clients
    for s, _ in pairs(clients) do
        s.channel:put(msg)
    end
end
require('console').start()
Of course, this snippet is far from perfect, but I hope it'll help you get the work done.
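As an aside, here is a minimal sketch of the weak-table variant mentioned in the comments above (illustrative only, not part of the original snippet): with weak keys, a clients entry disappears on garbage collection once nothing else references the socket, so forgetting the manual clients[s] = nil at the end of the handler is less costly.
-- Weak-keyed client registry: entries vanish automatically when the socket
-- object is no longer referenced (e.g. after the handler fiber returns).
local clients = setmetatable({}, {__mode = 'k'})

function rc_handle(s)
    clients[s] = true
    -- ... same handler body as above ...
end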

Related

Assign session specific data to a socket

I'm writing a server application in D, which should be able to manage n connections simultaneously.
To achieve this I am using std.socket.Socket.select. This works fine. But I can't bind session-specific data to the socket, and I don't see any way to do this, because Socket does not allow saving a handle to user-specific data. After
Socket.select(socketSet, null, null);
I'm able to get all affected sockets, but I can't map these sockets to my user-specific session data. What's my mistake? Is it possible to reach my goal this way? Or should I choose another approach for my requirements?
My relevant code:
ushort port = 5010;
stoprequest = false;
auto listener = new TcpSocket();
assert(listener.isAlive);
listener.blocking = false;
listener.bind(new InternetAddress(port));
listener.listen(10);
enum MAX_CONNECTIONS = 100;
auto socketSet = new SocketSet(MAX_CONNECTIONS + 1);
Socket[] reads;
Session[] sessions;
while (true)
{
    socketSet.add(listener);
    foreach (session; sessions)
        socketSet.add(session.socket);
    Socket.select(socketSet, null, null);
    for (size_t i = 0; i < reads.length; i++)
    {
        if (socketSet.isSet(reads[i]))
        {
            // Now I should access session-related data, but how?
            char[1024] buf;
            auto datLength = reads[i].receive(buf[]);
            if (datLength == Socket.ERROR)
                writeln("Connection error.");
            else if (datLength != 0)
            {
                writefln("Received %d bytes from %s: \"%s\"", datLength, reads[i].remoteAddress().toString(), buf[0..datLength]);
                continue;
            }
            else { /* Error handling. Shortened, since unimportant for the example. */ }
            reads[i].close();
            reads = reads.remove(i);
            i--;
        }
    }
    if (socketSet.isSet(listener))
    {
        Socket sn = null;
        sn = listener.accept();
        if (reads.length < MAX_CONNECTIONS)
        {
            Session session = new Session();
            session.socket = sn;
            sessions ~= session;
        }
        else { /* Error handling for too many connections. Shortened, since unimportant for the example. */ }
    }
    socketSet.reset();
}
The hint to use poll() was helpful. After reading https://daniel.haxx.se/docs/poll-vs-select.html, I think that both variants work and neither of them is the real thing. To be really efficient, I would be better off using libev. Fortunately, efficiency is not my problem in this particular project. For this reason I will use select(), because I found out that accessing handle gives me a unique number which can be passed to my own lookup table. This allows me to assign session data to a socket. So I prefer to stick with the encapsulated functionality of std.socket.Socket and not work around it.
My concrete question can therefore be answered with:
Use Socket.handle to identify the socket and manage session-related data
A few other alternatives you can consider:
1) Use a subclass of Socket. You can make your own class that inherits from it and adds more stuff.
2) The poll function is available via import core.sys.posix.poll;, and you can pass socket.handle to it as well. But note that it will not work on Windows without modification.
3) Or indeed, do your own lookup table; that works too.
Note that std.socket.Socket is a very thin wrapper around the BSD socket API; internally it just conveniently handles the slight differences between Windows and POSIX. It is still pretty easy to adapt code that uses the other APIs to it (or to translate C-language tutorials to D), since it is all basically the same thing, and literally the same functions if you import the core.sys modules.
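For comparison with the Tarantool snippet at the top of this thread, the same lookup-table idea can be sketched in Lua (illustrative only; the on_accept/on_readable/on_close names and the fd parameter, standing in for D's Socket.handle, are made up for this example):
local sessions = {}                -- per-socket handle -> session data

local function on_accept(fd, sock)
    -- any stable per-socket number works as the key, just like Socket.handle
    sessions[fd] = { socket = sock, connected_at = os.time() }
end

local function on_readable(fd)
    local session = sessions[fd]   -- O(1) lookup when select reports activity
    -- ... use session ...
end

local function on_close(fd)
    sessions[fd] = nil             -- don't leak entries for closed sockets
end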

Rust persistent TcpStream

I seem to be struggling with std::io::TcpStream. I'm actually trying to open a TCP connection to another system, but the code below reproduces the problem exactly.
I have a TCP server that simply writes "Hello World" to the TcpStream upon opening and then loops to keep the connection open.
fn main() {
    let listener = io::TcpListener::bind("127.0.0.1", 8080);
    let mut acceptor = listener.listen();
    for stream in acceptor.incoming() {
        match stream {
            Err(_) => { /* connection failed */ }
            Ok(stream) => spawn(proc() {
                handle(stream);
            })
        }
    }
    drop(acceptor);
}

fn handle(mut stream: io::TcpStream) {
    stream.write(b"Hello Connection");
    loop {}
}
All the client does is attempt to read a single byte from the connection and print it.
fn main() {
    let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();
    loop {
        match socket.read_byte() {
            Ok(i) => print!("{}", i),
            Err(e) => {
                println!("Error: {}", e);
                break
            }
        }
    }
}
Now the problem is that my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want: I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with: I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O at the moment. There are some attempts to rectify the situation, but they are far from complete yet. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks that are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there is still a lot to do, and there is nothing really usable yet, as far as I'm aware.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];
loop {
    socket.set_read_timeout(Some(5000));
    match socket.recv_from(buf) {
        Ok((amt, src)) => { /* handle successful read */ }
        Err(ref e) if e.kind == TimedOut => {} // continue
        Err(e) => fail!("error receiving data: {}", e) // bail out
    }
    // do other work, check exit flags, for example
}
Here recv_from will return an IoError with kind set to TimedOut if no data arrives on the socket within 5 seconds of the recv_from call. You need to reset the timeout inside each loop iteration, since it is more like a "deadline" than a timeout: when it expires, all calls will start to fail with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it gets the job done.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It can probably be a good temporary (or even permanent, who knows) solution for asynchronous I/O.

Async sockets in D

Okay, this is my first question here on Stack Overflow, so bear with me if I'm not asking properly.
Basically I'm trying to code some asynchronous sockets using std.socket, but I'm not sure if I've understood the concept correctly. I've only ever worked with asynchronous sockets in C#, and in D they seem to be on a much lower level. I've researched a lot and looked up a lot of code and documentation, both for D and C/C++, to get an understanding; however, I'm not sure whether I understand the concept correctly, and I'd appreciate any examples you may have. I tried looking at splat, but it's very outdated, and vibe seems to be too complex just for a simple asynchronous socket wrapper.
If I understood correctly, there is no poll() function in std.socket, so you'd have to use a SocketSet with a single socket on select() to poll the status of the socket, right?
So basically how I'd go about handling the sockets is polling to get the read status of the socket, and if it succeeds (value > 0) then I can call receive(), which will return 0 on disconnection and otherwise the number of bytes received, but I'd have to keep doing this until the expected number of bytes is received.
Of course the socket is set to non-blocking!
Is that correct?
Here is the code I've made up so far.
void HANDLE_READ()
{
    while (true)
    {
        synchronized
        {
            auto events = cast(AsyncObject[int])ASYNC_EVENTS_READ;
            foreach (asyncObject; events)
            {
                int poll = pollRecv(asyncObject.socket.m_socket);
                switch (poll)
                {
                    case 0:
                    {
                        throw new SocketException("The socket had a time out!");
                        continue;
                    }
                    default:
                    {
                        if (poll <= -1)
                        {
                            throw new SocketException("The socket was interrupted!");
                            continue;
                        }
                        int recvGetSize = (asyncObject.socket.m_readBuffer.length - asyncObject.socket.readSize);
                        ubyte[] recvBuffer = new ubyte[recvGetSize];
                        int recv = asyncObject.socket.m_socket.receive(recvBuffer);
                        if (recv == 0)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.socket.disconnect();
                            continue;
                        }
                        asyncObject.socket.m_readBuffer ~= recvBuffer;
                        asyncObject.socket.readSize += recv;
                        if (asyncObject.socket.readSize == asyncObject.socket.expectedReadSize)
                        {
                            removeAsyncObject(asyncObject.event_id, true);
                            asyncObject.event(asyncObject.socket);
                        }
                        break;
                    }
                }
            }
        }
    }
}
So basically how I'd go about handling the sockets is polling to get the read status of the socket
Not quite right. Usually, the idea is to build an event loop around select, so that your application is idle as long as there are no network or timer events that need to be handled. With polling, you'd have to check for new events continuously or on a timer, which leads to wasted CPU cycles, and events getting handled a bit later than they occur.
In the event loop, you populate the SocketSets with sockets whose events you are interested in. If you want to be notified of new received data on a socket, it goes to the "readable" set. If you have data to send, the socket should be in the "writable" set. And all sockets should be on the "error" set.
select will then block (sleep) until an event comes in, and fill the SocketSets with the sockets which have actionable events. Your application can then respond to them appropriately: receive data for readable sockets, send queued data for writable sockets, and perform cleanup for errored sockets.
Here's my D implementation of non-fiber event-based networking: ae.net.asockets.
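To make that loop structure concrete, here is a minimal sketch of a select-driven event loop written in Lua/LuaSocket terms (the library used in the next question below) rather than D; note that LuaSocket's select only exposes read and write sets, and that server and the handler bodies are assumptions for the example.
local socket = require('socket')

-- `server` is assumed to be a bound, listening, non-blocking TCP socket
local readers = {server}   -- sockets we want "readable" events for
local writers = {}         -- sockets that still have queued outgoing data

while true do
    -- Blocks until at least one socket is actionable, so the loop stays idle
    -- instead of busy-polling.
    local readable, writable = socket.select(readers, writers)
    for _, sock in ipairs(readable) do
        if sock == server then
            -- a readable listener means a new connection is waiting
            local client = server:accept()
            client:settimeout(0)
            table.insert(readers, client)
        else
            local line, err = sock:receive('*l')
            -- handle `line`, or drop `sock` from `readers` when err == 'closed'
        end
    end
    for _, sock in ipairs(writable) do
        -- flush this socket's queued outgoing data here
    end
end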

How to check a socket is closed or not in luasocket library?

I am writing a server using the Lua programming language, and the network layer is based on LuaSocket.
I cannot find any method in its reference manual to detect whether a socket is closed, except by just trying to read data from it (it will return nil and the string 'close' when calling that).
My code looks like this:
local socket = require 'socket'
local server = socket.tcp()
local port = 9527
server:bind('*', port)
local status, errorMessage = server:listen()
if status == 1 then
    printf('Server is launched successfully on port %u', port)
else
    printf('Server listen failed, error message is %s', errorMessage)
    return
end
local sockets = {server}
while true do
    local results = socket.select(sockets)
    for _, sock in ipairs(results) do
        if sock == server then
            local s = server:accept()
            callback('Connected', s)
            table.insert(sockets, s)
            printf('%s connected', s:getsockname())
        else
            -- trying to detect socket is closed
            if sock:isClosed() then
                callback('Disconnected', sock)
                for i, s in ipairs(sockets) do
                    if s == sock then
                        table.remove(sockets, i)
                        break
                    end
                end
                printf('%s disconnected', sock:getsockname())
            else
                callback('ReadyRead', sock)
            end
        end
    end
end
except by just trying to read data from it (it will return nil and the string 'close' when calling that).
I'm not aware of any other method. Why doesn't checking the result from reading the socket work for you?
You need to use settimeout to make the call non-blocking and check the error returned for 'closed' (not 'close'). You can read one byte and store it, or you can try reading zero bytes.
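A minimal sketch of that advice (the check_closed name is made up; 'closed' and 'timeout' are the error strings LuaSocket's receive actually returns):
local function check_closed(sock)
    sock:settimeout(0)                        -- make receive non-blocking
    local data, err, partial = sock:receive(1)
    if err == 'closed' then
        return true
    end
    -- Not closed: err == 'timeout' simply means nothing to read right now.
    -- Any byte captured in `data` or `partial` must be kept by the caller.
    return false, data or partial
end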

Send commands over socket, but wait every time for response (Node.js)

I need to send several commands over telnet to a server. If I try to send them without a time delay between every command, the server freaks out:
var net = require('net');
var conn = net.createConnection(8888, 'localhost');
conn.on('connect', function() {
    conn.write(command_1);
    conn.write(command_2);
    conn.write(command_3);
    //...
    conn.write(command_n);
})
I guess the server needs some time to respond to command n before I send it command n+1. One way is to write something to the log and fake a "wait":
var net = require('net');
var conn = net.createConnection(8888, 'localhost');
conn.on('connect', function() {
    console.log('connected to server');
    console.log("I'm about to send command #1");
    conn.write(command_1);
    console.log("I'm about to send command #2");
    conn.write(command_2);
    console.log("I'm about to send command #3");
    conn.write(command_3);
    //...
    console.log("I'm about to send command #n");
    conn.write(command_n);
})
It might also be that conn.write() is asynchronous, and issuing one command after another doesn't guarantee the correct order?
Anyway, what is the correct pattern to ensure the correct order and enough time between two consecutive commands for the server to respond?
First things first: if this is truly a telnet server, then you should do something about the telnet handshaking (where terminal options are negotiated between the peers; this is the binary data you can see when opening the socket).
If you don't want to get into that (it will depend on your needs), you can ignore the negotiation and go straight to business, but you will have to read this data and ignore it yourself.
Now, in your code, you're sending the data as soon as the server accepts the connection. This may be the cause of your troubles. You're not supposed to "wait" for the response; the response will get to you asynchronously thanks to nodejs :) So you just need to send the commands as soon as you get the "right" response from the server (this is actually useful, because you can see if there were any errors, etc.).
I've tried this code (based on yours) against a device I've got at hand that has a telnet server. It will do a login and then a logout. See how the events are dispatched according to the server's response:
var net = require('net');
var conn = net.createConnection(23, '1.1.1.1');
var commands = [ "logout\n" ];
var i = 0;
conn.setEncoding('ascii');
conn.on('connect', function() {
    conn.on('login', function () {
        conn.write('myUsername\n');
    });
    conn.on('password', function () {
        conn.write('myPassword\n');
    });
    conn.on('prompt', function () {
        conn.write(commands[i]);
        i++;
    });
    conn.on('data', function(data) {
        console.log("got: " + data + "\n");
        if (data.indexOf("login") != -1) {
            conn.emit('login');
        }
        if (data.indexOf("password") != -1) {
            conn.emit('password');
        }
        if (data.indexOf(">#") != -1) {
            conn.emit('prompt');
        }
    });
});
See how the commands are in an array, so you can send them iteratively (the prompt event will trigger the next command). The right response from the server is the next prompt; when the server sends (in this case) the string >#, another command is sent.
Hope it helps :)
The order of writes is guaranteed. However:
1) You must subscribe to the data event. conn.on('data', function(data) {}) will do.
2) You must check the return values of the writes: if a write fails, you must wait for the 'drain' event. So you should check whether any write really fails, and if it does, fix the problem. If it doesn't, then you can leave the current dirty solution as is.
3) You must check whether your server supports request pipelining (sending multiple requests without waiting for responses). If it doesn't, you must not send the next request before receiving a data event after the previous one.
4) You must ensure that the commands you send are real telnet commands: telnet expects a \0 byte after \r\n (see the RFC), so certain servers may freak out if \0 is not present.
So:
var net = require('net');
var conn = net.createConnection(8888, 'localhost');
conn.on('connect', function() {
    console.log(conn.write(command_1) &&
                conn.write(command_2) &&
                conn.write(command_3) &&
                //...
                conn.write(command_n))
})
conn.on('data', function () {})
If it logs false, then you must wait for 'drain'. If it logs true, you must still implement waiting. I discourage the event-based solution and suggest looking at the async or Step NPM modules instead.