Socket IO broadcast to each client with their socket id

I'm getting comfortable with socket.io. It really rocks.
I am aware that from the server, I can either:
Respond to a socket client:
socket.emit(event, data);
Broadcast to other clients:
socket.broadcast.emit(event, data);
Broadcast to all clients without distinction:
io.emit(event, data);
But what I'd like to do is to loop over the clients to emit to each of them, with their socket.id as a parameter:
io.emitEach(socket => socket.emit(event, dataWichDependsOn(socket.id)));
Can I achieve this?
I tried this:
io.of('/').clients((error, clients) => {
    if (error) throw error;
    return clients.forEach(clientId => {
        io.to(clientId).emit(event, dataWichDependsOn(clientId));
    });
});
Without success :( the message doesn't seem to be emitted.

Object.keys(io.sockets.sockets).forEach((clientId) => {
    io.to(clientId).emit(event, dataWichDependsOn(clientId));
});
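As a variant, if you are on a Socket.IO 2.x server where io.sockets.sockets is a plain object keyed by socket id (in 3.x/4.x it is a Map, so this sketch would need adjusting), you can also iterate over the socket objects themselves and emit on each one directly:
Object.values(io.sockets.sockets).forEach(function (socket) {
    // each client gets its own payload, built from its socket.id
    socket.emit(event, dataWichDependsOn(socket.id));
});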

Related

Pattern for async request/response over TCP socket

I'm trying to implement a Dart client for the mpd protocol. mpd communicates over a TCP connection by exchanging text messages. A command is sent and terminated by \n. The server replies with one or multiple lines delimited by \n and ends a response with an OK or ACK message.
I'm struggling to implement a request/response approach due to the async nature of Dart and the Socket class. I'm coming from Java & Go and the async approach is not natural to me.
This is how I want a caller to use the client:
MpdClient c = MpdClient('hostname', 6600);
await c.connect();
String response = await c.command('status');
print(response);
await c.close();
In the client, connecting is no issue:
Socket socket = await Socket.connect(hostname, port);
Then I'm not sure what to do with the socket and how to implement the Future<String> command(String cmd) function. I am able to listen() on the Socket and decode the response, but I don't see how I can tie that to a Future<String> returned by c.command(...).
I also tried to use the stream methods on Socket (which works well via .takeWhile() and .fold() to decode a response). Something like this:
Future<String> command(String cmd) {
  String last = "";
  return _stream
      .map((event) => event.toList())
      .transform(utf8.decoder)
      .takeWhile((event) {
        if (last.startsWith("OK") || last.startsWith("ACK")) {
          return false;
        } else {
          last = event;
          return true;
        }
      })
      .fold("", (previous, element) => previous + element);
}
But the Socket stream can only be subscribed to once, so this method cannot be called multiple times.
Is there a way to achieve what I want from the caller side with the Socket class? Or would I be better off using RawSocket and its read() method which offers a more low-level / controlled way to read the response?

Is it possible to implement socket.io with ack, and if the server does not receive the ack, have the server try to emit the event again?

Is it possible to implement socket.io with ack, and if the server does not receive the ack, have the server try to emit the event again?
socket.io uses TCP as the underlying transport, which is a "reliable" transport. TCP will retry on its own. The packet will be delivered unless the connection is permanently down. If the connection is down, what you really need is for the client to reconnect (which it will do eventually if the connection is actually down).
You can use socket.io's ACK feature and implement your own timeout to retry, but I don't think it will really buy you much, because if the connection is working, then TCP will deliver the message for you as soon as the connection allows. If TCP can't deliver it, then you really need the client to establish a new connection (which it will do eventually), and when the new connection comes in is when you need to retransmit.
If you wanted to try your own retry, you could do it like this:
function delay(t) {
    return new Promise(function(resolve) {
        setTimeout(resolve, t);
    });
}

function send(socket, msg, data, maxRetries, retryTime) {
    maxRetries = maxRetries || 5;
    retryTime = retryTime || 30 * 1000;
    return write(socket, msg, data).catch(function() {
        if (maxRetries > 1) {
            // wait, then try again with one fewer attempt remaining
            return delay(retryTime).then(function() {
                return send(socket, msg, data, maxRetries - 1, retryTime);
            });
        } else {
            throw new Error("socket.io write failed after maxRetries");
        }
    });

    function write(socket, msg, data) {
        return new Promise(function(resolve, reject) {
            // reject if no ack arrives within retryTime
            let timer = setTimeout(reject, retryTime);
            socket.emit(msg, data, function() {
                clearTimeout(timer);
                resolve();
            });
        });
    }
}
And the listener on the client for the message you are sending would have to send the ack back by calling the ack callback:
// client side
io.on('someMsg', function(data, fn) {
    // process data here
    console.log(data);
    // call ACK function to send ack back
    fn();
});
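For completeness, a minimal sketch of how send() might be wired up on the server; the connection handler, the 'someMsg' event name and the payload are just placeholders matching the client listener above:
io.on('connection', function(socket) {
    send(socket, 'someMsg', { hello: 'world' })
        .then(function() {
            // the client called the ack callback in time
            console.log('message acknowledged by client');
        })
        .catch(function(err) {
            // no ack after maxRetries attempts
            console.log(err.message);
        });
});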

Netty: when does writeAndFlush channel future listener callback get executed?

I am new to netty and trying to understand how the channel future for writeAndFlush works. Consider the following code running on a netty client:
final ChannelFuture writeFuture = abacaChannel.writeAndFlush("Test");
writeFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (writeFuture.isSuccess()) {
            LOGGER.debug("Write successful");
        } else {
            LOGGER.error("Error writing message to Abaca host");
        }
    }
});
When does this writeFuture's operationComplete callback get executed?
1. After Netty hands over the data to the OS send buffers, or
2. After the OS writes the data to the network socket, or
3. After the data is actually received by the server?
TIA
1. After Netty hands over the data to the OS send buffers.
The listener will be notified after the data is removed from the ChannelOutboundBuffer (Netty's send buffer).

socket io emit failed callback

Is there any way to know whether a socket.io emit failed or succeeded, something like the ajax callback methods onSuccess and onError?
For socket.io emit I only found this:
socket.emit('publish', { message: 'test message' }, function (data) {
    alert("");
});
This callback is only called when the server sends an ack response. But it doesn't cover this situation:
At the moment the message is emitted to the server, the network is bad or the connection is lost, so the server never receives the message and the client callback is never called.
What I want is:
When I call socket.io's emit and it fails, I want to retry 3 times.
I know this is an old post, but just in case anyone is still having trouble with this.
var socket = new io.connect('http://localhost:3000', {
    'reconnection': true,
    'reconnectionDelay': 1000,
    'reconnectionDelayMax': 5000,
    'reconnectionAttempts': 3
});

socket.on('connect_error', function() {
    console.log('Connection failed');
});

socket.on('reconnect_failed', function() {
    // fired only after the 3 attempts in this example fail
    console.log('Reconnection failed');
});
More info here -> https://socket.io/docs/client-api/#manager-reconnectionAttempts-value
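If you also want the emit itself to be retried a fixed number of times, a minimal client-side sketch could combine the ack callback with a timeout; the 'publish' event and payload come from the question, while emitWithRetry, the ackTimeout value, and the assumption that the server calls the ack callback (as in the ack example in the earlier question above) are mine:
function emitWithRetry(socket, event, payload, retriesLeft, ackTimeout) {
    return new Promise(function(resolve, reject) {
        // reject if no ack arrives in time, otherwise resolve with the server's response
        var timer = setTimeout(function() {
            reject(new Error('no ack received'));
        }, ackTimeout);
        socket.emit(event, payload, function(data) {
            clearTimeout(timer);
            resolve(data);
        });
    }).catch(function(err) {
        if (retriesLeft > 0) {
            return emitWithRetry(socket, event, payload, retriesLeft - 1, ackTimeout);
        }
        throw err;
    });
}

// give up after 3 retries if the server never acks
emitWithRetry(socket, 'publish', { message: 'test message' }, 3, 2000)
    .then(function(data) { console.log('acknowledged', data); })
    .catch(function(err) { console.log('emit failed', err); });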

Rust persistent TcpStream

I seem to be struggling with the std::io::TcpStream. I'm actually trying to open a TCP connection with another system but the below code emulates the problem exactly.
I have a TCP server that simply writes "Hello World" to the TcpStream upon opening and then loops to keep the connection open.
fn main() {
    let listener = io::TcpListener::bind("127.0.0.1", 8080);
    let mut acceptor = listener.listen();
    for stream in acceptor.incoming() {
        match stream {
            Err(_) => { /* connection failed */ }
            Ok(stream) => spawn(proc() {
                handle(stream);
            })
        }
    }
    drop(acceptor);
}

fn handle(mut stream: io::TcpStream) {
    stream.write(b"Hello Connection");
    loop {}
}
All the client does is attempt to read a single byte from the connection and print it.
fn main() {
    let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();
    loop {
        match socket.read_byte() {
            Ok(i) => print!("{}", i),
            Err(e) => {
                println!("Error: {}", e);
                break
            }
        }
    }
}
Now the problem is that my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want: I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with: I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O now. There are some attempts to rectify the situation, but they are far from complete yet. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks which are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there's still a lot to do and there's nothing really usable now, as far as I'm aware.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];

loop {
    socket.set_read_timeout(Some(5000));
    match socket.recv_from(buf) {
        Ok((amt, src)) => { /* handle successful read */ }
        Err(ref e) if e.kind == TimedOut => {}  // continue
        Err(e) => fail!("error receiving data: {}", e)  // bail out
    }

    // do other work, check exit flags, for example
}
Here recv_from will return an IoError with kind set to TimedOut if no data becomes available on the socket within 5 seconds of the recv_from call. You need to reset the timeout inside each loop iteration since it is more like a "deadline" than a timeout: when it expires, all calls will start to fail with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it does its work.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It probably can be a good temporary (or even permanent, who knows) solution for asynchronous I/O.