Multicast UDP packets using Tokio futures

I'm playing around with Tokio and Rust and, as an example, I am trying to write a simple UDP proxy that accepts UDP packets on one socket and sends them out to multiple other destinations. However, I stumble over the situation that I need to send the received packet to multiple addresses and am not sure how to do that in an idiomatic way.
Code I have so far:
extern crate bytes;
extern crate futures;
extern crate tokio;

use std::net::SocketAddr;
use tokio::codec::BytesCodec;
use tokio::net::{UdpFramed, UdpSocket};
use tokio::prelude::*;

fn main() {
    let listen_address = "127.0.0.1:4711".parse::<SocketAddr>().unwrap();

    let forwarder = {
        let socket = UdpSocket::bind(&listen_address).unwrap();
        let peers = vec![
            "192.168.1.136:4711".parse::<SocketAddr>().unwrap(),
            "192.168.1.136:4712".parse::<SocketAddr>().unwrap(),
        ];

        UdpFramed::new(UdpSocket::bind(&listen_address).unwrap(), BytesCodec::new()).for_each(
            move |(bytes, _from)| {
                // These are the problematic lines
                for peer in peers.iter() {
                    socket.send_dgram(&bytes, &peer);
                }
                Ok(())
            },
        )
    };

    tokio::run({
        forwarder
            .map_err(|err| println!("Error: {}", err))
            .map(|_| ())
    });
}
The problematic lines are trying to send the received packet to multiple other addresses using a newly bound socket.
The existing examples all forward packets to a single destination, or internally use mpsc channels to communicate between internal tasks. I do not think that this is necessary; it should be possible without having to spawn more than one task per listening socket.
Update: Thanks to @Ömer-erden I got this code that works.
extern crate bytes;
extern crate futures;
extern crate tokio;

use std::net::SocketAddr;
use tokio::codec::BytesCodec;
use tokio::net::{UdpFramed, UdpSocket};
use tokio::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listen_address = "0.0.0.0:4711".parse::<SocketAddr>()?;
    let socket = UdpSocket::bind(&listen_address)?;
    let peers: Vec<SocketAddr> = vec![
        "192.168.1.136:8080".parse()?,
        "192.168.1.136:8081".parse()?,
    ];

    let (mut writer, reader) = UdpFramed::new(socket, BytesCodec::new()).split();

    let forwarder = reader.for_each(move |(bytes, _from)| {
        for peer in peers.iter() {
            writer.start_send((bytes.clone().into(), peer.clone()))?;
        }
        writer.poll_complete()?;
        Ok(())
    });

    tokio::run({
        forwarder
            .map_err(|err| println!("Error: {}", err))
            .map(|_| ())
    });

    Ok(())
}
Note that:
It is not necessary to call poll_complete for each start_send: it only needs to be called after all the start_send calls have been dispatched (see the sketch after these notes for an alternative that avoids the manual calls entirely).
For some reason, the contents of the peer were corrupted between calls (with no compiler error), generating error 22, which usually means a bad address was given to sendto(2).
Looking in a debugger, it is quite clear that the second time around, the peer address points to invalid memory. I opted to clone the peer instead.
I removed the calls to unwrap() and propagate the Result upwards instead.
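For reference, here is a hedged sketch (not from the original post) of an alternative that avoids calling start_send and poll_complete by hand. It assumes the same tokio 0.1 / futures 0.1 APIs and the socket and peers variables from the working code above: each incoming datagram is fanned out into one (payload, peer) pair per destination, and the resulting stream is forwarded into the sink, which drives the send/flush cycle itself.

let (writer, reader) = UdpFramed::new(socket, BytesCodec::new()).split();

// Fan each datagram out into one (payload, peer) pair per destination and let
// `forward` drive start_send/poll_complete on the sink for us.
let forwarder = reader
    .map(move |(bytes, _from)| {
        let bytes = bytes.freeze();
        futures::stream::iter_ok::<_, std::io::Error>(
            peers.clone().into_iter().map(move |peer| (bytes.clone(), peer)),
        )
    })
    .flatten()
    .forward(writer)
    .map(|_| ());

tokio::run(forwarder.map_err(|err| println!("Error: {}", err)));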

Your code has a logical mistake: you are trying to bind the same address twice, once as sender and once as receiver. Instead, you can use a stream and a sink; UdpFramed has the functionality to provide both. Please see Sink:
A Sink is a value into which other values can be sent, asynchronously.
let listen_address = "127.0.0.1:4711".parse::<SocketAddr>().unwrap();

let forwarder = {
    let (mut socket_sink, socket_stream) =
        UdpFramed::new(UdpSocket::bind(&listen_address).unwrap(), BytesCodec::new()).split();

    let peers = vec![
        "192.168.1.136:4711".parse::<SocketAddr>().unwrap(),
        "192.168.1.136:4712".parse::<SocketAddr>().unwrap(),
    ];

    socket_stream.for_each(move |(bytes, _from)| {
        for peer in peers.iter() {
            socket_sink.start_send((bytes.clone().into(), *peer));
            socket_sink.poll_complete();
        }
        Ok(())
    })
};

tokio::run({
    forwarder
        .map_err(|err| println!("Error: {}", err))
        .map(|_| ())
});

Related

Swift-NIO + WebSocket-Kit: Proper Setup/Cleanup in a Mac App

Context
I'm developing a Mac app. In this app, I want to run a websocket server. To do this, I'm using Swift NIO and Websocket-Kit. My full setup is below.
Question
All of the documentation for Websocket-Kit and SwiftNIO is geared towards creating a single server-side process that starts up when you launch it from the command line and then runs indefinitely.
In my app, I must be able to start the websocket server and then shut it down and restart it on demand, without re-launching my application. The code below does that, but I would like confirmation of two things:
In the test() function, I send some text to all connected clients. I am unsure if this is thread-safe and correct. Can I store the WebSocket instances as I'm doing here and message them from the main thread of my application?
Am I shutting down the websocket server correctly? The call to ServerBootstrap(group:)[...].bind(host:port:).wait() creates a Channel and then waits indefinitely. When I call shutdownGracefully() on the associated EventLoopGroup, is that server cleaned up correctly? (I can confirm that port 5759 is free again after this shutdown, so I'm guessing everything is cleaned up?)
Thanks for the input; it's tough to find examples of using SwiftNIO and Websocket-Kit inside an application.
Code
import Foundation
import NIO
import NIOHTTP1
import NIOWebSocket
import WebSocketKit
@objc class WebsocketServer: NSObject
{
    private var queue: DispatchQueue?
    private var eventLoopGroup: MultiThreadedEventLoopGroup?
    private var websocketClients: [WebSocket] = []

    @objc func startServer()
    {
        queue = DispatchQueue.init(label: "socketServer")
        queue?.async
        {
            let upgradePipelineHandler: (Channel, HTTPRequestHead) -> EventLoopFuture<Void> = { channel, req in
                WebSocket.server(on: channel) { ws in
                    ws.send("You have connected to WebSocket")

                    DispatchQueue.main.async {
                        self.websocketClients.append(ws)
                        print("websocketClients after connection: \(self.websocketClients)")
                    }

                    ws.onText { ws, string in
                        print("received")
                        ws.send(string.trimmingCharacters(in: .whitespacesAndNewlines).reversed())
                    }

                    ws.onBinary { ws, buffer in
                        print(buffer)
                    }

                    ws.onClose.whenSuccess { value in
                        print("onClose")
                        DispatchQueue.main.async
                        {
                            self.websocketClients.removeAll { (socketToTest) -> Bool in
                                return socketToTest === ws
                            }
                            print("websocketClients after close: \(self.websocketClients)")
                        }
                    }
                }
            }

            self.eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 2)
            let port: Int = 5759
            let promise = self.eventLoopGroup!.next().makePromise(of: String.self)

            let server = try? ServerBootstrap(group: self.eventLoopGroup!)
                // Specify backlog and enable SO_REUSEADDR for the server itself
                .serverChannelOption(ChannelOptions.backlog, value: 256)
                .serverChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
                .childChannelInitializer { channel in
                    let webSocket = NIOWebSocketServerUpgrader(
                        shouldUpgrade: { channel, req in
                            return channel.eventLoop.makeSucceededFuture([:])
                        },
                        upgradePipelineHandler: upgradePipelineHandler
                    )

                    return channel.pipeline.configureHTTPServerPipeline(
                        withServerUpgrade: (
                            upgraders: [webSocket],
                            completionHandler: { ctx in
                                // complete
                            })
                    )
                }.bind(host: "0.0.0.0", port: port).wait()

            _ = try! promise.futureResult.wait()
        }
    }

    ///
    /// Send a message to connected clients, then shut down the server.
    ///
    @objc func test()
    {
        self.websocketClients.forEach { (ws) in
            ws.eventLoop.execute {
                ws.send("This is a message being sent to all websockets.")
            }
        }

        stopServer()
    }

    @objc func stopServer()
    {
        self.websocketClients.forEach { (ws) in
            try? ws.eventLoop.submit { () -> Void in
                print("closing websocket: \(ws)")
                _ = ws.close()
            }.wait() // Block until complete so we don't shut down the eventLoop before all clients get closed.
        }

        eventLoopGroup?.shutdownGracefully(queue: .main, { (error: Error?) in
            print("Eventloop shutdown now complete.")
            self.eventLoopGroup = nil
            self.queue = nil
        })
    }
}
In the test() function, I send some text to all connected clients. I am unsure if this is thread-safe and correct. Can I store the WebSocket instances as I'm doing here and message them from the main thread of my application?
Exactly as you're doing here, yes, that is safe: ws.eventLoop.execute will execute that block on the event loop thread belonging to that WebSocket connection.
When I call shutdownGracefully() on the associated EventLoopGroup, is that server cleaned up correctly? (I can confirm that port 5759 is free again after this shutdown, so I'm guessing everything is cleaned up?)
Yes. shutdownGracefully forces all connections and listening sockets to close.

the trait `std::convert::From<mongodb::error::Error>` is not implemented for `std::io::Error`

Trying to make a server with actix-web & mongodb in Rust. I am getting the error:
the trait std::convert::From<mongodb::error::Error> is not implemented for std::io::Error
Here is my code:
use actix_web::{web, App, HttpRequest, HttpServer, Responder};
use mongodb::{options::ClientOptions, Client};

async fn greet(req: HttpRequest) -> impl Responder {
    let name = req.match_info().get("name").unwrap_or("World");
    format!("Hello {}!", &name)
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // Parse a connection string into an options struct.
    let mut client_options = ClientOptions::parse("mongodb://localhost:27017")?;

    // Manually set an option.
    client_options.app_name = Some("My App".to_string());

    // Get a handle to the deployment.
    let client = Client::with_options(client_options)?;

    // List the names of the databases in that deployment.
    for db_name in client.list_database_names(None)? {
        println!("{}", db_name);
    }

    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
Did I miss anything?
It means that one of the functions you are calling with a ? at the end can return a mongodb::error::Error. But the signature of main is std::io::Result<()>, which is an alias for Result<(), std::io::Error>. The only error type it can accept is an io::Error, not a mongodb::error::Error.
It looks like all the functions you are applying ? to might return this mongodb::error::Error, so you can try to change the main signature to such a result: Result<(), mongodb::error::Error>.
But I would recommend you do proper error handling on those potential errors, as this is your main(). Change those ? to .expect("Some error message"); at least. The program will still crash, but it will crash in a way that is meaningful to you.
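A minimal sketch of that last suggestion (assuming the same crate versions and the same greet handler as in the question): keep main() returning std::io::Result<()> and replace the ? on the MongoDB calls with .expect(...), so no conversion from mongodb::error::Error into std::io::Error is ever needed.

use actix_web::{web, App, HttpRequest, HttpServer, Responder};
use mongodb::{options::ClientOptions, Client};

async fn greet(req: HttpRequest) -> impl Responder {
    let name = req.match_info().get("name").unwrap_or("World");
    format!("Hello {}!", &name)
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // The MongoDB calls no longer use `?`, so their error type never has to be
    // converted into std::io::Error; a failure aborts with a readable message.
    let mut client_options = ClientOptions::parse("mongodb://localhost:27017")
        .expect("failed to parse MongoDB connection string");
    client_options.app_name = Some("My App".to_string());

    let client = Client::with_options(client_options)
        .expect("failed to create MongoDB client");
    for db_name in client
        .list_database_names(None)
        .expect("failed to list database names")
    {
        println!("{}", db_name);
    }

    // The actix-web calls keep using `?`: they already produce std::io::Error.
    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(greet))
            .route("/{name}", web::get().to(greet))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}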

Bound UDP socket not closed when network becomes unavailable

On Linux, I open a UDP socket and bind it to an address that is currently available. I then listen in a loop for new packets. Then I disable Wi-Fi, the interface goes down, and the network address is removed from the interface. I would expect the receive call to return an error, but this is not the case.
Is this expected behaviour? Is there a way to receive an error from a call to receive when the address the socket is bound to disappears?
Example code in Rust:
use std::net::UdpSocket;

fn main() {
    let mut socket = UdpSocket::bind("192.168.2.43:64041").expect("Unable to open socket");
    loop {
        let mut buf = [0u8; 1500];
        match socket.recv_from(&mut buf) {
            Ok((n, _addr)) => println!("Received {} bytes", n),
            Err(err) => println!("Error receiving bytes: {}", err)
        }
    }
}

Using a callback when handling TCP connections with Tokio

I am trying to have a struct that starts an event loop, listens for TCP connections and calls a callback for each connection.
(The callback will be handed some preprocessed data from the socket. In my example below I just hand it the IP address of the connection, but in my real code I will parse the received contents with serde into a struct and pass that into the callback. I hope that doesn't invalidate the following "not working example".)
My Cargo.toml:
[package]
name = "lifetime-problem"
version = "0.1.0"
edition = "2018"
[dependencies]
tokio-tcp = "0.1.3"
tokio = "0.1.14"
[[bin]]
name = "lifetime-problem"
path = "main.rs"
and main.rs:
use tokio::prelude::*;

struct Test {
    printer: Option<Box<Fn(std::net::SocketAddr) + Sync>>,
}

impl Test {
    pub fn start(&mut self) -> Result<(), Box<std::error::Error>> {
        let addr = "127.0.0.1:4242".parse::<std::net::SocketAddr>()?;
        let listener = tokio::net::TcpListener::bind(&addr)?;
        let server = listener
            .incoming()
            .map_err(|e| eprintln!("failed to accept socket; error = {:?}", e))
            .for_each(move |socket: tokio::net::TcpStream| {
                let address = socket.peer_addr().expect("");
                match self.printer {
                    Some(callback) => { callback(address); }
                    None => { println!("{}", address); }
                }
                Ok(())
            });
        tokio::run(server);
        Ok(())
    }
}

fn main() {
    let mut x = Test { printer: None };
    x.start();
}
I have tried several things starting from this code (which is adapted directly from the Tokio example).
If I use the code as posted above I get:
error[E0277]: (dyn std::ops::Fn(std::net::SocketAddr) + std::marker::Sync + 'static) cannot be sent between threads safely
for the line 24 (tokio::run(server)).
If I add the Send trait to the Fn in the printer field, or if I instead remove the move from the closure in the for_each call, I get another error:
error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements
which points me to the closure: apparently it cannot outlive the start method where it is defined, but tokio::run seems to have conflicting requirements for it.
Do you know if I am addressing the callback pattern in totally the wrong way or if there is just some minor error in my code?
First things first:
The compiler will translate Box<Fn(std::net::SocketAddr) + Sync> to Box<Fn(std::net::SocketAddr) + Sync + 'static> unless the lifetime is explicitly specified.
Let's have a look at the errors:
error[E0277]: (dyn std::ops::Fn(std::net::SocketAddr) + std::marker::Sync + 'static) cannot be sent between threads safely
This is self-explanatory: you are trying to move a &mut T to another thread, but you cannot, because T here is not Send. To send &mut T to another thread, T needs to be Send as well.
Here is the minimal code that will give the same error:
use std::fmt::Debug;

fn func<T>(i: &'static mut T) where T: Debug {
    std::thread::spawn(move || {
        println!("{:?}", i);
    });
}
If I make T above to also be of type Send, the error goes away.
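For instance, a minimal sketch (an illustration added here, not from the original answer) of the same function with the extra Send bound, which compiles:

use std::fmt::Debug;

// With T: Send, the &'static mut T captured by the closure is itself Send,
// so it may be moved into the spawned thread.
fn func<T>(i: &'static mut T) where T: Debug + Send {
    std::thread::spawn(move || {
        println!("{:?}", i);
    });
}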
But in your case, when you add the Send trait, it gives a lifetime error. Why?
&mut self has some lifetime greater than the function start(), set by the caller, but there's no guarantee that it's 'static. You move this reference into the closure which is passed to the thread and can potentially outlive the scope it is closing over, leading to a dangling reference.
Here's a minimal version that would give the same error.
use std::fmt::Debug;

fn func<'a, T: 'a>(i: &'a mut T) where T: Debug + Sync + Send {
    std::thread::spawn(move || {
        println!("{:?}", i);
    });
}
Sync is not really required here, as it is &mut T. Changing &mut T to &T (retaining Sync) will also result in the same error. The onus here is on references, not mutability. So you see, there is some lifetime 'a and it is moved into a closure (given to a thread), which means the closure now contains a reference disjoint from the main context. So now, what is 'a and how long will it live from the perspective of the closure invoked from another thread? Not inferable! As a result, the compiler complains that it cannot infer an appropriate lifetime due to conflicting requirements.
If we tweak the code a bit to:
impl Test {
    pub fn start(&'static mut self) -> Result<(), Box<std::error::Error>> {
        let addr = "127.0.0.1:4242".parse::<std::net::SocketAddr>()?;
        let listener = tokio::net::TcpListener::bind(&addr)?;
        let server = listener
            .incoming()
            .map_err(|e| eprintln!("failed to accept socket; error = {:?}", e))
            .for_each(move |socket: tokio::net::TcpStream| {
                let address = socket.peer_addr().expect("");
                match &self.printer {
                    Some(callback) => { callback(address); }
                    None => { println!("{}", address); }
                }
                Ok(())
            });
        tokio::run(server);
        Ok(())
    }
}
it will compile fine. There is now a guarantee that self has a 'static lifetime. Please note that in the match statement we need to pass &self.printer, as you cannot move out of a borrowed context.
However, this requires Test to be declared as a static, and a mutable one at that, which is generally not the best approach if you have other options.
Another way: if it's okay for you to pass Test by value to start() and then move it further into for_each(), the code would look like this:
use tokio::prelude::*;

struct Test {
    printer: Option<Box<Fn(std::net::SocketAddr) + Send>>,
}

impl Test {
    pub fn start(mut self) -> Result<(), Box<std::error::Error>> {
        let addr = "127.0.0.1:4242".parse::<std::net::SocketAddr>()?;
        let listener = tokio::net::TcpListener::bind(&addr)?;
        let server = listener
            .incoming()
            .map_err(|e| eprintln!("failed to accept socket; error = {:?}", e))
            .for_each(move |socket: tokio::net::TcpStream| {
                let address = socket.peer_addr().expect("");
                match &self.printer {
                    Some(callback) => {
                        callback(address);
                    }
                    None => {
                        println!("{}", address);
                    }
                }
                Ok(())
            });
        tokio::run(server);
        Ok(())
    }
}

fn main() {
    let mut x = Test { printer: None };
    x.start();
}
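For completeness, a small hedged usage sketch (not part of the original answer) showing the by-value version driven with an actual callback installed; the closure simply prints the peer address:

fn main() {
    // The closure is Send, so it satisfies Box<Fn(std::net::SocketAddr) + Send>.
    let x = Test {
        printer: Some(Box::new(|addr: std::net::SocketAddr| {
            println!("connection from {}", addr)
        })),
    };
    x.start().expect("server failed");
}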

How do I close a Unix socket in Rust?

I have a test that opens and listens to a Unix Domain Socket. The socket is opened and reads data without issues, but it doesn't shut down gracefully.
This is the error I get when I try to run the test a second time:
thread 'test_1' panicked at 'called Result::unwrap() on an Err
value: Error { repr: Os { code: 48, message: "Address already in use"
} }', ../src/libcore/result.rs:799 note: Run with RUST_BACKTRACE=1
for a backtrace.
The code is available at the Rust playground and there's a Github Gist for it.
use std::io::prelude::*;
use std::thread;
use std::net::Shutdown;
use std::os::unix::net::{UnixStream, UnixListener};
Test Case:
#[test]
fn test_1() {
    driver();
    assert_eq!("1", "2");
}
Main entry point function
fn driver() {
    let listener = UnixListener::bind("/tmp/my_socket.sock").unwrap();
    thread::spawn(|| socket_server(listener));

    // send a message
    busy_work(3);

    // try to disconnect the socket
    let drop_stream = UnixStream::connect("/tmp/my_socket.sock").unwrap();
    let _ = drop_stream.shutdown(Shutdown::Both);
}
Function to send data in intervals
#[allow(unused_variables)]
fn busy_work(threads: i32) {
    // Make a vector to hold the children which are spawned.
    let mut children = vec![];

    for i in 0..threads {
        // Spin up another thread
        children.push(thread::spawn(|| socket_client()));
    }

    for child in children {
        // Wait for the thread to finish. Returns a result.
        let _ = child.join();
    }
}

fn socket_client() {
    let mut stream = UnixStream::connect("/tmp/my_socket.sock").unwrap();
    stream.write_all(b"hello world").unwrap();
}
Function to handle data
fn handle_client(mut stream: UnixStream) {
    let mut response = String::new();
    stream.read_to_string(&mut response).unwrap();
    println!("got response: {:?}", response);
}
Server socket that listens to incoming messages
#[allow(unused_variables)]
fn socket_server(listener: UnixListener) {
    // accept connections and process them, spawning a new thread for each one
    for stream in listener.incoming() {
        match stream {
            Ok(mut stream) => {
                /* connection succeeded */
                let mut response = String::new();
                stream.read_to_string(&mut response).unwrap();
                if response.is_empty() {
                    break;
                } else {
                    thread::spawn(|| handle_client(stream));
                }
            }
            Err(err) => {
                /* connection failed */
                break;
            }
        }
    }

    println!("Breaking out of socket_server()");
    drop(listener);
}
Please learn to create a minimal reproducible example and then take the time to do so. In this case, there's no need for threads or functions or testing frameworks; running this entire program twice reproduces the error:
use std::os::unix::net::UnixListener;

fn main() {
    UnixListener::bind("/tmp/my_socket.sock").unwrap();
}
If you look at the filesystem before and after the test, you will see that the file /tmp/my_socket.sock is not present before the first run and it is present before the second run. Deleting the file allows the program to run to completion again (at which point it recreates the file).
This issue is not unique to Rust:
Note that, once created, this socket file will continue to exist, even after the server exits. If the server subsequently restarts, the file prevents re-binding:
[...]
So, servers should unlink the socket pathname prior to binding it.
You could choose to add some wrapper around the socket that would automatically delete it when it is dropped or create a temporary directory that is cleaned when it is dropped, but I'm not sure how well that would work. You could also create a wrapper function that deletes the file before it opens the socket.
Unlinking the socket when it's dropped
use std::os::unix::net::UnixListener;
use std::path::{Path, PathBuf};

struct DeleteOnDrop {
    path: PathBuf,
    listener: UnixListener,
}

impl DeleteOnDrop {
    fn bind(path: impl AsRef<Path>) -> std::io::Result<Self> {
        let path = path.as_ref().to_owned();
        UnixListener::bind(&path).map(|listener| DeleteOnDrop { path, listener })
    }
}

impl Drop for DeleteOnDrop {
    fn drop(&mut self) {
        // There's no way to return a useful error here, so ignore any failure.
        let _ = std::fs::remove_file(&self.path);
    }
}
You may also want to consider implementing Deref / DerefMut to make this into a smart pointer for sockets:
impl std::ops::Deref for DeleteOnDrop {
    type Target = UnixListener;

    fn deref(&self) -> &Self::Target {
        &self.listener
    }
}

impl std::ops::DerefMut for DeleteOnDrop {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.listener
    }
}
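A short hedged usage sketch (added here, not part of the original answer): thanks to Deref, the wrapper can be used just like the UnixListener it wraps, and the socket file disappears when the wrapper goes out of scope.

fn main() -> std::io::Result<()> {
    let listener = DeleteOnDrop::bind("/tmp/my_socket.sock")?;

    // Deref lets us call UnixListener methods directly on the wrapper.
    for stream in listener.incoming() {
        let _stream = stream?;
        // handle the connection here, then stop accepting for this example
        break;
    }

    Ok(())
    // `listener` is dropped here and /tmp/my_socket.sock is removed.
}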
Unlinking the socket before it's opened
This is much simpler:
use std::path::Path;

fn bind(path: impl AsRef<Path>) -> std::io::Result<UnixListener> {
    let path = path.as_ref();
    // Remove any stale socket file, ignoring the case where it doesn't exist yet.
    match std::fs::remove_file(path) {
        Err(e) if e.kind() != std::io::ErrorKind::NotFound => return Err(e),
        _ => {}
    }
    UnixListener::bind(path)
}
Note that you can combine the two solutions, such that the socket is deleted before creation and when it's dropped.
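A hedged sketch of that combination, reusing the DeleteOnDrop type from above (the force_bind name is an assumption, not from the original answer):

impl DeleteOnDrop {
    /// Remove any stale socket file, then bind and return the self-cleaning wrapper.
    fn force_bind(path: impl AsRef<Path>) -> std::io::Result<Self> {
        let path = path.as_ref();
        match std::fs::remove_file(path) {
            Err(e) if e.kind() != std::io::ErrorKind::NotFound => return Err(e),
            _ => {}
        }
        DeleteOnDrop::bind(path)
    }
}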
I think that deleting during creation is a less-optimal solution: if you ever start a second server, you'll prevent the first server from receiving any more connections. It's probably better to error and tell the user instead.