Bound UDP socket not closed when network becomes unavailable

On Linux, I open a UDP socket and bind it to an address that is currently available, then listen in a loop for new packets. When I disable Wi-Fi, the interface goes down and the network address is removed from it. I would expect the receive call to return an error, but this is not the case.
Is this expected behaviour? Is there a way to receive an error from a call to receive when the address the socket is bound to disappears?
Example code in Rust:
use std::net::UdpSocket;

fn main() {
    // `mut` is not needed: recv_from takes &self.
    let socket = UdpSocket::bind("192.168.2.43:64041").expect("Unable to open socket");
    loop {
        let mut buf = [0u8; 1500];
        match socket.recv_from(&mut buf) {
            Ok((n, _addr)) => println!("Received {} bytes", n),
            Err(err) => println!("Error receiving bytes: {}", err),
        }
    }
}
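In this situation the blocking recv_from simply never returns. One workaround (a sketch, not a full answer: it assumes that periodically polling the condition is acceptable) is to set a read timeout so the loop wakes up regularly and gets a chance to re-check the interface or bound address:

use std::io::ErrorKind;
use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("192.168.2.43:64041")?;
    // Wake up once per second instead of blocking indefinitely.
    socket.set_read_timeout(Some(Duration::from_secs(1)))?;
    let mut buf = [0u8; 1500];
    loop {
        match socket.recv_from(&mut buf) {
            Ok((n, _addr)) => println!("Received {} bytes", n),
            // On timeout the error kind is WouldBlock (Unix) or TimedOut
            // (Windows); this is the point to re-check the bound address.
            Err(e) if e.kind() == ErrorKind::WouldBlock || e.kind() == ErrorKind::TimedOut => {
                continue;
            }
            Err(e) => return Err(e),
        }
    }
}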

Related

Rust: UnixDatagram socket: Address already in use

I am trying to create simple client/server code with a Unix datagram socket in Rust. The problem is that the server keeps the address in use even after I close the terminal, so it can never bind to the socket file again. What is the best way to close the socket and create a new one on startup?
Here is the sample code:
use std::fs::File;
use std::os::unix::net::UnixDatagram;
use std::path::Path;

fn unlink_socket(path: impl AsRef<Path>) {
    let path = path.as_ref();
    if path.exists() {
        let _ = std::fs::remove_file(path);
    }
}

fn tcp_datagram_server() {
    const FILE_PATH: &str = "/tmp/test.sock";
    let mut buf = vec![0; 4096];
    unlink_socket(FILE_PATH);
    File::create(FILE_PATH).expect("Unable to create file");
    let socket = match UnixDatagram::bind(FILE_PATH) {
        Ok(socket) => socket,
        Err(e) => {
            println!("Couldn't bind: {:?}", e);
            return;
        }
    };
    println!("Waiting for client to connect...");
    loop {
        let data = socket.recv(buf.as_mut_slice()).expect("recv function failed");
        println!("Received {:?}", data);
    }
}

fn main() {
    tcp_datagram_server();
}
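The File::create call is the likely culprit: it puts a regular file at the socket path, so the subsequent UnixDatagram::bind finds the path already occupied. A sketch of the usual pattern (bind_datagram is a hypothetical helper; the same idea appears in the UnixListener answer further down): unlink any stale socket file, then let bind create the socket node itself.

use std::os::unix::net::UnixDatagram;
use std::path::Path;

fn bind_datagram(path: impl AsRef<Path>) -> std::io::Result<UnixDatagram> {
    let path = path.as_ref();
    // Clear out a stale socket file left over from a previous run.
    // Do NOT create a regular file here: bind() creates the socket node.
    if path.exists() {
        std::fs::remove_file(path)?;
    }
    UnixDatagram::bind(path)
}

fn main() -> std::io::Result<()> {
    let socket = bind_datagram("/tmp/test.sock")?;
    let mut buf = vec![0u8; 4096];
    let n = socket.recv(&mut buf)?;
    println!("Received {} bytes", n);
    Ok(())
}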

Multicast UDP packets using Tokio futures

I'm playing around with Tokio and Rust and, as an example, I am trying to write a simple UDP proxy that just accepts UDP packets on one socket and sends them out to multiple other destinations. However, I have stumbled over the situation that I need to send the received packet to multiple addresses and am not sure how to do that in an idiomatic way.
The code I have so far:
extern crate bytes;
extern crate futures;
extern crate tokio;

use std::net::SocketAddr;
use tokio::codec::BytesCodec;
use tokio::net::{UdpFramed, UdpSocket};
use tokio::prelude::*;

fn main() {
    let listen_address = "127.0.0.1:4711".parse::<SocketAddr>().unwrap();
    let forwarder = {
        let socket = UdpSocket::bind(&listen_address).unwrap();
        let peers = vec![
            "192.168.1.136:4711".parse::<SocketAddr>().unwrap(),
            "192.168.1.136:4712".parse::<SocketAddr>().unwrap(),
        ];
        UdpFramed::new(UdpSocket::bind(&listen_address).unwrap(), BytesCodec::new()).for_each(
            move |(bytes, _from)| {
                // These are the problematic lines
                for peer in peers.iter() {
                    socket.send_dgram(&bytes, &peer);
                }
                Ok(())
            },
        )
    };
    tokio::run({
        forwarder
            .map_err(|err| println!("Error: {}", err))
            .map(|_| ())
    });
}
The problematic lines are trying to send the received packet to multiple other addresses using a newly bound socket.
The existing examples all forward packets to a single destination, or internally use mpsc channels to communicate between internal tasks. I do not think that this is necessary; it should be possible without spawning more than one task per listening socket.
Update: thanks to @Ömer-erden I got this code that works.
extern crate bytes;
extern crate futures;
extern crate tokio;

use std::net::SocketAddr;
use tokio::codec::BytesCodec;
use tokio::net::{UdpFramed, UdpSocket};
use tokio::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listen_address = "0.0.0.0:4711".parse::<SocketAddr>()?;
    let socket = UdpSocket::bind(&listen_address)?;
    let peers: Vec<SocketAddr> = vec![
        "192.168.1.136:8080".parse()?,
        "192.168.1.136:8081".parse()?,
    ];
    let (mut writer, reader) = UdpFramed::new(socket, BytesCodec::new()).split();
    let forwarder = reader.for_each(move |(bytes, _from)| {
        for peer in peers.iter() {
            writer.start_send((bytes.clone().into(), peer.clone()))?;
        }
        writer.poll_complete()?;
        Ok(())
    });
    tokio::run({
        forwarder
            .map_err(|err| println!("Error: {}", err))
            .map(|_| ())
    });
    Ok(())
}
Note that:
- It is not necessary to call poll_complete after each start_send: it only needs to be called once, after all of the start_send calls have been dispatched.
- For some reason, the contents of peer were corrupted between calls (with no compiler error), generating error 22 (EINVAL, which usually means a bad address was given to sendto(2)). Looking in a debugger, it was quite clear that the second time around the peer address pointed to invalid memory, so I opted to clone the peer instead.
- I removed the calls to unwrap() and propagate the Result upwards instead.
Your code has a logical mistake: you are trying to bind the same address twice, once as sender and once as receiver. Instead, you can use a stream and a sink. UdpFramed has the functionality to provide both; see Sink:
A Sink is a value into which other values can be sent, asynchronously.
let listen_address = "127.0.0.1:4711".parse::<SocketAddr>().unwrap();
let forwarder = {
    let (mut socket_sink, socket_stream) =
        UdpFramed::new(UdpSocket::bind(&listen_address).unwrap(), BytesCodec::new()).split();
    let peers = vec![
        "192.168.1.136:4711".parse::<SocketAddr>().unwrap(),
        "192.168.1.136:4712".parse::<SocketAddr>().unwrap(),
    ];
    socket_stream.for_each(move |(bytes, _from)| {
        for peer in peers.iter() {
            socket_sink.start_send((bytes.clone().into(), *peer));
            socket_sink.poll_complete();
        }
        Ok(())
    })
};
tokio::run({
    forwarder
        .map_err(|err| println!("Error: {}", err))
        .map(|_| ())
});

GCDAsyncUdpSocket socket closes in between sending 255 packets

I have a module where I have to discover devices by sending packets to all 255 IP addresses on the subnet.
E.g., if the connected IP is 192.188.2.1, then I have to send a packet to each address obtained by changing the last octet, i.e.:
var HOST = "192.188.2.1"
var arr = HOST.components(separatedBy: ".")
for i in 1 ..< 254 {
    dispatchGroup.enter()
    time += 0.005
    DispatchQueue.main.asyncAfter(deadline: .now() + time) {
        let obj = LPScanPacket()
        arr[3] = "\(i)"
        let str = arr.joined(separator: ".")
        SenderWrapper.sendLPPacket(lpPacket: obj, HOST: str)
        dispatchGroup.leave()
    }
}
dispatchGroup.notify(queue: .main) {
    print("Completed sending 👍")
}
But when sending this many packets, I get an error in the udpSocketDidClose delegate method:
Error Domain=NSPOSIXErrorDomain Code=65 "No route to host" UserInfo={NSLocalizedDescription=No route to host, NSLocalizedFailureReason=Error in send() function.}
Firstly, why do I get this error? And is there an alternative way I can achieve this result?
EDIT:
Try running this code. I am trying to get a response from a device connected to the same router, and I use the above code to find the device's IP. But the socket closes in between; sometimes it works and sometimes it doesn't, and I am not able to find out why it closes.
Thanks
A broadcast message is sent to all hosts on a network or subnetwork and is created by setting the host part of the IP address to all 1s.
The error message you received is related to the fact that broadcast messages do not go through routers.
To be able to broadcast a datagram, the underlying socket must be in the broadcast mode. Run man setsockopt in your terminal for further reference.
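For comparison, in Rust (the language used elsewhere on this page) the standard library exposes this socket option directly. A minimal sketch, assuming the subnet broadcast address 192.188.2.255 and a made-up discovery port 9999:

use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    // SO_BROADCAST must be enabled before sending to a broadcast address.
    socket.set_broadcast(true)?;
    // One datagram to the subnet broadcast address reaches every host on
    // the local network; no need for 254 individual sends.
    socket.send_to(b"discover", "192.188.2.255:9999")?;
    Ok(())
}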

How do I close a Unix socket in Rust?

I have a test that opens and listens to a Unix domain socket. The socket is opened and reads data without issues, but it doesn't shut down gracefully.
This is the error I get when I try to run the test a second time:
thread 'test_1' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 48, message: "Address already in use" } }', ../src/libcore/result.rs:799
note: Run with RUST_BACKTRACE=1 for a backtrace.
The code is available on the Rust Playground and there's a GitHub Gist for it.
use std::io::prelude::*;
use std::thread;
use std::net::Shutdown;
use std::os::unix::net::{UnixStream, UnixListener};
Test Case:
#[test]
fn test_1() {
    driver();
    assert_eq!("1", "2");
}
Main entry point function:
fn driver() {
    let listener = UnixListener::bind("/tmp/my_socket.sock").unwrap();
    thread::spawn(|| socket_server(listener));

    // send a message
    busy_work(3);

    // try to disconnect the socket
    let drop_stream = UnixStream::connect("/tmp/my_socket.sock").unwrap();
    let _ = drop_stream.shutdown(Shutdown::Both);
}
Function to send data in intervals:
#[allow(unused_variables)]
fn busy_work(threads: i32) {
    // Make a vector to hold the children which are spawned.
    let mut children = vec![];
    for i in 0..threads {
        // Spin up another thread
        children.push(thread::spawn(|| socket_client()));
    }
    for child in children {
        // Wait for the thread to finish. Returns a result.
        let _ = child.join();
    }
}

fn socket_client() {
    let mut stream = UnixStream::connect("/tmp/my_socket.sock").unwrap();
    stream.write_all(b"hello world").unwrap();
}
Function to handle data:
fn handle_client(mut stream: UnixStream) {
    let mut response = String::new();
    stream.read_to_string(&mut response).unwrap();
    println!("got response: {:?}", response);
}
Server socket that listens to incoming messages:
#[allow(unused_variables)]
fn socket_server(listener: UnixListener) {
    // accept connections and process them, spawning a new thread for each one
    for stream in listener.incoming() {
        match stream {
            Ok(mut stream) => {
                /* connection succeeded */
                let mut response = String::new();
                stream.read_to_string(&mut response).unwrap();
                if response.is_empty() {
                    break;
                } else {
                    thread::spawn(|| handle_client(stream));
                }
            }
            Err(err) => {
                /* connection failed */
                break;
            }
        }
    }
    println!("Breaking out of socket_server()");
    drop(listener);
}
Please learn to create a minimal reproducible example and then take the time to do so. In this case, there's no need for threads or functions or testing frameworks; running this entire program twice reproduces the error:
use std::os::unix::net::UnixListener;

fn main() {
    UnixListener::bind("/tmp/my_socket.sock").unwrap();
}
If you look at the filesystem before and after the test, you will see that the file /tmp/my_socket.sock is not present before the first run and it is present before the second run. Deleting the file allows the program to run to completion again (at which point it recreates the file).
This issue is not unique to Rust:
Note that, once created, this socket file will continue to exist, even after the server exits. If the server subsequently restarts, the file prevents re-binding:
[...]
So, servers should unlink the socket pathname prior to binding it.
You could choose to add some wrapper around the socket that would automatically delete it when it is dropped or create a temporary directory that is cleaned when it is dropped, but I'm not sure how well that would work. You could also create a wrapper function that deletes the file before it opens the socket.
Unlinking the socket when it's dropped
use std::os::unix::net::UnixListener;
use std::path::{Path, PathBuf};

struct DeleteOnDrop {
    path: PathBuf,
    listener: UnixListener,
}

impl DeleteOnDrop {
    fn bind(path: impl AsRef<Path>) -> std::io::Result<Self> {
        let path = path.as_ref().to_owned();
        UnixListener::bind(&path).map(|listener| DeleteOnDrop { path, listener })
    }
}

impl Drop for DeleteOnDrop {
    fn drop(&mut self) {
        // There's no way to return a useful error here, so ignore it.
        let _ = std::fs::remove_file(&self.path);
    }
}
You may also want to consider implementing Deref / DerefMut to make this into a smart pointer for sockets:
impl std::ops::Deref for DeleteOnDrop {
    type Target = UnixListener;

    fn deref(&self) -> &Self::Target {
        &self.listener
    }
}

impl std::ops::DerefMut for DeleteOnDrop {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.listener
    }
}
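With those impls in place, the wrapper can be used anywhere a UnixListener is expected. A brief usage sketch, reusing the DeleteOnDrop type from above and the socket path from the question:

fn main() -> std::io::Result<()> {
    let listener = DeleteOnDrop::bind("/tmp/my_socket.sock")?;
    // Deref lets UnixListener methods be called on the wrapper directly.
    for stream in listener.incoming() {
        let _stream = stream?;
        // ... handle one connection, then stop for this demo ...
        break;
    }
    // When `listener` goes out of scope, Drop removes the socket file.
    Ok(())
}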
Unlinking the socket before it's opened
This is much simpler:
use std::io::ErrorKind;
use std::os::unix::net::UnixListener;
use std::path::Path;

fn bind(path: impl AsRef<Path>) -> std::io::Result<UnixListener> {
    let path = path.as_ref();
    // Ignore a missing file: on the very first run there is nothing to remove.
    match std::fs::remove_file(path) {
        Ok(()) => {}
        Err(e) if e.kind() == ErrorKind::NotFound => {}
        Err(e) => return Err(e),
    }
    UnixListener::bind(path)
}
Note that you can combine the two solutions, such that the socket is deleted before creation and when it's dropped.
I think that deleting during creation is the less optimal solution: if you ever start a second server, it will delete the first server's socket file, and the first server will silently stop receiving new connections. It's probably better to raise an error and tell the user instead.
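A sketch of what that combination might look like, reusing the DeleteOnDrop type and the NotFound handling from above (the constructor name is hypothetical, not from the original answer):

impl DeleteOnDrop {
    // Hypothetical combined constructor: removes any stale socket file,
    // then binds; the file is removed again when the value is dropped.
    fn bind_replacing(path: impl AsRef<Path>) -> std::io::Result<Self> {
        let path = path.as_ref().to_owned();
        match std::fs::remove_file(&path) {
            Ok(()) => {}
            // A missing file just means there is nothing stale to clean up.
            Err(e) if e.kind() == std::io::ErrorKind::NotFound => {}
            Err(e) => return Err(e),
        }
        UnixListener::bind(&path).map(|listener| DeleteOnDrop { path, listener })
    }
}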

Golang Unix socket error: dial: resource temporarily unavailable

So I'm trying to use Unix sockets with fluentd for a logging task, and I find that, once in a while, dialing fails with the error
dial: {socket_name} resource temporarily unavailable
Any ideas as to why this might be occurring?
I tried adding retry logic to reduce the error, but it still occurs at times.
Also, for fluentd we are using the default config for Unix socket communication:
func connect() {
    var connection net.Conn
    var err error
    for i := 0; i < retry_count; i++ {
        connection, err = net.Dial("unix", path_to_socket)
        if err == nil {
            break
        }
        time.Sleep(time.Duration(math.Exp2(float64(retry_count))) * time.Millisecond)
    }
    if err != nil {
        fmt.Println(err)
    } else {
        connection.Write(data_to_send_socket)
    }
    defer connection.Close()
}
Go creates its sockets in non-blocking mode, which means that certain system calls that would usually block instead fail with EAGAIN (which is what the "resource temporarily unavailable" message indicates). In most cases Go transparently handles EAGAIN by waiting until the socket is ready to read or write, but it doesn't seem to have this logic for the connect call in Dial.
It is possible for connect to return EAGAIN when connecting to a Unix domain socket if its listen queue has filled up. This will happen if clients are connecting to it faster than it is accepting them. Go should probably wait on the socket until it becomes connectable in this case and retry, similar to what it does for Read/Write.
So your best bet would be to handle the error by waiting and retrying the Dial call. That, or work out why your server isn't accepting connections in a timely manner.
For the exponential backoff you can use this library: github.com/cenkalti/backoff. I think the way you have it now, it always sleeps for the same amount of time (the exponent uses retry_count rather than the loop variable i).
For the network error, you need to check whether it is a temporary error or not; if it is, retry:
type TemporaryError interface {
    Temporary() bool
}

func dial() (conn net.Conn, err error) {
    backoff.Retry(func() error {
        conn, err = net.Dial("unix", "/tmp/ex.socket")
        if err != nil {
            // if this is a temporary error, then retry
            if terr, ok := err.(TemporaryError); ok && terr.Temporary() {
                return err
            }
        }
        // on success, or on a non-temporary error, stop retrying
        return nil
    }, backoff.NewExponentialBackOff())
    return
}