exit 0 not terminating program - sockets

I'm developing a client-server application in OCaml using the high-level network connection functions available in the OCaml Unix library, following the steps at https://caml.inria.fr/pub/docs/oreilly-book/html/book-ora187.html. These functions are:
val open_connection : sockaddr -> in_channel * out_channel
val shutdown_connection : in_channel -> unit
val establish_server : (in_channel -> out_channel -> unit) -> sockaddr -> unit
I'm able to successfully build the client and the verifier, but I cannot terminate the server using OCaml's exit function.
My (minimal) server code is the following:
let handle_service ic oc =
  try
    while true do
      ...
      if ... then raise Finish_interaction
    done
  with
  | Finish_interaction -> raise Sys.Break
  | _ -> ...

let main_server serv_fun =
  if Array.length Sys.argv < 4 then ...
  else
    try
      let port = int_of_string Sys.argv.(1) in
      ...
      let my_address = Unix.inet_addr_loopback in
      Unix.establish_server serv_fun (Unix.ADDR_INET (my_address, port))
    with
    | Sys.Break -> exit 0 (* PROGRAM DOES NOT TERMINATE *)
    | _ -> ...

let go_server () =
  Unix.handle_unix_error main_server handle_service ;;

go_server ()
I can successfully catch the Sys.Break exception, but the exit 0 code after catching that exception does nothing and the server just keeps running and waiting for another client connection.
OCaml documentation says the following regarding establish_server:
The function Unix.establish_server never returns normally.
I don't know if this implies that I can never terminate the program without user interaction (via Ctrl + C, for example).
In a nutshell, how can I terminate my server? The client does terminate after shutdown_connection but the server keeps waiting for incoming connections. BTW, I'm compiling my code using OCamlbuild.

From the documentation of Unix.establish_server:
A new process is created for each connection
I recommend printing the process IDs (Unix.getpid ()) to make sure the process calling exit is the one you're expecting (the parent).
Another thing you can check is that the program is not stuck in the execution of an at_exit callback. For example, the following program enters an infinite loop during the call to exit:
let () =
  at_exit (fun () -> while true do () done);
  print_endline "all is well!";
  exit 0
(probably not the problem you're having but could be useful to future visitors)

I managed to solve my problem without needing to send signals between processes.
I tweaked the establish_server function so that it no longer loops over incoming connections. With the loop removed, it answers a single incoming connection and then simply ends its execution.
Here is the code for the new establish_server function:
let establish_server server_fun sockaddr =
  let domain = Unix.domain_of_sockaddr sockaddr in
  let sock = Unix.socket domain Unix.SOCK_STREAM 0 in
  Unix.bind sock sockaddr;
  Unix.listen sock 3;
  (* while true do *)
  let (s, caller) = Unix.accept sock in
  match Unix.fork () with
  | 0 ->
      if Unix.fork () <> 0 then exit 0;
      let inchan = Unix.in_channel_of_descr s
      and outchan = Unix.out_channel_of_descr s in
      server_fun inchan outchan;
      close_in inchan;
      close_out outchan;
      exit 0
  | id ->
      Unix.close s;
      ignore (Unix.waitpid [] id)
  (* done ;; *)
Restoring the two commented-out lines (the while true do ... done loop) gives you back the original version.
Thanks everyone for the answers!

Related

How do I prevent bb8 connections from breaking after several repeats

I have an application that should use a shared connection pool for all requests. I observe that at seemingly-random times, requests fail with the error type "Closed". I have isolated this behavior into the following example:
use lazy_static::lazy_static;
use bb8_postgres::bb8::Pool;
use bb8_postgres::PostgresConnectionManager;
use bb8_postgres::tokio_postgres::{NoTls, Client};

lazy_static! {
    static ref CONNECTION_POOL: Pool<PostgresConnectionManager<NoTls>> = {
        let manager = PostgresConnectionManager::new_from_stringlike("dbname=demodb host=localhost user=postgres", NoTls).unwrap();
        Pool::builder().build_unchecked(manager)
    };
}

fn main() {
    println!("Hello, world!");
}

#[cfg(test)]
mod test {
    use super::*;

    #[tokio::test]
    async fn much_insert_traffic() {
        much_traffic("INSERT INTO foo(a,b) VALUES (1, 2) RETURNING id").await
    }

    #[tokio::test]
    async fn much_select_traffic() {
        much_traffic("SELECT MAX(id) FROM foo").await
    }

    #[tokio::test]
    async fn much_update_traffic() {
        much_traffic("UPDATE foo SET a = 81 WHERE id = 1919 RETURNING b").await;
    }

    async fn much_traffic(stmt: &str) {
        let c = CONNECTION_POOL.get().await.expect("Get a connection");
        let client = &*c;
        for i in 0..10000i32 {
            let res = client.query_opt(stmt, &[]).await.expect(&format!("Perform repeat {} of {} ok", i, stmt));
        }
    }
}
When executing the tests, more than half the time one of them fails in a later iteration with output similar to the following:
Perform repeat 8782 of UPDATE foo SET a = 81 WHERE id = 1919 RETURNING b ok: Error { kind: Closed, cause: None }
thread 'test::much_update_traffic' panicked at 'Perform repeat 8782 of UPDATE foo SET a = 81 WHERE id = 1919 RETURNING b ok: Error { kind: Closed, cause: None }', src\main.rs:44:23
Turns out, the problem is entirely down to the #[tokio::test] annotation starting up a distinct runtime for each test. The lazy static is initialized under one of these runtimes, and as soon as that runtime shuts down, the pool is destroyed. The other tests (running on different runtimes) can use the value as long as the spawning test is still running, but are met with an invalid state once it has shut down.
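One possible way around this (a sketch, not from the original answer), assuming the lazy_static setup above and tokio with the rt-multi-thread feature enabled: drive every test through one shared runtime instead of the per-test runtime created by #[tokio::test], so the pool's background connections are not tied to any single test. The RUNTIME name is illustrative.
use lazy_static::lazy_static;
use tokio::runtime::Runtime;

lazy_static! {
    // One runtime shared by every test, so the pool's connections outlive any single test.
    static ref RUNTIME: Runtime = Runtime::new().expect("create shared runtime");
}

#[test]
fn much_insert_traffic() {
    // Plain #[test] plus block_on on the shared runtime, instead of #[tokio::test].
    RUNTIME.block_on(async {
        much_traffic("INSERT INTO foo(a,b) VALUES (1, 2) RETURNING id").await;
    });
}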

Multicast UDP packets using Tokio futures

I'm playing around with Tokio and Rust, and as an example I am trying to write a simple UDP proxy that just accepts UDP packets on one socket and sends them out to multiple other destinations. However, I stumble over the situation that I need to send the received packet to multiple addresses and am not sure how to do that in an idiomatic way.
Code I have this far:
extern crate bytes;
extern crate futures;

use std::net::SocketAddr;
use tokio::codec::BytesCodec;
use tokio::net::{UdpFramed, UdpSocket};
use tokio::prelude::*;

fn main() {
    let listen_address = "127.0.0.1:4711".parse::<SocketAddr>().unwrap();
    let forwarder = {
        let socket = UdpSocket::bind(&listen_address).unwrap();
        let peers = vec![
            "192.168.1.136:4711".parse::<SocketAddr>().unwrap(),
            "192.168.1.136:4712".parse::<SocketAddr>().unwrap(),
        ];
        UdpFramed::new(UdpSocket::bind(&listen_address).unwrap(), BytesCodec::new()).for_each(
            move |(bytes, _from)| {
                // These are the problematic lines
                for peer in peers.iter() {
                    socket.send_dgram(&bytes, &peer);
                }
                Ok(())
            },
        )
    };
    tokio::run({
        forwarder
            .map_err(|err| println!("Error: {}", err))
            .map(|_| ())
    });
}
The problematic lines are trying to send the received packet to multiple other addresses using a newly bound socket.
The existing examples all forward packets to a single destination, or internally use mpsc channels to communicate between tasks. I do not think that this is necessary; it should be possible to do this without spawning more than one task per listening socket.
Update: Thanks to @Ömer Erden I got this code that works.
extern crate bytes;
extern crate futures;

use std::net::SocketAddr;
use tokio::codec::BytesCodec;
use tokio::net::{UdpFramed, UdpSocket};
use tokio::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listen_address = "0.0.0.0:4711".parse::<SocketAddr>()?;
    let socket = UdpSocket::bind(&listen_address)?;
    let peers: Vec<SocketAddr> = vec![
        "192.168.1.136:8080".parse()?,
        "192.168.1.136:8081".parse()?,
    ];
    let (mut writer, reader) = UdpFramed::new(socket, BytesCodec::new()).split();
    let forwarder = reader.for_each(move |(bytes, _from)| {
        for peer in peers.iter() {
            writer.start_send((bytes.clone().into(), peer.clone()))?;
        }
        writer.poll_complete()?;
        Ok(())
    });
    tokio::run({
        forwarder
            .map_err(|err| println!("Error: {}", err))
            .map(|_| ())
    });
    Ok(())
}
Note that:
It is not necessary to call poll_complete for each start_send: it just needs to be called after all the start_send calls have been dispatched.
For some reason, the content of the peer is gutted between calls (but there is no compiler error), generating an Error 22 (which usually means a bad address was given to sendto(2)).
Looking in a debugger, it is quite clear that the second time around the peer address points to invalid memory. I opted to clone the peer instead.
I removed the calls to unwrap() and propagate the Result upwards instead.
Your code has a logical mistake: you are trying to bind the same address twice, as sender and receiver respectively. Instead, you can use a stream and a sink; UdpFramed provides both. Please see Sink:
A Sink is a value into which other values can be sent, asynchronously.
let listen_address = "127.0.0.1:4711".parse::<SocketAddr>().unwrap();
let forwarder = {
    let (mut socket_sink, socket_stream) =
        UdpFramed::new(UdpSocket::bind(&listen_address).unwrap(), BytesCodec::new()).split();
    let peers = vec![
        "192.168.1.136:4711".parse::<SocketAddr>().unwrap(),
        "192.168.1.136:4712".parse::<SocketAddr>().unwrap(),
    ];
    socket_stream.for_each(move |(bytes, _from)| {
        for peer in peers.iter() {
            socket_sink.start_send((bytes.clone().into(), *peer));
            socket_sink.poll_complete();
        }
        Ok(())
    })
};
tokio::run({
    forwarder
        .map_err(|err| println!("Error: {}", err))
        .map(|_| ())
});

Bound UDP socket not closed when network becomes unavailable

On Linux, I open a UDP socket and bind it to an address that is currently available. I then listen in a loop for new packets. Then I disable wifi: the interface goes down and the network address is removed from the interface. I would expect the receive call to return an error, but this is not the case.
Is this expected behaviour? Is there a way to receive an error from a call to receive when the address the socket is bound to disappears?
Example code in Rust:
use std::net::UdpSocket;

fn main() {
    let socket = UdpSocket::bind("192.168.2.43:64041").expect("Unable to open socket");
    loop {
        let mut buf = [0u8; 1500];
        match socket.recv_from(&mut buf) {
            Ok((n, _addr)) => println!("Received {} bytes", n),
            Err(err) => println!("Error receiving bytes: {}", err),
        }
    }
}
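There is no answer attached to this one, but as a hedged workaround sketch (not part of the original question): giving the socket a read timeout makes recv_from return periodically, so the loop can re-check whether the bound address still exists instead of blocking forever. The one-second timeout and the re-check placeholder are illustrative assumptions.
use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("192.168.2.43:64041")?;
    // With a read timeout, recv_from returns WouldBlock/TimedOut instead of blocking forever,
    // which gives the loop a chance to notice that the bound address has disappeared.
    socket.set_read_timeout(Some(Duration::from_secs(1)))?;
    let mut buf = [0u8; 1500];
    loop {
        match socket.recv_from(&mut buf) {
            Ok((n, _addr)) => println!("Received {} bytes", n),
            Err(ref e)
                if e.kind() == std::io::ErrorKind::WouldBlock
                    || e.kind() == std::io::ErrorKind::TimedOut =>
            {
                // Timed out: re-check the interface/address state here if needed.
            }
            Err(err) => {
                println!("Error receiving bytes: {}", err);
                break;
            }
        }
    }
    Ok(())
}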

Change HTTP routing by server status

I'm building a REST server in Go and using MongoDB as my database (but this question really applies to any other external resource).
I want my server to start and respond even if the database is not up yet (the case where the database goes down after the server has started is a different, much easier issue).
So my dao package includes a connection goroutine that receives a boolean channel and writes true to that channel when it has successfully connected to the database. If the connection fails, the goroutine keeps retrying every X seconds.
When I use this package in another program I wrote, which is just a command-line tool, I use select with a timeout:
dbConnected := make(chan bool)
storage.Connect(dbConnected)
timeout := time.After(time.Minute)
select {
case <-dbConnected:
    createReport()
case <-timeout:
    log.Fatalln("Can't connect to the database")
}
I want to use the same package in a server, but I don't want to fail the whole server. Instead, I want to start the server with a handler that returns 503 Service Unavailable until the server is connected to the database, and then start serving requests normally. Is there a simple way to implement this logic with the Go standard library? Using solutions like gorilla is an option, but the server is simple with very few APIs, and gorilla is a bit of an overkill.
== edited: ==
I know I can use a middleware, but I don't know how to do that without sharing data between the main method and the handlers. That's why I'm using the channel in the first place.
I have something working, but it is based on shared data. However, the data is a single boolean, so I guess it's not so dramatic. I would love to get comments on this solution:
In the dao package, I have this Connect function, which returns a boolean channel. The private connect goroutine writes true and exits when it succeeds:
func Connect() chan bool {
    connected := make(chan bool)
    go connect(mongoUrl, connected)
    return connected
}
I also added a Ping() function to the dao package; it runs forever and monitors the database status. It reports the status to a new channel and tries to reconnect if needed:
func Ping() chan bool {
    status := make(chan bool)
    go func() {
        for {
            if err := session.Ping(); err != nil {
                session.Close()
                status <- false
                <-Connect()
                status <- true
            }
            time.Sleep(time.Second)
        }
    }()
    return status
}
In the main package, I have this simple type:
type Connected struct {
    isConnected bool
}

// this one is called as a goroutine
func (c *Connected) check(dbConnected chan bool) {
    // first connection, on server boot
    c.isConnected = <-dbConnected
    // monitor the database status
    status := dao.Ping()
    for {
        c.isConnected = <-status
    }
}

// the middleware
func (c *Connected) checkDbHandleFunc(next http.HandlerFunc) http.HandlerFunc {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !c.isConnected {
            w.Header().Add("Retry-After", "10")
            w.Header().Set("Content-Type", "application/json")
            w.WriteHeader(503)
            respBody := `{"error":"The server is busy; Try again soon"}`
            w.Write([]byte(respBody))
        } else {
            next.ServeHTTP(w, r)
        }
    })
}
Middleware usage:
...
connected := Connected{
    isConnected: false,
}
dbConnected := dao.Connect()
go connected.check(dbConnected)

mux := http.NewServeMux()
mux.HandleFunc("/", mainPage)
mux.HandleFunc("/some-db-required-path/", connected.checkDbHandleFunc(someDbRequiredHandler))
...
log.Fatal(http.ListenAndServe(addr, mux))
...
Does it make sense?

Force non blocking read with TcpStream

I've got a thread that maintains a list of sockets, and I'd like to traverse the list, see if there is anything to read, act on it if so, and move on to the next if not. The problem is that as soon as I come across the first node, all execution is halted until something comes through on the read.
I'm using std::io::Read::read(&mut self, buf: &mut [u8]) -> Result<usize>
From the doc
This function does not provide any guarantees about whether it blocks waiting for data, but if an object needs to block for a read but cannot it will typically signal this via an Err return value.
Digging into the source, the TcpStream Read implementation is
impl Read for TcpStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> { self.0.read(buf) }
}
Which invokes
pub fn read(&mut self, buf: &mut [u8]) -> IoResult<uint> {
    let fd = self.fd();
    let dolock = || self.lock_nonblocking();
    let doread = |nb| unsafe {
        let flags = if nb {c::MSG_DONTWAIT} else {0};
        libc::recv(fd,
                   buf.as_mut_ptr() as *mut libc::c_void,
                   buf.len() as wrlen,
                   flags) as libc::c_int
    };
    read(fd, self.read_deadline, dolock, doread)
}
And finally, this calls read<T, L, R>(fd: sock_t, deadline: u64, mut lock: L, mut read: R), where I can see it loop over non-blocking reads until data has been retrieved or an error has occurred.
Is there a way to force a non-blocking read with TcpStream?
Updated Answer
It should be noted that, as of Rust 1.9.0, std::net::TcpStream has added this functionality:
fn set_nonblocking(&self, nonblocking: bool) -> Result<()>
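A short sketch of how that newer API might be used for the polling loop described in the question (the stream list, buffer size, and error handling here are assumptions, not part of the original answer):
use std::io::{ErrorKind, Read};
use std::net::TcpStream;

// Poll a list of streams without blocking on any single one.
fn poll_streams(streams: &mut [TcpStream]) -> std::io::Result<()> {
    let mut buf = [0u8; 512];
    for stream in streams.iter_mut() {
        stream.set_nonblocking(true)?;
        match stream.read(&mut buf) {
            Ok(0) => println!("peer closed the connection"),
            Ok(n) => println!("read {} bytes", n),
            // WouldBlock just means "nothing to read right now"; move on to the next socket.
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => {}
            Err(e) => return Err(e),
        }
    }
    Ok(())
}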
Original Answer
I couldn't quite get it with TcpStream, and I didn't want to pull in a separate lib for I/O operations, so I decided to set the file descriptor as non-blocking before using it and to execute a system call to read/write. Definitely not the safest solution, but it's less work than implementing a new I/O lib, even though MIO looks great.
extern "system" {
fn read(fd: c_int, buffer: *mut c_void, count: size_t) -> ssize_t;
}
pub fn new(user: User, stream: TcpStream) -> Socket {
// First we need to setup the socket as Non-blocking on POSIX
let fd = stream.as_raw_fd();
unsafe {
let ret_value = libc::fcntl(fd,
libc::consts::os::posix01::F_SETFL,
libc::consts::os::extra::O_NONBLOCK);
// Ensure we didnt get an error code
if ret_value < 0 {
panic!("Unable to set fd as non-blocking")
}
}
Socket {
user: user,
stream: stream
}
}
pub fn read(&mut self) {
let count = 512 as size_t;
let mut buffer = [0u8; 512];
let fd = self.stream.as_raw_fd();
let mut num_read = 0 as ssize_t;
unsafe {
let buf_ptr = buffer.as_mut_ptr();
let void_buf_ptr: *mut c_void = mem::transmute(buf_ptr);
num_read = read(fd, void_buf_ptr, count);
if num_read > 0 {
println!("Read: {}", num_read);
}
println!("test");
}
}