I have a goroutine which listens for TCP connections and sends them on a channel back to the main loop. The reason I'm doing this in a goroutine is to make the listening non-blocking, so I can handle active connections at the same time.
I have implemented this with a select statement with an empty default case like this:
go pollTcpConnections(listener, rawConnections)
for {
    // Check for new connections (non-blocking)
    select {
    case tcpConn := <-rawConnections:
        currentCon := NewClientConnection()
        pendingConnections.PushBack(currentCon)
        fmt.Println(currentCon)
        go currentCon.Routine(tcpConn)
    default:
    }
    // ... handle active connections
}
Here is my pollTcpConnections routine:
func pollTcpConnections(listener net.Listener, rawConnections chan net.Conn) {
    for {
        conn, err := listener.Accept() // this blocks, afaik
        if err != nil {
            checkError(err)
        }
        fmt.Println("New connection")
        rawConnections <- conn
    }
}
The problem is that I never receive these connections. If I do it in a blocking way, like this:
for {
    tcpConn := <-rawConnections
    // ...
}
I receive the connections, but it blocks... I have tried buffering the channel as well, but the same thing happens. What am I missing here?
It's a little hard to tell why you're not seeing any connections based on the existing code. One problem with your sample is that you have an empty default case in the select statement, and we can't see what else is happening in that for loop. The way you've written it, the loop might never yield to the scheduler. You're basically saying "get a thing from the channel. Don't have one? OK, start over. Get a thing from the channel!", but you never actually wait.

When you do some action that blocks your goroutine, the goroutine yields to the scheduler. So when you do a channel read in the normal fashion and there's no value to be read, the goroutine blocks on the read. Since it's blocked, it also yields to the scheduler, allowing other goroutines to continue executing on the underlying thread. I'm fairly certain this is why your select with an empty default is breaking: you're making that goroutine spin on the for loop forever without ever yielding to the scheduler.
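If the main loop genuinely needs to interleave accepting new connections with other periodic work, one option is to make the select block on a ticker instead of spinning on an empty default. This is only a sketch of that idea, reusing rawConnections, NewClientConnection, and pendingConnections from your code, with an arbitrary 50ms tick:

    // The select now always blocks on one of its cases, so the goroutine
    // yields to the scheduler while it waits instead of spinning.
    ticker := time.NewTicker(50 * time.Millisecond) // needs the time package
    defer ticker.Stop()
    for {
        select {
        case tcpConn := <-rawConnections:
            currentCon := NewClientConnection()
            pendingConnections.PushBack(currentCon)
            go currentCon.Routine(tcpConn)
        case <-ticker.C:
            // ... handle active connections here, paced by the ticker
        }
    }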
It's not clear what the role of pendingConnections is, or whether it's needed at all.
The other thing that's impossible to tell from the code is what your checkError function does. It doesn't, for example, continue to the top of the for loop, or bail out.
Anyway, it looks like this is more complicated than it needs to be. Just have a function that takes the new connection as its one parameter, and launch it in a new goroutine when a client connects. I always write it like this:
func handleConnection(c net.Conn) {
    // do something with your connection here.
}

for {
    // Wait for a connection.
    conn, err := l.Accept()
    if err != nil {
        // do something with your error. You probably want to break or return here.
        break
    }
    // handle each connection in a new goroutine
    go handleConnection(conn)
}
This is more or less exactly what they do in the documentation.
I am trying to write an HTTP server in Go for school. I am required to AVOID any libraries that make this task easy (net/http, for example).
My problem is that I cannot seem to display the HTTP request headers correctly. I want to print each header to my terminal, line by line.
I have written this program in Java, and it works well, however, I would like to have this working with a Go program.
I have a function called 'handleClient' that takes an accepted socket.
func handleClient(c net.Conn) {
    defer c.Close()
    req, _ := bufio.NewReader(c).ReadString('\n')
    fmt.Print(req)
}
When using a web browser to connect to localhost:8080, my terminal displays "GET / HTTP/1.1", which is correct; however, I need the additional lines to be printed as well. I understand that ReadString('\n') is what is stopping this from happening, but it is the only way I know of to end the line.
How do I read the additional lines?
You can call ReadString inside a loop until you hit EOF.
Something like this:
func handleClient(c net.Conn) {
    defer c.Close()
    r := bufio.NewReader(c)
    for {
        req, err := r.ReadString('\n')
        if err != nil && err != io.EOF {
            panic(err)
        }
        fmt.Println(req)
        // break the loop if the err is EOF
        if err == io.EOF {
            break
        }
    }
}
mkopriva pointed me in the right direction; however, EOF is not the correct way to break the loop. I need to stop looping when I run into a line that is blank, since technically EOF is never reached.
To do this, I have adjusted the condition that breaks the loop to the following:
if len(req) <= 2 {
After all the headers are read and printed, an HTTP request ends with "\r\n", I believe. I can confirm that the last line is in fact of length 2.
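Putting it together, a minimal sketch of the adjusted handler might look like this (same shape as the loop above; the blank "\r\n" line that separates the headers from the body has length 2, which is what the condition tests):

    func handleClient(c net.Conn) {
        defer c.Close()
        r := bufio.NewReader(c)
        for {
            req, err := r.ReadString('\n')
            if err != nil {
                return // connection closed or read error
            }
            fmt.Print(req)
            // A bare "\r\n" (length <= 2) marks the end of the header block.
            if len(req) <= 2 {
                break
            }
        }
    }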
Thanks for the help!
1. Can we call send from one thread and recv from another on the same net.UDPConn or net.TCPConn object?
2. Can we call multiple sends in parallel from different threads on the same net.UDPConn or net.TCPConn object?
I am unable to find good documentation on this either.
Is the golang socket API thread-safe?
I find it hard to test whether it is thread-safe.
Any pointers in this direction would be helpful.
My test code is below:
package main

import (
    "fmt"
    "net"
    "sync"
)

func udp_server() {
    // create listener
    conn, err := net.ListenUDP("udp", &net.UDPAddr{
        IP:   net.IPv4(0, 0, 0, 0),
        Port: 8080,
    })
    if err != nil {
        fmt.Println("listen fail", err)
        return
    }
    defer conn.Close()

    var wg sync.WaitGroup
    for i := 0; i < 10; i = i + 1 {
        wg.Add(1)
        go func(socket *net.UDPConn) {
            defer wg.Done()
            for {
                // read data
                data := make([]byte, 4096)
                read, remoteAddr, err := socket.ReadFromUDP(data)
                if err != nil {
                    fmt.Println("read data fail!", err)
                    continue
                }
                fmt.Println(read, remoteAddr)
                fmt.Printf("%s\n\n", data)

                // send data
                senddata := []byte("hello client!")
                _, err = socket.WriteToUDP(senddata, remoteAddr)
                if err != nil {
                    fmt.Println("send data fail!", err)
                    return
                }
            }
        }(conn)
    }
    wg.Wait()
}

func main() {
    udp_server()
}
Is this test code OK for that purpose?
The documentation for net.Conn says:
Multiple goroutines may invoke methods on a Conn simultaneously.
My interpretation of the doc above is that nothing catastrophic will happen if you invoke Read and Write on a net.Conn from multiple goroutines, and that calls to Write on a net.Conn from multiple goroutines will be serialised, so that the bytes from two separate calls to Write will not be interleaved as they are written to the network.
The problem with the code you have presented is that there is no guarantee that Write will write the whole byte slice provided to it in one go. You are ignoring the indication of how many bytes have been written.
_, err = socket.WriteToUDP(senddata, remoteAddr)
So to make sure you write everything, you would need to loop and call Write until all the senddata is sent. But net.Conn only ensures that data from a single call to Write is not interleaved. Given that you could be sending a single block of data with multiple calls to Write, there is no guarantee that the block would reach its destination intact.
So for example 3 "hello client!" messages could arrive in the following form.
"hellohellohello client! client! client!"
So if you want reliable message writing on a net.Conn from multiple goroutines, you will need to synchronise those goroutines to ensure that single messages are written intact.
If I wanted to do this, as a first attempt I would have a single goroutine reading from one or many message channels and writing to the net.Conn, and then multiple goroutines can write to those message channels.
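A minimal sketch of that single-writer idea might look like the following; the message type and the error handling are placeholders for illustration, not a definitive design:

    // writerLoop is the only goroutine that writes to the connection,
    // so whole messages can never be interleaved with each other.
    func writerLoop(conn net.Conn, messages <-chan []byte) {
        for msg := range messages {
            // Keep writing until the whole message is out or an error occurs.
            for len(msg) > 0 {
                n, err := conn.Write(msg)
                if err != nil {
                    return // placeholder: report the error somewhere real
                }
                msg = msg[n:]
            }
        }
    }

Any number of producer goroutines can then send complete messages over the messages channel, e.g. messages <- []byte("hello client!").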
Suppose I had a TCP server on Linux that creates a new goroutine for each new connection. When I want to write data to the TCP connection, should I do it just like this:
conn.Write(data)
or do it in a goroutine dedicated to writing, like this:
func writeRoutine(sendChan chan []byte) {
    for {
        select {
        case msg := <-sendChan:
            conn.Write(msg)
        }
    }
}
just in case the network is busy.
In short, do I need a write buffer in Go, just like in C/C++, when writing to a socket?
PS: maybe I didn't explain the problem clearly.
1. I talked of the server, meaning a TCP server running on Linux. It creates a new goroutine for each new connection, like this:
listener, err := net.ListenTCP("tcp", tcpAddr)
if err != nil {
    log.Error(err.Error())
    os.Exit(-1)
}
for {
    conn, err := listener.AcceptTCP()
    if err != nil {
        continue
    }
    log.Debug("Accept a new connection ", conn.RemoteAddr())
    go handleClient(conn)
}
2. I think my problem isn't much concerned with the code. As we know, when we use ssize_t write(int fd, const void *buf, size_t count); to write to a socket fd in C/C++, a TCP server necessarily needs a write buffer per socket in its code, because maybe only some of the data is written successfully. I mean, do I have to do the same in Go?
You are actually asking two different questions here:
1) Should you use a goroutine per accepted client connection in your TCP server?
2) Given a []byte, how should I write to the connection?
For 1), the answer is yes. This is the type of pattern that Go is most suited for. If you take a look at the source code of net/http, you will see that it spawns a goroutine for each connection.
As for 2), you should do the same as you would in a C/C++ server: write, check how much was written, and keep on writing until you're done, always checking for errors. Here is a code snippet on how to do it:
func writeConn(data []byte) error {
    // conn is the net.Conn for this client, taken from the enclosing scope.
    var start, c int
    var err error
    for {
        if c, err = conn.Write(data[start:]); err != nil {
            return err
        }
        start += c
        if c == 0 || start == len(data) {
            break
        }
    }
    return nil
}
server [...] creates a new goroutine for each new connection
This makes sense because the handler goroutines can block without delaying the server's accept loop.
If you handled each request serially, any blocking syscall would essentially lock up the server for all clients.
a goroutine dedicated to writing
This would only make sense in use cases where you're writing either a really big chunk of data or to a very slow connection, and you need your handler to continue unblocked, for instance.
Note that this is not what is commonly understood as a "write buffer".
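If you did want such a dedicated writer, a minimal sketch might look like this; the buffer size and the shutdown behaviour are assumptions for illustration, not a prescription:

    // startWriter gives a connection its own writer goroutine. The buffered
    // channel decouples the handler from a slow peer for up to 16 pending
    // messages; beyond that, sends block, which applies backpressure.
    func startWriter(conn net.Conn) chan<- []byte {
        sendChan := make(chan []byte, 16)
        go func() {
            for msg := range sendChan {
                if _, err := conn.Write(msg); err != nil {
                    return // placeholder: log and tear down the connection
                }
            }
        }()
        return sendChan
    }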
I seem to be struggling with the std::io::TcpStream. I'm actually trying to open a TCP connection with another system, but the code below reproduces the problem exactly.
I have a TCP server that simply writes a greeting to the TcpStream upon opening and then loops to keep the connection open.
fn main() {
    let listener = io::TcpListener::bind("127.0.0.1", 8080);
    let mut acceptor = listener.listen();
    for stream in acceptor.incoming() {
        match stream {
            Err(_) => { /* connection failed */ }
            Ok(stream) => spawn(proc() {
                handle(stream);
            })
        }
    }
    drop(acceptor);
}

fn handle(mut stream: io::TcpStream) {
    stream.write(b"Hello Connection");
    loop {}
}
All the client does is attempt to read single bytes from the connection and print them.
fn main() {
    let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();
    loop {
        match socket.read_byte() {
            Ok(i) => print!("{}", i),
            Err(e) => {
                println!("Error: {}", e);
                break
            }
        }
    }
}
Now the problem is that my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want: I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with; I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O now. There are some attempts to rectify the situation, but they are far from complete yet. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks that are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there's still a lot to do, and there's nothing really usable now, as far as I'm aware.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];
loop {
    socket.set_read_timeout(Some(5000));
    match socket.recv_from(buf) {
        Ok((amt, src)) => { /* handle successful read */ }
        Err(ref e) if e.kind == TimedOut => {} // continue
        Err(e) => fail!("error receiving data: {}", e) // bail out
    }
    // do other work, check exit flags, for example
}
Here recv_from will return an IoError with its kind set to TimedOut if no data arrives on the socket within 5 seconds of the recv_from call. You need to reset the timeout inside each loop iteration, since it works more like a "deadline" than a timeout: once it expires, all calls will start to fail with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it does its work.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It probably can be a good temporary (or even permanent, who knows) solution for asynchronous I/O.
I am building a data tool that collects data in a stream and operates on it. I have a main routine, a "process manager", which is responsible for launching new goroutines running an accumulation function. The manager is told to create these routines via a channel-receive case in a select statement, which it runs in an infinite for loop (I already have cancellation wired up for the manager itself and for all of the routines it creates). The problem is that the manager needs to be able to run the goroutine accumulators in its main scope, so that they can operate outside of the select and for loop's scope (I want them to keep running while the manager accepts new cases).
cancel := make(chan struct{})
chanchannel := make(chan chan datatype)

func operationManager(chanchannel chan chan datatype, cancel chan struct{}) {
    for {
        select {
        case newchan := <-chanchannel:
            go runAccum(newchan, cancel)
        case <-cancel:
            return
        }
    }
}

func runAccum(inchan chan datatype, cancel chan struct{}) {
    for {
        select {
        case data := <-inchan:
            // do something with data
        case <-cancel:
            return
        }
    }
}
This is a very, very dumbed-down example of my use case, but I hope it illustrates my problem's component pieces. Let me know if this is possible, feasible, reasonable, or inadvisable. And no, this is not how I implemented my teardown, haha.
There is no "scope" to goroutines. All goroutines are equal.
There is "scope" for closures but your goroutines do not span closures.
So all your goroutines spanned by go runAccum(newchan, cancel) will be like any other goroutine you span, no matter from where.
I assume you did not test your solution?
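To see this concretely, here is a minimal, self-contained sketch (names made up for illustration): goroutines started from inside a select case are not tied to that case or to the loop iteration that spawned them; they run until they return on their own.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        work := make(chan int)

        // A manager in the style of the question: it launches workers
        // from inside a select case and immediately loops again.
        go func() {
            for {
                select {
                case id := <-work:
                    go func(id int) {
                        // This goroutine outlives the select case that
                        // started it and keeps running independently.
                        for i := 0; i < 3; i++ {
                            fmt.Println("worker", id, "tick", i)
                            time.Sleep(10 * time.Millisecond)
                        }
                    }(id)
                }
            }
        }()

        work <- 1
        work <- 2
        time.Sleep(100 * time.Millisecond) // crude wait so the output is visible
    }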