Suppose I had a TCP server on Linux; it creates a new goroutine for each new connection. When I want to write data to the TCP connection, should I do it just like this:
conn.Write(data)
or do it in a goroutine dedicated to writing, like this:
func writeRoutine(sendChan chan []byte) {
    for {
        select {
        case msg := <-sendChan:
            conn.Write(msg)
        }
    }
}
just in case the network is busy.
In short, do I need a write buffer in Go, just like in C/C++, when writing to a socket?
PS: maybe I didn't explain the problem clearly.
1. I was talking about the server: a TCP server running on Linux that creates a new goroutine for each new connection, like this:
listener, err := net.ListenTCP("tcp", tcpAddr)
if err != nil {
    log.Error(err.Error())
    os.Exit(-1)
}
for {
    conn, err := listener.AcceptTCP()
    if err != nil {
        continue
    }
    log.Debug("Accept a new connection ", conn.RemoteAddr())
    go handleClient(conn)
}
2. I think my problem isn't really about the code. As we know, when we use ssize_t write(int fd, const void *buf, size_t count) to write to a socket fd in C/C++, a TCP server necessarily needs a write buffer per socket in the code, or only some of the data may be written successfully. Do I have to do the same in Go?
You are actually asking two different questions here:
1) Should you use a goroutine per accepted client connection in your TCP server?
2) Given a []byte, how should you write to the connection?
For 1), the answer is yes. This is the type of pattern that Go is most suited for. If you take a look at the source code of net/http, you will see that it spawns a goroutine for each connection.
As for 2), you should do the same you would do in a C/C++ server: write, check how much was written, and keep writing until you're done, always checking for errors. Here is a code snippet showing how to do it:
func writeConn(conn net.Conn, data []byte) error {
    var start, c int
    var err error
    for {
        // Write the remaining bytes, starting at the first unwritten one.
        if c, err = conn.Write(data[start:]); err != nil {
            return err
        }
        start += c
        if c == 0 || start == len(data) {
            break
        }
    }
    return nil
}
server [...] creates a new goroutine for each new connection
This makes sense because the handler goroutines can block without delaying the server's accept loop.
If you handled each request serially, any blocking syscall would essentially lock up the server for all clients.
goroutine dedicated to writing
This would only make sense in use cases where you're writing either a really big chunk of data or to a very slow connection, and you need your handler to continue unblocked, for instance.
Note that this is not what is commonly understood as a "write buffer".
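If you do go down the dedicated-writer route, a minimal sketch might look like this (the listen address, channel capacity, and the shape of handleClient are illustrative assumptions, not a prescribed API):

package main

import (
    "log"
    "net"
)

// writeLoop serialises all writes to conn through sendChan.
// It exits when sendChan is closed or a write fails.
func writeLoop(conn net.Conn, sendChan <-chan []byte) {
    for msg := range sendChan {
        // net.Conn.Write blocks until the whole slice is written or an error occurs.
        if _, err := conn.Write(msg); err != nil {
            log.Println("write failed:", err)
            return
        }
    }
}

func handleClient(conn net.Conn) {
    defer conn.Close()
    sendChan := make(chan []byte, 16) // buffered so the handler rarely blocks on a slow peer
    defer close(sendChan)             // closing the channel stops writeLoop
    go writeLoop(conn, sendChan)

    // The handler can keep reading and working while writeLoop drains the queue.
    sendChan <- []byte("hello\n")
}

func main() {
    listener, err := net.Listen("tcp", "127.0.0.1:9000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := listener.Accept()
        if err != nil {
            continue
        }
        go handleClient(conn)
    }
}

Closing sendChan is what stops the writer. Note that in this sketch conn.Close can still race with messages that are queued but not yet written, so a production server would wait for writeLoop to finish before closing the connection.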
NOTE
I rewrote this question for the bounty, as I was able to figure out how to solve the first question but didn't want to start a new one. The comments below pertain to the original question, not the revised one.
Description of Issue
My TCP client queries the TCP server once successfully and gets the response, but subsequent requests from the client fail. Also, if I terminate the client and restart it, it fails on the first attempt as well.
Here's what my command prompt looks like:
root@ubuntuT:/home/jon/gocode/udps# ./udpservtcpclient
Text to send: SampleQuery
Message from server: SampleResponse
Text to send: SampleQuery
((End - No Response))
root@ubuntuT:/home/jon/gocode/udps# ./udpservtcpclient
Text to send: SampleQuery
((End - No Response))
What I expect
I expect to be able to query the TCP server from the TCP client endlessly, with the TCP server returning a response from the UDP server every time. Also, if I terminate the TCP client and restart it, it should query the TCP server correctly without hiccups.
What I think
Something is incorrect with the TCP server accepting connections. I moved the UDP portion into its own function that opens and closes UDP connections (code not shown here, as it is not important), and it still does not work after the first connection.
UPDATE
I updated the code to print "Accepted Connection" just below c, err := serverConn.Accept(), and it only printed once, for the first request. Subsequent queries from the client did not print the line, so it has to do with accepting connections.
Source code
Server Code:
package main

import (
    "bufio"
    "log"
    "net"
)

var connUDP *net.UDPConn

func udpRoutine(query string) string {
    connUDP.Write([]byte(query))
    buf := make([]byte, 128)
    n, err := connUDP.Read(buf[0:])
    if err != nil {
        log.Fatal(err)
    }
    response := string(buf[0:n])
    return response
}

func handle(conn net.Conn) error {
    defer conn.Close()
    r := bufio.NewReader(conn)
    w := bufio.NewWriter(conn)
    scanr := bufio.NewScanner(r)
    for {
        scanned := scanr.Scan()
        if !scanned {
            if err := scanr.Err(); err != nil {
                log.Printf("%v(%v)", err, conn.RemoteAddr())
                return err
            }
            break
        }
        response := udpRoutine(scanr.Text())
        w.WriteString(response + "\n")
        w.Flush()
    }
    return nil
}

func main() {
    // set up the TCP listener
    serverConn, err := net.Listen("tcp", "127.0.0.1:8085")
    if err != nil {
        log.Fatal(err)
    }
    defer serverConn.Close()

    // set up the UDP client
    udpAddr, err := net.ResolveUDPAddr("udp4", "127.0.0.1:1175")
    if err != nil {
        log.Fatal(err)
    }
    connUDP, err = net.DialUDP("udp", nil, udpAddr)
    if err != nil {
        log.Fatal(err)
    }
    defer connUDP.Close()

    for {
        c, err := serverConn.Accept()
        if err != nil {
            log.Fatal(err)
        }
        //c.SetDeadline(time.Now().Add(5))
        go handle(c)
    }
}
Client Code:
package main

import (
    "bufio"
    "fmt"
    "net"
    "os"
)

func main() {
    // connect to this socket
    conn, _ := net.Dial("tcp", "127.0.0.1:8085")
    for {
        reader := bufio.NewReader(os.Stdin)
        // read in input from stdin
        fmt.Print("Text to send: ")
        text, _ := reader.ReadString('\n')
        // send to socket
        fmt.Fprintf(conn, text+"\n")
        // listen for reply
        message, _ := bufio.NewReader(conn).ReadString('\n')
        fmt.Print("Message from server: " + message)
    }
}
It seems there are two problems here:
1) UDP Server
Your question describes an issue where the client is not able to make a second request to the server.
I used a simple echo UDP server along with the code you posted for the server and client, and I can't reproduce the problem (I can still make several requests to the server), so I suspect it has to do with the UDP server you are using (whose code I can't see in this question).
I suggest you try this with a simple UDP server that just echoes messages back:
package main

import (
    "fmt"
    "net"
)

func main() {
    conn, _ := net.ListenUDP("udp", &net.UDPAddr{IP: []byte{0, 0, 0, 0}, Port: 1175})
    defer conn.Close()

    buf := make([]byte, 1024)
    for {
        n, addr, _ := conn.ReadFromUDP(buf)
        conn.WriteTo(buf[0:n], addr)
        fmt.Println("Received ", string(buf[0:n]), " from ", addr)
    }
}
2) Extra newline in TCP client
Using the exact code you posted and the UDP server I posted above, this seems to work, but the output I get on the client is not what I would have expected.
That seems to be caused by a second issue, which is this line in the client:
// send to socket
fmt.Fprintf(conn, text + "\n")
The line ending you are sending causes the scanner you use on the server to read two lines (the text you send and then an empty line), making the server write two lines back to the client.
But the client only reads one line, so the second line remains pending until the client connects again.
That can be fixed by simply changing that to:
// send to socket
fmt.Fprintf(conn, text)
Output for the fixed code
Using that UDP server and making that change to the client, this is the output I get when running all three components:
Text to send: first msg
Message from server: first msg
Text to send: second msg
Message from server: second msg
Text to send: third msg
Message from server: third msg
Text to send:
I can then stop just the client, start it again, and it keeps working:
Text to send: fourth msg
Message from server: fourth msg
Text to send:
Additional notes
About the two other lines in the client code that use newlines:
// read in input from stdin
fmt.Print("Text to send: ")
text,_ := reader.ReadString('\n')
That newline is needed there, because when you input the text using standard input, you finish the line with the Enter key (thus ending the line with a newline), so the line should be read up to the \n character.
message, _ := bufio.NewReader(conn).ReadString('\n')
That one is needed because when the server writes the response to the connection, it does w.WriteString(response + "\n"). So the response includes a newline at the end, and you should read up to that newline when reading the response text.
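As an aside, an equivalent way to avoid the extra line entirely is to trim the stdin input on the client and always append exactly one \n. The sketch below (addresses as in your code, everything else illustrative) also reuses a single bufio.Reader for the connection instead of creating a new one on every loop iteration, which can otherwise silently discard data that was already buffered:

package main

import (
    "bufio"
    "fmt"
    "net"
    "os"
    "strings"
)

func main() {
    conn, err := net.Dial("tcp", "127.0.0.1:8085")
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    defer conn.Close()

    stdin := bufio.NewReader(os.Stdin)
    server := bufio.NewReader(conn) // one buffered reader for the whole connection
    for {
        fmt.Print("Text to send: ")
        text, err := stdin.ReadString('\n')
        if err != nil {
            return
        }
        // Strip whatever line ending stdin produced, then send exactly one '\n',
        // so the server's scanner sees exactly one line per request.
        fmt.Fprintf(conn, "%s\n", strings.TrimSpace(text))

        message, err := server.ReadString('\n')
        if err != nil {
            return
        }
        fmt.Print("Message from server: " + message)
    }
}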
1. Can we call send from one thread and recv from another on the same net.UDPConn or net.TCPConn object?
2. Can we call multiple sends in parallel from different threads on the same net.UDPConn or net.TCPConn object?
I am unable to find good documentation on this either.
Is the Go socket API thread-safe?
I find it hard to test whether it is thread-safe.
Any pointers in this direction would be helpful.
My test code is below:
package main

import (
    "fmt"
    "net"
    "sync"
)

func udpServer() {
    // create the listener
    conn, err := net.ListenUDP("udp", &net.UDPAddr{
        IP:   net.IPv4(0, 0, 0, 0),
        Port: 8080,
    })
    if err != nil {
        fmt.Println("listen fail", err)
        return
    }
    defer conn.Close()

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(socket *net.UDPConn) {
            defer wg.Done()
            for {
                // read data
                data := make([]byte, 4096)
                read, remoteAddr, err := socket.ReadFromUDP(data)
                if err != nil {
                    fmt.Println("read data fail!", err)
                    continue
                }
                fmt.Println(read, remoteAddr)
                fmt.Printf("%s\n\n", data)

                // send data
                senddata := []byte("hello client!")
                _, err = socket.WriteToUDP(senddata, remoteAddr)
                if err != nil {
                    fmt.Println("send data fail!", err)
                    return
                }
            }
        }(conn)
    }
    wg.Wait()
}

func main() {
    udpServer()
}
Is this test code OK?
The documentation for net.Conn says:
Multiple goroutines may invoke methods on a Conn simultaneously.
My interpretation of the doc above is that nothing catastrophic will happen if you invoke Read and Write on a net.Conn from multiple goroutines, and that calls to Write on a net.Conn from multiple goroutines will be serialised, so that the bytes from two separate calls to Write will not be interleaved as they are written to the network.
The problem with the code you have presented is that there is no guarantee that Write will write the whole byte slice provided to it in one go. You are ignoring the indication of how many bytes have been written.
_, err = socket.WriteToUDP(senddata, remoteAddr)
So to make sure you write everything, you would need to loop and call Write until all of senddata is sent. But net.Conn only ensures that data from a single call to Write is not interleaved. Given that you could be sending a single block of data with multiple calls to Write, there is no guarantee that the block would reach its destination intact.
So, for example, three "hello client!" messages could arrive in the following form:
"hellohellohello client! client! client!"
So if you want reliable message writing on a net.Conn from multiple goroutines, you will need to synchronise those goroutines to ensure that single messages are written intact.
If I wanted to do this, as a first attempt I would have a single goroutine reading from one or many message channels and writing to the net.Conn, while multiple goroutines write to those message channels, as sketched below.
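Here is a minimal sketch of that single-writer approach (it assumes the UDP test server above is running on 127.0.0.1:8080; the names writer, messages, and done are illustrative):

package main

import (
    "log"
    "net"
)

// writer is the only goroutine that touches conn, so each message
// taken off the channel is written as one uninterleaved unit.
func writer(conn net.Conn, messages <-chan []byte, done chan<- struct{}) {
    defer close(done)
    for msg := range messages {
        if _, err := conn.Write(msg); err != nil {
            log.Println("write error:", err)
            return
        }
    }
}

func main() {
    conn, err := net.Dial("udp", "127.0.0.1:8080") // e.g. the test server above
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    messages := make(chan []byte)
    done := make(chan struct{})
    go writer(conn, messages, done)

    // Any number of goroutines may send whole messages on the channel;
    // the channel serialises them for the single writer.
    for i := 0; i < 3; i++ {
        messages <- []byte("hello client!\n")
    }
    close(messages) // no more messages: let the writer drain and exit
    <-done
}

Because only the writer goroutine ever calls Write, whole messages are never interleaved, regardless of how many producers feed the channel.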
I seem to be struggling with std::io::TcpStream. I'm actually trying to open a TCP connection to another system, but the code below reproduces the problem exactly.
I have a TCP server that simply writes a greeting to the TcpStream upon opening and then loops to keep the connection open.
fn main() {
    let listener = io::TcpListener::bind("127.0.0.1", 8080);
    let mut acceptor = listener.listen();
    for stream in acceptor.incoming() {
        match stream {
            Err(_) => { /* connection failed */ }
            Ok(stream) => spawn(proc() {
                handle(stream);
            })
        }
    }
    drop(acceptor);
}

fn handle(mut stream: io::TcpStream) {
    stream.write(b"Hello Connection");
    loop {}
}
All the client does is attempt to read a single byte from the connection and print it.
fn main() {
    let mut socket = io::TcpStream::connect("127.0.0.1", 8080).unwrap();
    loop {
        match socket.read_byte() {
            Ok(i) => print!("{}", i),
            Err(e) => {
                println!("Error: {}", e);
                break
            }
        }
    }
}
Now the problem is that my client remains blocked on the read until I kill the server or close the TCP connection. This is not what I want: I need to keep a TCP connection open for a very long time and send messages back and forth between client and server. What am I misunderstanding here? I have the exact same problem with the real system I'm communicating with; I only become unblocked once I kill the connection.
Unfortunately, Rust does not have any facility for asynchronous I/O at the moment. There are some attempts to rectify the situation, but they are far from complete. That is, there is a desire to make truly asynchronous I/O possible (proposals include selecting over I/O sources and channels at the same time, which would allow waking tasks that are blocked inside an I/O operation via an event over a channel, though it is not clear how this should be implemented on all supported platforms), but there is still a lot to do, and nothing is really usable now, as far as I'm aware.
You can emulate this to some extent with timeouts, however. This is far from the best solution, but it works. It could look like this (a simplified example from my code base):
let mut socket = UdpSocket::bind(address).unwrap();
let mut buf = [0u8, ..MAX_BUF_LEN];
loop {
    socket.set_read_timeout(Some(5000));
    match socket.recv_from(buf) {
        Ok((amt, src)) => { /* handle successful read */ }
        Err(ref e) if e.kind == TimedOut => {}  // continue
        Err(e) => fail!("error receiving data: {}", e)  // bail out
    }
    // do other work, check exit flags, for example
}
Here recv_from will return an IoError with kind set to TimedOut if no data arrives on the socket within 5 seconds of the recv_from call. You need to reset the timeout at the start of each loop iteration, since it is more like a "deadline" than a timeout: once it expires, all calls will start failing with a timeout error.
This is definitely not the way it should be done, but Rust currently does not provide anything better. At least it gets the job done.
Update
There is now an attempt to create an asynchronous event loop and network I/O based on it. It is called mio. It may be a good temporary (or even permanent, who knows) solution for asynchronous I/O.
I have a goroutine that listens for TCP connections and sends them on a channel back to the main loop. The reason I'm doing this in a goroutine is to make the listening non-blocking and to be able to handle active connections simultaneously.
I have implemented this with a select statement with an empty default case, like this:
go pollTcpConnections(listener, rawConnections)
for {
    // Check for new connections (non-blocking)
    select {
    case tcpConn := <-rawConnections:
        currentCon := NewClientConnection()
        pendingConnections.PushBack(currentCon)
        fmt.Println(currentCon)
        go currentCon.Routine(tcpConn)
    default:
    }
    // ... handle active connections
}
Here is my pollTcpConnections routine:
func pollTcpConnections(listener net.Listener, rawConnections chan net.Conn) {
    for {
        conn, err := listener.Accept() // this blocks, afaik
        if err != nil {
            checkError(err)
        }
        fmt.Println("New connection")
        rawConnections <- conn
    }
}
The problem is that I never receive these connections. If I do it in a blocking way, like this:
for {
    tcpConn := <-rawConnections
    // ...
}
I receive the connections, but it blocks... I have tried buffering the channel as well, but the same thing happens. What am I missing here?
It's a little hard to tell why you're not seeing any connections based on the existing code. One problem with your sample is that you have an empty default case in a select statement, and we can't see what else is happening in this for loop. The way you've written it, that loop might never yield to the scheduler. You're basically saying: "get a thing from the channel; don't have one? OK, start over; get a thing from the channel!", but you never actually wait.
When you do some action that blocks your goroutine, that goroutine yields to the scheduler. So when you do a channel read in the normal fashion and there is no value to be read, the goroutine blocks on the read. Since it's blocked, it also yields to the scheduler so that other goroutines can continue executing on the underlying thread. I'm fairly certain this is why your select with an empty default is breaking: you're causing that goroutine to loop infinitely over the for loop without ever yielding to the scheduler.
It's not clear what the role of pendingConnections is, or whether it's needed at all.
The other thing that's impossible to tell from the behavior is what your checkError function does. It doesn't, for example, continue to the top of the for loop, or bail.
Anyway, it looks like this is more complicated than it needs to be. Just have a function that takes the new connection as its one parameter, and launch it in a new goroutine when a client connects. I always write it like this:
func handleConnection(c net.Conn) {
    // do something with your connection here.
}

for {
    // Wait for a connection.
    conn, err := l.Accept()
    if err != nil {
        // do something with your error. You probably want to break or return here.
        break
    }
    // handle each connection in a new goroutine
    go handleConnection(conn)
}
This is more or less exactly what they do in the documentation.
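If you do need to keep the channel-based design because the main loop multiplexes other periodic work, the fix is to make every case of the select block, rather than using an empty default. A sketch (the listen address and ticker interval are arbitrary placeholders):

package main

import (
    "fmt"
    "net"
    "time"
)

func pollTcpConnections(listener net.Listener, rawConnections chan<- net.Conn) {
    for {
        conn, err := listener.Accept()
        if err != nil {
            return
        }
        rawConnections <- conn
    }
}

func main() {
    listener, err := net.Listen("tcp", "127.0.0.1:9000")
    if err != nil {
        panic(err)
    }
    rawConnections := make(chan net.Conn)
    go pollTcpConnections(listener, rawConnections)

    tick := time.NewTicker(100 * time.Millisecond)
    defer tick.Stop()
    for {
        // Every case blocks, so this goroutine yields to the scheduler
        // instead of spinning through an empty default.
        select {
        case tcpConn := <-rawConnections:
            fmt.Println("new connection:", tcpConn.RemoteAddr())
            // go handleConnection(tcpConn)
        case <-tick.C:
            // handle active connections / other periodic work here
        }
    }
}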
The full code can be downloaded at https://groups.google.com/forum/#!topic/golang-nuts/e1Ir__Dq_gE
Could anyone help me improve this sample code so that it is bug-free?
I think it will help us develop bug-free client/server code.
My development steps:
Create a server which could handle multiple connections by goroutine.
Build a client which works fine with simple protocol.
Expand the client to simulate multiple clients (with option -n=1000 clients as the default).
TODO: try to reduce lock of server
TODO: try to use bufio to enhance throughput
I found this code very unstable, with three problems:
Launching 1000 clients, one of them got an EOF when reading from the server.
Launching 1050 clients, I soon got "too many open files" (no clients opened at all).
Launching 1020 clients, I got a runtime error with a long stack trace:
Start pollServer: pipe: too many open files
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x28 pc=0x4650d0]
Here is my simplified code:
const ClientCount = 1000

func main() {
    srvAddr := "127.0.0.1:10000"
    var wg sync.WaitGroup
    wg.Add(ClientCount)
    for i := 0; i < ClientCount; i++ {
        go func(i int) {
            client(i, srvAddr)
            wg.Done()
        }(i)
    }
    wg.Wait()
}

func client(i int, srvAddr string) {
    conn, e := net.Dial("tcp", srvAddr)
    if e != nil {
        log.Fatalln("Err:Dial():", e)
    }
    defer conn.Close()
    conn.SetTimeout(proto.LINK_TIMEOUT_NS)

    l1 := proto.L1{uint32(i), uint16(rand.Uint32() % 10000)}
    log.Println(conn.LocalAddr(), "WL1", l1)
    e = binary.Write(conn, binary.BigEndian, &l1)
    if e == os.EOF {
        return
    }
    if e != nil {
        return
    }
    // ...
}
This answer on Server Fault [1] suggests that for servers that handle a lot of connections, setting a higher ulimit is the thing to do. Also check for memory or file descriptor leaks in the application using lsof.
ulimit -n 99999
[1] https://serverfault.com/a/48820/110909
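As a sketch of the same idea applied from inside the process (assuming Linux; whether a program should raise its own limits is a judgment call), the soft file-descriptor limit can be raised up to the hard limit with the syscall package:

package main

import (
    "fmt"
    "syscall"
)

func main() {
    var lim syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        panic(err)
    }
    fmt.Printf("open-file limit: soft=%d hard=%d\n", lim.Cur, lim.Max)

    // Raise the soft limit to the hard limit; raising the hard limit
    // itself requires elevated privileges.
    lim.Cur = lim.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
        panic(err)
    }
}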