I have a TCP server and a client; the server does the following:
func providerCallback(conn net.Conn) {
    reader := bufio.NewReader(conn)
    for {
        lenbyte, _ := reader.Peek(4) // errors ignored for brevity
        reader.Discard(4)
        slen := int(binary.BigEndian.Uint32(lenbyte))
        data, _ := reader.Peek(slen)
        process(data)
        reader.Discard(slen)
    }
}
The client seems to send packets faster than process can deal with them, so I'd like to buffer the requests in bufio and process them later.
However, the size of the bufio.Reader is fixed (4096 by default; even if I increase it, it is still fixed), which means I can't simply Reset it, because a packet might be cut off at the end of the buffer, as follows:
|normal data... [First 20 bytes of packet P] | [the rest of packet P]
|------------------- size of bufio ------------------|
How can I splice a packet that is cut off, and reuse the bufio buffer for later packets?
For example,
import (
    "bufio"
    "encoding/binary"
    "io"
    "net"
)

func providerCallback(conn net.Conn) error {
    rdr := bufio.NewReader(conn)
    data := make([]byte, 0, 4*1024)
    for {
        // Read the 4-byte length prefix. io.ReadFull keeps reading across
        // any internal buffer boundary, so a packet cut off at the end of
        // the bufio buffer is reassembled transparently.
        n, err := io.ReadFull(rdr, data[:4])
        data = data[:n]
        if err != nil {
            if err == io.EOF {
                break
            }
            return err
        }
        dataLen := binary.BigEndian.Uint32(data)

        // Grow the scratch buffer only when a packet does not fit.
        if uint64(dataLen) > uint64(cap(data)) {
            data = make([]byte, 0, dataLen)
        }

        // Read the packet body.
        n, err = io.ReadFull(rdr, data[:dataLen])
        data = data[:n]
        if err != nil {
            return err
        }
        process(data)
    }
    return nil
}

func process([]byte) {}
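If you would rather keep explicit control over the framing, a bufio.Scanner with a custom split function also reassembles frames that straddle the internal buffer. A minimal sketch, assuming the same 4-byte big-endian length prefix (the 64 KiB cap is an arbitrary assumption; size it to your protocol):

func splitFrame(data []byte, atEOF bool) (advance int, token []byte, err error) {
    if len(data) < 4 {
        if atEOF && len(data) > 0 {
            return 0, nil, io.ErrUnexpectedEOF // truncated length prefix
        }
        return 0, nil, nil // ask the Scanner for more data
    }
    n := int(binary.BigEndian.Uint32(data[:4]))
    if len(data) < 4+n {
        if atEOF {
            return 0, nil, io.ErrUnexpectedEOF // truncated body
        }
        return 0, nil, nil // frame incomplete, ask for more data
    }
    return 4 + n, data[4 : 4+n], nil
}

func scanFrames(conn net.Conn) error {
    sc := bufio.NewScanner(conn)
    sc.Buffer(make([]byte, 64*1024), 64*1024) // frames larger than this fail the scan
    sc.Split(splitFrame)
    for sc.Scan() {
        process(sc.Bytes())
    }
    return sc.Err()
}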
Related
I've accidentally spotted a bug where parts of a message from a previous connection go into the next message.
I have a basic server with a client. I have removed all the error handling to avoid bloating the examples too much.
Also, I've replaced some Printfs with time.Sleep, since otherwise the server reads the data too fast and I don't get a chance to break the connection in time to reproduce the bug.
The "package" is a simple structure where the first 4 bytes are the length and then comes the content.
Client code:
package main

import (
    "encoding/binary"
    "fmt"
    "net"
)

func main() {
    conn, _ := net.Dial("tcp", "0.0.0.0:8081")
    defer conn.Close()

    str := "msadsakdjsajdklsajdklsajdk"

    // Creating a package
    buf := make([]byte, len(str)+4)
    copy(buf[4:], str)
    binary.LittleEndian.PutUint32(buf[:4], uint32(len(str)))

    for {
        _, err := conn.Write(buf)
        if err != nil {
            fmt.Println(err)
            return
        }
    }
}
Server code:
package main

import (
    "encoding/binary"
    "fmt"
    "net"
    "sync"
    "time"
)

func ReadConnection(conn net.Conn, buf []byte) (err error) {
    maxLen := cap(buf)
    readSize := 0
    for readSize < maxLen {
        // instead of Printf
        time.Sleep(time.Nanosecond * 10)
        readN, err := conn.Read(buf[readSize:])
        if err != nil {
            return err
        }
        readSize += readN
    }
    return nil
}

func handleConnection(conn net.Conn, waitGroup *sync.WaitGroup) {
    waitGroup.Add(1)
    defer conn.Close()
    defer waitGroup.Done()
    fmt.Printf("Serving %s\n", conn.RemoteAddr().String())

    var packageSize int32 = 0
    int32Buf := make([]byte, 4)
    for {
        // read the length
        conn.Read(int32Buf)
        packageSize = int32(binary.LittleEndian.Uint32(int32Buf))
        // assuming the length should be 26
        if packageSize > 26 {
            fmt.Println("Package size error")
            return
        }
        // read the content
        packageBuf := make([]byte, packageSize)
        if err := ReadConnection(conn, packageBuf); err != nil {
            fmt.Printf("ERR: %s\n", err)
            return
        }
        // instead of Printf
        time.Sleep(time.Nanosecond * 100)
    }
}

func main() {
    // establish connection
    listener, _ := net.Listen("tcp", "0.0.0.0:8081")
    defer listener.Close()

    waitGroup := sync.WaitGroup{}
    for {
        conn, err := listener.Accept()
        if err != nil {
            break
        }
        go handleConnection(conn, &waitGroup)
    }
    waitGroup.Wait()
}
So for some reason, int32Buf receives the last 2 bytes of a previous message (d, k) and the first 2 bytes of the length, resulting in the byte slice [107, 100, 26, 0] when it should be [26, 0, 0, 0].
And of course, the rest of the data contains the remaining two zeroes.
conn.Read(int32Buf)
You need to check the return value of conn.Read and compare it against your expectations. Your code assumes that conn.Read will always completely fill the given buffer of 4 bytes.
This assumption is wrong, i.e. it might actually read less data. Specifically, it might read only 2 bytes, in which case you'll end up with \x1a\x00\x00\x00 in your buffer, which still translates to a message length of 26. Only, the first 2 bytes of the message will actually be the last 2 bytes of the length which were not included in the previous read. This means that after reading the 26 bytes it will not have read the full message: 2 bytes are left over and will be included in the next message. This is what you observed.
To be sure that the exact size of the buffer is read, check the return values of conn.Read, or use io.ReadFull. After you've done this it works as expected (from the comment):
Ok, now it works perfect
So why did this happen only in the context of a new connection? Maybe because the additional load due to another connection changed the timing slightly but significantly enough. Still, these are not data read from a different connection but data from the current one, contrary to the description in the question. This could easily be checked by using different messages with different clients.
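For illustration, a minimal sketch of the fixed read loop using io.ReadFull, keeping the question's little-endian length prefix:

int32Buf := make([]byte, 4)
for {
    // io.ReadFull returns an error unless it fills the buffer completely,
    // so a short read can no longer be mistaken for a full length prefix.
    if _, err := io.ReadFull(conn, int32Buf); err != nil {
        return
    }
    packageSize := binary.LittleEndian.Uint32(int32Buf)
    packageBuf := make([]byte, packageSize)
    if _, err := io.ReadFull(conn, packageBuf); err != nil {
        return
    }
    // ... process packageBuf
}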
I am writing a socket server and client pair in Go.
Here is my code for the client:
func handler(conn net.Conn) {
    defer conn.Close()
    buffer := make([]byte, 2048)
    reader := bufio.NewReader(os.Stdin)
    for {
        nbytes, err := reader.Read(buffer)
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Fatal(err)
        }
        log.Printf("client: send %d bytes -> %s", nbytes, buffer)
        _, err = conn.Write(buffer)
        if err != nil {
            log.Fatal(err)
        }
    }
}
Code for the server:
func serverHandler(conn net.Conn) {
    defer conn.Close()
    buffer := make([]byte, 2048)
    for {
        nbytes, err := conn.Read(buffer)
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Print("Error reading: ", err.Error())
        }
        log.Printf("server: %d bytes received -> %s", nbytes, buffer)
        writer := bufio.NewWriter(os.Stdout)
        _, err = writer.Write(buffer)
        if err != nil {
            log.Print("Error writing: ", err.Error())
        }
        writer.Flush()
    }
}
Here is some output:
./server 12345
2019/12/24 00:35:31 Listen on localhost:12345
2019/12/24 00:35:38 client from: 127.0.0.1:51352
2019/12/24 00:35:48 server: 2048 bytes received -> hello?
./client localhost 12345
hello?
2019/12/24 00:35:48 client: send 7 bytes -> hello?
As you can see, the client sent 7 bytes, but the server received 2048 bytes. What is the problem with the server's code?
You didn't send 7 bytes, you sent 2048. You're passing the whole buffer to conn.Write, and it writes all 2048 bytes of it. To send only the number of bytes read, send just the part of the buffer that holds the data you read:
_, err = conn.Write(buffer[:nbytes])
To avoid this class of problem, use this safe buffer idiom for a Read; in this example, it writes exactly the number of bytes read.
buf := make([]byte, 2048)
for {
    n, err := conn.Read(buf[:cap(buf)])
    buf = buf[:n]
    if err != nil {
        // handle error
    }
    _, err = w.Write(buf)
}
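Applied to the server loop from the question, a minimal sketch might look like this (writing straight to os.Stdout instead of recreating a bufio.Writer each iteration):

buffer := make([]byte, 2048)
for {
    nbytes, err := conn.Read(buffer[:cap(buffer)])
    if err != nil {
        if err == io.EOF {
            break
        }
        log.Print("Error reading: ", err.Error())
        return
    }
    // Log and forward only the bytes actually read.
    log.Printf("server: %d bytes received -> %s", nbytes, buffer[:nbytes])
    if _, err := os.Stdout.Write(buffer[:nbytes]); err != nil {
        log.Print("Error writing: ", err.Error())
    }
}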
Talk is cheap, so here is the simple code:
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    addr := "127.0.0.1:8999"

    // Server
    go func() {
        tcpaddr, err := net.ResolveTCPAddr("tcp4", addr)
        if err != nil {
            panic(err)
        }
        listen, err := net.ListenTCP("tcp", tcpaddr)
        if err != nil {
            panic(err)
        }
        for {
            if conn, err := listen.Accept(); err != nil {
                panic(err)
            } else if conn != nil {
                go func(conn net.Conn) {
                    buffer := make([]byte, 1024)
                    n, err := conn.Read(buffer)
                    if err != nil {
                        fmt.Println(err)
                    } else {
                        fmt.Println(">", string(buffer[0:n]))
                    }
                    conn.Close()
                }(conn)
            }
        }
    }()

    time.Sleep(time.Second)

    // Client
    if conn, err := net.Dial("tcp", addr); err == nil {
        for i := 0; i < 2; i++ {
            _, err := conn.Write([]byte("hello"))
            if err != nil {
                fmt.Println(err)
                conn.Close()
                break
            } else {
                fmt.Println("ok")
            }
            // sleep 10 seconds and re-send
            time.Sleep(10 * time.Second)
        }
    } else {
        panic(err)
    }
}
Output:
> hello
ok
ok
The Client writes to the Server twice. After the first read, the Server closes the connection immediately, but the Client sleeps 10 seconds and then writes to the Server again using the same, already closed connection object (conn).
Why can the second write succeed (the returned error is nil)?
Can anyone help?
PS:
In order to check if the buffering feature of the system affects the result of the second write, I edited the Client like this, but it still succeeds:
// Client
if conn, err := net.Dial("tcp", addr); err == nil {
    _, err := conn.Write([]byte("hello"))
    if err != nil {
        fmt.Println(err)
        conn.Close()
        return
    } else {
        fmt.Println("ok")
    }

    // sleep 10 seconds and re-send
    time.Sleep(10 * time.Second)

    b := make([]byte, 400000)
    for i := range b {
        b[i] = 'x'
    }
    n, err := conn.Write(b)
    if err != nil {
        fmt.Println(err)
        conn.Close()
        return
    } else {
        fmt.Println("ok", n)
    }

    // sleep 10 seconds and re-send
    time.Sleep(10 * time.Second)
} else {
    panic(err)
}
And here is the screenshot: (screenshot attachment omitted)
There are several problems with your approach.
Sort-of a preface
The first one is that you do not wait for the server goroutine to complete. In Go, once main() exits for whatever reason, all the other goroutines still running, if any, are simply torn down forcibly. You're trying to "synchronize" things using timers, but this only works in toy situations, and even then only from time to time.
Hence let's fix your code first:
package main

import (
    "fmt"
    "log"
    "net"
    "time"
)

func main() {
    addr := "127.0.0.1:8999"

    tcpaddr, err := net.ResolveTCPAddr("tcp4", addr)
    if err != nil {
        log.Fatal(err)
    }
    listener, err := net.ListenTCP("tcp", tcpaddr)
    if err != nil {
        log.Fatal(err)
    }

    // Server
    done := make(chan error)
    go func(listener net.Listener, done chan<- error) {
        for {
            conn, err := listener.Accept()
            if err != nil {
                done <- err
                return
            }
            go func(conn net.Conn) {
                var buffer [1024]byte
                n, err := conn.Read(buffer[:])
                if err != nil {
                    log.Println(err)
                } else {
                    log.Println(">", string(buffer[0:n]))
                }
                if err := conn.Close(); err != nil {
                    log.Println("error closing server conn:", err)
                }
            }(conn)
        }
    }(listener, done)

    // Client
    conn, err := net.Dial("tcp", addr)
    if err != nil {
        log.Fatal(err)
    }
    for i := 0; i < 2; i++ {
        _, err := conn.Write([]byte("hello"))
        if err != nil {
            log.Println(err)
            err = conn.Close()
            if err != nil {
                log.Println("error closing client conn:", err)
            }
            break
        }
        fmt.Println("ok")
        time.Sleep(2 * time.Second)
    }

    // Shut the server down and wait for it to report back
    err = listener.Close()
    if err != nil {
        log.Fatal("error closing listener:", err)
    }
    err = <-done
    if err != nil {
        log.Println("server returned:", err)
    }
}
I've made a couple of minor fixes, like using log.Fatal (which is log.Print + os.Exit(1)) instead of panicking, removed useless else clauses to adhere to the coding standard of keeping the main flow where it belongs, and lowered the client's timeout. I have also added checking for the possible errors that Close on sockets may return.

The interesting part is that we now properly shut the server down by closing the listener and then waiting for the server goroutine to report back (unfortunately, in this case Go does not return an error of a custom type from net.Listener.Accept, so we can't really check that Accept exited because we closed the listener).

Anyway, our goroutines are now properly synchronized, and there is no undefined behaviour, so we can reason about how the code works.
Remaining problems
Some problems still remain.

The more glaring one is the wrong assumption that TCP preserves message boundaries, that is, that if you write "hello" to the client end of the socket, the server reads back exactly "hello". This is not true: TCP treats both ends of the connection as producing and consuming opaque streams of bytes. This means that when the client writes "hello", the client's TCP stack is free to deliver "he" and postpone sending "llo", and the server's stack is free to yield "hell" to the read call on the socket and only return "o" (and possibly some other data) in a later read.
So, to make the code "real", you'd need to somehow introduce these message boundaries into the protocol above TCP. In this particular case the simplest approach would be using "messages" consisting of a fixed-length prefix, of agreed-upon endianness, that holds the length of the following data, followed by the string data itself. The server would then use a sequence like:
var msg [4100]byte

// Read the 4-byte length prefix.
_, err := io.ReadFull(sock, msg[:4])
if err != nil { /* handle error */ }

mlen := int(binary.BigEndian.Uint32(msg[:4]))
if mlen < 0 || mlen > len(msg)-4 {
    // handle error: length out of range for our buffer
}
if mlen == 0 {
    // empty message; go read the next one
}

// The body starts right after the 4-byte prefix.
_, err = io.ReadFull(sock, msg[4:4+mlen])
if err != nil { /* handle error */ }
s := string(msg[4:4+mlen])
Another approach is to agree that the messages do not contain newlines and to terminate each message with a newline (ASCII LF, \n, 0x0a). The server side would then use something like the usual bufio.Scanner loop to get full lines from the socket, as in the sketch below.
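A minimal sketch of that newline-delimited variant on the server side:

sc := bufio.NewScanner(conn)
for sc.Scan() {
    log.Println(">", sc.Text()) // one message per line, trailing \n stripped
}
if err := sc.Err(); err != nil {
    log.Println("read error:", err)
}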
The remaining problem with your approach is not dealing with what Read on a socket returns: note that io.Reader.Read (which is what sockets implement, among other things) is allowed to return an error while also having read some data from the underlying stream. In your toy example this might rightfully be unimportant, but suppose you're writing a wget-like tool which is able to resume downloading a file: even if reading from the server returned some data and an error, you have to deal with that returned chunk first and only then handle the error.
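A sketch of that pattern, where handleChunk is a hypothetical consumer of the downloaded data:

buf := make([]byte, 32*1024)
for {
    n, err := conn.Read(buf)
    if n > 0 {
        handleChunk(buf[:n]) // consume the data first ...
    }
    if err == io.EOF {
        break // clean end of stream
    }
    if err != nil {
        return err // ... and only then act on the failure
    }
}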
Back to the problem at hand
The problem presented in the question, I believe, happens simply because in your setup you hit some TCP buffering behaviour due to the tiny length of your messages.
On my box, which runs Linux 4.9/amd64, two things reliably "fix" the problem:
- Sending messages 4000 bytes in length: the second call to Write "sees" the problem immediately.
- Doing more Write calls.
For the former, try something like:

msg := make([]byte, 4000)
for i := range msg {
    msg[i] = 'x'
}
for {
    _, err := conn.Write(msg)
    ...
and for the latter, something like:

for {
    _, err := conn.Write([]byte("hello"))
    ...
    fmt.Println("ok")
    time.Sleep(time.Second / 2)
}
(It's sensible to lower the pause between sends in both cases.)
It's interesting to note that the former example hits the write: connection reset by peer error (ECONNRESET in POSIX), while the second one hits write: broken pipe (EPIPE in POSIX).

This is because when we're sending in chunks worth 4k bytes, some of the packets generated for the stream manage to get "in flight" before the server's side of the connection manages to propagate the information about its closure to the client, so those packets hit an already closed socket and get rejected with the RST TCP flag set. In the second example, an attempt to send another chunk of data sees that the client side already knows the connection has been torn down, and fails the send without "touching the wire".
TL;DR, the bottom line
Welcome to the wonderful world of networking. ;-) I'd recommend buying a copy of "TCP/IP Illustrated", reading it, and experimenting. TCP (and IP, and the other protocols above IP) sometimes do not work the way people expect them to when they apply their "common sense".
I'm developing a fast DNS client in Go, just to mess around with. But I'm having trouble reading the server's responses: the reply never seems to arrive, and I know it actually did, because I have Wireshark open and it captured the packet.
Here is the code sample (8.8.8.8 is Google's DNS and the hex msg is a valid DNS query):
package main

import (
    "bufio"
    "encoding/hex"
    "fmt"
    "net"
)

func CheckError(err error) {
    if err != nil {
        fmt.Println("Error: ", err)
    }
}

func main() {
    Conn, err := net.Dial("udp", "8.8.8.8:53")
    CheckError(err)
    defer Conn.Close()

    msg, _ := hex.DecodeString("5ab9010000010000000000001072312d2d2d736e2d68357137646e65650b676f6f676c65766964656f03636f6d0000010001")

    scanner := bufio.NewScanner(Conn)
    buf := []byte(msg)
    _, err1 := Conn.Write(buf)
    if err1 != nil {
        fmt.Println(msg, err1)
    }
    for scanner.Scan() {
        fmt.Println(scanner.Bytes())
    }
}
Here you have the proof that it actually arrives:
(Wireshark screen capture omitted)
I've tested reading directly from the conn with:
func main() {
    Conn, err := net.Dial("udp", "8.8.8.8:53")
    CheckError(err)
    defer Conn.Close()

    msg, _ := hex.DecodeString("5ab9010000010000000000001072312d2d2d736e2d68357137646e65650b676f6f676c65766964656f03636f6d0000010001")

    buf := []byte(msg)
    _, err1 := Conn.Write(buf)
    if err1 != nil {
        fmt.Println(msg, err1)
    }
    Reader(Conn)
}

func Reader(conn net.Conn) {
    var buf []byte
    for {
        conn.Read(buf)
        fmt.Println(buf)
    }
}
You can't use bufio around a UDP connection. UDP is not a stream oriented protocol, so you need to differentiate the individual datagrams yourself, and avoid partial reads to prevent data loss.
In order to read from an io.Reader, you must have space allocated to read into, and you need to use the count of bytes read that the Read operation returns. Your example could be reduced to:
conn, err := net.Dial("udp", "8.8.8.8:53")
if err != nil {
    log.Fatal(err)
}
defer conn.Close()

msg, _ := base64.RawStdEncoding.DecodeString("WrkBAAABAAAAAAAAEHIxLS0tc24taDVxN2RuZWULZ29vZ2xldmlkZW8DY29tAAABAAE")

// Allocate room for the response, and slice the buffer down to the
// number of bytes actually read before using it.
resp := make([]byte, 512)
conn.Write(msg)
n, err := conn.Read(resp)
if err != nil {
    log.Fatal(err)
}
fmt.Printf("%q\n", resp[:n])
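The 512-byte response buffer is a natural fit here, by the way: classic DNS over UDP caps messages at 512 bytes unless EDNS0 is negotiated, so a single read of that size covers a plain response.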
I have zeromq stable 4.1.4 installed using brew on macOS, and have written a simple PUB/SUB program to test ZeroMQ. But when I run the sample program with --bufsize > 5 (to use a buffer larger than 5 MB), e.g. go run go_zmq_pubsub.go --bufsize=6, it throws the following exception:
No buffer space available (tcp.cpp:69)
SIGABRT: abort
PC=0x7fff9911c286 m=0
signal arrived during cgo execution
Below is the program I used to test zeromq 4.x:
package main

import (
    "flag"
    "fmt"
    "strconv"
    "sync"
    "time"

    log "github.com/Sirupsen/logrus"
    zmq "github.com/pebbe/zmq4"
)

var _ = fmt.Println

func main() {
    var port int
    var bufsize int
    flag.IntVar(&port, "port", 7676, "server's zmq tcp port")
    flag.IntVar(&bufsize, "bufsize", 0, "socket kernel buffer size")
    flag.Parse()

    publisher, err := zmq.NewSocket(zmq.PUB)
    if err != nil {
        log.Fatal(err)
    }
    // set publisher kernel transmit buffer size
    // convert into bytes
    if err := publisher.SetSndbuf(bufsize * 1000000); err != nil {
        log.Fatal(err)
    }
    defer publisher.Close()
    publisher.Bind("tcp://*:" + strconv.Itoa(port))

    // set up subscriber
    subscriber, err := zmq.NewSocket(zmq.SUB)
    if err != nil {
        log.Fatal(err)
    }
    // set subscriber kernel receive buffer size
    if err := subscriber.SetRcvbuf(bufsize * 1000000); err != nil {
        log.Fatal(err)
    }
    defer subscriber.Close()
    subscriber.Connect("tcp://127.0.0.1:" + strconv.Itoa(port))
    subscriber.SetSubscribe("")

    var wg sync.WaitGroup
    wg.Add(2)
    idx := 0

    // sender
    go func(wg *sync.WaitGroup) {
        // start streaming messages
        ticker := time.NewTicker(1 * time.Second)
        go func() {
            for {
                select {
                case <-ticker.C:
                    _, err = publisher.Send("PKG:"+strconv.Itoa(idx), 0)
                    idx++
                    if err != nil {
                        log.Error(err)
                    }
                }
            }
        }()
    }(&wg)

    // receiver
    go func(wg *sync.WaitGroup) {
        go func() {
            for {
                payload, err := subscriber.Recv(0)
                _ = payload
                if err != nil {
                    log.Error(err)
                    break
                }
                // now sending into worker pool
                log.Info("RECEIVE:" + payload)
            }
        }()
    }(&wg)

    wg.Wait()
}
On CentOS 7 with libzmq built from source, the above code works without problems.
Not sure if it's due to libzmq or the OS itself.
Thanks.
A buffer size of more than 5 MB is pointless: anything beyond the bandwidth-delay product of the link concerned is wasted space.
Moderate your requirements.
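For a sense of scale: a 100 Mbit/s link with a 100 ms round-trip time has a bandwidth-delay product of about 12.5 MB/s x 0.1 s, roughly 1.25 MB, so socket buffers much larger than that only queue data without improving throughput.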