So I'm trying to use Unix sockets with Fluentd for a logging task, and once in a while I randomly get the error
dial: {socket_name} resource temporarily unavailable
Any ideas as to why this might be occurring?
I tried adding retry logic to reduce the error, but it still occurs at times.
Also, for Fluentd we are using the default config for Unix socket communication:
func connect() {
    var connection net.Conn
    var err error
    for i := 0; i < retry_count; i++ {
        connection, err = net.Dial("unix", path_to_socket)
        if err == nil {
            break
        }
        time.Sleep(time.Duration(math.Exp2(float64(retry_count))) * time.Millisecond)
    }
    if err != nil {
        fmt.Println(err)
    } else {
        connection.Write(data_to_send_socket)
    }
    defer connection.Close()
}
Go creates its sockets in non-blocking mode, which means that certain system calls that would usually block return immediately instead. In most cases it transparently handles the EAGAIN error (which is what the "resource temporarily unavailable" message indicates) by waiting until the socket is ready to read or write. It doesn't seem to have this logic for the connect call in Dial, though.
It is possible for connect to return EAGAIN when connecting to a UNIX domain socket if its listen queue has filled up. This will happen if clients are connecting to it faster than it is accepting them. Go should probably wait on the socket until it becomes connectable in this case and retry similar to what it does for Read/Write, but it doesn't seem to have that logic.
So your best bet would be to handle the error by waiting and retrying the Dial call. That, or work out why your server isn't accepting connections in a timely manner.
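For example, a minimal sketch of that retry (the socket path, attempt count, and backoff are placeholders) could check for a temporary net.Error before sleeping and dialing again:

// Minimal sketch: retry Dial while the error is temporary (e.g. EAGAIN).
func dialWithRetry(path string, attempts int) (net.Conn, error) {
    var conn net.Conn
    var err error
    for i := 0; i < attempts; i++ {
        conn, err = net.Dial("unix", path)
        if err == nil {
            return conn, nil
        }
        var nerr net.Error
        if !errors.As(err, &nerr) || !nerr.Temporary() {
            return nil, err // not worth retrying
        }
        time.Sleep((10 * time.Millisecond) << uint(i)) // simple exponential backoff
    }
    return nil, err
}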
For the exponential backoff you can use this library: github.com/cenkalti/backoff. The way you have it now, it always sleeps for the same amount of time, because the delay is computed from retry_count rather than the loop variable i.
For the network error, you need to check whether it is a temporary error or not. If it is, retry:
type TemporaryError interface {
    Temporary() bool
}

func dial() (conn net.Conn, err error) {
    backoff.Retry(func() error {
        conn, err = net.Dial("unix", "/tmp/ex.socket")
        if err != nil {
            // If this is a temporary error, return it so Retry tries again.
            if terr, ok := err.(TemporaryError); ok && terr.Temporary() {
                return err
            }
        }
        // If we were successful, or the error is not temporary, stop retrying.
        return nil
    }, backoff.NewExponentialBackOff())
    return
}
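A caller can then treat dial like a plain Dial, for example (data_to_send_socket is the variable from the question):

conn, err := dial()
if err != nil {
    fmt.Println("dial failed:", err)
    return
}
defer conn.Close()
if _, err := conn.Write(data_to_send_socket); err != nil {
    fmt.Println("write failed:", err)
}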
As per the documentation in the README:
Make sure to defer a call to Disconnect after instantiating your client:
defer func() {
    if err = client.Disconnect(ctx); err != nil {
        panic(err)
    }
}()
Does the above documentation mean that I should disconnect during shutdown of the program (using the MongoDB driver)?
They just reminded you that you should always close the connection to the database at some point. When exactly is up to you. Usually, you initialize the database connection at the top-level of your application, so the defer call should be at the same level. For example,
func main() {
    ctx := context.TODO()
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        panic(err)
    }
    defer func() {
        if err = client.Disconnect(ctx); err != nil {
            panic(err)
        }
    }()
    // Pass the client to other components of your application.
    doSomeWork(client)
}
Note: If you are using goroutines, don't forget to synchronize them in main, otherwise the connection will be closed too early.
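For instance, a minimal sketch inside main using sync.WaitGroup (doSomeWork is the illustrative placeholder from above):

var wg sync.WaitGroup
for i := 0; i < 3; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        doSomeWork(client) // hypothetical worker that uses the client
    }()
}
wg.Wait() // ensure all goroutines finish before the deferred Disconnect runs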
I think the documentation means that you should use defer with client.Disconnect in the main function of your program. Thanks to that, your program will close all the MongoDB client connections before exiting.
If you used it in, e.g., a helper function that prepares the client, it would close all the connections right after the client was created, which is probably not what you want.
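To illustrate, a hypothetical helper like the one below would tear the client down as soon as it returns, so the caller receives a client that is already disconnected:

// Anti-pattern (hypothetical): do not defer Disconnect inside a helper.
func newClient(ctx context.Context) (*mongo.Client, error) {
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        return nil, err
    }
    defer func() {
        _ = client.Disconnect(ctx) // runs when newClient returns, not when main exits
    }()
    return client, nil // the returned client is no longer connected
}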
Today when I tried to send 100M of data to my server (a very simple TCP server, also written in Go), I found that the TCPConn.Write method returns 104857600 and a nil error, and then I close the socket. But my server only receives very little data. I think this is because the Write method works asynchronously: although it returns 104857600, only a little data has actually been sent to the server. So I want to know whether there is a way to make Write work synchronously, or how to detect whether all the data has been sent from the socket to the server.
The code is as follows:
server:
const ListenAddress = "192.168.0.128:8888"

func main() {
    var l net.Listener
    var err error
    l, err = net.Listen("tcp", ListenAddress)
    if err != nil {
        fmt.Println("Error listening:", err)
        os.Exit(1)
    }
    defer l.Close()
    fmt.Println("listen on " + ListenAddress)
    for {
        conn, err := l.Accept()
        if err != nil {
            fmt.Println("Error accepting: ", err)
            os.Exit(1)
        }
        // Log an incoming connection.
        fmt.Printf("Received message %s -> %s \n", conn.RemoteAddr(), conn.LocalAddr())
        // Handle connections in a new goroutine.
        go handleRequest(conn)
    }
}
func handleRequest(conn net.Conn) {
    defer conn.Close()
    rcvLen := 0
    rcvData := make([]byte, 20*1024*1024) // 20M
    for {
        l, err := conn.Read(rcvData)
        if err != nil {
            fmt.Printf("%v", err)
            return
        }
        rcvLen += l
        fmt.Printf("recv: %d\r\n", rcvLen)
        conn.Write(rcvData[:l])
    }
}
Client:
conn, err := net.Dial("tcp", "192.168.0.128:8888")
if err != nil {
    fmt.Println(err)
    os.Exit(-1)
}
defer conn.Close()

data := make([]byte, 500*1024*1024)
length, err := conn.Write(data)
fmt.Println("send len: ", length)
The output of the client:
send len: 524288000
The output of the server:
listen on 192.168.0.128:8888
Received message 192.168.0.2:50561 -> 192.168.0.128:8888
recv: 166440
recv: 265720
EOF
I know that if I make the client wait for a while with the SetLinger method, all the data will be sent to the server before the socket is closed. But I want to find a way to make sure the socket sends all the data before it is closed, without calling SetLinger().
Please excuse my poor English.
Did you poll the socket before trying to write?
Behind the socket is your operating system's TCP stack. When you write to a socket, you push bytes into the send buffer. Your operating system then decides on its own when and how to send them. If the receiving end has no space available in its receive buffer, your sending end knows this and will not put any more data into the send buffer.
Make sure your send buffer has enough space for whatever you are trying to send next. This is done by polling the socket; the method is usually called Socket.Poll. I recommend you check the Go docs for the exact usage.
You are not handling the error returned by conn.Read correctly. From the docs (emphasis mine):
When Read encounters an error or end-of-file condition after successfully reading n > 0 bytes, it returns the number of bytes read. It may return the (non-nil) error from the same call or return the error (and n == 0) from a subsequent call. [...]
Callers should always process the n > 0 bytes returned before considering the error err. Doing so correctly handles I/O errors that happen after reading some bytes and also both of the allowed EOF behaviors.
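Applied to the server's loop above, a minimal sketch (reusing rcvData and rcvLen from the question) would process the bytes that were read before acting on the error:

for {
    n, err := conn.Read(rcvData)
    if n > 0 {
        rcvLen += n
        fmt.Printf("recv: %d\r\n", rcvLen)
        if _, werr := conn.Write(rcvData[:n]); werr != nil {
            fmt.Printf("write error: %v\n", werr)
            return
        }
    }
    if err != nil {
        if err != io.EOF {
            fmt.Printf("read error: %v\n", err)
        }
        return
    }
}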
Note that you are re-inventing io.Copy (albeit with an excessive buffer size). Your server code can be rewritten as:
func handleRequest(conn net.Conn) {
    defer conn.Close()
    // io.Copy echoes everything back and loops until EOF or an error occurs.
    n, err := io.Copy(conn, conn)
    if err != nil {
        fmt.Printf("copy error after %d bytes: %v\n", n, err)
    }
}
I love the way Go handles I/O multiplexing internally with epoll and other mechanisms, scheduling green threads (goroutines here) on its own and giving you the freedom to write synchronous code.
I know TCP sockets are non-blocking and that a read will return EAGAIN when no data is available. Given that, conn.Read(buffer) will detect this and block the goroutine doing the read while no data is available in the socket buffer. Is there a way to stop such a goroutine without closing the underlying connection? I am using a connection pool, so closing the TCP connection doesn't make sense for me; I want to return that connection to the pool.
Here is the code to simulate such a scenario:
func main() {
    conn, _ := net.Dial("tcp", "127.0.0.1:9090")
    // Spawn a goroutine to read from the connection.
    go func(conn net.Conn) {
        var message bytes.Buffer
        for {
            k := make([]byte, 255) // buffer
            m, err := conn.Read(k) // blocks here
            if err != nil {
                if err != io.EOF {
                    fmt.Println("Read error : ", err)
                } else {
                    fmt.Println("End of the file")
                }
                break // terminate the loop on error
            }
            // Append the bytes that were read and print them as a string.
            if m > 0 {
                message.Write(k[:m])
                fmt.Println(message.String())
            }
        }
    }(conn)
    // Prevent main from exiting.
    select {}
}
What other approaches can I take if that's not possible?
1) Call syscall.Read and handle this manually. In this case, I need a way to check whether the socket is readable before calling syscall.Read, otherwise I will end up wasting CPU cycles. For my scenario, I think I can skip the event-based polling and keep calling syscall.Read, as there will always be data in my use case.
2) Any suggestions :)
func receive(conn net.Conn, kill <-chan struct{}) error {
    // Spawn a goroutine to read from the connection.
    data := make(chan []byte)
    readErr := make(chan error)
    go func() {
        for {
            b := make([]byte, 255)
            n, err := conn.Read(b)
            if err != nil {
                readErr <- err
                break
            }
            data <- b[:n]
        }
    }()

    for {
        select {
        case b := <-data:
            // Do something with `b`.
            _ = b
        case err := <-readErr:
            // Handle the error.
            return err
        case <-kill:
            // Received a kill signal; return without closing the connection.
            return nil
        }
    }
}
Send an empty struct to kill from another goroutine to stop receiving from the connection. Here's a program that stops receiving after a second:
kill := make(chan struct{})
go func() {
    if err := receive(conn, kill); err != nil {
        log.Fatal(err)
    }
}()
time.Sleep(time.Second)
kill <- struct{}{}
This might not be exactly what you're looking for, because the reading goroutine would still be blocked on Read even after you send to kill. However, the goroutine that handles incoming reads would terminate.
So I'm having some trouble figuring out best practices for using MongoDB concurrently in Go. My first implementation of getting a session looked like this:
var globalSession *mgo.Session

func getSession() (*mgo.Session, error) {
    // Establish our database connection.
    if globalSession == nil {
        var err error
        globalSession, err = mgo.Dial(":27017")
        if err != nil {
            return nil, err
        }
        // Optional. Switch the session to a monotonic behavior.
        globalSession.SetMode(mgo.Monotonic, true)
    }
    return globalSession.Copy(), nil
}
This works great. The trouble I'm running into is that Mongo has a limit of 204 connections, after which it starts refusing them with connection refused because too many open connections: 204. However, since session.Copy() only returns a session and not an error, my program never throws an error even though the connection was refused.
Now what I thought about doing is just having one session and using that instead of a copy, so that I have access to a connection error, like so:
var session *mgo.Session

func NewSession() (*mgo.Session, error) {
    if session == nil {
        var err error
        session, err = mgo.Dial(url)
        if err != nil {
            return nil, err
        }
    }
    return session, nil
}
Now the problem I have with this is that I don't know what would happen if I made concurrent use of that same session.
The key is to duplicate the session and then close it when you've finished with it.
func GetMyData() []myMongoDoc {
    sessionCopy, _ := getSession() // from the question above
    defer sessionCopy.Close()      // this is the important bit
    results := make([]myMongoDoc, 0)
    sessionCopy.DB("myDB").C("myCollection").Find(nil).All(&results)
    return results
}
Having said that, it looks like mgo doesn't actually expose control over the underlying connections (see the comment from Gustavo Niemeyer, who maintains the library). A session pretty much equates to a connection, but even if you call Close() on a session, mgo keeps the underlying connection alive. From reading around, it seems that Clone() might be the way to go, as it reuses the underlying socket and therefore avoids the three-way handshake of creating a new one (see here for more discussion on the difference).
Also see this SO answer describing a standard pattern for handling sessions.
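For comparison, a sketch of the same helper using Clone() on the base session (assuming the globalSession from the question has already been initialized); whether Clone or Copy fits better depends on your consistency needs:

func GetMyDataClone() []myMongoDoc {
    sessionClone := globalSession.Clone() // reuses the underlying socket
    defer sessionClone.Close()
    results := make([]myMongoDoc, 0)
    sessionClone.DB("myDB").C("myCollection").Find(nil).All(&results)
    return results
}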
When I send requests from the following code:
req, err := http.NewRequest("GET", "my_target:", nil)
if err != nil {
    panic(err)
}
req.Close = true

client := http.DefaultClient
resp, err := client.Do(req)
if err != nil {
    panic(err)
}
defer resp.Body.Close()
After a few hours of sending an average of 10 requests per minute, I get this error:
socket: too many open files
How do I find the cause of this problem?
Am I not closing the http.Request?
I thought req.Close = true does that job.
Thanks!
Why are you deferring the close? Are you actually reading from this body?
defer resp.Body.Close()
Do you actually return from the current function before performing another Get? If not, then the defer will never execute, and you'll never release this connection for reuse.
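In other words, a hypothetical loop like this accumulates deferred closes and never releases anything until the whole function returns:

// Hypothetical illustration: the deferred closes only run when pollForever returns,
// so every open response body is held for the lifetime of the loop.
func pollForever(url string) {
    for {
        resp, err := http.Get(url)
        if err == nil {
            defer resp.Body.Close() // piles up; never runs inside the loop
        }
        time.Sleep(6 * time.Second)
    }
}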
req.Close = true is an unusual choice here, as well. This also prevents connection reuse, which is something you'd probably want rather than forbid. This doesn't automatically close the request on your side. It forces the server to immediately close the connection, which you would otherwise reuse. You'll hold your side open until you close it.
Typically for a simple GET request like you have here, I would just do this:
resp, err := http.Get("...")
if err != nil {
    panic(err) // panic seems harsh, usually you'd just return the error. But either way...
}
resp.Body.Close()
There's no need for a special client here. Just use the default one. It'll take care of the rest as long as you make sure to close the response body. And there's no need to complicate things with a defer if you're going to immediately close the body. The reason for defer is to make sure you close the body if you have a bunch of processing later that might return or panic on error.
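If you do read the body, a common sketch (with a placeholder helper and URL) looks like this; deferring the close and fully reading the body also lets the transport reuse the connection:

func fetch(url string) ([]byte, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    // Read (drain) the body so the underlying connection can be reused.
    return io.ReadAll(resp.Body)
}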