Connection(localhost:27017[-4]) failed to write: context canceled - mongodb

Here I read that it's necessary to cancel the context. This is what my db.go looks like:
package db

import (
    "context"
    "fmt"
    "log"
    "os"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

const (
    connectionStringTemplate = "mongodb://%s:%s@%s"
)

var DB *mongo.Client
var Ctx context.Context

// Connect creates the connection to MongoDB
func Connect() {
    username := os.Getenv("MONGODB_USERNAME")
    password := os.Getenv("MONGODB_PASSWORD")
    clusterEndpoint := os.Getenv("MONGODB_ENDPOINT")
    connectionURI := fmt.Sprintf(connectionStringTemplate, username, password, clusterEndpoint)

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    client, err := mongo.NewClient(options.Client().ApplyURI(connectionURI))
    if err != nil {
        log.Printf("Failed to create client: %v", err)
    }

    err = client.Connect(ctx)
    if err != nil {
        log.Printf("Failed to connect to cluster: %v", err)
    }

    // Force a connection to verify our connection string
    err = client.Ping(ctx, nil)
    if err != nil {
        log.Printf("Failed to ping cluster: %v", err)
    }

    DB = client
    Ctx = ctx

    log.Printf("Connected to MongoDB!")
}
I execute this in main.go:
func main() {
    // Configure
    db.Connect()
    defer db.DB.Disconnect(context.Background())

    r := gin.Default()

    // Routes
    //r.GET("/movies", handlers.GetAllMoviesHandler)
    r.POST("/movies", handlers.AddMovieHandler)

    // listen and serve on 0.0.0.0:8080
    r.Run()
}
and it works fine as long as my local MongoDB is running.
Now when I use the client like this (I know the naming is still bad):
func AddMovie(movie *Movie) (primitive.ObjectID, error) {
    movie.ID = primitive.NewObjectID()
    result, err := db.DB.Database("movies").Collection("movies").InsertOne(db.Ctx, movie)
    if err != nil {
        log.Printf("Could not create movie: %v", err)
        return primitive.NilObjectID, err
    }
    oid := result.InsertedID.(primitive.ObjectID)
    return oid, nil
}
I get an error like this:
{"msg":{"Code":0,"Message":"connection(localhost:27017[-4]) failed to write: context canceled","Labels":["NetworkError","RetryableWriteError"],"Name":"","Wrapped":{"ConnectionID":"localhost:27017[-4]","Wrapped":{}}}}
It works fine when I comment out the defer cancel(), but I suppose that is not correct? What am I doing wrong here?
Update:
It works when I use
result, err := db.DB.Database("movies").Collection("movies").InsertOne(context.Background(), movie)
instead of ctx. I don't fully understand the usage of the context in this use case. I'm not sure if I should do Ctx = ctx in the Connect function either.
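For what it's worth, the usual pattern is to give each operation its own short-lived context instead of reusing the one stored in Connect (that context is already canceled by the deferred cancel() once Connect returns, which is exactly what the error message says). A minimal sketch of AddMovie along those lines, reusing the db package from above; the 5-second timeout is an arbitrary choice:

func AddMovie(movie *Movie) (primitive.ObjectID, error) {
    // A fresh context for this insert only; cancelling it does not
    // affect the client connection held in db.DB.
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    movie.ID = primitive.NewObjectID()
    result, err := db.DB.Database("movies").Collection("movies").InsertOne(ctx, movie)
    if err != nil {
        log.Printf("Could not create movie: %v", err)
        return primitive.NilObjectID, err
    }
    return result.InsertedID.(primitive.ObjectID), nil
}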

Related

Can I have insecure GET HTTP requests whilst having MTLS securing all other HTTP requests?

I have an HTTP REST service written in Go demonstrating what I'm attempting.
I want GET requests to be insecure and all other REST requests to be secured with MTLS.
My implementation already uses the gin web server library so I'd like to stick with that if possible.
My issue is that I have only been able to apply the tlsConfig to both groups or neither. I've been unable to find a way to apply this at the group level.
package main

import (
    "crypto/tls"
    "crypto/x509"
    "errors"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    router := gin.Default()

    // Unprotected public router for GET requests
    public := router.Group("/")
    // Private router with MTLS
    private := router.Group("/")

    public.GET("/insecure-ping", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "insecure pong",
        })
    })

    private.POST("/secure-ping", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "secure pong",
        })
    })

    // Get the SystemCertPool, continue with an empty pool on error
    rootCAs, err := x509.SystemCertPool()
    if err != nil {
        log.Fatal(err)
    }
    if rootCAs == nil {
        rootCAs = x509.NewCertPool()
    }

    // Create a CA certificate pool and add cacert.pem to it
    caCert, err := ioutil.ReadFile("cacert.pem")
    if err != nil {
        log.Fatal(err)
    }
    if ok := rootCAs.AppendCertsFromPEM(caCert); !ok {
        err := errors.New("failed to append CA cert to local system certificate pool")
        log.Fatal(err)
    }

    server := http.Server{
        Addr:    fmt.Sprintf(":%v", 8080),
        Handler: router,
    }
    server.TLSConfig = &tls.Config{
        RootCAs: rootCAs,
    }

    err = server.ListenAndServeTLS("certificate.crt", "privateKey.key")
    if err != nil {
        log.Fatal(err)
    }
}
Just create two Server instances and run them both, one with ListenAndServe and one with ListenAndServeTLS, configured with the same routes. Because HTTP and HTTPS operate on different ports, they have to have different listeners, but both listeners can use the same (or different) handlers. For example:
publicRouter := gin.Default()

// Unprotected public router for GET requests
public := publicRouter.Group("/")
public.GET("/insecure-ping", func(c *gin.Context) {
    c.JSON(200, gin.H{
        "message": "insecure pong",
    })
})

server := http.Server{
    Addr:    fmt.Sprintf(":%v", 8081), // Or whatever
    Handler: publicRouter,
}
go func() {
    // Plain HTTP listener for the public routes
    if err := server.ListenAndServe(); err != nil {
        log.Fatal(err)
    }
}()

// Private router with MTLS
router := gin.Default()
private := router.Group("/")
private.POST("/secure-ping", func(c *gin.Context) {
    c.JSON(200, gin.H{
        "message": "secure pong",
    })
})

// Get the SystemCertPool, continue with an empty pool on error
rootCAs, err := x509.SystemCertPool()
if err != nil {
    log.Fatal(err)
}
if rootCAs == nil {
    rootCAs = x509.NewCertPool()
}

// Create a CA certificate pool and add cacert.pem to it
caCert, err := ioutil.ReadFile("cacert.pem")
if err != nil {
    log.Fatal(err)
}
if ok := rootCAs.AppendCertsFromPEM(caCert); !ok {
    err := errors.New("failed to append CA cert to local system certificate pool")
    log.Fatal(err)
}

tlsServer := http.Server{
    Addr:    fmt.Sprintf(":%v", 8080),
    Handler: router,
}
tlsServer.TLSConfig = &tls.Config{
    RootCAs: rootCAs,
}

// TLS listener for the private routes
err = tlsServer.ListenAndServeTLS("certificate.crt", "privateKey.key")
if err != nil {
    log.Fatal(err)
}
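One caveat: as written, RootCAs on a server-side tls.Config does not verify client certificates, so neither snippet enforces mutual TLS yet. For real MTLS the private server would typically also set ClientCAs and ClientAuth; a minimal sketch reusing the rootCAs pool built above:

// Require and verify client certificates on the TLS server only; the
// plain-HTTP public server is left untouched.
tlsServer.TLSConfig = &tls.Config{
    ClientCAs:  rootCAs, // pool containing cacert.pem from above
    ClientAuth: tls.RequireAndVerifyClientCert,
}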

Go: client is disconnected

It's been a few weeks since I joined the Gophers team. So far, so good.
I started a new project using the Fiber web framework to build backend APIs.
I am using MongoDB as my database.
database/db.go
package database

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    "go.mongodb.org/mongo-driver/mongo/readpref"
)

var DB *mongo.Database

// connectToMongo : Initialize mongodb...
func connectToMongo() {
    log.Printf("Initializing database")

    client, err := mongo.NewClient(options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        log.Fatal("Could not connect to the database, reason:", err)
    }

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    err = client.Connect(ctx)
    if err != nil {
        log.Fatal("Context error, mongoDB:", err)
    }

    // Cancel context to avoid memory leak
    defer cancel()
    defer client.Disconnect(ctx)

    // Ping our db connection
    err = client.Ping(context.Background(), readpref.Primary())
    if err != nil {
        log.Fatal("Ping, mongoDB:", err)
    }

    log.Printf("Database connected!")

    // Create a database
    DB = client.Database("golang-test")
}

// In Go, a package's init() function runs when the package is first imported,
// so connectToMongo runs before DB is used anywhere else.
func init() {
    connectToMongo()
}
controllers/mongo.controller/mongo.controller.go
package mongocontroller

import (
    "log"

    "github.com/gofiber/fiber/v2"
    service "gitlab.com/.../services/mongoservice"
)

// GetPersons godoc
// @Summary Get persons.
// @Description Get persons
// @Tags persons
// @Produce json
// @Success 200 {object} []service.Person
// @Failure 400 {object} httputil.HTTPError
// @Failure 404 {object} httputil.HTTPError
// @Failure 500 {object} httputil.HTTPError
// @Router /v1/persons [get]
func GetPersons(c *fiber.Ctx) error {
    res, err := service.GetPersons()
    if err != nil {
        log.Fatal("ERROR: in controller...", err)
    }
    return c.JSON(res)
}
services/mongoservice/mongo.service.go
package mongoservice

import (
    "context"
    "log"

    database "gitlab.com/.../database"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
)

// Person : ...
type Person struct {
    ID   primitive.ObjectID `bson:"_id,omitempty" json:"_id,omitempty"`
    Name string             `bson:"name,omitempty" json:"name,omitempty"`
    Age  int                `bson:"age,omitempty" json:"age,omitempty"`
}

func GetPersons() ([]Person, error) {
    ctx := context.Background()
    persons := []Person{}
    log.Printf("mongo data...", ctx)

    cur, err := database.DB.Collection("persons").Find(ctx, bson.M{})
    if err != nil {
        log.Fatal(err)
    }
    defer cur.Close(ctx)

    // Iterate through the returned cursor.
    for cur.Next(ctx) {
        var person Person
        cur.Decode(&person)
        persons = append(persons, person)
    }

    return persons, err
}
Here is my data stored in the database:
The problem is, the line cur, err := database.DB.Collection("persons").Find(ctx, bson.M{}) in the service always throws "client is disconnected".
Any help is appreciated!
Thank you.
You are calling defer client.Disconnect(ctx) in the same function that creates the connection (connectToMongo), so the connection is closed as soon as that function returns. You should return the connection and close it only after you have finished your work. I mean these parts:
defer cancel()
defer client.Disconnect(ctx)
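A minimal sketch of that restructuring, reusing the names from the question's database package (the exported Disconnect helper is an assumption about how the caller would shut things down):

var client *mongo.Client

func connectToMongo() {
    var err error
    client, err = mongo.NewClient(options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        log.Fatal("Could not connect to the database, reason:", err)
    }

    // The timeout only bounds the initial connect and ping; cancelling it
    // afterwards does not tear the client down.
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    if err = client.Connect(ctx); err != nil {
        log.Fatal("Context error, mongoDB:", err)
    }
    if err = client.Ping(ctx, readpref.Primary()); err != nil {
        log.Fatal("Ping, mongoDB:", err)
    }

    DB = client.Database("golang-test")
}

// Disconnect is meant to be deferred from main (defer database.Disconnect())
// so the connection stays open for the whole lifetime of the server.
func Disconnect() {
    if client != nil {
        _ = client.Disconnect(context.Background())
    }
}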

How to properly disconnect MongoDB client

As far as I understand, you have to disconnect from MongoDB after you finish using it, but I'm not entirely sure how to do it right.
var collection *mongo.Collection
var ctx = context.TODO()

func init() {
    clientOptions := options.Client().ApplyURI("mongodb://localhost:27017/")
    client, err := mongo.Connect(ctx, clientOptions)
    if err != nil {
        log.Fatal(err)
    }
    //defer client.Disconnect(ctx)

    err = client.Ping(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("Connected successfully")
    collection = client.Database("testDB").Collection("testCollection") //create DB
}
There is a commented-out function call
defer client.Disconnect(ctx)
which would work fine if all the code lived in main(), but since the deferred call runs as soon as init() returns, the DB is already disconnected by the time main() uses it.
So the question is: what would be the right way to approach this case?
Your application needs the connected MongoDB client in all of your services or repositories, so it is easier to have separate MongoDB connect and disconnect functions in an application package. You don't need to connect the MongoDB client while the server is starting; you can connect lazily, the first time a service or repository needs the connection.
// db.go
package application

import (
    "context"
    "fmt"
    "log"
    "os"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

var client *mongo.Client

func ResolveClientDB() *mongo.Client {
    if client != nil {
        return client
    }

    var err error
    // TODO add to your .env.yml or .config.yml MONGODB_URI: mongodb://localhost:27017
    clientOptions := options.Client().ApplyURI(os.Getenv("MONGODB_URI"))
    client, err = mongo.Connect(context.Background(), clientOptions)
    if err != nil {
        log.Fatal(err)
    }

    // check the connection
    err = client.Ping(context.Background(), nil)
    if err != nil {
        log.Fatal(err)
    }

    // TODO optional you can log your connected MongoDB client
    return client
}

func CloseClientDB() {
    if client == nil {
        return
    }

    err := client.Disconnect(context.TODO())
    if err != nil {
        log.Fatal(err)
    }

    // TODO optional you can log your closed MongoDB client
    fmt.Println("Connection to MongoDB closed.")
}
In main:
func main() {
    // TODO add your main code here
    defer application.CloseClientDB()
}
In your repositories or services you can now get your MongoDB client easily:
// account_repository.go
// TODO add here your account repository interface
func (repository *accountRepository) getClient() *mongo.Client {
    if repository.client != nil {
        return repository.client
    }
    repository.client = application.ResolveClientDB()
    return repository.client
}

func (repository *accountRepository) FindOneByFilter(filter bson.D) (*model.Account, error) {
    var account *model.Account
    collection := repository.getClient().Database("yourDB").Collection("account")
    err := collection.FindOne(context.Background(), filter).Decode(&account)
    return account, err
}
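A hypothetical caller, just to show the flow end to end (the accountService type and the "email" field are assumptions for illustration, not part of the answer above):

// account_service.go (illustrative only)
func (service *accountService) GetByEmail(email string) (*model.Account, error) {
    // Build a single-field filter; FindOneByFilter decodes the first match.
    filter := bson.D{{Key: "email", Value: email}}
    return service.repository.FindOneByFilter(filter)
}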

Why are two sockets with the same 5-tuple accepted when clients connect to the server concurrently?

server.go
package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "net"
    "net/http"
    _ "net/http/pprof"
    "sync"
    "syscall"
)

type ConnSet struct {
    data  map[int]net.Conn
    mutex sync.Mutex
}

func (m *ConnSet) Update(id int, conn net.Conn) error {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    if _, ok := m.data[id]; ok {
        fmt.Printf("add: key %d existed \n", id)
        return fmt.Errorf("add: key %d existed \n", id)
    }
    m.data[id] = conn
    return nil
}

var connSet = &ConnSet{
    data: make(map[int]net.Conn),
}

func main() {
    setLimit()

    ln, err := net.Listen("tcp", ":12345")
    if err != nil {
        panic(err)
    }

    go func() {
        if err := http.ListenAndServe(":6060", nil); err != nil {
            log.Fatalf("pprof failed: %v", err)
        }
    }()

    var connections []net.Conn
    defer func() {
        for _, conn := range connections {
            conn.Close()
        }
    }()

    for {
        conn, e := ln.Accept()
        if e != nil {
            if ne, ok := e.(net.Error); ok && ne.Temporary() {
                log.Printf("accept temp err: %v", ne)
                continue
            }
            log.Printf("accept err: %v", e)
            return
        }

        port := conn.RemoteAddr().(*net.TCPAddr).Port
        connSet.Update(port, conn)
        go handleConn(conn)

        connections = append(connections, conn)
        if len(connections)%100 == 0 {
            log.Printf("total number of connections: %v", len(connections))
        }
    }
}

func handleConn(conn net.Conn) {
    io.Copy(ioutil.Discard, conn)
}

func setLimit() {
    var rLimit syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }
    rLimit.Cur = rLimit.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }
    log.Printf("set cur limit: %d", rLimit.Cur)
}
client.go
package main

import (
    "bytes"
    "flag"
    "fmt"
    "io"
    "log"
    "net"
    "os"
    "strconv"
    "sync"
    "syscall"
    "time"
)

var portFlag = flag.Int("port", 12345, "port")

type ConnSet struct {
    data  map[int]net.Conn
    mutex sync.Mutex
}

func (m *ConnSet) Update(id int, conn net.Conn) error {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    if _, ok := m.data[id]; ok {
        fmt.Printf("add: key %d existed \n", id)
        return fmt.Errorf("add: key %d existed \n", id)
    }
    m.data[id] = conn
    return nil
}

var connSet = &ConnSet{
    data: make(map[int]net.Conn),
}

func echoClient() {
    addr := fmt.Sprintf("127.0.0.1:%d", *portFlag)
    dialer := net.Dialer{}
    conn, err := dialer.Dial("tcp", addr)
    if err != nil {
        fmt.Println("ERROR", err)
        os.Exit(1)
    }

    port := conn.LocalAddr().(*net.TCPAddr).Port
    connSet.Update(port, conn)
    defer conn.Close()

    for i := 0; i < 10; i++ {
        s := fmt.Sprintf("%s", strconv.Itoa(i))
        _, err := conn.Write([]byte(s))
        if err != nil {
            log.Println("write error: ", err)
        }

        b := make([]byte, 1024)
        _, err = conn.Read(b)
        switch err {
        case nil:
            if string(bytes.Trim(b, "\x00")) != s {
                log.Printf("resp req not equal, req: %d, res: %s", i, string(bytes.Trim(b, "\x00")))
            }
        case io.EOF:
            fmt.Println("eof")
            break
        default:
            fmt.Println("ERROR", err)
            break
        }
    }

    time.Sleep(time.Hour)

    if err := conn.Close(); err != nil {
        log.Printf("client conn close err: %s", err)
    }
}

func main() {
    flag.Parse()
    setLimit()

    before := time.Now()

    var wg sync.WaitGroup
    for i := 0; i < 20000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            echoClient()
        }()
    }
    wg.Wait()

    fmt.Println(time.Now().Sub(before))
}

func setLimit() {
    var rLimit syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }
    rLimit.Cur = rLimit.Max
    if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit); err != nil {
        panic(err)
    }
    log.Printf("set cur limit: %d", rLimit.Cur)
}
Running commands:
go run server.go
---
go run client.go
server running screenshot
The client simultaneously initiates 20,000 connections to the server, and the server accepts two connections whose remote ports are exactly the same (within an extremely short period of time).
I tried to use tcpconn.py from bcc (patched from tcpconnect.py by adding skc_num, a.k.a. local_port) and tcpaccept.py to trace the connections, and also found that the remote port is duplicated on the server side while there is no duplicate on the client side.
In my understanding, the 5-tuple of a socket should never be duplicated. Why did the server accept two sockets with exactly the same remote port?
My test environment:
Fedora 31, kernel version 5.3.15 x86_64
and
Ubuntu 18.04.3 LTS, kernel version 4.19.1 x86_64
go version go1.13.5 linux/amd64
wireshark:
server TCP Keep-Alive to both ACK & PSH+ACK
server TCP Keep-Alive to PSH+ACK only
A connection is added to the map data map[int]net.Conn when it is established, but it is not removed from the map when the connection is closed. Once a connection is closed, its port becomes free and can be reused by the operating system for a later connection. That's the reason you can see duplicate ports.
Try removing ports from the map when their connections get closed.
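A minimal sketch of that cleanup on the server side, assuming the ConnSet from above (the Remove method and the close in handleConn are additions):

// Remove deletes a closed connection's entry so its port can be reused
// later without tripping the "key existed" check in Update.
func (m *ConnSet) Remove(id int) {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    delete(m.data, id)
}

func handleConn(conn net.Conn) {
    port := conn.RemoteAddr().(*net.TCPAddr).Port
    defer connSet.Remove(port)
    defer conn.Close()
    io.Copy(ioutil.Discard, conn)
}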

Cannot return database object.

This is my code for working with a Postgres database.
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq"
)

// Details required for connection
const (
    HOST     = "HOSTNAME"
    USER     = "USER"
    PASSWORD = "PASSWORD"
    DATABASE = "DB"
)

func Create() *sql.DB {
    dbinfo := fmt.Sprintf("host=%s user=%s password=%s dbname=%s", HOST, USER, PASSWORD, DATABASE)
    db, err := sql.Open("postgres", dbinfo)
    defer db.Close()
    if err != nil {
        log.Fatal(err)
    }
    err = db.Ping()
    if err != nil {
        log.Fatal(err)
    }
    return db
}

func main() {
    db := Create()

    querStmt, err := db.Prepare("select count(*) from table")
    if err != nil {
        fmt.Printf("Cannot prepare query\n")
        log.Fatal(err)
    }

    res, err := querStmt.Exec()
    if err != nil {
        fmt.Printf("Cannot execute query\n")
        log.Fatal(err)
    }

    fmt.Printf("%v\n", res)
}
When running this code I am getting this error:
Cannot prepare query
2016/03/09 16:57:23 sql: database is closed
If I run the query from inside Create() it works perfectly, but doing the same on the db object returned by Create() inside main() does not work. Thanks for the help.
Your database is closed the moment you return from Create because your defer is inside it and not inside main. Move the defer to main and it should work as intended.
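A minimal sketch of that change, keeping everything else from the question as-is:

func Create() *sql.DB {
    dbinfo := fmt.Sprintf("host=%s user=%s password=%s dbname=%s", HOST, USER, PASSWORD, DATABASE)
    db, err := sql.Open("postgres", dbinfo)
    if err != nil {
        log.Fatal(err)
    }
    if err = db.Ping(); err != nil {
        log.Fatal(err)
    }
    return db // no defer db.Close() here; the caller owns the handle
}

func main() {
    db := Create()
    defer db.Close() // closed only when main exits, after the queries run

    // ... prepare and execute the queries as in the question
}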