panic: pq: sorry, too many clients already [duplicate] - postgresql

I'm trying to use Go to insert a row of data into a Postgres table for each new message received from RabbitMQ, using a single connection to the DB which is opened in the init function of the code below.
Rather than opening just one connection, the code opens 497 connections and maxes out the server, which causes the row inserts to stop...
I have tried using the info in these questions: opening and closing DB connection in Go app and open database connection inside a function, which say I should open one connection and use a global db variable so that the main function passes the SQL statement to the connection opened in the init function.
I thought I had done this; however, a new connection is being opened for each new row, so the code stops working once the Postgres connection limit is reached...
I am new to Go and have limited programming experience. I have been trying to understand/resolve this issue for the past two days, and I could really do with some help understanding where I am going wrong...
var db *sql.DB

func init() {
	var err error
	db, err = sql.Open("postgres", "postgres://postgres:postgres@SERVER:PORT/DB")
	if err != nil {
		log.Fatal("Invalid DB config:", err)
	}
	if err = db.Ping(); err != nil {
		log.Fatal("DB unreachable:", err)
	}
}

func main() {
	// RABBITMQ CONNECTION CODE IS HERE
	// EACH MESSAGE RECEIVED IS SPLIT INTO LEGEND, STATUS, TIMESTAMP VARIABLES
	// VARIABLES ARE PASSED TO sqlStatement
	sqlStatement := `
	INSERT INTO heartbeat ("Legend", "Status", "TimeStamp")
	VALUES ($1, $2, $3)
	`
	// sqlStatement IS THEN PASSED TO db.QueryRow
	db.QueryRow(sqlStatement, Legend, Status, TimeStamp)
}
Full code is shown below:
package main

import (
	"database/sql"
	"log"
	"strings"

	_ "github.com/lib/pq"
	"github.com/streadway/amqp"
)

var db *sql.DB

func failOnError(err error, msg string) {
	if err != nil {
		log.Fatalf("%s: %s", msg, err)
	}
}

func init() {
	var err error
	db, err = sql.Open("postgres", "postgres://postgres:postgres@192.168.1.69:5432/test?sslmode=disable")
	if err != nil {
		log.Fatal("Invalid DB config:", err)
	}
	if err = db.Ping(); err != nil {
		log.Fatal("DB unreachable:", err)
	}
}

func main() {
	conn, err := amqp.Dial("amqp://Admin:Admin@192.168.1.69:50003/")
	failOnError(err, "Failed to connect to RabbitMQ")
	defer conn.Close()

	ch, err := conn.Channel()
	failOnError(err, "Failed to open a channel")
	defer ch.Close()

	q, err := ch.QueueDeclare(
		"HEARTBEAT", // name
		false,       // durable
		false,       // delete when unused
		false,       // exclusive
		false,       // no-wait
		nil,         // arguments
	)
	failOnError(err, "Failed to declare a queue")

	msgs, err := ch.Consume(
		q.Name, // queue
		"",     // consumer
		false,  // auto-ack
		false,  // exclusive
		false,  // no-local
		false,  // no-wait
		nil,    // args
	)
	failOnError(err, "Failed to register a consumer")

	forever := make(chan bool)
	go func() {
		for d := range msgs {
			myString := string(d.Body[:])
			result := strings.Split(myString, ",")
			Legend := result[0]
			Status := result[1]
			TimeStamp := result[2]
			sqlStatement := `
			INSERT INTO heartbeat ("Legend", "Status", "TimeStamp")
			VALUES ($1, $2, $3)
			`
			//
			db.QueryRow(sqlStatement, Legend, Status, TimeStamp)
		}
	}()
	<-forever
}

First off, *sql.DB is not a connection but a pool of connections; it will open as many connections as it needs to, up to the limit the Postgres server allows. It only opens new connections when there is no idle one in the pool ready for use.
So the issue is that the connections the DB pool opens aren't being released. Why? Because you're using QueryRow without calling Scan on the returned *Row value.
Under the hood, *Row holds a *Rows instance, which has access to its own connection, and that connection is released automatically when Scan is called. If Scan is not called, the connection is not released, which causes the DB pool to open a new connection on the next call to QueryRow. So since you're not releasing any connections, DB keeps opening new ones until it hits the limit specified by the Postgres settings, and then the next call to QueryRow hangs, waiting for a connection to become idle.
So you either need to use Exec if you don't care about the output, or you need to call Scan on the returned *Row.
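For example, a minimal sketch of the fixed consumer loop using Exec (variable names as in the question); Exec returns its connection to the pool as soon as the statement completes:
for d := range msgs {
	result := strings.Split(string(d.Body), ",")
	Legend, Status, TimeStamp := result[0], result[1], result[2]
	// Exec releases the underlying connection when it finishes,
	// unlike a QueryRow with no subsequent Scan.
	_, err := db.Exec(
		`INSERT INTO heartbeat ("Legend", "Status", "TimeStamp") VALUES ($1, $2, $3)`,
		Legend, Status, TimeStamp,
	)
	if err != nil {
		log.Println("insert failed:", err)
	}
}
As an additional safety net you can cap the pool with db.SetMaxOpenConns(n), but with Exec the leak itself is gone.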

Related

go postgres prepare statement error - panic: runtime error: invalid memory address or nil pointer dereference [duplicate]

I have a set of functions in my web API app. They perform some operations on the data in the Postgres database.
func CreateUser() {
	db, err := sql.Open("postgres", "user=postgres password=password dbname=api_dev sslmode=disable")
	// Do some db operations here
}
I suppose functions should work with the db independently from each other, so now I have sql.Open(...) inside each function. I don't know if that's the correct way to manage the DB connection.
Should I open it somewhere once when the app starts and pass db as an argument to the corresponding functions, instead of opening the connection in every function?
Opening a DB connection every time one is needed is a waste of resources, and it's slow.
Instead, you should create an sql.DB once, when your application starts (or on first demand), and either pass it where it is needed (e.g. as a function parameter or via some context), or simply make it a global variable so everyone can access it. It's safe to call from multiple goroutines.
Quoting from the doc of sql.Open():
The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.
You may use a package init() function to initialize it:
var db *sql.DB

func init() {
	var err error
	db, err = sql.Open("yourdriver", "yourDs")
	if err != nil {
		log.Fatal("Invalid DB config:", err)
	}
}
One thing to note here is that sql.Open() may not create an actual connection to your DB; it may just validate its arguments. To test whether you can actually connect to the db, use DB.Ping(), e.g.:
func init() {
	var err error
	db, err = sql.Open("yourdriver", "yourDs")
	if err != nil {
		log.Fatal("Invalid DB config:", err)
	}
	if err = db.Ping(); err != nil {
		log.Fatal("DB unreachable:", err)
	}
}
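And if you prefer not to use a global, a minimal sketch of passing the pool in explicitly (the users table and the CreateUser signature change are my assumptions, for illustration):
// CreateUser receives the shared pool instead of opening its own connection.
func CreateUser(db *sql.DB) error {
	// hypothetical table/columns, just to show the pattern
	_, err := db.Exec("INSERT INTO users (name) VALUES ($1)", "alice")
	return err
}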
I will use a Postgres example.
package main
Import the necessary packages, and don't forget the Postgres driver:
import (
	"database/sql"

	_ "github.com/lib/pq" // postgres driver
)
Initialize your connection in the package scope:
var db *sql.DB
Have an init function for your connection:
func init() {
	var err error
	db, err = sql.Open("postgres", "connectionString")
	// connectionString example => 'postgres://username:password@localhost/dbName?sslmode=disable'
	if err != nil {
		panic(err)
	}
	err = db.Ping()
	if err != nil {
		panic(err)
	}
	// Note: we haven't deferred db.Close() in init, since that would close
	// the pool as soon as init returns. You could close it in main or omit it.
}
Main function:
func main() {
	defer db.Close() // optional
	// run your db functions
}
Check out this example:
https://play.golang.org/p/FAiGbqeJG0H

Goroutine opening a new connection to database after each request (sqlx) and ticker

Let's consider the following goroutine:
func main() {
	...
	go dbGoRoutine()
	...
}
And the func:
func dbGoRoutine() {
	db, err := sqlx.Connect("postgres", GetPSQLInfo())
	if err != nil {
		panic(err)
	}
	defer db.Close()

	ticker := time.NewTicker(10 * time.Second)
	for range ticker.C {
		_, err := db.Queryx("SELECT * FROM table")
		if err != nil {
			// handle
		}
	}
}
Each time the function iterates on the ticker, it opens a new Cloud SQL connection:
[service... cloudsql-proxy] 2019/11/08 17:05:05 New connection for "location:exemple-db"
I can't figure out why it opens a new connection each time, since the sqlx.Connect call is not in the for loop.
This issue is due to how the Query functions in the sql package work: they return a *Rows value, which the documentation describes as:
Rows is the result of a query. Its cursor starts before the first row of the result set.
That cursor holds a connection from the pool until the rows are closed, and since they are never closed here, each tick forces the pool to open a fresh connection.
If you don't need the result set, try using Exec() instead.
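A minimal sketch of the loop fixed both ways (same query as in the question):
for range ticker.C {
	// Option 1: if you need the results, close the rows when done
	// so the connection goes back into the pool.
	rows, err := db.Queryx("SELECT * FROM table")
	if err != nil {
		continue // handle
	}
	// ... iterate over rows here ...
	rows.Close()

	// Option 2: if you don't care about the result set:
	// if _, err := db.Exec("SELECT * FROM table"); err != nil { /* handle */ }
}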

Named prepared statement in pgx lib, how does it work?

Introduction
database/sql
In the Go standard sql library, the *Stmt type has methods defined like:
func (s *Stmt) Exec(args ...interface{}) (Result, error)
func (s *Stmt) Query(args ...interface{}) (*Rows, error)
A new (unnamed) statement is prepared by:
func (db *DB) Prepare(query string) (*Stmt, error)
Connection pool is abstracted and not directly accessible
A statement is prepared on a single connection
If the connection is not available at statement execution time, it will be re-prepared on a new connection.
pgx
The PreparedStatement type doesn't have any methods defined. A new named prepared statement is prepared by:
func (p *ConnPool) Prepare(name, sql string) (*PreparedStatement, error)
Operations are directly on the connection pool
The statement gets prepared on all connections of the pool
There is no clear way to execute the prepared statement
In a GitHub comment, the author explains the architectural differences between pgx and database/sql in more detail. The documentation on Prepare also states (emphasis mine):
Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec/PrepareEx without concern for if the statement has already been prepared.
Small example
package main

import (
	"github.com/jackc/pgx"
)

func main() {
	conf := pgx.ConnPoolConfig{
		ConnConfig: pgx.ConnConfig{
			Host:     "/run/postgresql",
			User:     "postgres",
			Database: "test",
		},
		MaxConnections: 5,
	}
	db, err := pgx.NewConnPool(conf)
	if err != nil {
		panic(err)
	}
	_, err = db.Prepare("my-query", "select $1")
	if err != nil {
		panic(err)
	}
	// What to do with the prepared statement?
}
Question(s)
The name argument gives me the impression it can be executed by calling it by name, but how?
The documentation gives the impression that the Query/Exec methods somehow leverage prepared statements. However, those methods don't take a name argument. How does the matching work?
Presumably, matching is done by the query content. Then what's the whole point of naming statements?
Possible answers
This is how far I got myself:
There are no methods that refer to the queries by name (assumption)
Matching is done on the query body in conn.ExecEx(). If the query has not yet been prepared, it will be prepared on the fly:
ps, ok := c.preparedStatements[sql]
if !ok {
	var err error
	ps, err = c.prepareEx("", sql, nil)
	if err != nil {
		return "", err
	}
}
PostgreSQL itself needs it for something (assumption).
@mkopriva pointed out that the sql text was misleading me. It has a double function here: if the sql variable does not match a key in the c.preparedStatements[sql] map, the query contained in sql gets prepared and a new *PreparedStatement struct is assigned to ps. If it does match a key, ps will point to that entry of the map.
So effectively you can do something like:
package main

import (
	"fmt"

	"github.com/jackc/pgx"
)

func main() {
	conf := pgx.ConnPoolConfig{
		ConnConfig: pgx.ConnConfig{
			Host:     "/run/postgresql",
			User:     "postgres",
			Database: "test",
		},
		MaxConnections: 5,
	}
	db, err := pgx.NewConnPool(conf)
	if err != nil {
		panic(err)
	}
	if _, err := db.Prepare("my-query", "select $1::int"); err != nil {
		panic(err)
	}
	row := db.QueryRow("my-query", 10)
	var i int
	if err := row.Scan(&i); err != nil {
		panic(err)
	}
	fmt.Println(i)
}

Ensure MongoDB expires data at dynamic time intervals and calls are idempotent

I am using MongoDB to save user-generated links in storage. The user can state how long they want the URL to be saved before it expires. Every user ID is unique too.
Ideally, I would like my requests to be idempotent. I would like to make as many calls as needed without having to check if there was an expiry value on the last call.
My code below seems to give me:
"Index with name: creationtime_1 already exists with different options" or
index does not exist.
This is my first run with MongoDB and I would appreciate any insights. I think I might have redundant checks in my code too, but I can't figure out how else to do it.
```
// mongo settings
sessionTTL := mgo.Index{
	Key:         []string{"creationtime"},
	Unique:      false,
	DropDups:    false,
	Background:  true,
	ExpireAfter: time.Hour * time.Duration(expires)} // Expire in expire time

// START MONGODB
session, err := mgo.Dial(tokenMongo)
if err != nil {
	return "", err
}
defer session.Close()
//session.SetSafe(&mgo.Safe{})
// Optional. Switch the session to a monotonic behavior.
id := uuid.NewV4().String()
thistime := time.Now().Local()

// find index
err = session.DB("tokenresults").C("tokenurl").Find(bson.M{"id": id}).One(&result)
if err == nil {
	// Drop old values if they exist // can't drop if empty
	if err := session.DB("tokenresults").C("tokenurl").DropIndex("creationtime"); err != nil {
		return "", err
	}
}

// add stuff
c := session.DB("tokenresults").C("tokenurl")
err = c.Insert(&TokenUrl{id, tokenstring, thistime})
if err != nil {
	return "", err
}

// create index // add session ttl // can't create if it exists
if err := session.DB("tokenresults").C("tokenurl").EnsureIndex(sessionTTL); err != nil {
	return "", err
}
```
The Solution
The approach is documented: use a date field, set its value to the date the document expires, create a TTL index with ExpireAfterSeconds set to 0, and MongoDB's background TTL purging process will delete the expired documents.
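A minimal mgo sketch of that approach (collection and field names are placeholders; note that mgo cannot express ExpireAfter: 0, which is why the full example below uses 1 * time.Second):
// Each document carries the date at which it should expire, e.g.:
//   doc := bson.M{"payload": "...", "expirationDate": time.Now().Add(ttl)}
// The TTL index then lets MongoDB's background process purge it.
err := session.DB("test").C("sessions").EnsureIndex(mgo.Index{
	Key:         []string{"expirationDate"},
	ExpireAfter: 1 * time.Second, // mgo quirk: 0 would fail to create a TTL index
})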
Notes
However, there is some fuzziness in using TTL indices. Since it would be too costly to have a process for each document which is to be expired, waiting for the expiration time and then deleting the document, MongoDB chose a different solution: a background process checks for expired documents once a minute. So there is no guarantee that your documents expire immediately at their expiration time, and a document might exist up to slightly under two minutes longer than the set date of expiration (missing the first run because of overload or whatever, and only being deleted in the next run). Note, however, that this only occurs under very special circumstances. Usually, your documents get deleted within the minute of their expiration.
Explanation
What we basically do here is add a field ExpirationDate and create a TTL index that checks this expiration date. What value ExpirationDate is set to is totally up to you. Use a factory pattern to generate sessions, or whatever suits you.
Note that there are some caveats explained in the code below.
package main

import (
	"flag"
	"fmt"
	"log"
	"time"

	mgo "gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

const (
	// SESSION_TIMEOUT is a fixed and relatively short
	// timeout for demo purposes
	SESSION_TIMEOUT = 1 * time.Minute
)

// Session is just a sample session struct
// with various session related data and the
// date on which a session should expire.
type Session struct {
	ID             bson.ObjectId `bson:"_id"`
	User           string
	Foo            string
	Bar            string
	ExpirationDate time.Time `bson:"expirationDate"`
}

// NewSession is just a simple helper method to
// return a session with a properly set expiration time
func NewSession(user, foo, bar string) Session {
	// We use a static timeout here.
	// However, you can easily adapt this to use an arbitrary timeout.
	return Session{
		ID:             bson.NewObjectId(),
		User:           user,
		Foo:            foo,
		Bar:            bar,
		ExpirationDate: time.Now().Add(SESSION_TIMEOUT),
	}
}

var (
	mgohost string
	mgoport int
	db      string
	col     string
)

func init() {
	flag.StringVar(&mgohost, "host", "localhost", "MongoDB host")
	flag.IntVar(&mgoport, "port", 27017, "MongoDB port")
	flag.StringVar(&db, "db", "test", "MongoDB database")
	flag.StringVar(&col, "collection", "ttltest", "MongoDB collection")
}

func main() {
	flag.Parse()
	c, err := mgo.Dial(fmt.Sprintf("mongodb://%s:%d/%s", mgohost, mgoport, db))
	if err != nil {
		log.Fatalf("Error connecting to '%s:%d/%s': %s", mgohost, mgoport, db, err)
	}
	// We use a goroutine here in order to make sure
	// that even when EnsureIndex blocks, our program continues
	go func() {
		log.Println("Ensuring sessionTTL index in background")
		// Request a connection from the pool
		m := c.DB(db).Session.Copy()
		defer m.Close()
		// We need to set this to 1 as 0 would fail to create the TTL index.
		// See https://github.com/go-mgo/mgo/issues/103 for details
		// This will expire the session within the minute after ExpirationDate.
		//
		// The TTL purging is done once a minute only.
		// See https://docs.mongodb.com/manual/core/index-ttl/#timing-of-the-delete-operation
		// for details
		m.DB(db).C(col).EnsureIndex(mgo.Index{ExpireAfter: 1 * time.Second, Key: []string{"expirationDate"}})
		log.Println("sessionTTL index is ready")
	}()
	s := NewSession("mwmahlberg", "foo", "bar")
	if err := c.DB(db).C(col).Insert(&s); err != nil {
		log.Fatalf("Error inserting %#v into %s.%s: %s", s, db, col, err)
	}
	l := Session{}
	if err := c.DB(db).C(col).Find(nil).One(&l); err != nil {
		log.Fatalf("Could not load session from %s.%s: %s", db, col, err)
	}
	log.Printf("Session with ID %s loaded for user '%s' which will expire in %s", l.ID, l.User, time.Until(l.ExpirationDate))
	time.Sleep(2 * time.Minute)
	// Let's check if the session is still there
	if n, err := c.DB(db).C(col).Count(); err != nil {
		log.Fatalf("Error counting documents in %s.%s: %s", db, col, err)
	} else if n > 0 {
		log.Fatalf("Oops! Something went wrong!")
	}
	log.Println("All sessions were expired.")
}

Re-creating mgo sessions in case of errors (read tcp 127.0.0.1:46954->127.0.0.1:27017: i/o timeout)

I wonder about MongoDB session management in Go using mgo, especially how to correctly ensure a session is closed and how to react to write failures.
I have read the following:
Best practice to maintain a mgo session
Should I copy session for each operation in mgo?
Still, I cannot apply it to my situation.
I have two goroutines which store event after event into MongoDB, sharing the same *mgo.Session, both looking essentially like the following:
func storeEvents(session *mgo.Session) {
	session_copy := session.Copy()
	// *** is it correct to defer the session close here? <-----
	defer session_copy.Close()
	col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
	for {
		event := GetEvent()
		err := col.Insert(&event)
		if err != nil {
			// *** insert FAILED - how to react properly? <-----
			session_copy = session.Copy()
			defer session_copy.Close()
		}
	}
}
col.Insert(&event) after some hours returns the error
read tcp 127.0.0.1:46954->127.0.0.1:27017: i/o timeout
and I am unsure how to react to this. After this error occurs, it occurs on all subsequent writes, hence it seems I have to create a new session. My alternatives seem to be:
1) restart the whole goroutine, i.e.
if err != nil {
	go storeEvents(session)
	return
}
2) create a new session copy
if err != nil {
	session_copy = session.Copy()
	defer session_copy.Close()
	col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
	continue
}
--> Is it correct how I use defer session_copy.Close()? (Note the above defer references the Close() function of another session.) Anyway, those sessions will never be closed, since the function never returns; i.e., with time, many sessions will be created and not closed.
Other options?
So I don't know if this is going to help you any, but I don't have any issues with this set up.
I have a mongo package that I import from. This is a template of my mongo.go file
package mongo

import (
	"time"

	"gopkg.in/mgo.v2"
)

var (
	// MyDB ...
	MyDB DataStore
)

// create the session before main starts
func init() {
	MyDB.ConnectToDB()
}

// DataStore containing a pointer to a mgo session
type DataStore struct {
	Session *mgo.Session
}

// ConnectToDB is a helper method that connects to the
// database
func (ds *DataStore) ConnectToDB() {
	mongoDBDialInfo := &mgo.DialInfo{
		Addrs:    []string{"ip"},
		Timeout:  60 * time.Second,
		Database: "db",
	}
	sess, err := mgo.DialWithInfo(mongoDBDialInfo)
	if err != nil {
		panic(err)
	}
	sess.SetMode(mgo.Monotonic, true)
	MyDB.Session = sess
}

// Close is a helper method that ensures the session is properly terminated
func (ds *DataStore) Close() {
	ds.Session.Close()
}
Then in another package, for example main (updated based on the comment below):
package main

import (
	"fmt"

	mgo "gopkg.in/mgo.v2"

	"../models/mongo"
)

func main() {
	// Grab the main session which was instantiated in the mongo package init function
	sess := mongo.MyDB.Session
	// pass that session in
	storeEvents(sess)
}
func storeEvents(session *mgo.Session) {
	session_copy := session.Copy()
	defer session_copy.Close()
	// Handle panics in a deferred function.
	// You can turn this into a wrapper (middleware): remove it from this
	// function and just wrap your calls with it; using switch cases
	// you can handle all types of errors.
	defer func(session *mgo.Session) {
		if err := recover(); err != nil {
			fmt.Printf("Mongo insert has caused a panic: %s\n", err)
			fmt.Println("Attempting to insert again")
			session_copy := session.Copy()
			defer session_copy.Close()
			col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
			event := GetEvent()
			err := col.Insert(&event)
			if err != nil {
				fmt.Println("Attempting to insert again failed")
				return
			}
			fmt.Println("Attempting to insert again successful")
		}
	}(session)
	col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
	event := GetEvent()
	err := col.Insert(&event)
	if err != nil {
		panic(err)
	}
}
I use a similar setup on my production servers on AWS. I do over 1 million inserts an hour. Hope this helps. Another thing I've done to ensure that the mongo servers can handle the connections is to increase the ulimit on my production machines. It's talked about in this stack.