Named prepared statement in pgx lib, how does it work? - postgresql

Introduction
database/sql
In the Go standard sql library, the *Stmt type has methods defined like:
func (s *Stmt) Exec(args ...interface{}) (Result, error)
func (s *Stmt) Query(args ...interface{}) (*Rows, error)
A new (unnamed) statement is prepared by:
func (db *DB) Prepare(query string) (*Stmt, error)
The connection pool is abstracted away and not directly accessible
A statement is prepared on a single connection
If that connection is not available at statement execution time, the statement is re-prepared on a new connection (see the sketch below)
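For reference, here is a minimal sketch of that flow (the DSN is a placeholder and the lib/pq driver is an assumption):

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // placeholder Postgres driver
)

func main() {
    db, err := sql.Open("postgres", "dbname=test sslmode=disable") // placeholder DSN
    if err != nil {
        log.Fatal(err)
    }
    // Prepare returns a *Stmt bound to the pool; the pool transparently
    // re-prepares it on another connection if needed.
    stmt, err := db.Prepare("select $1::int")
    if err != nil {
        log.Fatal(err)
    }
    defer stmt.Close()

    var i int
    // The statement is executed through its own methods, not by name.
    if err := stmt.QueryRow(10).Scan(&i); err != nil {
        log.Fatal(err)
    }
    log.Println(i)
}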
pgx
The PreparedStatement type doesn't have any methods defined. A new named prepared statement is prepared by:
func (p *ConnPool) Prepare(name, sql string) (*PreparedStatement, error)
Operations are performed directly on the connection pool
The statement gets prepared on all connections of the pool
There is no obvious way to execute the prepared statement
In a GitHub comment, the author explains the architectural differences between pgx and database/sql in more detail. The documentation on Prepare also states (emphasis mine):
Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec/PrepareEx without concern for if the statement has already been prepared.
Small example
package main

import (
    "github.com/jackc/pgx"
)

func main() {
    conf := pgx.ConnPoolConfig{
        ConnConfig: pgx.ConnConfig{
            Host:     "/run/postgresql",
            User:     "postgres",
            Database: "test",
        },
        MaxConnections: 5,
    }
    db, err := pgx.NewConnPool(conf)
    if err != nil {
        panic(err)
    }
    _, err = db.Prepare("my-query", "select $1")
    if err != nil {
        panic(err)
    }
    // What to do with the prepared statement?
}
Question(s)
The name argument gives me the impression that the statement can be executed by calling it by name, but how?
The documentation gives the impression that the Query/Exec methods somehow leverage prepared statements. However, those methods don't take a name argument. How are queries matched to prepared statements?
Presumably, matching is done by query content. Then what's the whole point of naming statements?
Possible answers
This is how far I got myself:
There are no methods that refer to the queries by name (assumption)
Matching is done on the query body in conn.ExecEx(). If the statement is not yet prepared, it gets prepared on the fly:
ps, ok := c.preparedStatements[sql]
if !ok {
    var err error
    ps, err = c.prepareEx("", sql, nil)
    if err != nil {
        return "", err
    }
}
PostgreSQL itself needs it for something (assumption).

@mkopriva pointed out that the sql text was misleading me: it has a double function here. If the sql variable does not match a key in the c.preparedStatements map, the query contained in sql gets prepared and a new *PreparedStatement struct is assigned to ps. If it does match a key, ps points to that entry of the map.
So effectively you can do something like:
package main

import (
    "fmt"

    "github.com/jackc/pgx"
)

func main() {
    conf := pgx.ConnPoolConfig{
        ConnConfig: pgx.ConnConfig{
            Host:     "/run/postgresql",
            User:     "postgres",
            Database: "test",
        },
        MaxConnections: 5,
    }
    db, err := pgx.NewConnPool(conf)
    if err != nil {
        panic(err)
    }
    if _, err := db.Prepare("my-query", "select $1::int"); err != nil {
        panic(err)
    }
    row := db.QueryRow("my-query", 10)
    var i int
    if err := row.Scan(&i); err != nil {
        panic(err)
    }
    fmt.Println(i)
}

Related

go postgres prepare statement error - panic: runtime error: invalid memory address or nil pointer dereference [duplicate]

I have a set of functions in my web API app. They perform some operations on the data in the Postgres database.
func CreateUser() {
    db, err := sql.Open("postgres", "user=postgres password=password dbname=api_dev sslmode=disable")
    // Do some db operations here
}
I suppose the functions should work with the db independently of each other, so for now I have sql.Open(...) inside each function. I don't know if this is a correct way to manage the db connection.
Should I open it somewhere once the app starts and pass db as an argument to the corresponding functions instead of opening the connection in every function?
Opening a db connection every time it's needed is a waste of resources and it's slow.
Instead, you should create an sql.DB once, when your application starts (or on first demand), and either pass it where it is needed (e.g. as a function parameter or via some context), or simply make it a global variable and so everyone can access it. It's safe to call from multiple goroutines.
Quoting from the doc of sql.Open():
The returned DB is safe for concurrent use by multiple goroutines and maintains its own pool of idle connections. Thus, the Open function should be called just once. It is rarely necessary to close a DB.
You may use a package init() function to initialize it:
var db *sql.DB

func init() {
    var err error
    db, err = sql.Open("yourdriver", "yourDs")
    if err != nil {
        log.Fatal("Invalid DB config:", err)
    }
}
One thing to note here is that sql.Open() may not create an actual connection to your DB; it may just validate its arguments. To test whether you can actually connect to the db, use DB.Ping(), e.g.:
func init() {
    var err error
    db, err = sql.Open("yourdriver", "yourDs")
    if err != nil {
        log.Fatal("Invalid DB config:", err)
    }
    if err = db.Ping(); err != nil {
        log.Fatal("DB unreachable:", err)
    }
}
I will use a postgres example
package main
Import the necessary packages, and don't forget the postgres driver:
import (
    "database/sql"

    _ "github.com/lib/pq" // postgres driver
)
initialize your connection in the package scope
var db *sql.DB
have an init function for your connection
func init() {
    var err error
    db, err = sql.Open("postgres", "connectionString")
    // connectionString example => 'postgres://username:password@localhost/dbName?sslmode=disable'
    if err != nil {
        panic(err)
    }
    err = db.Ping()
    if err != nil {
        panic(err)
    }
    // Note: we haven't deferred db.Close() in the init function, since the
    // connection would close right after init. You can close it in main or omit it.
}
main function
func main() {
    defer db.Close() // optional
    // run your db functions
}
Check out this example:
https://play.golang.org/p/FAiGbqeJG0H

How to unit test with Gorm, mux, and PostgreSQL

I'm new to Go and unit testing. I built a small side project called "urlshortener" using Go with Gorm, mux, and PostgreSQL.
There is a question that has been annoying me even after reading many articles.
To keep the question clean, I've deleted some irrelevant code like connecting to the db, .env, etc.
My code is below(main.go):
package main

type Url struct {
    ID       uint   `gorm:"primaryKey"` // used for shortUrl index
    Url      string `gorm:"unique"`     // prevent duplicate url
    ExpireAt string
    ShortUrl string
}

var db *gorm.DB
var err error

func main() {
    // gain access to database by getting .env
    ...
    // database connection string
    ...
    // make migrations to the db if they have not already been created
    db.AutoMigrate(&Url{})
    // API routes
    router := mux.NewRouter()
    router.HandleFunc("/{id}", getURL).Methods("GET")
    router.HandleFunc("/api/v1/urls", createURL).Methods("POST")
    router.HandleFunc("/create/urls", createURLs).Methods("POST")
    // Listener
    http.ListenAndServe(":80", router)
    // close connection to db when main func finishes
    defer db.Close()
}
Now I'm building a unit test for the getURL function, a GET handler that fetches data from my PostgreSQL database urlshortener, where the table name is urls.
Here is the getURL function code:
func getURL(w http.ResponseWriter, r *http.Request) {
    params := mux.Vars(r)
    var url Url
    err := db.Find(&url, params["id"]).Error
    if err != nil {
        w.WriteHeader(http.StatusNotFound)
    } else {
        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(url.Url)
    }
}
This works fine with my database, verified with a curl command (screenshot omitted).
I know that a unit test is not for mocking data; it aims to test whether a function/method is stable. Although I import mux and net/http for the connection, I think the unit test here should target the "SQL syntax". So I decided to focus on testing whether gorm returns the right value to the test function.
In this case, db.Find will return a *gorm.DB struct, and the query it generates should be exactly the second line below (see the docs: https://gorm.io/docs/query.html):
db.Find(&url, params["id"])
SELECT * FROM urls WHERE id=<input_number>
My question is how to write a unit test that checks whether the SQL syntax is correct in this case (gorm+mux). I've checked some articles, but most of them test the HTTP connection status, not the SQL.
Also, my function does not have a return value; do I need to rewrite it to return one before I can test it?
Below is the test structure I have in mind:
func TestGetURL(t *testing.T) {
    //set const answer for this test
    //set up the mock sql connection
    //call getURL()
    //check if equal with answer using assert
}
Update
According to @Emin Laletovic's answer, I now have a prototype of my TestGetURL, and I have new questions about it.
func TestGetURL(t *testing.T) {
    //set const answer for this test
    testQuery := `SELECT * FROM "urls" WHERE id=1`
    id := 1
    //set up the mock sql connection
    testDB, mock, err := sqlmock.New()
    if err != nil {
        panic("sqlmock.New() occurs an error")
    }
    // uses "gorm.io/driver/postgres" library
    dialector := postgres.New(postgres.Config{
        DSN:                  "sqlmock_db_0",
        DriverName:           "postgres",
        Conn:                 testDB,
        PreferSimpleProtocol: true,
    })
    db, err = gorm.Open(dialector, &gorm.Config{})
    if err != nil {
        panic("Cannot open stub database")
    }
    //mock the db.Find function
    rows := sqlmock.NewRows([]string{"id", "url", "expire_at", "short_url"}).
        AddRow(1, "http://somelongurl.com", "some_date", "http://shorturl.com")
    mock.ExpectQuery(regexp.QuoteMeta(testQuery)).
        WillReturnRows(rows).WithArgs(id)
    //create response writer and request for testing
    mockedRequest, _ := http.NewRequest("GET", "/1", nil)
    mockedWriter := httptest.NewRecorder()
    //call getURL()
    getURL(mockedWriter, mockedRequest)
    //check values in mockedWriter using assert
}
In the code, I mock the request and response with the http and httptest libs.
I run the test, but it seems that the getURL function in main.go cannot receive the args I pass in (test output screenshot omitted).
When db.Find is called, mock.ExpectQuery receives it and starts comparing; so far so good.
db.Find(&url, params["id"])
mock.ExpectQuery(regexp.QuoteMeta(testQuery)).WillReturnRows(rows).WithArgs(id)
According to the testing log, when db.Find is triggered it only executes SELECT * FROM "urls", not the expected SELECT * FROM "urls" WHERE "urls"."id" = $1.
But when I test db.Find locally with postman and log the SQL out, it executes properly (screenshot omitted).
In summary, I think the problem is that the response writer/request I put into getURL(mockedWriter, mockedRequest) are wrong, which keeps getURL(w http.ResponseWriter, r *http.Request) from working as expected.
Please let me know if I'm missing anything~
Any idea or way to rewrite the code would help, thank you!
If you just want to test the SQL string that db.Find generates, you can use the DryRun feature (per the documentation).
stmt := db.Session(&gorm.Session{DryRun: true}).Find(&url, params["id"]).Statement
stmt.SQL.String() // returns the SQL query string without the param values
stmt.Vars         // contains an array of input params
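For example, a minimal DryRun-based test could look like the sketch below (assuming the Url model and package-level db from the question, plus strings, testing, and gorm imports; the asserted substring is illustrative and may vary by GORM version):

func TestFindURLSQL(t *testing.T) {
    var url Url
    // DryRun builds the statement without executing it against the database.
    stmt := db.Session(&gorm.Session{DryRun: true}).Find(&url, "1").Statement
    if !strings.Contains(stmt.SQL.String(), `SELECT * FROM "urls"`) {
        t.Errorf("unexpected SQL: %s", stmt.SQL.String())
    }
}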
However, to write a test for the getURL function, you could use sqlmock to mock the results that would be returned when executing the db.Find call.
func TestGetURL(t *testing.T) {
    //set const answer for this test
    testQuery := "SELECT * FROM `urls` WHERE `id` = $1"
    id := 1
    //create response writer and request for testing
    //set up the mock sql connection
    testDB, mock, err := sqlmock.New()
    //handle error
    // uses "gorm.io/driver/postgres" library
    dialector := postgres.New(postgres.Config{
        DSN:                  "sqlmock_db_0",
        DriverName:           "postgres",
        Conn:                 testDB,
        PreferSimpleProtocol: true,
    })
    db, err = gorm.Open(dialector, &gorm.Config{})
    //handle error
    //mock the db.Find function
    rows := sqlmock.NewRows([]string{"id", "url", "expire_at", "short_url"}).
        AddRow(1, "http://somelongurl.com", "some_date", "http://shorturl.com")
    mock.ExpectQuery(regexp.QuoteMeta(testQuery)).
        WillReturnRows(rows).WithArgs(id)
    //call getURL()
    getURL(mockedWriter, mockedRequest)
    //check values in mockedWriter using assert
}
This post and Emin Laletovic's answer really helped me a lot.
I think I got the answer to this question.
Let's recap the question. I'm using gorm for PostgreSQL and mux for HTTP services to build a CRUD service.
I need to write a unit test to check whether my database syntax is correct (assuming the connection is statusOK), so the focus is on how to write a unit test for SQL syntax.
But the handler function in main.go doesn't have a return value, so we need sqlmock's ExpectQuery(), which is triggered when db.Find() runs inside getURL(). This way we don't need a return value to check whether the query matches our target.
The problem I met in the Update was fixed by this post about building a unit test with mux, though that post focuses on status checks and return values.
I set the const answer for this test; the id variable is what we expect to get. Notice the $1: I don't know how to change it, and I've tried many times to rewrite it, but the generated SQL still contains $1; maybe it is some kind of constraint I don't know about.
//set const answer for this test
testQuery := `SELECT * FROM "urls" WHERE "urls"."id" = $1`
id := "1"
I set the values passed into getURL() by doing this:
//set the values sent into the function
vars := map[string]string{
    "id": "1",
}
//create response writer and request for testing
mockedWriter := httptest.NewRecorder()
mockedRequest := httptest.NewRequest("GET", "/{id}", nil)
mockedRequest = mux.SetURLVars(mockedRequest, vars)
Finally, we call mock.ExpectationsWereMet() to check whether anything went wrong.
if err := mock.ExpectationsWereMet(); err != nil {
    t.Errorf("SQL syntax does not match: %s", err)
}
Below is my test code:
func TestGetURL(t *testing.T) {
    //set const answer for this test
    testQuery := `SELECT * FROM "urls" WHERE "urls"."id" = $1`
    id := "1"
    //set up the mock sql connection
    testDB, mock, err := sqlmock.New()
    if err != nil {
        panic("sqlmock.New() occurs an error")
    }
    // uses "gorm.io/driver/postgres" library
    dialector := postgres.New(postgres.Config{
        DSN:                  "sqlmock_db_0",
        DriverName:           "postgres",
        Conn:                 testDB,
        PreferSimpleProtocol: true,
    })
    db, err = gorm.Open(dialector, &gorm.Config{})
    if err != nil {
        panic("Cannot open stub database")
    }
    //mock the db.Find function
    rows := sqlmock.NewRows([]string{"id", "url", "expire_at", "short_url"}).
        AddRow(1, "url", "date", "shorurl")
    //try to match the real SQL syntax we get and testQuery
    mock.ExpectQuery(regexp.QuoteMeta(testQuery)).WillReturnRows(rows).WithArgs(id)
    //set the values sent into the function
    vars := map[string]string{
        "id": "1",
    }
    //create response writer and request for testing
    mockedWriter := httptest.NewRecorder()
    mockedRequest := httptest.NewRequest("GET", "/{id}", nil)
    mockedRequest = mux.SetURLVars(mockedRequest, vars)
    //call getURL()
    getURL(mockedWriter, mockedRequest)
    //check result in mockedWriter with the sqlmock built-in function
    if err := mock.ExpectationsWereMet(); err != nil {
        t.Errorf("SQL syntax does not match: %s", err)
    }
}
I ran two tests with args (1, 1) and (1, 2), and they work fine (screenshots omitted).

query is returning "expected 0 arguments, got 1"

I am trying to query a database that I know has data in it, verified by querying directly within pgAdmin. When I query using the following code, it returns no results:
const DATABASE_URL = "postgres://postgres:pw@localhost:5432/postgresdb"

conn, err := pgx.Connect(context.Background(), DATABASE_URL)
defer conn.Close(context.Background())
if err != nil {
    fmt.Printf("Connection failed: %v\n", err)
    os.Exit(-1)
}
stmt := "SELECT * FROM nodes"
rows, err := conn.Query(context.Background(), stmt, nil)
if err != nil {
    fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err) // errors out here: "expected 0 arguments, got 1"
    os.Exit(1)
}
for rows.Next() {
    var results string
    err = rows.Scan(&results)
    if err != nil {
        fmt.Fprintf(os.Stderr, "QueryRow failed: %v\n", err)
        os.Exit(1)
    }
    fmt.Println(results)
}
When I directly connect to the database through goland and pgadmin and query with the same statement I can see all the data. What am I missing here?
The pgx Conn.Query accepts a context, statement and arguments:
func (c *Conn) Query(ctx context.Context, sql string, args ...interface{}) (Rows, error)
You are passing it a context, statement and nil:
rows, err := conn.Query(context.Background(), stmt, nil)
So the nil is treated as an argument, but your SQL statement does not contain any argument placeholders (e.g. SELECT * FROM nodes WHERE id=$1), hence the error. To fix this, run:
rows, err := conn.Query(context.Background(), stmt)
However, it would also be worth editing your SQL to specify the columns you want (e.g. SELECT nodename FROM nodes).
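Putting both fixes together, a minimal sketch could look like this (the nodename column is an assumption; adjust it to your schema):

rows, err := conn.Query(context.Background(), "SELECT nodename FROM nodes") // no stray arguments
if err != nil {
    fmt.Fprintf(os.Stderr, "Query failed: %v\n", err)
    os.Exit(1)
}
defer rows.Close()
for rows.Next() {
    var nodename string
    if err := rows.Scan(&nodename); err != nil {
        fmt.Fprintf(os.Stderr, "Scan failed: %v\n", err)
        os.Exit(1)
    }
    fmt.Println(nodename)
}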
Note: when raising a question like this, please include the error in the question body rather than just as a comment in the code (where it is easy to miss).

Is it possible to call Go functions in one transaction?

I store a doubly linked list in PostgreSQL. I have a Go API to manage this list.
There is a function that creates new Node (in specific position). Let's assume there is an INSERT SQL query inside of it.
Also, there is a function that deletes Node (by id). Let's assume there is a DELETE SQL query inside of it.
It is well known that if you need to move a Node to a different position, you should call the DeleteNode() function and then the CreateNode() function. So there is a third function called MoveNode():
func MoveNode() error {
    if err := DeleteNode(); err != nil {
        return err
    }
    if err := CreateNode(); err != nil {
        return err
    }
    return nil
}
But these functions (which are inside MoveNode()) should be called in one transaction.
Is there a way to "merge" functions in Go? Or what is the way to solve this problem (other than copy-pasting the code of the two functions into the third)?
P.S. The idea is simple: you have two functions which run some SQL queries, and you need to run those queries in one transaction (i.e. call the two functions in one transaction).
The better way to go about this is to move the transaction handling (db.Begin() and tx.Commit()) outside the query execution functions (DeleteNode() and CreateNode() here).
Suggested solution:
func MoveNode() error {
    tx, err := db.Begin()
    // err handling
    res, err := DeleteNode(tx)
    // err handling
    res, err = CreateNode(tx)
    // err handling
    return tx.Commit()
}

func DeleteNode(tx *sql.Tx) (sql.Result, error) {
    //...
}

func CreateNode(tx *sql.Tx) (sql.Result, error) {
    //...
}
This should do the trick.
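A variation on the same idea is to centralize begin/commit/rollback in a small helper so the rollback cannot be forgotten. Below is a minimal runnable sketch using database/sql; the nodes table, its columns, and the lib/pq driver are placeholders, not the asker's actual schema:

package main

import (
    "database/sql"

    _ "github.com/lib/pq" // placeholder Postgres driver
)

// withTx runs fn inside a transaction, rolling back on any error and
// committing on success.
func withTx(db *sql.DB, fn func(*sql.Tx) error) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    if err := fn(tx); err != nil {
        tx.Rollback() // undo every step if any one of them failed
        return err
    }
    return tx.Commit()
}

// DeleteNode and CreateNode are hypothetical stand-ins for the question's
// functions, now taking the transaction instead of opening their own.
func DeleteNode(tx *sql.Tx, id int) error {
    _, err := tx.Exec(`DELETE FROM nodes WHERE id = $1`, id)
    return err
}

func CreateNode(tx *sql.Tx, id, position int) error {
    _, err := tx.Exec(`INSERT INTO nodes (id, position) VALUES ($1, $2)`, id, position)
    return err
}

// MoveNode deletes and re-creates the node in one transaction.
func MoveNode(db *sql.DB, id, position int) error {
    return withTx(db, func(tx *sql.Tx) error {
        if err := DeleteNode(tx, id); err != nil {
            return err
        }
        return CreateNode(tx, id, position)
    })
}

func main() {
    db, err := sql.Open("postgres", "connectionString") // placeholder DSN
    if err != nil {
        panic(err)
    }
    if err := MoveNode(db, 1, 2); err != nil {
        panic(err)
    }
}

Any function that takes a *sql.Tx can be composed this way, so more multi-step operations can reuse the same helper later.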

Re-creating mgo sessions in case of errors (read tcp 127.0.0.1:46954->127.0.0.1:27017: i/o timeout)

I wonder about MongoDB session management in Go using mgo, especially how to correctly ensure a session is closed and how to react to write failures.
I have read the following:
Best practice to maintain a mgo session
Should I copy session for each operation in mgo?
Still, I cannot apply it to my situation.
I have two goroutines which store event after event into MongoDB sharing the same *mgo.Session, both looking essentially like the following:
func storeEvents(session *mgo.Session) {
    session_copy := session.Copy()
    // *** is it correct to defer the session close here? <-----
    defer session_copy.Close()
    col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
    for {
        event := GetEvent()
        err := col.Insert(&event)
        if err != nil {
            // *** insert FAILED - how to react properly? <-----
            session_copy = session.Copy()
            defer session_copy.Close()
        }
    }
}
col.Insert(&event) after some hours returns the error
read tcp 127.0.0.1:46954->127.0.0.1:27017: i/o timeout
and I am unsure how to react to it. Once this error occurs, it occurs on all subsequent writes, hence it seems I have to create a new session. My alternatives seem to be:
1) restart the whole goroutine, i.e.
if err != nil {
    go storeEvents(session)
    return
}
2) create a new session copy
if err != nil {
    session_copy = session.Copy()
    defer session_copy.Close()
    col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
    continue
}
--> Is it correct how I use defer session_copy.Close()? (Note the above defer references the Close() function of another session.) In any case, those sessions will never be closed since the function never returns, i.e., over time, many sessions will be created and never closed.
Other options?
I don't know if this is going to help you any, but I don't have any issues with this setup.
I have a mongo package that I import from. This is a template of my mongo.go file:
package mongo

import (
    "time"

    "gopkg.in/mgo.v2"
)

var (
    // MyDB ...
    MyDB DataStore
)

// create the session before main starts
func init() {
    MyDB.ConnectToDB()
}

// DataStore contains a pointer to an mgo session
type DataStore struct {
    Session *mgo.Session
}

// ConnectToDB is a helper method that connects to the database
func (ds *DataStore) ConnectToDB() {
    mongoDBDialInfo := &mgo.DialInfo{
        Addrs:    []string{"ip"},
        Timeout:  60 * time.Second,
        Database: "db",
    }
    sess, err := mgo.DialWithInfo(mongoDBDialInfo)
    if err != nil {
        panic(err)
    }
    sess.SetMode(mgo.Monotonic, true)
    MyDB.Session = sess
}

// Close is a helper method that ensures the session is properly terminated
func (ds *DataStore) Close() {
    ds.Session.Close()
}
Then in another package, for example main (updated based on the comment below):
package main

import (
    "fmt"

    "../models/mongo"
    "gopkg.in/mgo.v2"
)

func main() {
    // Grab the main session which was instantiated in the mongo package init function
    sess := mongo.MyDB.Session
    // pass that session in
    storeEvents(sess)
}

func storeEvents(session *mgo.Session) {
    session_copy := session.Copy()
    defer session_copy.Close()
    // Handle panics in a deferred function.
    // You can turn this into a wrapper (middleware): remove this function and
    // just wrap your calls with it; using switch cases you can handle all
    // types of errors.
    defer func(session *mgo.Session) {
        if err := recover(); err != nil {
            fmt.Printf("Mongo insert has caused a panic: %s\n", err)
            fmt.Println("Attempting to insert again")
            session_copy := session.Copy()
            defer session_copy.Close()
            col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
            event := GetEvent()
            err := col.Insert(&event)
            if err != nil {
                fmt.Println("Attempting to insert again failed")
                return
            }
            fmt.Println("Attempting to insert again successful")
        }
    }(session)
    col := session_copy.DB("DB_NAME").C("COLLECTION_NAME")
    event := GetEvent()
    err := col.Insert(&event)
    if err != nil {
        panic(err)
    }
}
I use a similar setup on my production servers on AWS, doing over 1 million inserts an hour. Hope this helps. Another thing I've done to ensure that the mongo servers can handle the connections is to increase the ulimit on my production machines. It's talked about in this stack.
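As a side note to the retry question above, another pattern seen with mgo is to call Session.Refresh() on a write error instead of stacking up deferred closes. This is a sketch only (GetEvent and the DB/collection names are the question's placeholders, and a log import is assumed):

func storeEvents(session *mgo.Session) {
    sessionCopy := session.Copy()
    defer sessionCopy.Close() // runs once, when the goroutine actually returns
    col := sessionCopy.DB("DB_NAME").C("COLLECTION_NAME")
    for {
        event := GetEvent()
        if err := col.Insert(&event); err != nil {
            // Refresh discards the possibly broken socket and picks a
            // healthy one from the pool, so the same copy can be reused.
            sessionCopy.Refresh()
            if err := col.Insert(&event); err != nil {
                log.Println("insert failed after refresh:", err)
            }
        }
    }
}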