I want to update a record and return a field from that record in the same operation. If the update and the read are separate commands, there might be concurrency issues, which is why I want it to be an atomic operation. Can somebody guide me on how to achieve this in GORM?
You can use func (*DB) BeginTx to start a transaction and, when you are done with your calculations or updates, use func (s *DB) Commit() *DB to commit it. Running the read and the write inside one transaction makes the operation atomic (in case of error, use Rollback).
For more information on this function, see this link:
https://godoc.org/github.com/jinzhu/gorm#DB.BeginTx
For example (sample Go code):
db, err := mysql.SharedStore().BeginTx()
if err != nil {
    return nil, err
}
defer func() { _ = db.Rollback() }()
// calculations and updates go here
db.CommitTx()
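If you also want the read-modify-write itself to be safe against concurrent writers, a row lock helps. Here is a minimal sketch, assuming GORM v1 (jinzhu/gorm) and a hypothetical Account model; the FOR UPDATE query option locks the row until the transaction ends, so the field you read back cannot change underneath you:
type Account struct {
    ID      uint
    Balance int64
}

func addToBalance(db *gorm.DB, id uint, amount int64) (int64, error) {
    tx := db.Begin()
    if tx.Error != nil {
        return 0, tx.Error
    }
    defer tx.Rollback() // no-op once Commit has succeeded

    var acc Account
    // SELECT ... FOR UPDATE: lock the row for the duration of the transaction.
    if err := tx.Set("gorm:query_option", "FOR UPDATE").First(&acc, id).Error; err != nil {
        return 0, err
    }
    acc.Balance += amount
    if err := tx.Save(&acc).Error; err != nil {
        return 0, err
    }
    return acc.Balance, tx.Commit().Error
}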
I am inserting some data into a table using SQLx, like this:
func (*RideRepositoryImpl) insert(entity interface{}, tx persistence.Transaction) (sql.Result, error) {
    ride := entity.(*model.Ride)
    placeHolders := repository.InsertPlaceholders(len(rideColumns))
    sql := fmt.Sprintf("INSERT INTO %s(%s) VALUES(%s)", TableName, strings.Join(rideColumns, ","), placeHolders)
    return tx.Exec(sql, ride.ID.String(), ride.DeviceIotID, ride.VehicleID.String(), ride.UserID.String(), ride.AdditionComments)
}
and I am calling this function like this:
func (p *RideRepositoryImpl) Save(ride *model.Ride, tx persistence.Transaction) error {
    return repository.Save(ride, p.insert, tx)
}
Now I want to get the UUID of the saved record immediately after saving it. Is there a clean way to do this?
PostgreSQL has the RETURNING clause for this.
Sometimes it is useful to obtain data from modified rows while they
are being manipulated. The INSERT, UPDATE, and DELETE commands
all have an optional RETURNING clause that supports this. Use of
RETURNING avoids performing an extra database query to collect the
data, and is especially valuable when it would otherwise be difficult
to identify the modified rows reliably.
// add the RETURNING clause to your INSERT query
sql := fmt.Sprintf("INSERT INTO %s(%s) VALUES(%s) RETURNING <name_of_uuid_column>", TableName, strings.Join(rideColumns, ","), placeHolders)

// use QueryRow instead of Exec
row := tx.QueryRow(sql, ride.ID.String(), ride.DeviceIotID, ride.VehicleID.String(), ride.UserID.String(), ride.AdditionComments)

// scan the result of the query
var uuid string
if err := row.Scan(&uuid); err != nil {
    panic(err)
}
// ...
For additional INSERT-specific info related to RETURNING you can go to the INSERT documentation and search the page for "returning" with CTRL/CMD+F.
If, in addition, you need your function to still return an sql.Result value to satisfy some requirement, then you can return your own implementation.
var _ sql.Result = sqlresult{} // compile-time check that sqlresult satisfies the interface

type sqlresult struct{ lastid, nrows int64 }

func (r sqlresult) LastInsertId() (int64, error) { return r.lastid, nil }
func (r sqlresult) RowsAffected() (int64, error) { return r.nrows, nil }

func (*RideRepositoryImpl) insert(entity interface{}, tx persistence.Transaction) (sql.Result, error) {
    ride := entity.(*model.Ride)
    placeHolders := repository.InsertPlaceholders(len(rideColumns))
    sql := fmt.Sprintf("INSERT INTO %s(%s) VALUES(%s) RETURNING <name_of_uuid_column>", TableName, strings.Join(rideColumns, ","), placeHolders)
    row := tx.QueryRow(sql, ride.ID.String(), ride.DeviceIotID, ride.VehicleID.String(), ride.UserID.String(), ride.AdditionComments)
    if err := row.Scan(&ride.<NameOfUUIDField>); err != nil {
        return nil, err
    }
    return sqlresult{0, 1}, nil
}
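Note that PostgreSQL drivers generally don't implement LastInsertId anyway (lib/pq, for example, returns an error from it), so returning a zero value there loses nothing; RETURNING is the idiomatic replacement.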
I'm trying to roll back a transaction in my unit tests, between scenarios, to keep the database empty and not make my tests dirty. So, I'm trying:
for _, test := range tests {
    db := connect()
    _ = db.RunInTransaction(func() error {
        t.Run(test.name, func(t *testing.T) {
            for _, r := range test.objToAdd {
                err := db.PutObj(&r)
                require.NoError(t, err)
            }
            objReturned, err := db.GetObjsWithFieldEqualsXPTO()
            require.NoError(t, err)
            require.Equal(t, test.queryResultSize, len(objReturned))
        })
        return fmt.Errorf("returning error to clean up the database rolling back the transaction")
    })
}
I was expecting the transaction to be rolled back at the end of each scenario, so that the next iteration of the loop would see an empty database, but when I run it, the data is never rolled back.
I believe I'm trying to do what the docs suggest: https://pg.uptrace.dev/faq/#how-to-test-mock-database, am I right?
More info: I noticed that my interface implements a layer over RunInTransaction:
func (gs *DB) RunInTransaction(fn func() error) error {
    f := func(*pg.Tx) error { return fn() }
    return gs.pgDB.RunInTransaction(f)
}
I don't know what the problem is yet, but I strongly suspect it is related to that (because the Tx is encapsulated entirely inside the RunInTransaction implementation).
go-pg uses connection pooling (in common with most Go database packages). This means that when you call a database function (e.g. db.Exec) it will grab a connection from the pool (establishing a new one if needed), run the command, and return the connection to the pool.
When running a transaction you need to run BEGIN, whatever updates etc. you require, followed by COMMIT/ROLLBACK, on a single connection dedicated to the transaction (any commands sent on other connections are not part of the transaction). This is why Begin() (and effectively RunInTransaction) provides you with a pg.Tx; use this to run commands within the transaction.
example_test.go provides an example covering the usage of RunInTransaction:
incrInTx := func(db *pg.DB) error {
    // Transaction is automatically rolled back on error.
    return db.RunInTransaction(func(tx *pg.Tx) error {
        var counter int
        _, err := tx.QueryOne(
            pg.Scan(&counter), `SELECT counter FROM tx_test FOR UPDATE`)
        if err != nil {
            return err
        }
        counter++
        _, err = tx.Exec(`UPDATE tx_test SET counter = ?`, counter)
        return err
    })
}
You will note that this only uses the pg.DB when calling RunInTransaction; all database operations use the transaction tx (a pg.Tx). tx.QueryOne will be run within the transaction; if you ran db.QueryOne then that would be run outside of the transaction.
So RunInTransaction begins a transaction and passes the relevant Tx in as a parameter to the function you provide. You wrap this with:
func (gs *DB) RunInTransaction(fn func() error) error {
    f := func(*pg.Tx) error { return fn() }
    return gs.pgDB.RunInTransaction(f)
}
This effectively ignores the pg.Tx, and you then run commands using other connections (e.g. err := db.PutObj(&r)), i.e. outside of the transaction. To fix this you need to use the transaction (e.g. err := tx.PutObj(&r)).
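A minimal sketch of that fix, assuming go-pg v9 signatures (where RunInTransaction takes a func(*pg.Tx) error): pass the tx through to the callback and run the test's operations on it. tx.Insert stands in here for the question's own PutObj wrapper:
// Expose the pg.Tx to the callback instead of discarding it.
func (gs *DB) RunInTransaction(fn func(tx *pg.Tx) error) error {
    return gs.pgDB.RunInTransaction(fn)
}

// In the test, every operation goes through tx, so it joins the
// transaction and is rolled back with it:
_ = db.RunInTransaction(func(tx *pg.Tx) error {
    for _, r := range test.objToAdd {
        require.NoError(t, tx.Insert(&r))
    }
    return fmt.Errorf("returning an error to roll back and keep the database clean")
})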
I store a doubly linked list in PostgreSQL. I have a Go API to manage this list.
There is a function that creates new Node (in specific position). Let's assume there is an INSERT SQL query inside of it.
Also, there is a function that deletes Node (by id). Let's assume there is a DELETE SQL query inside of it.
It is well known that if you need to move a Node to a different position, you should call the DeleteNode() function and then the CreateNode() function. So there is a third function, MoveNode():
func MoveNode() error {
    if err := DeleteNode(); err != nil {
        return err
    }
    if err := CreateNode(); err != nil {
        return err
    }
    return nil
}
But these functions (which are inside MoveNode()) should be called in one transaction.
Is there a way to "merge" functions in Go? Or what is the way to solve this problem (other than copying and pasting the code of the two functions into the third)?
P.S. The idea is simple: you have two functions which run some SQL queries, and you need to run those queries in one transaction (i.e. call the two functions in one transaction).
The better way to go about this is to move tx.Commit() outside of the query-execution functions (DeleteNode() and CreateNode() here) and pass the transaction in instead.
Suggested solution:
func MoveNode(db *sql.DB) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    if _, err := DeleteNode(tx); err != nil {
        tx.Rollback() // undo the partial work before reporting the error
        return err
    }
    if _, err := CreateNode(tx); err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit()
}

func DeleteNode(tx *sql.Tx) (sql.Result, error) {
    // ... run the DELETE query via tx.Exec
}

func CreateNode(tx *sql.Tx) (sql.Result, error) {
    // ... run the INSERT query via tx.Exec
}
This should do the trick.
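Passing the *sql.Tx into DeleteNode() and CreateNode() keeps transaction ownership in one place: only MoveNode() decides when to commit or roll back, so the two statements always succeed or fail together.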
Introduction
database/sql
In the Go standard sql library, the *Stmt type has methods defined like:
func (s *Stmt) Exec(args ...interface{}) (Result, error)
func (s *Stmt) Query(args ...interface{}) (*Rows, error)
A new (unnamed) statement is prepared by:
func (db *DB) Prepare(query string) (*Stmt, error)
The connection pool is abstracted away and not directly accessible
A statement is prepared on a single connection
If that connection is not available at statement execution time, the statement is re-prepared on a new connection
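For contrast, here is a minimal database/sql version of the same idea (the connection string and the lib/pq driver are placeholder assumptions): the prepared statement is an object with its own Query/Exec methods, so no name is involved:
package main

import (
    "database/sql"
    "fmt"

    _ "github.com/lib/pq" // any database/sql PostgreSQL driver would do
)

func main() {
    db, err := sql.Open("postgres", "host=/run/postgresql dbname=test user=postgres sslmode=disable")
    if err != nil {
        panic(err)
    }
    // Unnamed statement: Prepare returns a *sql.Stmt that carries its own methods.
    stmt, err := db.Prepare("SELECT $1::int")
    if err != nil {
        panic(err)
    }
    defer stmt.Close()

    var i int
    if err := stmt.QueryRow(10).Scan(&i); err != nil {
        panic(err)
    }
    fmt.Println(i)
}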
pgx
The PreparedStatement type doesn't have any methods defined. A new named prepared statement is prepared by:
func (p *ConnPool) Prepare(name, sql string) (*PreparedStatement, error)
Operations are performed directly on the connection pool
The statement gets prepared on all connections of the pool
There is no clear way to execute the prepared statement
In a GitHub comment, the author explains the architectural differences between pgx and database/sql in more detail. The documentation on Prepare also states (emphasis mine):
Prepare is idempotent; i.e. it is safe to call Prepare multiple times with the same name and sql arguments. This allows a code path to Prepare and Query/Exec/PrepareEx without concern for if the statement has already been prepared.
Small example
package main

import (
    "github.com/jackc/pgx"
)

func main() {
    conf := pgx.ConnPoolConfig{
        ConnConfig: pgx.ConnConfig{
            Host:     "/run/postgresql",
            User:     "postgres",
            Database: "test",
        },
        MaxConnections: 5,
    }
    db, err := pgx.NewConnPool(conf)
    if err != nil {
        panic(err)
    }
    _, err = db.Prepare("my-query", "select $1")
    if err != nil {
        panic(err)
    }
    // What to do with the prepared statement?
}
Question(s)
The name argument gives me the impression it can be executed by calling it by name, but how?
The documentation gives the impression that the Query/Exec methods somehow leverage prepared statements. However, those methods don't take a name argument. How do they match a query to a prepared statement?
Presumably, matching is done by the query content. Then what's the whole point of naming statements?
Possible answers
This is how far I got myself:
There are no methods that refer to the queries by name (assumption)
Matching is done on the query body in conn.ExecEx(). If the query is not yet prepared, it will be prepared on the fly:
ps, ok := c.preparedStatements[sql]
if !ok {
    var err error
    ps, err = c.prepareEx("", sql, nil)
    if err != nil {
        return "", err
    }
}
PostgreSQL itself needs it for something (assumption).
@mkopriva pointed out that the sql text was misleading me: it has a double function here. If the sql variable does not match a key in the c.preparedStatements[sql] map, the query contained in sql gets prepared and a new *PreparedStatement struct is assigned to ps. If it does match a key, ps will point to that entry of the map.
So effectively you can do something like:
package main

import (
    "fmt"

    "github.com/jackc/pgx"
)

func main() {
    conf := pgx.ConnPoolConfig{
        ConnConfig: pgx.ConnConfig{
            Host:     "/run/postgresql",
            User:     "postgres",
            Database: "test",
        },
        MaxConnections: 5,
    }
    db, err := pgx.NewConnPool(conf)
    if err != nil {
        panic(err)
    }
    if _, err := db.Prepare("my-query", "select $1::int"); err != nil {
        panic(err)
    }
    row := db.QueryRow("my-query", 10)
    var i int
    if err := row.Scan(&i); err != nil {
        panic(err)
    }
    fmt.Println(i)
}
I'm quite new to both PostgreSQL and golang. Mainly, I am trying to understand the following:
Why did I need the Commit call to close the connection, when the other two Close calls didn't do the trick?
Would also appreciate pointers regarding the right/wrong way in which I'm going about working with cursors.
In the following function, I'm using gorp to declare a CURSOR, query my Postgres DB row by row, and pass each row to a writer function:
func(txn *gorp.Transaction,
    q string,
    params []interface{},
    myWriter func([]byte, error)) {

    cursor := "DECLARE GRABDATA NO SCROLL CURSOR FOR " + q
    _, err := txn.Exec(cursor, params...)
    if err != nil {
        myWriter(nil, err)
        return
    }

    rows, err := txn.Query("FETCH ALL in GRABDATA")
    if err != nil {
        myWriter(nil, err)
        return
    }

    defer func() {
        if _, err := txn.Exec("CLOSE GRABDATA"); err != nil {
            fmt.Println("Error while closing cursor:", err)
        }
        if err = rows.Close(); err != nil {
            fmt.Println("Error while closing rows:", err)
        } else {
            fmt.Println("Closed rows without error")
        }
        if err = txn.Commit(); err != nil {
            fmt.Println("Error on commit:", err)
        }
    }()

    // cols (the column names) comes from the surrounding scope
    pointers := make([]interface{}, len(cols))
    container := make([]sql.NullString, len(cols))
    values := make([]string, len(cols))
    for i := range pointers {
        pointers[i] = &container[i]
    }
    for rows.Next() {
        if err = rows.Scan(pointers...); err != nil {
            myWriter(nil, err)
            return
        }
        for i, ns := range container {
            values[i] = ns.String // copy the scanned values out of the NullStrings
        }
        stringLine := strings.Join(values, ",") + "\n"
        myWriter([]byte(stringLine), nil)
    }
}
In the defer section I would initially only Close the rows, but then I saw in pg_stat_activity that the connection stayed open in idle in transaction state, with the FETCH ALL in GRABDATA query.
Calling txn.Exec("CLOSE <cursor_name>") didn't help. After that, I just had a CLOSE GRABDATA query in idle in transaction state...
Only when I started calling Commit() did the connection actually close. I thought that maybe I needed to call Commit to execute anything on the transaction, but if that's the case, how come I got the results of my queries without calling it?
You want to end the transaction, not close a declared cursor; COMMIT does that.
You can run multiple queries in one transaction; this is why you see the results without committing.
The pg_stat_activity.state values are: active while you are running a statement (e.g. begin transaction; or fetching from a cursor), idle in transaction when you aren't currently running statements but the transaction remains open, and lastly idle after you run END or COMMIT, so the transaction is over. After you disconnect, the session ends and there's no row in pg_stat_activity at all.
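To illustrate the second point with plain database/sql (db, the items table, and its name column are assumptions for this sketch): every statement issued on the same tx runs inside one transaction, and its results are readable before Commit, which is exactly why the FETCH returned rows without committing:
tx, err := db.Begin() // BEGIN; the session sits in "idle in transaction" between statements
if err != nil {
    panic(err)
}
if _, err := tx.Exec(`DECLARE grabdata NO SCROLL CURSOR FOR SELECT name FROM items`); err != nil {
    panic(err)
}
rows, err := tx.Query(`FETCH ALL IN grabdata`) // results are readable here, before any COMMIT
if err != nil {
    panic(err)
}
for rows.Next() {
    // ... scan each row
}
rows.Close()
if _, err := tx.Exec(`CLOSE grabdata`); err != nil { // closes the cursor, not the transaction
    panic(err)
}
if err := tx.Commit(); err != nil { // COMMIT; the session goes back to "idle"
    panic(err)
}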