I'm having trouble finding examples that do all three of the following things:
1) Allow raw SQL transactions in golang.
2) Use prepared statements.
3) Roll back on query failures.
I would like to do something like this, but with prepared statements.
stmt, stmt_err := db.Prepare(`
    BEGIN TRANSACTION;
    -- Insert record into first table.
    INSERT INTO table_1 (
        thing_1,
        whatever)
    VALUES($1,$2);
    -- Insert record into second table.
    INSERT INTO table_2 (
        thing_2,
        whatever)
    VALUES($3,$4);
    END TRANSACTION;
`)
if stmt_err != nil {
    return stmt_err
}
res, res_err := stmt.Exec(
    thing_1,
    whatever,
    thing_2,
    whatever)
When I run this, I get this error:
pq: cannot insert multiple commands into a prepared statement
What gives? Are ACID-compliant transactions even possible in golang? I cannot find an example.
EDIT: no examples here.
Yes, Go has a great implementation of SQL transactions. We start the transaction with db.Begin and end it with tx.Commit if everything goes well, or with tx.Rollback in case of error.
type Tx struct { }
Tx is an in-progress database transaction.
A transaction must end with a call to Commit or Rollback.
After a call to Commit or Rollback, all operations on the transaction fail with ErrTxDone.
The statements prepared for a transaction by calling the transaction's Prepare or Stmt methods are closed by the call to Commit or Rollback.
Also note that we prepare queries with the transaction variable tx.Prepare(...)
Your function may look like this:
func doubleInsert(db *sql.DB) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    {
        stmt, err := tx.Prepare(`INSERT INTO table_1 (thing_1, whatever)
                                 VALUES($1,$2);`)
        if err != nil {
            tx.Rollback()
            return err
        }
        defer stmt.Close()
        if _, err := stmt.Exec(thing_1, whatever); err != nil {
            tx.Rollback() // return an error too, we may want to wrap them
            return err
        }
    }
    {
        stmt, err := tx.Prepare(`INSERT INTO table_2 (thing_2, whatever)
                                 VALUES($1, $2);`)
        if err != nil {
            tx.Rollback()
            return err
        }
        defer stmt.Close()
        if _, err := stmt.Exec(thing_2, whatever); err != nil {
            tx.Rollback() // return an error too, we may want to wrap them
            return err
        }
    }
    return tx.Commit()
}
I have a full example here
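As a side note, if you already have a statement prepared on the *sql.DB (outside the transaction), you can bind it to the transaction with tx.Stmt instead of preparing it again. A minimal sketch, reusing the table_1 / thing_1 / whatever names from the question (insertWithReusedStmt is a hypothetical name):

func insertWithReusedStmt(db *sql.DB, insertStmt *sql.Stmt, thing_1, whatever interface{}) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    // tx.Stmt returns a transaction-specific copy of the prepared statement.
    if _, err := tx.Stmt(insertStmt).Exec(thing_1, whatever); err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit()
}

Here insertStmt would have been prepared once with db.Prepare(`INSERT INTO table_1 (thing_1, whatever) VALUES($1,$2);`) and closed when it is no longer needed.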
I came up with a possible solution to roll back on any failure without any significant drawbacks. I am pretty new to Golang though, so I could be wrong.
func CloseTransaction(tx *sql.Tx, commit *bool) {
    if *commit {
        log.Println("Commit sql transaction")
        if err := tx.Commit(); err != nil {
            log.Panic(err)
        }
    } else {
        log.Println("Rollback sql transaction")
        if err := tx.Rollback(); err != nil {
            log.Panic(err)
        }
    }
}
func MultipleSqlQueriesWithTx(db *sql.DB, .. /* some parameter(s) */) (.. .. /* some named return parameter(s) */, err error) {
    tx, err := db.Begin()
    if err != nil {
        return
    }
    commitTx := false
    defer CloseTransaction(tx, &commitTx)

    // First sql query
    stmt, err := tx.Prepare(..) // some raw sql
    if err != nil {
        return
    }
    defer stmt.Close()
    res, err := stmt.Exec(..) // some var args
    if err != nil {
        return
    }

    // Second sql query
    stmt, err = tx.Prepare(..) // some raw sql; note "=" here, stmt and err already exist
    if err != nil {
        return
    }
    defer stmt.Close()
    res, err = stmt.Exec(..) // some var args
    if err != nil {
        return
    }

    /*
        more tx sql statements and queries here
    */

    // success, commit and return result
    commitTx = true
    return
}
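A related simplification: because Tx.Rollback is a no-op that just returns ErrTxDone once the transaction has been committed, you can also defer the rollback unconditionally and drop the commit flag entirely. A minimal sketch of that variant (insertWithTx is a hypothetical name, reusing the question's table_1 columns):

func insertWithTx(db *sql.DB, thing_1, whatever interface{}) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    // If Commit below succeeds, this deferred Rollback does nothing and returns ErrTxDone.
    defer tx.Rollback()

    stmt, err := tx.Prepare(`INSERT INTO table_1 (thing_1, whatever) VALUES($1,$2);`)
    if err != nil {
        return err
    }
    defer stmt.Close()

    if _, err := stmt.Exec(thing_1, whatever); err != nil {
        return err
    }
    return tx.Commit()
}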
Related
I have the code snippet below; the idea is to get the loyalty points of a customer from all our warehouses. There is a possibility that a customer will have an id but not be registered for loyalty.
Using that customer's id will cause the query below to return zero rows, which is not an error.
I would like to return an error when that happens so that I can handle it later.
I am looking for the best possible way to do this.
I am new to programming and golang, so I don't really know my way around just yet.
query := `select coalesce(lc.total_points, 0), w.machine_name
          from lcustomerpoints lc
          join warehouses w on lc.machine_ip = w.machine_ip
          where lc.loyaltycard_number = $1;`
rows, err := l.DB.Query(query, user_id)
if err != nil {
    return nil, err
}
defer rows.Close()

loyaltyPoints := []*LoyaltyPoint{}
for rows.Next() {
    l := LoyaltyPoint{}
    err := rows.Scan(&l.Points, &l.Outlet)
    if err != nil {
        return nil, err
    }
    loyaltyPoints = append(loyaltyPoints, &l)
}
if err = rows.Err(); err != nil {
    return nil, err
}
Count the number of times the for loop iterates:
count := 0
for rows.Next() {
    count++
    ...
}
if count == 0 {
    return fmt.Errorf("Got 0 rows from query")
}
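If callers need to distinguish "no rows" from a real failure, another option is to check the length of the slice the question's loop already builds and return a sentinel error. A minimal sketch; ErrNoLoyaltyPoints is a hypothetical name, and the snippet is the tail end of the function from the question:

// ErrNoLoyaltyPoints is a hypothetical sentinel error for the zero-row case.
var ErrNoLoyaltyPoints = errors.New("customer has no loyalty points")

// ...after the loop and the rows.Err() check:
if len(loyaltyPoints) == 0 {
    return nil, ErrNoLoyaltyPoints
}
return loyaltyPoints, nil

Callers can then test for it with errors.Is(err, ErrNoLoyaltyPoints) and handle that case separately.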
I'm trying to implement a FindOne method in my Golang REST API. The trouble comes when I have to search by ID: I have to convert the ID into something readable by the database, so I use primitive.ObjectIDFromHex(id).
The problem is that this method throws an error:
2021/06/19 06:56:15 encoding/hex: invalid byte: U+000A
ONLY when I call it with the id that comes from my URL GET params.
I made two versions: one with a hard-coded ID, and one with the ID from the GET params. See the code below.
func Admin(id string) (bson.M, error) {
    coll, err := db.ConnectToCollection("admin")
    if err != nil {
        log.Fatal(err)
    }

    var admin bson.M
    HardCoded := "60cb275c074ab46a1aeda45e"
    fmt.Println(HardCoded) // Just to be sure: the two strings seem identical
    fmt.Println(id)

    objetId, err := primitive.ObjectIDFromHex(id) // throws the encoding error
    // objetId, err := primitive.ObjectIDFromHex(HardCoded) // doesn't throw the encoding error
    if err != nil {
        log.Fatal(err)
    }

    var ctx = context.TODO()
    if err := coll.FindOne(ctx, bson.M{"_id": objetId}).Decode(&admin); err != nil {
        log.Fatal(err)
    }
    return admin, nil
}
Of course, you'll want to know where the param id comes from.
Here you go:
func GetAdmin(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    params := mux.Vars(r)

    admin, err := Admin(params["id"]) // calling the Admin function above
    if err != nil {
        fmt.Println(err)
        http.Error(w, err.Error(), http.StatusUnauthorized)
    } else {
        JSON, err := json.Marshal(admin)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Write(JSON)
    }
}
Trim the line feed from the end of id:
id = strings.TrimSpace(id)
Use the %q format verb when debugging issues like this. The line feed is clearly visible in this output:
fmt.Printf("%q\n", HardCoded) // prints "60cb275c074ab46a1aeda45e"
fmt.Printf("%q\n", id) // prints "60cb275c074ab46a1aeda45e\n"
In my golang project, which uses gorm as the ORM and Postgres as the database, in some situations when I begin a transaction to change three tables and commit, only one of the tables changes; the other two tables' data does not change.
Any idea how this might happen?
You can see an example below.
tx := o.Begin() // o is a *gorm.DB obtained elsewhere

invoice.Number = 1
err := tx.Save(&invoice).Error
if err != nil {
    tx.Rollback()
    return err
}

receipt.Ref = "1331"
err = tx.Save(&receipt).Error
if err != nil {
    tx.Rollback()
    return err
}

payment.status = "succeed"
err = tx.Save(&payment).Error
if err != nil {
    tx.Rollback()
    return err
}

err = tx.Commit().Error
if err != nil {
    tx.Rollback()
    return err
}
Only the payment data changed, and I'm not getting any error.
Apparently you are mistakenly using save points! In PostgreSQL we can have nested transactions; that is, defining save points splits the transaction into parts. I am not a Golang programmer and my primary language is not Go, but my guess is that the problem is tx.Save, which creates a save point rather than saving the data to the database. A save point starts a new nested part of the transaction, and thus only the last table's change gets committed.
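For comparison, a common way to do this with gorm (assuming gorm v2, gorm.io/gorm, and an exported Status field on payment) is to wrap the whole unit of work in db.Transaction, which commits if the callback returns nil and rolls everything back if it returns an error:

err := db.Transaction(func(tx *gorm.DB) error {
    invoice.Number = 1
    if err := tx.Save(&invoice).Error; err != nil {
        return err // any error here rolls the whole transaction back
    }

    receipt.Ref = "1331"
    if err := tx.Save(&receipt).Error; err != nil {
        return err
    }

    payment.Status = "succeed"
    if err := tx.Save(&payment).Error; err != nil {
        return err
    }

    return nil // returning nil commits the transaction
})

This also avoids forgetting to check .Error on each call, which is one common reason a failed Save goes unnoticed.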
If you are familiar with Node.js, any async function callback receives an error as its first argument. In Go we follow the same norm, except that the error is returned as the last return value.
https://medium.com/rungo/error-handling-in-go-f0125de052f0
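For instance, a typical Go call site returns and checks the error explicitly (a generic sketch, not tied to any particular library):

value, err := doSomething() // doSomething is a hypothetical function returning (T, error)
if err != nil {
    return fmt.Errorf("doing something: %w", err) // wrap or return the error to the caller
}
// use value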
I am expected to have 80% test coverage even for pushing the basic project structure. I am a bit confused about how to write unit tests for the following code, which connects to a Postgres DB and pings it for a health check. Can someone help me, please?
var postgres *sql.DB

// ConnectToPostgres func to connect to postgres
func ConnectToPostgres(connStr string) (*sql.DB, error) {
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Println("postgres-client ", err)
        return nil, err
    }
    postgres = db
    return db, nil
}

// PostgresHealthCheck to ping database and check for errors
func PostgresHealthCheck() error {
    if err := postgres.Ping(); err != nil {
        return err
    }
    return nil
}

type PostgresRepo struct {
    db *sql.DB
}

// NewPostgresRepo constructor
func NewPostgresRepo(database *sql.DB) *PostgresRepo {
    return &PostgresRepo{
        db: database,
    }
}
You need to use this: https://github.com/DATA-DOG/go-sqlmock
It's very easy to use. Here is an example where a controller is tested using a mocked SQL connection:
Implementation
func (up UserProvider) GetUsers() ([]models.User, error) {
    var users = make([]models.User, 0, 10)

    rows, err := up.DatabaseProvider.Query("SELECT firstname, lastname, email, age FROM Users;")
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    for rows.Next() {
        var u models.User = models.User{}
        err := rows.Scan(&u.Name, &u.Lastname, &u.Email, &u.Age)
        if err != nil {
            return nil, err
        }
        users = append(users, u)
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return users, nil
}
Test
func TestGetUsersOk(t *testing.T) {
    db, mock := NewMock()

    mock.ExpectQuery("SELECT firstname, lastname, email, age FROM Users;").
        WillReturnRows(sqlmock.NewRows([]string{"firstname", "lastname", "email", "age"}).
            AddRow("pepe", "guerra", "pepe@gmail.com", 34))

    subject := UserProvider{
        DatabaseProvider: repositories.NewMockDBProvider(db, nil),
    }

    resp, err := subject.GetUsers()

    assert.Nil(t, err)
    assert.NotNil(t, resp)
    assert.Equal(t, 1, len(resp))
}

func NewMock() (*sql.DB, sqlmock.Sqlmock) {
    db, mock, err := sqlmock.New()
    if err != nil {
        log.Fatalf("an error '%s' was not expected when opening a stub database connection", err)
    }
    return db, mock
}
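To cover the PostgresHealthCheck function from the question, recent versions of go-sqlmock can also intercept Ping calls when ping monitoring is enabled. A rough sketch, assuming the test lives in the same package so it can reassign the package-level postgres variable:

func TestPostgresHealthCheckOk(t *testing.T) {
    // MonitorPingsOption makes the mock record Ping calls so ExpectPing can match them.
    db, mock, err := sqlmock.New(sqlmock.MonitorPingsOption(true))
    if err != nil {
        t.Fatalf("unexpected error opening stub database: %s", err)
    }
    defer db.Close()

    mock.ExpectPing()

    postgres = db // point the package-level handle at the mock
    if err := PostgresHealthCheck(); err != nil {
        t.Errorf("expected healthy ping, got error: %s", err)
    }

    if err := mock.ExpectationsWereMet(); err != nil {
        t.Errorf("unmet sqlmock expectations: %s", err)
    }
}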
I find that writing tests against a live database makes for more high quality tests. The challenge with Postgres is that there's no good in-memory fake that you can substitute in.
What I came up with is standing up the postgres Docker container and creating temporary databases in there. The PostgresContainer type in the github.com/bitcomplete/sqltestutil package does exactly this:
ctx := context.Background()

// Postgres version is "12"
pg, _ := sqltestutil.StartPostgresContainer(ctx, "12")
defer pg.Shutdown(ctx)

db, err := sql.Open("postgres", pg.ConnectionString())
// ... execute SQL
Per the docs, it's a good idea to set up your tests so that the container is only started once, as it can take a few seconds to start up (more if the image needs to be downloaded). It suggests some approaches for mitigating that problem.
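One common approach is a TestMain that starts the container once for the whole package. A sketch under the assumption that the sqltestutil API shown above is used and that tests reach the database through a package-level testDB variable (a hypothetical name):

var testDB *sql.DB // hypothetical handle shared by the tests in this package

func TestMain(m *testing.M) {
    ctx := context.Background()

    // Start a single Postgres container for the whole test package.
    pg, err := sqltestutil.StartPostgresContainer(ctx, "12")
    if err != nil {
        log.Fatalf("starting postgres container: %s", err)
    }

    testDB, err = sql.Open("postgres", pg.ConnectionString())
    if err != nil {
        pg.Shutdown(ctx)
        log.Fatalf("opening connection: %s", err)
    }

    code := m.Run()

    // Clean up explicitly; os.Exit skips deferred calls.
    testDB.Close()
    pg.Shutdown(ctx)
    os.Exit(code)
}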
I'm quite new to both PostgreSQL and golang. Mainly, I am trying to understand the following:
Why did I need the Commit call to release the connection, when the other two Close calls didn't do the trick?
Would also appreciate pointers regarding the right/wrong way in which I'm going about working with cursors.
In the following function, I'm using gorp to make a CURSOR, query my Postgres DB row by row and write each row to a writer function:
func(txn *gorp.Transaction,
    q string,
    params []interface{},
    myWriter func([]byte, error)) {

    cursor := "DECLARE GRABDATA NO SCROLL CURSOR FOR " + q
    _, err := txn.Exec(cursor, params...)
    if err != nil {
        myWriter(nil, err)
        return
    }

    rows, err := txn.Query("FETCH ALL in GRABDATA")
    if err != nil {
        myWriter(nil, err)
        return
    }

    defer func() {
        if _, err := txn.Exec("CLOSE GRABDATA"); err != nil {
            fmt.Println("Error while closing cursor:", err)
        }
        if err = rows.Close(); err != nil {
            fmt.Println("Error while closing rows:", err)
        } else {
            fmt.Println("\n\n\n Closed rows without error", "\n\n\n")
        }
        if err = txn.Commit(); err != nil {
            fmt.Println("Error on commit:", err)
        }
    }()

    // cols holds the column names (defined elsewhere in the original code).
    pointers := make([]interface{}, len(cols))
    container := make([]sql.NullString, len(cols))
    values := make([]string, len(cols))
    for i := range pointers {
        pointers[i] = &container[i]
    }

    for rows.Next() {
        if err = rows.Scan(pointers...); err != nil {
            myWriter(nil, err)
            return
        }
        // note: values would need to be filled from container before joining
        stringLine := strings.Join(values, ",") + "\n"
        myWriter([]byte(stringLine), nil)
    }
}
In the defer section, I would initially only Close the rows, but then I saw that the connection stayed in pg_stat_activity in the idle in transaction state, with the FETCH ALL in GRABDATA query.
Calling txn.Exec("CLOSE <cursor_name>") didn't help. After that, I had a CLOSE GRABDATA query in the idle in transaction state...
Only when I started calling Commit() did the connection actually close. I thought that maybe I need to call Commit to execute anything on the transaction, but if that's the case, how come I got the results of my queries without calling it?
You want to end the transaction, not just close a declared cursor; COMMIT does that.
You can run multiple queries in one transaction, which is why you see their results without committing.
The pg_stat_activity.state values are: active while you are running a statement (e.g. BEGIN TRANSACTION; or a FETCH from the cursor), idle in transaction when you are not currently running statements but the transaction is still open, and finally idle after you run END or COMMIT, so the transaction is over. After you disconnect, the session ends and there is no row in pg_stat_activity at all.
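To illustrate with a minimal database/sql sketch, the comments show what you would see for that backend in pg_stat_activity.state at each step:

tx, err := db.Begin() // after BEGIN runs: idle in transaction
if err != nil {
    return err
}

rows, err := tx.Query("SELECT 1") // active while the query runs, then idle in transaction again
if err != nil {
    tx.Rollback()
    return err
}
rows.Close() // closing rows does NOT end the transaction; the backend stays idle in transaction

if err := tx.Commit(); err != nil { // COMMIT ends the transaction; the pooled connection now shows as idle
    return err
}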