Rollback does not work with a Go transaction wrapper - PostgreSQL

I have recently started learning Go.
I found the following GitHub implementation of a wrapper for database transaction processing and decided to try it out.
(source) https://github.com/oreilly-japan/practical-go-programming/blob/master/ch09/transaction/wrapper/main.go
I am using PostgreSQL as the database.
Initially, it contains the following data.
testdb=> select * from products;
product_id | price
------------+-------
0001 | 200
0002 | 100
0003 | 150
0004 | 300
(4 rows)
After process A succeeds, process B is intentionally made to fail, so I expect transaction A to be rolled back. When I run it, however, no rollback occurs and I end up with the results below.
In truth, since B failed, process A should have been rolled back and the database values should be unchanged.
I have inserted log statements in several places to trace this, but I still cannot see the cause. Why is the rollback not executed?
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v4/stdlib"
)

// transaction-wrapper-start
type txAdmin struct {
	*sql.DB
}

type Service struct {
	tx txAdmin
}

func (t *txAdmin) Transaction(ctx context.Context, f func(ctx context.Context) (err error)) error {
	log.Printf("transaction")
	tx, err := t.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback()
	if err := f(ctx); err != nil {
		log.Printf("transaction err")
		return fmt.Errorf("transaction query failed: %w", err)
	}
	log.Printf("commit")
	return tx.Commit()
}

func (s *Service) UpdateProduct(ctx context.Context, productID string) error {
	updateFunc := func(ctx context.Context) error {
		log.Printf("first process")
		// Process A
		if _, err := s.tx.ExecContext(ctx, "UPDATE products SET price = 200 WHERE product_id = $1", productID); err != nil {
			log.Printf("first err")
			return err
		}
		log.Printf("second process")
		// Process B (intentionally made to fail)
		if _, err := s.tx.ExecContext(ctx, "...", productID); err != nil {
			log.Printf("second err")
			return err
		}
		return nil
	}
	log.Printf("update")
	return s.tx.Transaction(ctx, updateFunc)
}
// transaction-wrapper-end

func main() {
	data, err := sql.Open("pgx", "host=localhost port=5432 user=testuser dbname=testdb password=password sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	database := Service{tx: txAdmin{data}}
	ctx := context.Background()
	database.UpdateProduct(ctx, "0004")
}
Output:
2022/05/26 13:28:55 update
2022/05/26 13:28:55 transaction
2022/05/26 13:28:55 first process
2022/05/26 13:28:55 second process
2022/05/26 13:28:55 second err
2022/05/26 13:28:55 transaction err
Database changes (if the rollback had worked, the price for id 0004 would still be 300):
testdb=> select * from products;
product_id | price
------------+-------
0001 | 200
0002 | 100
0003 | 150
0004 | 200
(4 rows)
Please tell me how I can use the wrapper to correctly process transactions.
=========
PS.
The following code without the wrapper worked properly.
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/jackc/pgx/v4/stdlib"
)

// transaction-defer-start
type Service struct {
	db *sql.DB
}

func (s *Service) UpdateProduct(ctx context.Context, productID string) (err error) {
	tx, err := s.db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback()
	if _, err = tx.ExecContext(ctx, "UPDATE products SET price = 200 WHERE product_id = $1", productID); err != nil {
		log.Println("update err")
		return err
	}
	if _, err = tx.ExecContext(ctx, "...", productID); err != nil {
		log.Println("update err")
		return err
	}
	return tx.Commit()
}
// transaction-defer-end

func main() {
	var database Service
	dbConn, err := sql.Open("pgx", "host=localhost port=5432 user=testuser dbname=testdb password=passs sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	database.db = dbConn
	ctx := context.Background()
	database.UpdateProduct(ctx, "0004")
}

As @Richard Huxton said, pass the tx into the function f. In the original wrapper, updateFunc runs its queries on the embedded *sql.DB, so each UPDATE executes (and autocommits) outside the transaction; rolling back the never-used tx therefore changes nothing.
Here are the steps:
add a field on struct txAdmin to hold the *sql.Tx, so txAdmin has both DB and Tx fields
inside Transaction, assign the new transaction to txAdmin.Tx
inside UpdateProduct, use Service.tx.Tx for every query
So the final code looks like this:
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v4/stdlib"
)

// transaction-wrapper-start
type txAdmin struct {
	*sql.DB
	*sql.Tx
}

type Service struct {
	tx txAdmin
}

func (t *txAdmin) Transaction(ctx context.Context, f func(ctx context.Context) (err error)) error {
	log.Printf("transaction")
	tx, err := t.DB.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	// stash the transaction so the queries in f can reach it
	t.Tx = tx
	defer tx.Rollback()
	if err := f(ctx); err != nil {
		log.Printf("transaction err")
		return fmt.Errorf("transaction query failed: %w", err)
	}
	log.Printf("commit")
	return tx.Commit()
}

func (s *Service) UpdateProduct(ctx context.Context, productID string) error {
	updateFunc := func(ctx context.Context) error {
		log.Printf("first process")
		// Process A
		if _, err := s.tx.Tx.ExecContext(ctx, "UPDATE products SET price = 200 WHERE product_id = $1", productID); err != nil {
			log.Printf("first err")
			return err
		}
		log.Printf("second process")
		// Process B (intentionally made to fail)
		if _, err := s.tx.Tx.ExecContext(ctx, "...", productID); err != nil {
			log.Printf("second err")
			return err
		}
		return nil
	}
	log.Printf("update")
	return s.tx.Transaction(ctx, updateFunc)
}
// transaction-wrapper-end

func main() {
	data, err := sql.Open("pgx", "host=localhost port=5432 user=testuser dbname=testdb password=password sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	database := Service{tx: txAdmin{DB: data}}
	ctx := context.Background()
	database.UpdateProduct(ctx, "0004")
}
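One caveat with this fix: because the *sql.Tx is stored on the shared txAdmin, two goroutines calling Transaction at the same time would overwrite each other's transaction. A variant that avoids this is to hand the transaction to f through the context. A minimal sketch of that idea (the txKey type and exec helper are my own names, not from the original answer); it slots into the question's original wrapper, where txAdmin embeds just *sql.DB:

type txKey struct{}

func (t *txAdmin) Transaction(ctx context.Context, f func(ctx context.Context) error) error {
	tx, err := t.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit has succeeded
	// hand the transaction to f via the context instead of shared state
	if err := f(context.WithValue(ctx, txKey{}, tx)); err != nil {
		return fmt.Errorf("transaction query failed: %w", err)
	}
	return tx.Commit()
}

// exec runs a statement on the context's transaction when one is present.
func (t *txAdmin) exec(ctx context.Context, query string, args ...interface{}) (sql.Result, error) {
	if tx, ok := ctx.Value(txKey{}).(*sql.Tx); ok {
		return tx.ExecContext(ctx, query, args...)
	}
	return t.ExecContext(ctx, query, args...)
}

updateFunc would then call s.tx.exec(ctx, ...) for both statements; each transaction sees only its own *sql.Tx, and calls made outside Transaction still run against the plain DB.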

Related

How to Handle Dynamic Database Connections in Go?

I am currently building a Go application that needs to connect to multiple databases dynamically.
For context, I have 22 databases (db1, db2, db3, ...) and the dbUser, dbPass, and dbPort remain the same. To determine which database to connect to, I need access to the query param in Echo before making the database connection.
I need a solution to connect to the right database efficiently. What are some best practices and methods for achieving this in Go?
Main.go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
	_ "github.com/lib/pq"

	"github.com/labstack/echo"
	"github.com/spf13/viper"

	_variantHttpDelivery "backend/server/variant/delivery/http"
	_variantHttpDeliveryMiddleware "backend/server/variant/delivery/http/middleware"
	_variantRepo "backend/server/variant/repository/postgres"
	_variantUcase "backend/server/variant/usecase"
)

func init() {
	viper.SetConfigFile(`config.json`)
	err := viper.ReadInConfig()
	if err != nil {
		panic(err)
	}
	if viper.GetBool(`debug`) {
		log.Println("Service RUN on DEBUG mode")
	}
}

func main() {
	dbHost := viper.GetString(`database.host`)
	dbPort := viper.GetString(`database.port`)
	dbUser := viper.GetString(`database.user`)
	dbPass := viper.GetString(`database.pass`)
	dbName := viper.GetString(`database.name`)
	dsn := fmt.Sprintf("postgresql://%s:%s@%s:%s/%s", dbUser, dbPass, dbHost, dbPort, dbName)
	dbConn, err := sql.Open(`postgres`, dsn)
	if err != nil {
		log.Fatal(err)
	}
	err = dbConn.Ping()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection Successful 👍")
	defer func() {
		err := dbConn.Close()
		if err != nil {
			log.Fatal(err)
		}
	}()
	e := echo.New()
	middL := _variantHttpDeliveryMiddleware.InitMiddleware()
	e.Use(middL.CORS)
	variantRepo := _variantRepo.NewPsqlVariantRepository(dbConn)
	timeoutContext := time.Duration(viper.GetInt("context.timeout")) * time.Second
	au := _variantUcase.NewVariantUsecase(variantRepo, timeoutContext)
	_variantHttpDelivery.NewVariantHandler(e, au)
	log.Fatal(e.Start(viper.GetString("server.address"))) //nolint
}
Repository which handles all the database logic
package postgres

import (
	"context"
	"database/sql"
	"reflect"

	"github.com/sirupsen/logrus"

	"backend/server/domain"
)

type psqlVariantRepository struct {
	Conn *sql.DB
}

// NewPsqlVariantRepository will create an object that represents the variant.Repository interface
func NewPsqlVariantRepository(conn *sql.DB) domain.VariantRepository {
	return &psqlVariantRepository{conn}
}

func (m *psqlVariantRepository) GetByVCF(ctx context.Context, vcf string) (res domain.Variant, err error) {
	query := `SELECT * FROM main1 WHERE variant_vcf = $1`
	list, err := m.fetch(ctx, query, vcf)
	if err != nil {
		return domain.Variant{}, err
	}
	if len(list) > 0 {
		res = list[0]
	} else {
		return res, domain.ErrNotFound
	}
	return
}

func (m *psqlVariantRepository) fetch(ctx context.Context, query string, args ...interface{}) (result []domain.Variant, err error) {
	rows, err := m.Conn.QueryContext(ctx, query, args...)
	if err != nil {
		logrus.Error(err)
		return nil, err
	}
	defer func() {
		errRow := rows.Close()
		if errRow != nil {
			logrus.Error(errRow)
		}
	}()
	result = make([]domain.Variant, 0)
	for rows.Next() {
		t := domain.Variant{}
		values := make([]interface{}, 0, reflect.TypeOf(t).NumField())
		v := reflect.ValueOf(&t).Elem()
		for i := 0; i < v.NumField(); i++ {
			if v.Type().Field(i).Type.Kind() == reflect.String {
				values = append(values, new(sql.NullString))
			} else {
				values = append(values, v.Field(i).Addr().Interface())
			}
		}
		err = rows.Scan(values...)
		if err != nil {
			logrus.Error(err)
			return nil, err
		}
		for i, value := range values {
			if ns, ok := value.(*sql.NullString); ok {
				v.Field(i).SetString(ns.String)
			}
		}
		result = append(result, t)
	}
	logrus.Info("Successfully fetched results from database 👍")
	return result, nil
}
So far I couldn't find any solution.
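One approach worth trying, since all 22 databases share the same credentials, is to cache one *sql.DB per database name and open each lazily, keyed by the Echo query parameter. A rough sketch; the pool type, DSN format, and parameter name here are illustrative assumptions, not from the question:

package main

import (
	"database/sql"
	"fmt"
	"sync"

	_ "github.com/lib/pq"
)

// dbPool lazily opens and caches one *sql.DB per database name.
type dbPool struct {
	mu    sync.Mutex
	conns map[string]*sql.DB
}

func newDBPool() *dbPool {
	return &dbPool{conns: make(map[string]*sql.DB)}
}

// Get returns the cached handle for name, opening it on first use.
func (p *dbPool) Get(name string) (*sql.DB, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if db, ok := p.conns[name]; ok {
		return db, nil
	}
	// hypothetical DSN: only the database name varies
	dsn := fmt.Sprintf("postgresql://dbUser:dbPass@localhost:5432/%s?sslmode=disable", name)
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	p.conns[name] = db
	return db, nil
}

func main() {
	pool := newDBPool()
	db, err := pool.Get("db1") // e.g. from c.QueryParam("db") in an Echo handler
	if err != nil {
		panic(err)
	}
	_ = db
}

In a handler you would validate the parameter against the known db1..db22 names before calling Get, both to avoid opening arbitrary DSNs and because sql.Open does not itself verify that the database exists (only Ping does).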

session was not created by this client - MongoDB, Golang

I am trying to create a transaction in MongoDB with Golang and Iris. The problem is that the transaction does not accept the Iris context and the connection, and I don't know why this happens. Can you tell me what I am doing wrong here?
Main.go, using Iris:
func main() {
	app := iris.New()
	app.Logger().SetLevel("debug")
	app.Use(recover.New())
	app.Use(logger.New())
	// Resource: http://localhost:8080
	app.Get("/", func(ctx iris.Context) {
		ctx.JSON(iris.Map{"message": "Welcome to Woft Bank"})
	})
	// API endpoints
	router.SetRoute(app)
	app.Listen(PORT)
}
Router
func SetRoute(app *iris.Application) {
	userRoute := app.Party("/user")
	{
		userRoute.Post("/register", middleware.UserValidator, controller.CreateUser)
		userRoute.Get("/detail", middleware.UserValidator, controller.GetUserBalanceWithUserID)
		userRoute.Patch("/transfer", middleware.TransferValidator, controller.Transfer)
	}
}
Transaction function (this is where "session was not created by this client" occurs):
func Transfer(ctx iris.Context) {
	senderID := ctx.URLParam("from")
	receiverID := ctx.URLParam("to")
	amount, _ := strconv.ParseInt(ctx.URLParam("amount"), 10, 64)
	session, err := Config.DB().StartSession()
	if err != nil {
		handleErr(ctx, err)
		return
	}
	defer session.EndSession(ctx)
	callback := func(sessCtx mongo.SessionContext) (interface{}, error) {
		upsert := false
		after := options.After
		opt := options.FindOneAndUpdateOptions{
			ReturnDocument: &after,
			Upsert:         &upsert,
		}
		sender := Models.User{}
		filter := bson.M{"username": senderID}
		update := bson.M{"$inc": bson.M{"balance": -amount}}
		// FindOneAndUpdate did not accept sessCtx
		err := UserCollection.FindOneAndUpdate(sessCtx, filter, update, &opt).Decode(&sender)
		if err != nil {
			return nil, err
		}
		if sender.Balance < 0 {
			return nil, errors.New("sender's balance is not enough")
		}
		filter = bson.M{"username": receiverID}
		update = bson.M{"$inc": bson.M{"balance": +amount}}
		_, err = UserCollection.UpdateOne(sessCtx, filter, update)
		if err != nil {
			return nil, err
		}
		return sender, nil
	}
	result, err := session.WithTransaction(ctx, callback)
	if err != nil {
		handleErr(ctx, err)
		return
	}
	response(result, "success", ctx)
}
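With the official mongo-go-driver, the error "session was not created by this client" typically means exactly that: UserCollection was built from a different *mongo.Client than the one Config.DB() used to start the session. Every operation run under the session must go through a collection derived from that same client (and transactions also require a replica set or mongos, not a standalone server). A minimal sketch of the expected shape, with illustrative names:

package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// The session and every collection used inside the transaction
	// must come from this same client.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	users := client.Database("bank").Collection("users")

	session, err := client.StartSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.EndSession(ctx)

	_, err = session.WithTransaction(ctx, func(sessCtx mongo.SessionContext) (interface{}, error) {
		// operations inside the callback use sessCtx, not the outer ctx
		return users.UpdateOne(sessCtx, bson.M{"username": "alice"},
			bson.M{"$inc": bson.M{"balance": -100}})
	})
	if err != nil {
		log.Fatal(err)
	}
}

So the fix in the question's code is to make UserCollection come from the same client on which StartSession is called.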

Why does each transaction count as a client?

I am processing a bunch of files and then dumping the results into PostgreSQL. I would like to run many workers at the same time, but I keep getting the error "pq: sorry, too many clients already". This seems to happen whenever workers is greater than about 100. (For simplicity, the code below demonstrates the process, but instead of processing a file I simply insert 1M rows into each table.)
Since I am reusing the same *db, why am I getting this error? Does each transaction count as a client, or am I doing something wrong?
package main

import (
	"database/sql"
	"flag"
	"fmt"
	"log"
	"sync"

	"github.com/lib/pq"
)

func process(db *sql.DB, table string) error {
	if _, err := db.Exec(fmt.Sprintf(`DROP TABLE IF EXISTS %v;`, table)); err != nil {
		return err
	}
	col := "age"
	s := fmt.Sprintf(`
		CREATE TABLE %v (
			pk serial PRIMARY KEY,
			%v int NOT NULL
		)`, table, col)
	_, err := db.Exec(s)
	if err != nil {
		return err
	}
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			tx.Rollback()
			return
		}
		err = tx.Commit()
	}()
	stmt, err := tx.Prepare(pq.CopyIn(table, col))
	if err != nil {
		return err
	}
	defer func() {
		err = stmt.Close()
	}()
	for i := 0; i < 1e6; i++ {
		if _, err = stmt.Exec(i); err != nil {
			return err
		}
	}
	return err
}

func main() {
	var u string
	flag.StringVar(&u, "user", "", "user")
	var pass string
	flag.StringVar(&pass, "pass", "", "pass")
	var host string
	flag.StringVar(&host, "host", "", "host")
	var database string
	flag.StringVar(&database, "database", "", "database")
	var workers int
	flag.IntVar(&workers, "workers", 10, "workers")
	flag.Parse()
	db, err := sql.Open("postgres",
		fmt.Sprintf(
			"user=%s password=%s host=%s database=%s sslmode=require",
			u, pass, host, database,
		),
	)
	if err != nil {
		log.Fatalln(err)
	}
	defer db.Close()
	db.SetMaxIdleConns(0)
	var wg sync.WaitGroup
	ch := make(chan int)
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range ch {
				table := fmt.Sprintf("_table%d", i)
				log.Println(table)
				if err := process(db, table); err != nil {
					log.Fatalln(err)
				}
			}
		}()
	}
	for i := 0; i < 300; i++ {
		ch <- i
	}
	close(ch)
	wg.Wait()
}
I realize I can simply increase the PostgreSQL settings (see: How to increase the max connections in postgres?), but I would like to understand what is going on.
Since I am reusing the same *db why am I getting this error?
I suspect the Postgres driver is using a separate connection for each of your workers, which is a smart decision for most cases.
Does each transaction count as a client or am I doing something wrong?
In your case, yes, each transaction counts as a client, because you are calling process() from a goroutine per worker. You are creating as many concurrent transactions as there are workers, and a transaction holds its connection until it commits or rolls back. Since each of your transactions is long, all of them hold an individual connection to the database at the same time, and hence you hit the limit:
go func() {
	defer wg.Done()
	for i := range ch {
		table := fmt.Sprintf("_table%d", i)
		log.Println(table)
		if err := process(db, table); err != nil {
			log.Fatalln(err)
		}
	}
}()
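One way to avoid hitting the server limit without restructuring the program is to cap the pool in database/sql itself; workers then block waiting for a free connection instead of failing. A sketch (the numbers are illustrative and should stay below PostgreSQL's max_connections):

// cap the pool so at most 90 connections are ever opened
db.SetMaxOpenConns(90)
// keep finished connections around for reuse; the SetMaxIdleConns(0)
// in the question instead forces a fresh connection per task
db.SetMaxIdleConns(90)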

How do I import rows to Postgresql from STDIN? [duplicate]

This question already has an answer here:
Using PostgreSQL COPY FROM STDIN
(1 answer)
Closed 3 months ago.
In Python I have the following that will bulk-load rows to Postgresql without using a file:
import csv
import subprocess
mylist, keys = [{'name': 'fred'}, {'name': 'mary'}], ['name']
p = subprocess.Popen(['psql', 'mydb', '-U', 'openupitsme', '-h', 'my.ip.address', '--no-password', '-c',
'\COPY tester(%s) FROM STDIN (FORMAT CSV)' % ', '.join(keys),
'--set=ON_ERROR_STOP=false'
], stdin=subprocess.PIPE
)
for d in mylist:
dict_writer = csv.DictWriter(p.stdin, keys, quoting=csv.QUOTE_MINIMAL)
dict_writer.writerow(d)
p.stdin.close()
I am trying to accomplish the same in Go. I am currently writing the rows to a file then importing them and then deleting that file. I'd like to import the rows from STDIN like I do in Python. I have:
package main

import (
	"database/sql"
	"log"
	"os"
	"os/exec"

	_ "github.com/lib/pq"
)

var (
	err error
	db  *sql.DB
)

func main() {
	var err error
	fh := "/path/to/my/file.txt"
	f, err := os.Create(fh)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	defer os.Remove(fh)
	rows := []string{"fred", "mary"}
	for _, n := range rows {
		_, err = f.WriteString(n + "\n")
		if err != nil {
			panic(err)
		}
	}
	// dump to postgresql
	c := exec.Command("psql", "mydb", "-U", "openupitsme", "-h", "my.ip.address", "--no-password",
		"-c", `\COPY tester(customer) FROM `+fh)
	if out, err := c.CombinedOutput(); err != nil {
		log.Println(string(out), err)
	}
}
EDIT:
A bit further along but this is not inserting records:
keys := []string{"link", "domain"}
records := [][]string{
	{"first_name", "last_name"},
	{"Rob", "Pike"},
	{"Ken", "Thompson"},
	{"Robert", "Griesemer"},
}
cmd := exec.Command("psql")
stdin, err := cmd.StdinPipe()
if err != nil {
	log.Println(err)
}
stdout, err := cmd.StdoutPipe()
if err != nil {
	log.Println(err)
}
if err := cmd.Start(); err != nil {
	log.Println(err)
}
go func() {
	_, err = io.WriteString(stdin, "search -U meyo -h 1.2.3.4 -p 1111 --no-password -c ")
	if err != nil {
		log.Println(err)
	}
	_, err := io.WriteString(stdin, fmt.Sprintf("COPY links(%s) FROM STDIN (FORMAT CSV)", strings.Join(keys, ",")))
	if err != nil {
		log.Println(err)
	}
	w := csv.NewWriter(stdin)
	if err := w.WriteAll(records); err != nil {
		log.Fatalln("error writing record to csv:", err)
	}
	w.Flush()
	if err := w.Error(); err != nil {
		log.Fatal(err)
	}
	if err != nil {
		log.Println(err)
	}
	stdin.Close()
}()
done := make(chan bool)
go func() {
	_, err := io.Copy(os.Stdout, stdout)
	if err != nil {
		log.Fatal(err)
	}
	stdout.Close()
	done <- true
}()
<-done
if err := cmd.Wait(); err != nil {
	log.Println(err, cmd.Args, stdout)
}
No records are inserted and I get a non-helpful error:
exit status 2
The github.com/lib/pq package docs actually have an example of how to do what you want. Here is the adapted text of the whole program:
package main

import (
	"database/sql"
	"log"

	"github.com/lib/pq"
)

func main() {
	records := [][]string{
		{"Rob", "Pike"},
		{"Ken", "Thompson"},
		{"Robert", "Griesemer"},
	}
	db, err := sql.Open("postgres", "dbname=postgres user=postgres password=postgres")
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	if err = db.Ping(); err != nil {
		log.Fatalf("open ping: %v", err)
	}
	defer db.Close()
	txn, err := db.Begin()
	if err != nil {
		log.Fatalf("begin: %v", err)
	}
	stmt, err := txn.Prepare(pq.CopyIn("test", "first_name", "last_name"))
	if err != nil {
		log.Fatalf("prepare: %v", err)
	}
	for _, r := range records {
		_, err = stmt.Exec(r[0], r[1])
		if err != nil {
			log.Fatalf("exec: %v", err)
		}
	}
	_, err = stmt.Exec()
	if err != nil {
		log.Fatalf("exec: %v", err)
	}
	err = stmt.Close()
	if err != nil {
		log.Fatalf("stmt close: %v", err)
	}
	err = txn.Commit()
	if err != nil {
		log.Fatalf("commit: %v", err)
	}
}
On my machine this imports 1 000 000 records in about 2 seconds.
The following code should point you in the direction you want to go:
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	keys := []string{"customer"}
	sqlCmd := fmt.Sprintf("COPY tester(%s) FROM STDIN (FORMAT CSV)", strings.Join(keys, ","))
	cmd := exec.Command("psql", "<dbname>", "-U", "<username>", "-h", "<host_ip>", "--no-password", "-c", sqlCmd)
	cmd.Stdin = os.Stdin
	output, _ := cmd.CombinedOutput()
	log.Println(string(output))
}
If the keys need to be dynamic you can harvest them from os.Args.
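For example, a trivial sketch of that idea (the column names are illustrative):

// go run main.go customer email  →  COPY tester(customer,email) FROM STDIN ...
keys := os.Args[1:]
if len(keys) == 0 {
	log.Fatal("usage: main <column> [column ...]")
}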
Please note that if you plan to use the psql command then you don't need to import database/sql or lib/pq. If you are interested in using lib/pq then have a look at Bulk Imports in the lib/pq documentation.

How to find by id in golang and mongodb

I need to get values using ObjectIdHex, update them, and view the result. I'm using MongoDB and Golang, but the following code doesn't work as expected.
package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

type Person struct {
	Id    bson.ObjectId `json:"id" bson:"_id,omitempty"`
	Name  string
	Phone string
}

func checkError(err error) {
	if err != nil {
		panic(err)
	}
}

const (
	DB_NAME       = "gotest"
	DB_COLLECTION = "pepole_new1"
)

func main() {
	session, err := mgo.Dial("localhost")
	checkError(err)
	defer session.Close()
	session.SetMode(mgo.Monotonic, true)
	c := session.DB(DB_NAME).C(DB_COLLECTION)
	err = c.DropCollection()
	checkError(err)
	ale := Person{Name: "Ale", Phone: "555-5555"}
	cla := Person{Name: "Cla", Phone: "555-1234-2222"}
	kasaun := Person{Name: "kasaun", Phone: "533-12554-2222"}
	chamila := Person{Name: "chamila", Phone: "533-545-6784"}
	fmt.Println("Inserting")
	err = c.Insert(&ale, &cla, &kasaun, &chamila)
	checkError(err)
	fmt.Println("findbyID")
	var resultsID []Person
	//err = c.FindId(bson.ObjectIdHex("56bdd27ecfa93bfe3d35047d")).One(&resultsID)
	err = c.FindId(bson.M{"Id": bson.ObjectIdHex("56bdd27ecfa93bfe3d35047d")}).One(&resultsID)
	checkError(err)
	if err != nil {
		panic(err)
	}
	fmt.Println("Phone:", resultsID)
	fmt.Println("Queryingall")
	var results []Person
	err = c.Find(nil).All(&results)
	if err != nil {
		panic(err)
	}
	fmt.Println("Results All: ", results)
}
FindId(bson.M{"Id": bson.ObjectIdHex("56bdd27ecfa93bfe3d35047d")}).One(&resultsID) didn't work for me, giving the following output:
Inserting
Queryingall
Results All: [{ObjectIdHex("56bddee2cfa93bfe3d3504a1") Ale 555-5555} {ObjectIdHex("56bddee2cfa93bfe3d3504a2") Cla 555-1234-2222} {ObjectIdHex("56bddee2cfa93bfe3d3504a3") kasaun 533-12554-2222} {ObjectIdHex("56bddee2cfa93bfe3d3504a4") chamila 533-545-6784}]
findbyID
panic: not found
goroutine 1 [running]:
main.checkError(0x7f33d524b000, 0xc8200689b0)
How can I fix this problem? I need to get the value using the ObjectId and then update it. How can I do that?
You can do the same with the official Golang driver as follows:
// convert the id string to an ObjectId
objectId, err := primitive.ObjectIDFromHex("5b9223c86486b341ea76910c")
if err != nil {
	log.Println("Invalid id")
}
// find
result := client.Database(database).Collection("user").FindOne(context.Background(), bson.M{"_id": objectId})
user := model.User{}
result.Decode(&user)
It should be _id, not Id. Note that mgo's FindId already takes the id value itself and wraps it in an _id filter for you, so pass the ObjectId directly:
c.FindId(bson.ObjectIdHex("56bdd27ecfa93bfe3d35047d"))
Some sample code that I use:
func (model *SomeModel) FindId(id string) error {
	db, ctx, client := Drivers.MongoCollection("collection")
	defer client.Disconnect(ctx)
	objID, err := primitive.ObjectIDFromHex(id)
	if err != nil {
		return err
	}
	filter := bson.M{"_id": bson.M{"$eq": objID}}
	if err := db.FindOne(ctx, filter).Decode(&model); err != nil {
		//fmt.Println(err)
		return err
	}
	fmt.Println(model)
	return nil
}
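A hypothetical call site:

model := &SomeModel{}
if err := model.FindId("5b9223c86486b341ea76910c"); err != nil {
	log.Println(err)
}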