How to cause Buffalo transaction middleware to commit?

In trying to use the buffalo-pop/pop/popmw Transaction middleware, I am not having success writing to the database. No errors are returned, and the debug output shows the SQL statements, but the updates and inserts are not committed.
The handler looks like:
func MyHandler(c buffalo.Context) error {
	tx, ok := c.Value("tx").(*pop.Connection)
	if !ok {
		return errors.New("no transaction found")
	}
	f := models.File{
		Name: "file.txt",
	}
	if err := tx.Create(&f); err != nil {
		return err
	}
	return nil
}
app.go:
func App() *buffalo.App {
	...
	app.GET("/myhandler", MyHandler)
	app.Use(popmw.Transaction(models.DB))
	...
}
If I use DB, _ := pop.Connect("development") for my connection, it works correctly. I also observed that the autoincrement value on the table changes each time this handler is hit.
In the real app, we can't call c.Render to report a response code because we are using gqlgen as the http handler. It looks like this:
func GQLHandler(c buffalo.Context) error {
	h := handler.GraphQL(gqlgen.NewExecutableSchema(gqlgen.Config{Resolvers: &gqlgen.Resolver{}}))
	newCtx := context.WithValue(c.Request().Context(), "BuffaloContext", c)
	h.ServeHTTP(c.Response(), c.Request().WithContext(newCtx))
	return nil
}

One of the features of the Pop Middleware for Buffalo is to wrap the action and the middlewares below in the stack inside a DB transaction. Here are the conditions for an auto-commit from the Pop Middleware:
Commit if there was no error executing the middlewares and action; and the response status is a 2xx or 3xx.
Rollback otherwise.
From Buffalo integration with Pop.
So, make sure no error is returned from your action or from any middleware in the stack, and that the produced response status is in the 2xx or 3xx range.
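For a plain Buffalo action, that usually just means finishing the handler with a successful render. A minimal sketch, assuming a stock Buffalo app where r is the app's render.Engine:
func MyHandler(c buffalo.Context) error {
	tx, ok := c.Value("tx").(*pop.Connection)
	if !ok {
		return errors.New("no transaction found")
	}
	f := models.File{Name: "file.txt"}
	if err := tx.Create(&f); err != nil {
		return err // a returned error makes the middleware roll back
	}
	// a 2xx status lets popmw.Transaction commit
	return c.Render(200, r.JSON(f))
}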

If Buffalo receives no response code via a call to c.Render, the Transaction middleware treats the request as non-successful. Since the context of this question is GraphQL using gqlgen, and c.Render cannot be used, I found that explicitly committing the transaction works. Something like this:
func GQLHandler(c buffalo.Context) error {
	gqlSuccess := true
	h := handler.GraphQL(
		gqlgen.NewExecutableSchema(gqlgen.Config{Resolvers: &gqlgen.Resolver{}}),
		handler.ErrorPresenter(
			func(ctx context.Context, e error) *gqlerror.Error {
				gqlSuccess = false
				return graphql.DefaultErrorPresenter(ctx, e)
			}))
	newCtx := context.WithValue(c.Request().Context(), "BuffaloContext", c)
	h.ServeHTTP(c.Response(), c.Request().WithContext(newCtx))
	if !gqlSuccess {
		return nil // let the middleware roll the transaction back
	}
	tx, ok := c.Value("tx").(*pop.Connection)
	if !ok {
		return errors.New("no transaction found")
	}
	return tx.TX.Commit()
}
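Note that tx.TX is the underlying transaction handle that pop exposes, so this commits the very transaction the middleware opened; the middleware's own end-of-request cleanup should then find the transaction already closed (at the database/sql level that condition surfaces as sql.ErrTxDone).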

Related

How to unit test with Gorm, mux, and PostgreSQL

I'm new to Go and unit testing. I built a small side project called "urlshortener" using Go with Gorm, mux, and PostgreSQL.
A question has been bugging me even after reading many articles.
To keep the question clean, I've removed some irrelevant code (DB connection, .env, etc.).
My code is below (main.go):
package main

type Url struct {
	ID       uint   `gorm:"primaryKey"` // used for shortUrl index
	Url      string `gorm:"unique"`     // prevent duplicate urls
	ExpireAt string
	ShortUrl string
}

var db *gorm.DB
var err error

func main() {
	// gain access to the database by reading .env
	...
	// database connection string
	...
	// run migrations on the db if they have not already been applied
	db.AutoMigrate(&Url{})
	// API routes
	router := mux.NewRouter()
	router.HandleFunc("/{id}", getURL).Methods("GET")
	router.HandleFunc("/api/v1/urls", createURL).Methods("POST")
	router.HandleFunc("/create/urls", createURLs).Methods("POST")
	// Listener
	http.ListenAndServe(":80", router)
	// close connection to db when main func finishes
	defer db.Close()
}
Now I'm building a unit test for the getURL function, a GET handler that fetches data from my PostgreSQL database (the database is called urlshortener and the table name is urls).
Here is the getURL function:
func getURL(w http.ResponseWriter, r *http.Request) {
	params := mux.Vars(r)
	var url Url
	err := db.Find(&url, params["id"]).Error
	if err != nil {
		w.WriteHeader(http.StatusNotFound)
	} else {
		w.WriteHeader(http.StatusOK)
		json.NewEncoder(w).Encode(url.Url)
	}
}
This works fine with my database (verified with a curl command).
I know that a unit test is not about mocking data; it aims to test whether a function/method is stable. Although I import mux and net/http for the connection, I think the unit test here should be about the SQL syntax. So I decided to focus on testing whether gorm returns the right value to the test function.
In this case, db.Find returns a *gorm.DB struct, and the call on the first line below should generate exactly the SQL on the second line (see the docs: https://gorm.io/docs/query.html):
db.Find(&url, params["id"])
SELECT * FROM urls WHERE id=<input_number>
My question is: how do I write a unit test that checks whether the SQL syntax is correct in this case (gorm + mux)? I've checked some articles, but most of them test the HTTP connection status, not the SQL.
Also, my function has no return value; do I need to rewrite it to return a value before I can test it?
Below is the test structure I have in mind:
func TestGetURL(t *testing.T) {
	//set const answer for this test
	//set up the mock sql connection
	//call getURL()
	//check if equal with answer using assert
}
Update
According to Emin Laletovic's answer, I now have a prototype of my TestGetURL, and some new questions about it.
func TestGetURL(t *testing.T) {
	//set const answer for this test
	testQuery := `SELECT * FROM "urls" WHERE id=1`
	id := 1
	//set up the mock sql connection
	testDB, mock, err := sqlmock.New()
	if err != nil {
		panic("sqlmock.New() occurs an error")
	}
	// uses "gorm.io/driver/postgres" library
	dialector := postgres.New(postgres.Config{
		DSN:                  "sqlmock_db_0",
		DriverName:           "postgres",
		Conn:                 testDB,
		PreferSimpleProtocol: true,
	})
	db, err = gorm.Open(dialector, &gorm.Config{})
	if err != nil {
		panic("Cannot open stub database")
	}
	//mock the db.Find function
	rows := sqlmock.NewRows([]string{"id", "url", "expire_at", "short_url"}).
		AddRow(1, "http://somelongurl.com", "some_date", "http://shorturl.com")
	mock.ExpectQuery(regexp.QuoteMeta(testQuery)).
		WillReturnRows(rows).WithArgs(id)
	//create response writer and request for testing
	mockedRequest, _ := http.NewRequest("GET", "/1", nil)
	mockedWriter := httptest.NewRecorder()
	//call getURL()
	getURL(mockedWriter, mockedRequest)
	//check values in mockedWriter using assert
}
In the code, I mock the request and response with the http and httptest libs.
I run the test, but it seems that the getURL function in main.go does not receive the args I pass in.
When db.Find is called, mock.ExpectQuery receives it and starts to compare; so far so good:
db.Find(&url, params["id"])
mock.ExpectQuery(regexp.QuoteMeta(testQuery)).WillReturnRows(rows).WithArgs(id)
According to the test log, when db.Find is triggered it only executes SELECT * FROM "urls", not the expected SELECT * FROM "urls" WHERE "urls"."id" = $1.
But when I exercise db.Find locally with Postman and log the SQL out, it executes properly.
In summary, I think the ResponseWriter/Request I pass in getURL(mockedWriter, mockedRequest) are wrong, which is why getURL(w http.ResponseWriter, r *http.Request) cannot work as we expect.
Please let me know if I'm missing anything~
Any idea or way to rewrite the code would help, thank you!
If you just want to test the SQL string that db.Find generates, you can use the DryRun feature (per the documentation).
stmt := db.Session(&gorm.Session{DryRun: true}).Find(&url, params["id"]).Statement
stmt.SQL.String() // returns the SQL query string without the param values
stmt.Vars         // contains a slice of the input params
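For example, a minimal sketch of a DryRun-based assertion (the test name is mine; it assumes the gorm v2 API and the Url model from the question):
func TestFindSQL(t *testing.T) {
	var url Url
	// DryRun builds the statement without executing it against the database
	stmt := db.Session(&gorm.Session{DryRun: true}).Find(&url, "1").Statement
	want := `SELECT * FROM "urls" WHERE "urls"."id" = $1`
	if got := stmt.SQL.String(); got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}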
However, to write a test for the getURL function, you could use sqlmock to mock the results that would be returned when executing the db.Find call.
func TestGetURL(t *testing.T) {
	//set const answer for this test
	testQuery := `SELECT * FROM "urls" WHERE "urls"."id" = $1`
	id := 1
	//create response writer and request for testing
	mockedWriter := httptest.NewRecorder()
	mockedRequest, _ := http.NewRequest("GET", "/1", nil)
	//set up the mock sql connection
	testDB, mock, err := sqlmock.New()
	//handle error
	// uses "gorm.io/driver/postgres" library
	dialector := postgres.New(postgres.Config{
		DSN:                  "sqlmock_db_0",
		DriverName:           "postgres",
		Conn:                 testDB,
		PreferSimpleProtocol: true,
	})
	db, err = gorm.Open(dialector, &gorm.Config{})
	//handle error
	//mock the db.Find function
	rows := sqlmock.NewRows([]string{"id", "url", "expire_at", "short_url"}).
		AddRow(1, "http://somelongurl.com", "some_date", "http://shorturl.com")
	mock.ExpectQuery(regexp.QuoteMeta(testQuery)).
		WillReturnRows(rows).WithArgs(id)
	//call getURL()
	getURL(mockedWriter, mockedRequest)
	//check values in mockedWriter using assert
}
This post and Emin Laletovic's answer really helped me a lot. I think I have the answer to this question.
Let's recap the question. I'm using gorm for PostgreSQL and mux for the HTTP service, and I built a CRUD service.
I need to write a unit test that checks whether my database syntax is correct (assuming the connection itself returns statusOK), so we focus on how to write a unit test for the SQL syntax.
But the handler function in main.go has no return value, so we need sqlmock's ExpectQuery(), which is triggered by the db.Find() inside getURL(). This way, we don't have to return a value to check whether it matches our target.
The problem I met in the Update is fixed by this post on building a unit test with mux, though that post focuses on the status check and return value.
I set the const answer for this test; the id variable is what we expect to get. Note the $1: I don't know how to change it, and however many times I rewrite the query, the generated SQL still comes back with $1 (it appears to be the driver's bind-parameter placeholder).
//set const answer for this test
testQuery := `SELECT * FROM "urls" WHERE "urls"."id" = $1`
id := "1"
I set the values passed into getURL() by doing this:
//set the values sent into the function
vars := map[string]string{
	"id": "1",
}
//create response writer and request for testing
mockedWriter := httptest.NewRecorder()
mockedRequest := httptest.NewRequest("GET", "/{id}", nil)
mockedRequest = mux.SetURLVars(mockedRequest, vars)
Finally, we call mock.ExpectationsWereMet() to check if anything went wrong.
if err := mock.ExpectationsWereMet(); err != nil {
	t.Errorf("SQL syntax does not match: %s", err)
}
Below is my test code:
func TestGetURL(t *testing.T) {
	//set const answer for this test
	testQuery := `SELECT * FROM "urls" WHERE "urls"."id" = $1`
	id := "1"
	//set up the mock sql connection
	testDB, mock, err := sqlmock.New()
	if err != nil {
		panic("sqlmock.New() occurs an error")
	}
	// uses "gorm.io/driver/postgres" library
	dialector := postgres.New(postgres.Config{
		DSN:                  "sqlmock_db_0",
		DriverName:           "postgres",
		Conn:                 testDB,
		PreferSimpleProtocol: true,
	})
	db, err = gorm.Open(dialector, &gorm.Config{})
	if err != nil {
		panic("Cannot open stub database")
	}
	//mock the db.Find function
	rows := sqlmock.NewRows([]string{"id", "url", "expire_at", "short_url"}).
		AddRow(1, "url", "date", "shorturl")
	//try to match the real SQL syntax against testQuery
	mock.ExpectQuery(regexp.QuoteMeta(testQuery)).WillReturnRows(rows).WithArgs(id)
	//set the values sent into the function
	vars := map[string]string{
		"id": "1",
	}
	//create response writer and request for testing
	mockedWriter := httptest.NewRecorder()
	mockedRequest := httptest.NewRequest("GET", "/{id}", nil)
	mockedRequest = mux.SetURLVars(mockedRequest, vars)
	//call getURL()
	getURL(mockedWriter, mockedRequest)
	//check the result with sqlmock's built-in function
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("SQL syntax does not match: %s", err)
	}
}
And I ran two tests, with args (1, 1) and args (1, 2), and it works fine.

Request created with http.NewRequestWithContext() loses context when passed to middleware

In the program below I have two routers. One listens at localhost:3000 and acts as a public access point; it may also send requests with data to another local address, localhost:8000, where the data is processed. The second router listens at localhost:8000 and handles processing requests for the first router.
Problem
The first router sends a request with a context to the second using the http.NewRequestWithContext() function. A value is added to the context, and the context is attached to the request. But when the request arrives at the second router, it no longer carries the value that was added.
Some things, like error handling, are omitted to avoid posting a wall of code here.
package main

import (
	"bytes"
	"context"
	"net/http"

	"github.com/go-chi/chi"
	"github.com/go-chi/chi/middleware"
)

func main() {
	go func() {
		err := http.ListenAndServe(
			"localhost:3000",
			GetDataAndSolve(),
		)
		if err != nil {
			panic(err)
		}
	}()
	go func() {
		err := http.ListenAndServe( // in GetDataAndSolve() we send requests
			"localhost:8000", // with data for processing
			InternalService(),
		)
		if err != nil {
			panic(err)
		}
	}()
	// interrupt := make(chan os.Signal, 1)
	// signal.Notify(interrupt, syscall.SIGTERM, syscall.SIGINT)
	// <-interrupt // just a cool way to close the program, uncomment if you need it
}

func GetDataAndSolve() http.Handler {
	r := chi.NewRouter()
	r.Use(middleware.Logger)
	r.Get("/tasks/str", func(rw http.ResponseWriter, r *http.Request) {
		// receiving data for processing...
		taskCtx := context.WithValue(r.Context(), "str", "strVar") // the value is stored
		postReq, err := http.NewRequestWithContext(                // in the context, and the
			taskCtx, // context is given to the request
			"POST",
			"http://localhost:8000/tasks/solution",
			bytes.NewBuffer([]byte("something")),
		)
		if err != nil {
			return
		}
		postReq.Header.Set("Content-Type", "application/json") // specifying for the endpoint
		resp, err := http.DefaultClient.Do(postReq)            // what we are sending; running the actual request
		_ = resp
		// pls, proceed to Solver()
		// do stuff with resp
		// also, despite arriving at the middleware without the right context,
		// here resp contains a request with the correct context
	})
	return r
}

func Solver(next http.Handler) http.Handler { // here we end up after sending postReq
	return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		str, _ := r.Context().Value("str").(string)
		if str == "" {
			return // the request arrives without "str" in its context
		}
		ctxWithResult := context.WithValue(r.Context(), "result", mockFunc(r.Context()))
		next.ServeHTTP(rw, r.Clone(ctxWithResult))
	})
}

func InternalService() http.Handler {
	r := chi.NewRouter()
	r.Use(middleware.Logger)
	r.With(Solver).Post("/tasks/solution", emptyHandlerFunc)
	return r
}
Your understanding of context is not correct.
A Context (simplifying to an extent, and in reference to the NewRequestWithContext API) is just an in-memory object with which you can control the lifetime of the request (handling/triggering cancellations).
However, your code is making an HTTP call, which goes over the wire (marshaled) using the HTTP protocol. This protocol doesn't understand Go's context or its values.
In your scenario, both /tasks/str and /tasks/solution run on the same server. But what if they were on different servers, possibly in different languages and application servers as well? The context cannot be sent across.
Since the APIs are within the same server, maybe you can avoid making a full-blown HTTP call and directly invoke the API/method instead. It might turn out to be faster as well.
If you still want to send additional values from the context, you'll have to use other attributes like HTTP headers, params, or the body to carry the required information across, as in the sketch below. This can provide more info on how to serialize data from a context over HTTP.
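For instance, a minimal sketch of carrying the value in an HTTP header instead of the context (the X-Str header name is made up here). On the sending side, inside the /tasks/str handler:
postReq, err := http.NewRequestWithContext(r.Context(), "POST",
	"http://localhost:8000/tasks/solution", bytes.NewBuffer([]byte("something")))
if err != nil {
	return
}
postReq.Header.Set("X-Str", "strVar") // the value travels as a header now
resp, err := http.DefaultClient.Do(postReq) // use resp as before
And on the receiving side, the middleware can restore it into the request context:
func Solver(next http.Handler) http.Handler {
	return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		str := r.Header.Get("X-Str")
		if str == "" {
			return
		}
		ctx := context.WithValue(r.Context(), "str", str)
		next.ServeHTTP(rw, r.Clone(ctx))
	})
}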

pg-go RunInTransaction not rolling back the transaction

I'm trying to roll back a transaction in my unit tests, between scenarios, to keep the database empty and not make my tests dirty. So, I'm trying:
for _, test := range tests {
	db := connect()
	_ = db.RunInTransaction(func() error {
		t.Run(test.name, func(t *testing.T) {
			for _, r := range test.objToAdd {
				err := db.PutObj(&r)
				require.NoError(t, err)
			}
			objReturned, err := db.GetObjsWithFieldEqualsXPTO()
			require.NoError(t, err)
			require.Equal(t, test.queryResultSize, len(objReturned))
		})
		return fmt.Errorf("returning error to clean up the database rolling back the transaction")
	})
}
I was expecting the transaction to roll back at the end of each scenario, so the next iteration of the loop would have an empty database, but when I run it, the data is never rolled back.
I believe I'm doing what the docs suggest: https://pg.uptrace.dev/faq/#how-to-test-mock-database, am I right?
More info: I noticed that my interface implements a layer over RunInTransaction:
func (gs *DB) RunInTransaction(fn func() error) error {
	f := func(*pg.Tx) error { return fn() }
	return gs.pgDB.RunInTransaction(f)
}
I don't know what the problem is yet, but I really suspect it's related to that (because the Tx is encapsulated entirely inside the RunInTransaction implementation).
go-pg uses connection pooling (in common with most Go database packages). This means that when you call a database function (e.g. db.Exec) it will grab a connection from the pool (establishing a new one if needed), run the command, and return the connection to the pool.
When running a transaction you need to run BEGIN, whatever updates etc. you require, followed by COMMIT/ROLLBACK, on a single connection dedicated to the transaction (any commands sent on other connections are not part of the transaction). This is why Begin() (and effectively RunInTransaction) provide you with a pg.Tx; use this to run commands within the transaction.
example_test.go provides an example covering the usage of RunInTransaction:
incrInTx := func(db *pg.DB) error {
	// Transaction is automatically rolled back on error.
	return db.RunInTransaction(func(tx *pg.Tx) error {
		var counter int
		_, err := tx.QueryOne(
			pg.Scan(&counter), `SELECT counter FROM tx_test FOR UPDATE`)
		if err != nil {
			return err
		}
		counter++
		_, err = tx.Exec(`UPDATE tx_test SET counter = ?`, counter)
		return err
	})
}
You will note that this only uses the pg.DB when calling RunInTransaction; all database operations use the transaction tx (a pg.Tx). tx.QueryOne will be run within the transaction; if you ran db.QueryOne then that would be run outside of the transaction.
So RunInTransaction begins a transaction and passes the relevant Tx in as a parameter to the function you provide. You wrap this with:
func (gs *DB) RunInTransaction(fn func() error) error {
	f := func(*pg.Tx) error { return fn() }
	return gs.pgDB.RunInTransaction(f)
}
This effectively ignores the pg.Tx, and you then run commands using other connections (e.g. err := db.PutObj(&r)), i.e. outside of the transaction. To fix this you need to use the transaction (e.g. err := tx.PutObj(&r)); a sketch follows.
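A minimal sketch of that fix, assuming a go-pg version whose RunInTransaction callback takes a *pg.Tx (the change to the wrapper's signature is mine):
func (gs *DB) RunInTransaction(fn func(tx *pg.Tx) error) error {
	return gs.pgDB.RunInTransaction(fn)
}
The test then runs its statements on the tx it receives rather than on db (PutObj would need a tx-based variant; go-pg's own tx.Insert is shown here instead):
_ = db.RunInTransaction(func(tx *pg.Tx) error {
	for _, r := range test.objToAdd {
		require.NoError(t, tx.Insert(&r))
	}
	// return an error so the transaction is rolled back between scenarios
	return fmt.Errorf("returning error to clean up the database")
})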

Golang - Scaling a websocket client for multiple connections to different servers

I have a websocket client. In reality, it is far more complex than the basic code shown below.
I now need to scale this client code to open connections to multiple servers. Ultimately, the tasks that need to be performed when a message is received from the servers are identical.
What would be the best approach to handle this?
As I said above the actual code performed when receiving the message is far more complex than shown in the example.
package main

import (
	"flag"
	"log"
	"net/url"
	"os"
	"os/signal"
	"time"

	"github.com/gorilla/websocket"
)

var addr = flag.String("addr", "localhost:1234", "http service address")

func main() {
	flag.Parse()
	log.SetFlags(0)
	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt)
	// u := url.URL{Scheme: "ws", Host: *addr, Path: "/echo"}
	u := url.URL{Scheme: "ws", Host: *addr, Path: "/"}
	log.Printf("connecting to %s", u.String())
	c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer c.Close()
	done := make(chan struct{})
	go func() {
		defer close(done)
		for {
			_, message, err := c.ReadMessage()
			if err != nil {
				log.Println("read:", err)
				return
			}
			log.Printf("recv: %s", message)
		}
	}()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case t := <-ticker.C:
			err := c.WriteMessage(websocket.TextMessage, []byte(t.String()))
			if err != nil {
				log.Println("write:", err)
				return
			}
		case <-interrupt:
			log.Println("interrupt")
			// Cleanly close the connection by sending a close message and then
			// waiting (with timeout) for the server to close the connection.
			err := c.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
			if err != nil {
				log.Println("write close:", err)
				return
			}
			select {
			case <-done:
			case <-time.After(time.Second):
			}
			return
		}
	}
}
Modify the interrupt handling to close a channel on interrupt. This allows multiple goroutines to wait on the event by waiting for the channel to close.
shutdown := make(chan struct{})
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt)
go func() {
	<-interrupt
	log.Println("interrupt")
	close(shutdown)
}()
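Closing a channel acts as a broadcast: once shutdown is closed, every receive on it returns immediately, so any number of per-connection goroutines can observe the single interrupt event.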
Move the per-connection code to a function. This code is a copy and paste from the question with two changes: the interrupt channel is replaced with the shutdown channel; the function notifies a sync.WaitGroup when the function is done.
func connect(u string, shutdown chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	log.Printf("connecting to %s", u)
	c, _, err := websocket.DefaultDialer.Dial(u, nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer c.Close()
	done := make(chan struct{})
	go func() {
		defer close(done)
		for {
			_, message, err := c.ReadMessage()
			if err != nil {
				log.Println("read:", err)
				return
			}
			log.Printf("recv: %s", message)
		}
	}()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-done:
			return
		case t := <-ticker.C:
			err := c.WriteMessage(websocket.TextMessage, []byte(t.String()))
			if err != nil {
				log.Println("write:", err)
				return
			}
		case <-shutdown:
			// Cleanly close the connection by sending a close message and then
			// waiting (with timeout) for the server to close the connection.
			err := c.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
			if err != nil {
				log.Println("write close:", err)
				return
			}
			select {
			case <-done:
			case <-time.After(time.Second):
			}
			return
		}
	}
}
Declare a sync.WaitGroup in main(). For each websocket endpoint that you want to connect to, increment the WaitGroup and start a goroutine to connect that endpoint. After starting the goroutines, wait on the WaitGroup for the goroutines to complete.
var wg sync.WaitGroup
// endpoints is a []string whose elements are URLs of the endpoints to connect to.
for _, u := range endpoints {
	wg.Add(1)
	go connect(u, shutdown, &wg)
}
wg.Wait()
The code above with an edit to make it run against Gorilla's echo example server is posted on the playground.
Is the communication with each server completely independent of the other servers? If yes, I would go about it in a fashion like this:
In main, create a context with a cancellation function.
Create a WaitGroup in main to track the fired-up goroutines.
For every server, add to the WaitGroup and fire up a new goroutine from the main function, passing the context and WaitGroup references.
main goes into a for/select loop listening for signals; if one arrives, it calls the cancel func and waits on the WaitGroup.
main can also listen on a result chan from the goroutines and maybe print the results itself if the goroutines shouldn't do it directly.
Every goroutine, as we said, has references to the wg, the context, and possibly a chan for returning results. Now the approach splits depending on whether the goroutine must do one thing only, or a sequence of things.
For the first approach, if only one thing is to be done, we follow an approach like the one described here (observe that, to be asynchronous, it would in turn fire up a new goroutine to perform the DoSomething() step and return the result on the channel).
That allows it to accept the cancellation signal at any time; it is up to you to determine how non-blocking you want to be and how promptly you want to respond to cancellation signals. Another benefit of passing a context to the goroutines is that you can call the context-enabled versions of most library functions. For example, if you want your dials to have a timeout of, say, one minute, you would create a new context-with-timeout from the one passed in and call DialContext with that. This allows the dial to stop either on the timeout or when the parent context's cancel func (the one you created in main) is called; see the sketch after this list.
If more things need to be done, I usually prefer to do one thing per goroutine, have it invoke a new one with the next step to be performed (passing all the references down the pipeline), and exit.
This approach scales well with cancellation, makes it possible to stop the pipeline at any step, and easily supports contexts with deadlines for steps that can take too long.
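As a concrete illustration of the DialContext point above, a minimal sketch (the helper name is mine; gorilla/websocket's Dialer does provide DialContext):
// dialWithTimeout dials a websocket endpoint, giving up after one minute
// or as soon as the parent context is cancelled, whichever comes first.
func dialWithTimeout(ctx context.Context, u string) (*websocket.Conn, error) {
	dialCtx, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()
	c, _, err := websocket.DefaultDialer.DialContext(dialCtx, u, nil)
	return c, err
}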

Transaction stays in pg_stat_activity state after execution

I'm quite new to both PostgreSQL and Go. Mainly, I am trying to understand the following:
Why did I need the Commit statement to close the connection, when the other two Close calls didn't do the trick?
Would also appreciate pointers regarding the right/wrong way in which I'm going about working with cursors.
In the following function, I'm using gorp to make a CURSOR, query my Postgres DB row by row and write each row to a writer function:
func(txn *gorp.Transaction,
	q string,
	params []interface{},
	myWriter func([]byte, error)) {

	cursor := "DECLARE GRABDATA NO SCROLL CURSOR FOR " + q
	_, err := txn.Exec(cursor, params...)
	if err != nil {
		myWriter(nil, err)
		return
	}
	rows, err := txn.Query("FETCH ALL in GRABDATA")
	if err != nil {
		myWriter(nil, err)
		return
	}
	defer func() {
		if _, err := txn.Exec("CLOSE GRABDATA"); err != nil {
			fmt.Println("Error while closing cursor:", err)
		}
		if err = rows.Close(); err != nil {
			fmt.Println("Error while closing rows:", err)
		} else {
			fmt.Println("\n\n\n Closed rows without error", "\n\n\n")
		}
		if err = txn.Commit(); err != nil {
			fmt.Println("Error on commit:", err)
		}
	}()
	pointers := make([]interface{}, len(cols))
	container := make([]sql.NullString, len(cols))
	values := make([]string, len(cols))
	for i := range pointers {
		pointers[i] = &container[i]
	}
	for rows.Next() {
		if err = rows.Scan(pointers...); err != nil {
			myWriter(nil, err)
			return
		}
		stringLine := strings.Join(values, ",") + "\n"
		myWriter([]byte(stringLine), nil)
	}
}
In the defer section, I would initially only Close the rows, but then I saw that the session stayed open in pg_stat_activity in the idle in transaction state, with the FETCH ALL in GRABDATA query.
Calling txn.Exec("CLOSE <cursor_name>") didn't help. After that, I had a CLOSE GRABDATA query in the idle in transaction state...
Only when I started calling Commit() did the connection actually close. I thought that maybe I needed to call Commit to execute anything on the transaction, but if that's the case, how come I got the results of my queries without calling it?
You want to end the transaction, not close a declared cursor; commit does that.
You can run multiple queries in one transaction; this is why you see the results without committing.
The pg_stat_activity.state values are: active while you run a statement (e.g. begin transaction; or fetch from a cursor), idle in transaction when you are not currently running statements but the transaction remains open, and lastly idle after you run end or commit, so the transaction is over. After you disconnect, the session ends and there's no row in pg_stat_activity at all...
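To see that lifecycle from the application side, a minimal sketch using database/sql naming (gorp's Transaction wraps the same begin/commit cycle; the query is the one from the question):
tx, err := db.Begin() // BEGIN runs: the session turns active, then idle in transaction
...
rows, err := tx.Query("FETCH ALL in GRABDATA") // between statements: still idle in transaction
...
err = tx.Commit() // COMMIT ends the transaction: the session goes back to idle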