Temporary Postgres table gets lost prematurely

I use a temporary table to hold a range of IDs so I can use them in several other queries without adding a long list of IDs to every query.
I'm building this in Go, which is new for me. Creating the temporary table works, fetching the IDs succeeds, and adding those IDs to the temporary table is also successful. But when I use the temporary table I get this error:
pq: relation "temp_id_table" does not exist
This is my code (EDITED: added transaction):
// create context
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

// create database connection
psqlInfo := fmt.Sprintf("host=%s port=%s user=%s "+
	"password=%s dbname=%s sslmode=disable",
	c.Database.Host, c.Database.Port, c.Database.User, c.Database.Password, c.Database.DbName)
db, err := sql.Open("postgres", psqlInfo)
err = db.PingContext(ctx)
tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})

// create temporary table to store ids
_, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE temp_id_table (id int)")

// fetch all articles of set
newrows, err := db.QueryContext(ctx, "SELECT id FROM article WHERE setid = $1", SetId)

var tempid int
var ids []interface{}
for newrows.Next() {
	err := newrows.Scan(&tempid)
	ids = append(ids, tempid)
}

// adding found ids to temporary table so we can use it in other queries
var buffer bytes.Buffer
buffer.WriteString("INSERT INTO temp_id_table (id) VALUES ")
for i := 0; i < len(ids); i++ {
	if i > 0 {
		buffer.WriteString(",")
	}
	buffer.WriteString("($")
	buffer.WriteString(strconv.Itoa(i + 1))
	buffer.WriteString(")")
}
_, err = db.QueryContext(ctx, buffer.String(), ids...)

// fetching article codes
currrows, err := db.QueryContext(ctx, "SELECT code FROM article_code WHERE id IN (SELECT id FROM temp_id_table)")
(I simplified the code and removed all error handling to make the code more readable)
When I change it to a normal table, everything works fine. What am I doing wrong?
EDIT 05-06-2019:
I created a simple test program to test new input from the comments below:
func main() {
	var codes []interface{}
	codes = append(codes, 111)
	codes = append(codes, 222)
	codes = append(codes, 333)

	config := config.GetConfig()

	// initialising variables
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	// create database connection
	log.Printf("create database connection")
	db, err := connection.Create(config, ctx)
	defer db.Close()
	if err != nil {
		log.Fatal(err)
	}

	// create transaction
	log.Printf("create transaction")
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelReadUncommitted})
	if err != nil {
		log.Fatal(err)
	}

	// create temporary table to store IB codes
	log.Printf("create temporary table to store codes")
	_, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE tmp_codes (code int)")
	if err != nil {
		log.Fatal(err)
	}

	// adding found IB codes to temporary table so we can fetch the current articles
	log.Printf("adding codes to temporary table so we can fetch the current articles")
	_, err = tx.QueryContext(ctx, "INSERT INTO tmp_codes (code) VALUES ($1),($2),($3)", codes...)
	if err != nil {
		log.Fatal(err)
	}

	testcodes, err := tx.QueryContext(ctx, "SELECT * FROM tmp_codes")
	if err != nil {
		log.Fatal(err)
	}
	defer testcodes.Close()

	var testcount int
	for testcodes.Next() {
		testcount++
	}
	log.Printf("%d items in temporary table before commit, %d ibcodes added", testcount, len(codes))

	// close transaction
	log.Printf("commit transaction")
	tx.Commit()
}

The problem is the connection pool. You're not guaranteed to use the same server connection for each query. To guarantee this, you can start a transaction with Begin or BeginTx.
The returned sql.Tx object is guaranteed to use the same connection for its lifetime.
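For illustration, a minimal sketch of that fix, assuming the tables from the question (the fetchCodes name, the string code column, and folding the ID fetch into a single INSERT ... SELECT are illustrative choices): every statement that needs to see the temporary table runs on the tx, never on db.

func fetchCodes(ctx context.Context, db *sql.DB, setID int) ([]string, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return nil, err
	}
	defer tx.Rollback() // no-op once Commit has succeeded

	// The temporary table lives on the connection held by tx;
	// ON COMMIT DROP cleans it up automatically.
	if _, err := tx.ExecContext(ctx,
		"CREATE TEMPORARY TABLE temp_id_table (id int) ON COMMIT DROP"); err != nil {
		return nil, err
	}

	// Fill the temp table server-side, avoiding the round trip
	// and the hand-built VALUES list.
	if _, err := tx.ExecContext(ctx,
		"INSERT INTO temp_id_table SELECT id FROM article WHERE setid = $1", setID); err != nil {
		return nil, err
	}

	rows, err := tx.QueryContext(ctx,
		"SELECT code FROM article_code WHERE id IN (SELECT id FROM temp_id_table)")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var codes []string
	for rows.Next() {
		var code string
		if err := rows.Scan(&code); err != nil {
			return nil, err
		}
		codes = append(codes, code)
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	return codes, tx.Commit()
}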
Related:
SQL Server Temp Tables and Connection Pooling

Related

failed to check if row with value exists In Postgres with Golang

I'm trying to create registration in my Telegram bot with Golang and Postgres. When a user writes "register", the bot has to check whether the user's uuid already exists in the DB, and if not, create a row with that uuid.
Here is my function to check if uuid already exists in DB:
func IsUserInDB(uuid int64) (bool, error) {
	var exists bool
	query := fmt.Sprintf("SELECT EXISTS(SELECT 1 FROM users WHERE uuid = %d);", uuid)
	err := Db.QueryRow(query).Scan(&exists)
	return exists, err
}
Here is my function for adding user's uuid to DB:
func AddUserToDB(column string, row interface{}) error {
	query := fmt.Sprintf("INSERT INTO users (%s) VALUES (%v);", column, row)
	_, err := Db.Exec(query)
	return err
}
And the logic for bot:
func (b *Bot) handleMessages(message *tgbotapi.Message) error {
	switch message.Text {
	case "register":
		exists, err := data.IsUserInDB(message.From.ID)
		if err != nil {
			return err
		}
		if !exists {
			err := data.AddUserToDB("uuid", message.From.ID)
			return err
		}
		return nil
	default:
		msg := tgbotapi.NewMessage(message.Chat.ID, "unknown message...")
		_, err := b.bot.Send(msg)
		return err
	}
}
The first time I send "register", the bot successfully adds the user's id to the DB, but the problem happens when I send "register" one more time: IsUserInDB() returns false and the bot adds another row with the same uuid. So I think the problem is with my IsUserInDB() function.
Why not just a unique index on your users table?
CREATE UNIQUE INDEX unq_uuid ON users (uuid);
Then you don't have to check; you just try the insert, and it returns an error if the uuid already exists.
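As a sketch (assuming the Db handle and users table from the question, and the unique index above), the insert can also be made idempotent with ON CONFLICT, and it is worth switching to a parameterized query at the same time:

// RegisterUser inserts the uuid only if it is not already present.
// ON CONFLICT (uuid) DO NOTHING relies on the unique index suggested above.
func RegisterUser(uuid int64) error {
	_, err := Db.Exec(
		"INSERT INTO users (uuid) VALUES ($1) ON CONFLICT (uuid) DO NOTHING;",
		uuid,
	)
	return err
}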

Go is querying from the wrong database when using multiple databases with godotenv

I'm trying to query from multiple databases. Each database is connected using the following function:
func connectDB(dbEnv string) *sql.DB {
	// Loading environment variables from local.env file
	err1 := godotenv.Load(dbEnv)
	if err1 != nil {
		log.Fatalf("Some error occurred. Err: %s", err1)
	}

	dialect := os.Getenv("DIALECT")
	host := os.Getenv("HOST")
	dbPort := os.Getenv("DBPORT")
	user := os.Getenv("USER")
	dbName := os.Getenv("NAME")
	password := os.Getenv("PASSWORD")

	// Database connection string
	dbURI := fmt.Sprintf("port=%s host=%s user=%s password=%s dbname=%s sslmode=disable", dbPort, host, user, password, dbName)

	// Create database object
	db, err := sql.Open(dialect, dbURI)
	if err != nil {
		log.Fatal(err)
	}
	return db
}
type order struct {
	OrderID string `json:"orderID"`
	Name    string `json:"name"`
}
type book struct {
	BookID string `json:"bookID"`
	Name   string `json:"name"`
}
func getOrders(db *sql.DB) []order {
	var (
		orderID string
		name    string
	)
	var allRows = []order{}
	query := `
		SELECT orderID, name
		FROM orders.orders;
	`
	// Get rows using the query
	rows, err := db.Query(query)
	if err != nil { // Log if error
		log.Fatal(err)
	}
	defer rows.Close()

	// Add each row into the "allRows" slice
	for rows.Next() {
		err := rows.Scan(&orderID, &name)
		if err != nil {
			log.Fatal(err)
		}
		// Create new order struct with the received data
		row := order{
			OrderID: orderID,
			Name:    name,
		}
		allRows = append(allRows, row)
	}

	// Log if error
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	return allRows
}
func getBooks(db *sql.DB) []book {
	var (
		bookID string
		name   string
	)
	var allRows = []book{}
	query := `
		SELECT bookID, name
		FROM books.books;
	`
	// Get rows using the query
	rows, err := db.Query(query)
	if err != nil { // Log if error
		log.Fatal(err)
	}
	defer rows.Close()

	// Add each row into the "allRows" slice
	for rows.Next() {
		err := rows.Scan(&bookID, &name)
		if err != nil {
			log.Fatal(err)
		}
		// Create new book struct with the received data
		row := book{
			BookID: bookID,
			Name:   name,
		}
		allRows = append(allRows, row)
	}

	// Log if error
	err = rows.Err()
	if err != nil {
		log.Fatal(err)
	}
	return allRows
}
func main() {
	ordersDB := connectDB("ordersDB.env")
	booksDB := connectDB("booksDB.env")

	orders := getOrders(ordersDB)
	books := getBooks(booksDB)
}
The issue is that when I use ordersDB first, the program only recognizes the table in ordersDB. And when I use booksDB first, the program only recognizes the table in booksDB.
When I try to query a table in booksDB after using ordersDB, it gives me a "relation "books.books" does not exist" error. When I try to query a table in ordersDB after using booksDB, it gives "relation "orders.orders" does not exist".
Is there a better way to connect to multiple databases?
You are using github.com/joho/godotenv to load the database configuration from the environment. Summarising (and cutting out a lot of detail) what you are doing is:
godotenv.Load("ordersDB.env")
host := os.Getenv("HOST")
// Connect to DB 1

godotenv.Load("booksDB.env")
host = os.Getenv("HOST")
// Connect to DB 2
However, as stated in the docs, "Existing envs take precedence of envs that are loaded later". The project README puts it even more clearly: "It's important to note that it WILL NOT OVERRIDE an env variable that already exists".
So your code loads the first .env file, populates the environment variables, and connects to the database. You then load the second .env file but, because the environment variables are already set, they are not changed, and you connect to the same database a second time.
As a workaround you could use Overload. However, it's probably better to reconsider your use of environment variables (and perhaps use different variable names for the second connection).
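A minimal sketch of the Overload workaround, keeping the same connectDB shape as in the question; the only functional change is godotenv.Overload, which does overwrite variables that are already set:

func connectDB(dbEnv string) *sql.DB {
	// Overload, unlike Load, overwrites environment variables that
	// are already set, so each .env file takes full effect.
	if err := godotenv.Overload(dbEnv); err != nil {
		log.Fatalf("loading %s: %s", dbEnv, err)
	}

	dbURI := fmt.Sprintf("port=%s host=%s user=%s password=%s dbname=%s sslmode=disable",
		os.Getenv("DBPORT"), os.Getenv("HOST"), os.Getenv("USER"),
		os.Getenv("PASSWORD"), os.Getenv("NAME"))

	db, err := sql.Open(os.Getenv("DIALECT"), dbURI)
	if err != nil {
		log.Fatal(err)
	}
	return db
}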

How could I create a new struct from an unpredictable db.Query()?

I'm using SELECT * in a db.Query() to return columns from a table. Typically, I would Scan() the rows into a pre-declared struct for further manipulation, but in this case the table columns change frequently, so I'm not able to use a declared struct as part of my Scan().
I've been struggling to figure out how I might dynamically build a struct based on the column results of the db.Query(), which I could subsequently use for Scan(). I've read a little about reflect, but I'm struggling to determine whether it is right for my use case or whether I should think about something else.
Any pointers would be greatly appreciated.
You can get the column names from the resulting rowset and prepare a slice of scan targets.
Example (https://go.dev/play/p/ilYmEIWBG5S):
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/DATA-DOG/go-sqlmock"
)

func main() {
	// mock db
	db, mock, err := sqlmock.New()
	if err != nil {
		log.Fatal(err)
	}
	columns := []string{"id", "status"}
	mock.ExpectQuery("SELECT \\* FROM table").
		WillReturnRows(sqlmock.NewRows(columns).AddRow(1, "ok"))

	// actual code
	rows, err := db.Query("SELECT * FROM table")
	if err != nil {
		log.Fatal(err)
	}
	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}

	data := make([]interface{}, len(cols))
	strs := make([]sql.NullString, len(cols))
	for i := range data {
		data[i] = &strs[i]
	}

	for rows.Next() {
		if err := rows.Scan(data...); err != nil {
			log.Fatal(err)
		}
		for i, d := range data {
			fmt.Printf("%s = %+v\n", cols[i], d)
		}
	}
}
This example reads all columns into strings. To detect column types, one can use the rows.ColumnTypes method.
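For illustration, a minimal sketch of choosing scan targets per column type instead of the all-strings setup above; the type names here are PostgreSQL-style and may differ between drivers:

colTypes, err := rows.ColumnTypes()
if err != nil {
	log.Fatal(err)
}
data := make([]interface{}, len(colTypes))
for i, ct := range colTypes {
	// DatabaseTypeName returns the driver's name for the column type,
	// e.g. "INT4" or "VARCHAR" with a PostgreSQL driver.
	switch ct.DatabaseTypeName() {
	case "INT4", "INT8":
		data[i] = new(sql.NullInt64)
	case "BOOL":
		data[i] = new(sql.NullBool)
	default:
		data[i] = new(sql.NullString)
	}
}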

postgres does not save all data after committing

In my Golang project, which uses gorm as the ORM and Postgres as the database, in some situations when I begin a transaction to change three tables and then commit, only one of the tables changes; the data in the other two tables does not change.
Any idea how this might happen? You can see an example below:
var o *gorm.DB // obtained elsewhere

tx := o.Begin()

invoice.Number = 1
if err := tx.Save(&invoice).Error; err != nil {
	tx.Rollback()
	return err
}

receipt.Ref = "1331"
if err := tx.Save(&receipt).Error; err != nil {
	tx.Rollback()
	return err
}

payment.Status = "succeed"
if err := tx.Save(&payment).Error; err != nil {
	tx.Rollback()
	return err
}

if err := tx.Commit().Error; err != nil {
	tx.Rollback()
	return err
}
Only the payment data changes, and I'm not getting any error.
Apparently you are mistakenly using save points. In PostgreSQL we can have nested transactions of a sort: defining save points splits the transaction into parts. I am not a Go programmer and my primary language is not Go, but my guess is that the problem is tx.Save, which creates a save point rather than saving the data to the database. A save point starts a new section of the transaction, and thus only the last table commits.
If you are familiar with Node.js, any async callback receives an error as its first argument; Go follows a similar norm of returning errors as values that must be checked.
https://medium.com/rungo/error-handling-in-go-f0125de052f0
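As a side note, gorm (v2) also provides a closure-based Transaction helper that commits or rolls back for you; a minimal sketch, assuming the invoice, receipt, and payment values from the question:

err := o.Transaction(func(tx *gorm.DB) error {
	// Any error returned from this closure rolls the whole
	// transaction back; returning nil commits it.
	if err := tx.Save(&invoice).Error; err != nil {
		return err
	}
	if err := tx.Save(&receipt).Error; err != nil {
		return err
	}
	return tx.Save(&payment).Error
})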

Goroutine opening a new connection to database after each request (sqlx) and ticker

Let's consider the following goroutine:
func main() {
	...
	go dbGoRoutine()
	...
}
And the func:
func dbGoRoutine() {
	db, err := sqlx.Connect("postgres", GetPSQLInfo())
	if err != nil {
		panic(err)
	}
	defer db.Close()

	ticker := time.NewTicker(10 * time.Second)
	for range ticker.C {
		_, err := db.Queryx("SELECT * FROM table")
		if err != nil {
			// handle
		}
	}
}
Each time the function iterates on the ticker, it opens a new Cloud SQL connection:
[service... cloudsql-proxy] 2019/11/08 17:05:05 New connection for "location:exemple-db"
I can't figure out why it opens a new connection each time, since the sqlx.Connect is not in the for loop.
This issue is due to how the Query functions in the sql package work. They return Rows, which, per the documentation, "is the result of a query. Its cursor starts before the first row of the result set."
Each unclosed Rows value keeps its cursor open and holds on to a connection, so the pool has to open a fresh connection for the next query.
Since you never read the results here, try using Exec() instead (or always Close() the returned rows).
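A minimal sketch of the fix under that explanation, keeping the question's dbGoRoutine shape: close the rows once you have read them, so the connection goes back to the pool (or switch to Exec() when you don't need the results at all):

func dbGoRoutine() {
	db, err := sqlx.Connect("postgres", GetPSQLInfo())
	if err != nil {
		panic(err)
	}
	defer db.Close()

	ticker := time.NewTicker(10 * time.Second)
	for range ticker.C {
		rows, err := db.Queryx("SELECT * FROM table")
		if err != nil {
			continue // handle the error
		}
		// ... scan rows here ...
		rows.Close() // releases the connection back to the pool
	}
}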