Failed to check if a row with a value exists in Postgres with Golang

I'm trying to add registration to my Telegram bot with Golang and Postgres. When a user writes "register", the bot has to check whether the user's uuid already exists in the DB and, if not, create a row with that uuid.
Here is my function to check if uuid already exists in DB:
func IsUserInDB(uuid int64) (bool, error) {
    var exists bool
    query := fmt.Sprintf("SELECT EXISTS(SELECT 1 FROM users WHERE uuid = %d);", uuid)
    err := Db.QueryRow(query).Scan(&exists)
    return exists, err
}
Here is my function for adding user's uuid to DB:
func AddUserToDB(column string, row interface{}) error {
    query := fmt.Sprintf("INSERT INTO users (%s) VALUES (%v);", column, row)
    _, err := Db.Exec(query)
    return err
}
And the logic for bot:
func (b *Bot) handleMessages(message *tgbotapi.Message) error {
    switch message.Text {
    case "register":
        exists, err := data.IsUserInDB(message.From.ID)
        if err != nil {
            return err
        }
        if !exists {
            err := data.AddUserToDB("uuid", message.From.ID)
            return err
        }
        return nil
    default:
        msg := tgbotapi.NewMessage(message.Chat.ID, "unknown message...")
        _, err := b.bot.Send(msg)
        return err
    }
}
The first time I send "register", the bot successfully adds the user's id to the DB, but the problem appears if I send "register" one more time: IsUserInDB() returns false and the bot adds another row with the same uuid. So I think the problem is with my IsUserInDB() function.

Why not just a unique index on your users table?
CREATE UNIQUE INDEX unq_uuid ON users (uuid);
Then you don't have to check: you just try to insert, and it will return an error if the uuid already exists.
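With the unique index (or a UNIQUE constraint) in place, the check and the insert can even collapse into one statement via ON CONFLICT. A minimal sketch, assuming the Db handle from the question; AddUserIfMissing is just an illustrative name, and it uses a $1 placeholder instead of building the query with fmt.Sprintf:
func AddUserIfMissing(uuid int64) (bool, error) {
    // ON CONFLICT DO NOTHING relies on the unique index on users.uuid:
    // a repeated "register" inserts nothing instead of failing or duplicating.
    res, err := Db.Exec("INSERT INTO users (uuid) VALUES ($1) ON CONFLICT (uuid) DO NOTHING;", uuid)
    if err != nil {
        return false, err
    }
    n, err := res.RowsAffected()
    if err != nil {
        return false, err
    }
    return n == 1, nil // true if the user was newly registered
}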

Related

sql.Scan not returning ErrNoRows error when it should

I have a function GetAccount which is generated by sqlc.
When I call GetAccount(/*unused id*/), an ErrNoRows error should be returned. Instead, I get no error and an Account with default values (zeros and empty strings).
GetAccount implementation:
const getAccount = `-- name: GetAccount :one
SELECT id, owner, balance, currency, created_at
FROM accounts
WHERE id = $1
`

func (q *Queries) GetAccount(ctx context.Context, id int64) (Account, error) {
    row := q.db.QueryRowContext(ctx, getAccount, id)
    var i Account
    err := row.Scan(
        &i.ID,
        &i.Owner,
        &i.Balance,
        &i.Currency,
        &i.CreatedAt,
    )
    return i, err
}
Why am I not getting any error when there are no rows to return?
Edit:
As requested, here is how I am calling GetAccount. It is a Gin request handler.
type getAccountRequest struct {
    ID int64 `uri:"id" binding:"required,min=1"`
}

func (server *Server) getAccount(ctx *gin.Context) {
    var request getAccountRequest
    err := ctx.ShouldBindUri(&request)
    if err != nil {
        ctx.JSON(http.StatusBadRequest, errorResponse(err))
        return
    }
    account, err := server.store.GetAccount(ctx, request.ID) //<-called here
    if err == sql.ErrNoRows {
        ctx.JSON(http.StatusNotFound, errorResponse(err))
        return
    } else if err != nil {
        ctx.JSON(http.StatusInternalServerError, errorResponse(err))
        return
    }
    ctx.JSON(http.StatusOK, account)
}
Edit 2:
For clarity, when I say
An ErrNoRows error should be returned
I state this because of the call to row.Scan which should produce the error.
Documentation:
func (r *Row) Scan(dest ...any) error
Scan copies the columns from the matched row into the values pointed at by dest. See the documentation on Rows.Scan for details. If more than one row matches the query, Scan uses the first row and discards the rest. If no row matches the query, Scan returns ErrNoRows.
You are overwriting the sql error:
account, err := server.store.GetAccount(ctx, request.ID) //<-called here
err = ctx.ShouldBindUri(&request)
if err == sql.ErrNoRows {
You should check the error immediately after the GetAccount call:
account, err := server.store.GetAccount(ctx, request.ID) //<-called here
if err == sql.ErrNoRows {
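As a side note, comparing with errors.Is is slightly more robust than ==, because it also matches a wrapped sql.ErrNoRows. A minimal sketch (requires the errors package):
account, err := server.store.GetAccount(ctx, request.ID)
if errors.Is(err, sql.ErrNoRows) { // check immediately, before err can be overwritten
    ctx.JSON(http.StatusNotFound, errorResponse(err))
    return
}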

Go is querying from the wrong database when using multiple databases with godotenv

I'm trying to query from multiple databases. Each database is connected using the following function:
func connectDB(dbEnv string) *sql.DB {
    // Loading environment variables from local.env file
    err1 := godotenv.Load(dbEnv)
    if err1 != nil {
        log.Fatalf("Some error occurred. Err: %s", err1)
    }
    dialect := os.Getenv("DIALECT")
    host := os.Getenv("HOST")
    dbPort := os.Getenv("DBPORT")
    user := os.Getenv("USER")
    dbName := os.Getenv("NAME")
    password := os.Getenv("PASSWORD")
    // Database connection string
    dbURI := fmt.Sprintf("port=%s host=%s user=%s "+"password=%s dbname=%s sslmode=disable", dbPort, host, user, password, dbName)
    // Create database object
    db, err := sql.Open(dialect, dbURI)
    if err != nil {
        log.Fatal(err)
    }
    return db
}
type order struct {
    OrderID string `json:"orderID"`
    Name    string `json:"name"`
}

type book struct {
    BookID string `json:"bookID"`
    Name   string `json:"name"`
}
func getOrders(db *sql.DB) []order {
    var (
        orderID string
        name    string
    )
    var allRows = []order{}
    query := `
SELECT orderID, name
FROM orders.orders;
`
    // Get rows using the query
    rows, err := db.Query(query)
    if err != nil { // Log if error
        log.Fatal(err)
    }
    defer rows.Close()
    // Add each row into the "allRows" slice
    for rows.Next() {
        err := rows.Scan(&orderID, &name)
        if err != nil {
            log.Fatal(err)
        }
        // Create new order struct with the received data
        row := order{
            OrderID: orderID,
            Name:    name,
        }
        allRows = append(allRows, row)
    }
    // Log if error
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    return allRows
}
func getBooks(db *sql.DB) []book {
    var (
        bookID string
        name   string
    )
    var allRows = []book{}
    query := `
SELECT bookID, name
FROM books.books;
`
    // Get rows using the query
    rows, err := db.Query(query)
    if err != nil { // Log if error
        log.Fatal(err)
    }
    defer rows.Close()
    // Add each row into the "allRows" slice
    for rows.Next() {
        err := rows.Scan(&bookID, &name)
        if err != nil {
            log.Fatal(err)
        }
        // Create new book struct with the received data
        row := book{
            BookID: bookID,
            Name:   name,
        }
        allRows = append(allRows, row)
    }
    // Log if error
    err = rows.Err()
    if err != nil {
        log.Fatal(err)
    }
    return allRows
}
func main() {
    ordersDB := connectDB("ordersDB.env")
    booksDB := connectDB("booksDB.env")
    orders := getOrders(ordersDB)
    books := getBooks(booksDB)
    fmt.Println(len(orders), len(books)) // use the results so the example compiles
}
The issue is that when I use ordersDB first, the program only recognizes the table in ordersDB, and when I use booksDB first, the program only recognizes the table in booksDB.
When I try to query a table in booksDB after using ordersDB, I get a "relation "books.books" does not exist" error. When I try to query a table in ordersDB after using booksDB, I get "relation "orders.orders" does not exist".
Is there a better way to connect to multiple databases?
You are using github.com/joho/godotenv to load the database configuration from the environment. Summarising (and cutting out a lot of detail) what you are doing is:
godotenv.Load("ordersDB.env")
host := os.Getenv("HOST")
// Connect to DB
godotenv.Load("booksDB.env")
host := os.Getenv("HOST")
// Connect to DB 2
However, as stated in the docs, "Existing envs take precedence of envs that are loaded later". This is also stated more clearly here: "It's important to note that it WILL NOT OVERRIDE an env variable that already exists".
So your code will load the first .env file, populate the environment variables, and connect to the database. You will then load the second .env file but, because the environment variables are already set, they will not be changed and you will connect to the same database a second time.
As a workaround you could use Overload. However, it's probably better to reconsider your use of environment variables (and perhaps use different variable names for the second connection).
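A sketch of that idea using godotenv.Read, which returns the file's values as a map instead of writing them into the process environment, so each connection gets its own settings (the function keeps the signature from the question; variable names are illustrative):
func connectDB(dbEnv string) *sql.DB {
    // Read the .env file into a map; nothing is written to the process
    // environment, so the second file cannot be shadowed by the first.
    env, err := godotenv.Read(dbEnv)
    if err != nil {
        log.Fatalf("could not read %s: %s", dbEnv, err)
    }
    dbURI := fmt.Sprintf("port=%s host=%s user=%s password=%s dbname=%s sslmode=disable",
        env["DBPORT"], env["HOST"], env["USER"], env["PASSWORD"], env["NAME"])
    db, err := sql.Open(env["DIALECT"], dbURI)
    if err != nil {
        log.Fatal(err)
    }
    return db
}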

How to avoid duplicate rows while using gorm AutoMigrate

I want to insert data into the database from a CSV file using gorm AutoMigrate, and while inserting I want to avoid duplicate entries. How can I achieve this? Please check the attached code.
type User struct {
    gorm.Model
    ID        int64  `csv:"_" db:"id"`
    FirstName string `csv:"First name" db:"first_name"`
    LastName  string `csv:"Last name" db:"last_name"`
    Emails    string `csv:"Emails" db:"emails"`
}

func main() {
    file, err := os.Open(os.Args[1])
    defer file.Close()
    users := []User{}
    err = gocsv.Unmarshal(file, &users)
    db, err := gorm.Open(postgres.Open("host=xxx.xx.x.x user=database password=password dbname=database port=5432 sslmode=disable"))
    err = db.AutoMigrate(&User{})
    if err != nil {
        panic(err)
    }
    result := db.Create(users)
    if result.Error != nil {
        panic(result.Error)
    }
}
Example: Consider the data below.
First name | Last name | Emails
First      | Name      | first#example.com
Second     | Name      | second#example.com
Third      | Name      |
Forth      | Name      | first#example.com
If we pass the above data, only the first 3 rows should be inserted into the database, i.e. we have to avoid duplicate email entries in the database. Thanks.
Note: if the email is empty then the row should still be inserted into the database.
You have to sanitize "users" after err = gocsv.Unmarshal(file, &users).
Something like:
func sanytize(arr []User) []User {
    users := []User{}
    mail := []string{}
    for _, a := range arr {
        if !contains(mail, a.Emails) {
            users = append(users, a)
        }
        mail = append(mail, a.Emails)
    }
    return users
}

func contains(arr []string, str string) bool {
    for _, a := range arr {
        if a == str {
            return true
        }
    }
    return false
}
....
err = gocsv.Unmarshal(file, &users)
users = sanytize(users)
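A variant of the same idea that uses a map for the duplicate check and, as the note in the question requires, always keeps rows whose Emails field is empty (dedupeByEmail is just an illustrative name):
// dedupeByEmail keeps the first row for each non-empty email
// and keeps every row with an empty Emails field.
func dedupeByEmail(in []User) []User {
    seen := make(map[string]bool)
    out := make([]User, 0, len(in))
    for _, u := range in {
        if u.Emails == "" || !seen[u.Emails] {
            out = append(out, u)
            seen[u.Emails] = true
        }
    }
    return out
}
Call it in place of sanytize after gocsv.Unmarshal.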

Temporary Postgres table gets lost prematurely

I use a temporary table to hold a range of IDs so I can use them in several other queries without adding a long list of IDs to every query.
I'm building this in Go and this is new for me. Creating the temporary table works, fetching the IDs succeeds, and adding those IDs to the temporary table is also successful. But when I use the temporary table I get this error:
pq: relation "temp_id_table" does not exist
This is my code (EDITED: added transaction):
// create context
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

// create database connection
psqlInfo := fmt.Sprintf("host=%s port=%s user=%s "+
    "password=%s dbname=%s sslmode=disable",
    c.Database.Host, c.Database.Port, c.Database.User, c.Database.Password, c.Database.DbName)
db, err := sql.Open("postgres", psqlInfo)
err = db.PingContext(ctx)
tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})

// create temporary table to store ids
_, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE temp_id_table (id int)")

// fetch all articles of set
newrows, err := db.QueryContext(ctx, "SELECT id FROM article WHERE setid = $1", SetId)

var tempid int
var ids []interface{}
for newrows.Next() {
    err := newrows.Scan(&tempid)
    ids = append(ids, tempid)
}

// adding found ids to temporary table so we can use them in other queries
var buffer bytes.Buffer
buffer.WriteString("INSERT INTO temp_id_table (id) VALUES ")
for i := 0; i < len(ids); i++ {
    if i > 0 {
        buffer.WriteString(",")
    }
    buffer.WriteString("($")
    buffer.WriteString(strconv.Itoa(i + 1))
    buffer.WriteString(")")
}
_, err = db.QueryContext(ctx, buffer.String(), ids...)

// fetching article codes
currrows, err := db.QueryContext(ctx, "SELECT code FROM article_code WHERE id IN (SELECT id FROM temp_id_table)")
(I simplified the code and removed all error handling to make the code more readable)
When I change it to a normal table everything works fine. What am I doing wrong?
EDIT 05-06-2019:
I created a simple test program to test new input from the comments below:
func main() {
    var codes []interface{}
    codes = append(codes, 111)
    codes = append(codes, 222)
    codes = append(codes, 333)
    config := config.GetConfig()
    // initialising variables
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel()
    // create database connection
    log.Printf("create database connection")
    db, err := connection.Create(config, ctx)
    defer db.Close()
    if err != nil {
        log.Fatal(err)
    }
    // create transaction
    log.Printf("create transaction")
    tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelReadUncommitted})
    if err != nil {
        log.Fatal(err)
    }
    // create temporary table to store IB codes
    log.Printf("create temporary table to store codes")
    _, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE tmp_codes (code int)")
    if err != nil {
        log.Fatal(err)
    }
    // adding found IB codes to temporary table so we can fetch the current articles
    log.Printf("adding codes to temporary table so we can fetch the current articles")
    _, err = tx.QueryContext(ctx, "INSERT INTO tmp_codes (code) VALUES ($1),($2),($3)", codes...)
    if err != nil {
        log.Fatal(err)
    }
    testcodes, err := tx.QueryContext(ctx, "SELECT * FROM tmp_codes")
    if err != nil {
        log.Fatal(err)
    }
    defer testcodes.Close()
    var testcount int
    for testcodes.Next() {
        testcount++
    }
    log.Printf("%d items in temporary table before commit, %d ibcodes added", testcount, len(codes))
    // close transaction
    log.Printf("commit transaction")
    tx.Commit()
}
The problem is the connection pool. You're not guaranteed to use the same server connection for each query. To guarantee this, you can start a transaction with Begin or BeginTx.
The returned sql.Tx object is guaranteed to use the same connection for its lifetime.
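A sketch of what that looks like for the code in the question: every statement that touches the temporary table goes through the sql.Tx (error handling trimmed, and ctx, db, ids and buffer are the variables from the question):
tx, err := db.BeginTx(ctx, nil)
// the temp table now lives on the single connection owned by the transaction
_, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE temp_id_table (id int)")
// inserts and reads must use the same tx; db.QueryContext may be handed a
// different pooled connection where the temporary table does not exist
_, err = tx.ExecContext(ctx, buffer.String(), ids...)
currrows, err := tx.QueryContext(ctx, "SELECT code FROM article_code WHERE id IN (SELECT id FROM temp_id_table)")
// ... read currrows, close it, then
err = tx.Commit()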
Related:
SQL Server Temp Tables and Connection Pooling

How to return embedded document with ID of

I have a MongoDB collection with an example document like this:
What I want to do (as you can see from the actual code) is to update the role field in members.x.role where members.x.id equals a given ID (the ID is a UUID, so it's unique; this part of the code works without problems), and then return that members.x. The problem is that it always returns the first member instead of the one that has just been updated. I've tried some methods of mgo and found Distinct() to be the closest to my expectations, but it doesn't work the way I want.
My question is: how can I return the embedded member document with the specified ID?
I've already looked at this and this, but it didn't help me.
func (r MongoRepository) UpdateMemberRole(id string, role int8) (*Member, error) {
    memberQuery := &bson.M{"members": &bson.M{"$elemMatch": &bson.M{"id": id}}}
    change := &bson.M{"members.$.role": role}
    err := r.db.C("groups").Update(memberQuery, &bson.M{"$set": &change})
    if err == mgo.ErrNotFound {
        return nil, fmt.Errorf("member with ID '%s' does not exist", id)
    }
    // FIXME: Retrieve this member from query below. THIS ALWAYS RETURNS FIRST MEMBER!!!
    var member []Member
    r.db.C("groups").Find(&bson.M{"members.id": id}).Distinct("members.0", &member)
    return &member[0], nil
}
I found a workaround. It's not strictly a Mongo query that returns the embedded document, but this code is IMO clearer and more understandable than some fancy Mongo query that fetches the whole document anyway.
func (r MongoRepository) UpdateMemberRole(id string, role int8) (*Member, error) {
    change := mgo.Change{
        Update:    bson.M{"$set": bson.M{"members.$.role": role}},
        ReturnNew: true,
    }
    var updatedGroup Group
    _, err := r.db.C("groups").Find(bson.M{"members": bson.M{"$elemMatch": bson.M{"id": id}}}).Apply(change, &updatedGroup)
    if err == mgo.ErrNotFound {
        return nil, fmt.Errorf("member with ID '%s' does not exist", id)
    } else if err != nil {
        return nil, err
    }
    for _, member := range updatedGroup.Members {
        if member.Id == id {
            return &member, nil
        }
    }
    return nil, fmt.Errorf("weird error, Id cannot be found")
}
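If fetching the whole group feels wasteful, a positional projection can narrow the read to the matched element. A sketch under the same types and mgo session as the question, still two round trips (the $set Update from the question, then this read):
// after the Update from the question has succeeded:
var partial struct {
    Members []Member `bson:"members"`
}
err = r.db.C("groups").
    Find(bson.M{"members": bson.M{"$elemMatch": bson.M{"id": id}}}).
    Select(bson.M{"members.$": 1}). // positional projection: only the matched element is returned
    One(&partial)
if err != nil {
    return nil, err
}
if len(partial.Members) == 0 {
    return nil, fmt.Errorf("member with ID '%s' does not exist", id)
}
return &partial.Members[0], nil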