Get error code number from postgres in Go - postgresql

I'm simply unable to retrieve the error code when I get an error from postgres.
In the test of my program I know I'll get the following error:
pq: duplicate key value violates unique constraint "associations_pkey"
Looking in the postgres docs, this is most likely pq error code 23505.
I need that number in my Go program so that I can distinguish between different types of errors and respond to the end user in a helpful way.
However, I can't seem to get hold of the error code in Go, only the error message. My code is as follows:
stmt, _ := DB.Prepare("INSERT INTO table (column_1) VALUES ($1)")
_, err = stmt.Exec("12324354")
if err != nil {
    log.Println("Failed to stmt.Exec while trying to insert new association")
    log.Println(err.Error())
    fmt.Println(err.Code()) // does not compile: the error interface has no Code method
} else {
    Render.JSON(w, 200, "New row was created successfully")
}

You need to type assert the error to the type *pq.Error:
pqErr := err.(*pq.Error)
log.Println(pqErr.Code)

This is covered in the documentation. As you see, you can extract it in this way:
if err, ok := err.(*pq.Error); ok {
    fmt.Println(err.Code)
}
Don't forget to remove the underscore from your import _ "github.com/lib/pq" — you need to reference the package by name for the type assertion. As you can see, err carries a lot of information about the error (not only Code but many other fields).
Note that Code is of type ErrorCode, not string, so when comparing it against a string value you will need a conversion.
https://godoc.org/github.com/lib/pq#Error

Related

Concurrent index creation fails when done in a Go program

I am trying to create some concurrent indexes using the command CREATE INDEX CONCURRENTLY ... through migrations in my golang project. But whenever I run that particular migration, it just hangs indefinitely and never completes.
I went and checked the logs of the Postgres DB and found this:
The weird thing is that this happens only in migrations: if I write the code to execute the query directly in my main.go it succeeds, and I can even create an index concurrently from the DB query console.
Here is my migration package code:
func NewGorm(d *gorm.DB) *GORM {
    return &GORM{db: d}
}

func (g *GORM) Run(m Migrator, app, name, methods string, logger log.Logger) error {
    g.txn = g.db.Begin()
    ds := &datastore.DataStore{ORM: g.db}

    var err error
    if methods == UP {
        err = m.Up(ds, logger)
    } else {
        err = m.Down(ds, logger)
    }
    if err != nil {
        g.rollBack()
        return &errors.Response{Reason: "error encountered in running the migration", Detail: err}
    }
    g.commit()
    return nil
}
I know it has something to do with transactions, but I also tried disabling them by passing SkipDefaultTransaction: true when initializing the GORM connection; that didn't work either and the results were the same.
How can I create concurrent indexes in migrations using GORM?
Let me try to help you with the issue. First, you should update the question with all of the source code so we can better understand what's going on and help you more accurately. Anyway, I'll try to help with what we have so far. I was able to achieve your goal with the following code:
package main

import (
    "fmt"

    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type Post struct {
    Id    int
    Title string `gorm:"index:idx_concurrent,option:CONCURRENTLY"`
}

type GORM struct {
    db *gorm.DB
}

func main() {
    dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic(err)
    }

    db.AutoMigrate(&Post{})

    m := db.Migrator()
    if idxFound := m.HasIndex(&Post{}, "idx_concurrent"); !idxFound {
        fmt.Println("idx missing")
        return
    }
    fmt.Println("idx already present")
}
To achieve what you need, it should be enough to add a gorm annotation next to the field where you want the index (e.g. Title). In this annotation you can specify that the index should be created concurrently (the CONCURRENTLY option) to avoid locking the table. Then I used the gorm.Migrator to check for the index's existence.
If you already have the table, you can simply add the annotation to the model struct definition and gorm will take care of it when you run the AutoMigrate method.
Thanks to this you should be able to cover all of the possible scenarios you might face.
Let me know if this solves your issue or if you need something else, thanks!

Understanding database/sql

I am playing around with the database/sql package trying to see how it works and understand what would happen if you don't call rows.Close() etc.
So I wrote the following piece of code for inserting a model to database:
func (db Database) Insert(m model.Model) (int32, error) {
    var id int32
    quotedTableName := m.TableName(true)
    // Get insert query
    q, values := model.InsertQuery(m)
    rows, err := db.Conn.Query(q, values...)
    if err != nil {
        return id, err
    }
    for rows.Next() {
        err = rows.Scan(&id)
        if err != nil {
            return id, err
        }
    }
    return id, nil
}
I don't call rows.Close() on purpose to see the consequences. When setting up the database connection I set some properties such as:
conn.SetMaxOpenConns(50)
conn.SetMaxIdleConns(2)
conn.SetConnMaxLifetime(time.Second*60)
Then I attempt to insert 10000 records:
for i := 0; i < 10000; i++ {
    lander := models.Lander{
        // ...struct fields with random data on each iteration
    }
    go func() {
        Insert(&lander)
    }()
}
(It lacks error checking, context timeouts etc., but for the purpose of playing around it gets the job done). When I execute the piece of code above I expect to see at least some errors regarding database connections; however, the data gets inserted without problems (all 10000 records). When I check the Stats() I see the following:
{MaxOpenConnections:50 OpenConnections:1 InUse:0 Idle:1 WaitCount:9951 WaitDuration:3h9m33.896466243s MaxIdleClosed:48 MaxLifetimeClosed:2}
Since I didn't call rows.Close() I expected to see more OpenConnections or more InUse connections, because I am never releasing the connection (I might be wrong, but that is the purpose of Close(): to release a connection and return it to the pool).
So my question is simply: what do these Stats() mean, and why are there no errors whatsoever during the insertion? Also, why aren't there more OpenConnections or InUse ones, and what are the real consequences of not calling Close()?
According to the docs for Rows:
If Next is called and returns false and there are no further result sets, the Rows are closed automatically and it will suffice to check the result of Err.
Since you iterate all the results, the result set is closed.

Go Mock postgresql errors

As discussed in this answer, I have written code for checking a unique key violation:
if err, ok := err.(*pq.Error); ok {
    if err.Code.Name() == "unique_violation" {
        fail(w, http.StatusBadRequest, 0, "Item already exists")
        return
    }
}
For writing unit-testcases, I need to mock this error. I have written the mock for the error like this:
return pq.Error{Code: "unique_violation"}
But this does not match the check in my code. How do I mock the pq.Error?
As noted in the Godoc, ErrorCode is a five-character error code. err.Code.Name() returns the human-friendly name of the error, but the error itself is represented, and thus must be constructed, by the error code, which in this case is 23505.

Go: Create io.Writer interface for logging to mongodb database

Using go (golang):
Is there a way to create a logger that outputs to a database?
Or more precisely, can I implement some kind of io.Writer interface that I can pass as the first argument to log.New()?
EG: (dbLogger would receive the output of the log and write it to the database)
logger := log.New(dbLogger, "dbLog: ", log.Lshortfile)
logger.Print("This message will be stored in the database")
I would assume that I should just create my own database logging function, but I was curious to see if there is already a way of doing this using the existing tools in the language.
For some context, I'm using mgo.v2 to handle my mongodb database, but I don't see any io.Writer interfaces there other than in GridFS, which I think solves a different problem.
I'm also still getting my head around the language, so I may have used some terms above incorrectly. Any corrections are very welcome.
This is easily doable, because the log.Logger type guarantees that each log message is delivered to the destination io.Writer with a single Writer.Write() call:
Each logging operation makes a single call to the Writer's Write method. A Logger can be used simultaneously from multiple goroutines; it guarantees to serialize access to the Writer.
So basically you just need to create a type which implements io.Writer, and whose Write() method creates a new document with the contents of the byte slice, and saves it in the MongoDB.
Here's a simple implementation which does that:
type MongoWriter struct {
    sess *mgo.Session
}

func (mw *MongoWriter) Write(p []byte) (n int, err error) {
    c := mw.sess.DB("").C("log")
    err = c.Insert(bson.M{
        "created": time.Now(),
        "msg":     string(p),
    })
    if err != nil {
        return
    }
    return len(p), nil
}
Using it:
sess := ... // Get a MongoDB session
mw := &MongoWriter{sess}
log.SetOutput(mw)
// Now the default Logger of the log package uses our MongoWriter.
// Generate a log message that will be inserted into MongoDB:
log.Println("I'm the first log message.")
log.Println("I'm multi-line,\nbut will still be in a single log message.")
Obviously if you're using another log.Logger instance, set the MongoWriter to that, e.g.:
mylogger := log.New(mw, "", 0)
mylogger.Println("Custom logger")
Note that the log messages end with newline as log.Logger appends it even if the log message itself does not end with newline. If you don't want to log the ending newline, you may simply cut it, e.g.:
func (mw *MongoWriter) Write(p []byte) (n int, err error) {
    origLen := len(p)
    if len(p) > 0 && p[len(p)-1] == '\n' {
        p = p[:len(p)-1] // Cut terminating newline
    }
    c := mw.sess.DB("").C("log")
    // ... the rest is the same
    return origLen, nil // Must return original length (we resliced p)
}

Find out result of inserting object using mgo in Go

I would like to ask if there is a way to find out whether an insertion was successful when inserting a new object using collection.Insert(object), with a single operation.
What I mean is that I don't want to send another query to the db to find out if the record is there or not. I need one single atomic operation (insert -> result (isSuccessful) - pseudo code).
The Insert method returns an error value that represents its success or failure. You need to set the safe mode of the session first to enable this behaviour.
session.SetSafe(&mgo.Safe{}) // <-- first set safe mode!
c := session.DB("test").C("people")
err = c.Insert(&Person{"Ale", "+55 53 8116 9639"})
if err != nil { // <-- then check error after insert!
    fmt.Printf("There was an error: %v", err)
} else {
    fmt.Print("Success!")
}