Understanding database/sql - postgresql

I am playing around with the database/sql package, trying to see how it works and to understand what happens if you don't call rows.Close(), etc.
So I wrote the following piece of code for inserting a model into the database:
func (db Database) Insert(m model.Model) (int32, error) {
	var id int32
	// Get insert query
	q, values := model.InsertQuery(m)
	rows, err := db.Conn.Query(q, values...)
	if err != nil {
		return id, err
	}
	for rows.Next() {
		err = rows.Scan(&id)
		if err != nil {
			return id, err
		}
	}
	return id, nil
}
I don't call rows.Close() on purpose to see the consequences. When setting up the database connection I set some properties such as:
conn.SetMaxOpenConns(50)
conn.SetMaxIdleConns(2)
conn.SetConnMaxLifetime(time.Second*60)
Then I attempt to insert 10000 records:
for i := 0; i < 10000; i++ {
	lander := models.Lander{
		// ...struct fields with random data on each iteration
	}
	go func() {
		Insert(&lander)
	}()
}
(It lacks error checking, context timeouts, etc., but for the purpose of playing around it gets the job done.) When I execute the piece of code above I expect to see at least some errors regarding database connections; however, the data gets inserted without problems (all 10000 records). When I check Stats() I see the following:
{MaxOpenConnections:50 OpenConnections:1 InUse:0 Idle:1 WaitCount:9951 WaitDuration:3h9m33.896466243s MaxIdleClosed:48 MaxLifetimeClosed:2}
Since I didn't call rows.Close(), I expected to see more OpenConnections or more InUse connections, because I am never releasing the connection (I might be wrong, but that is the purpose of Close(): to release a connection and return it to the pool).
So my question is simply: what do these Stats() mean, and why are there no errors whatsoever when doing the insertion? Also, why aren't there more OpenConnections or InUse ones, and what are the real consequences of not calling Close()?

According to the docs for Rows:
If Next is called and returns false and there are no further result sets, the Rows are closed automatically and it will suffice to check the result of Err.
Since you iterate over all of the results, the result set is closed automatically and the connection is released back to the pool, which is why OpenConnections and InUse stay low even without an explicit Close().
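That said, relying on full iteration is fragile: if Scan fails and you return early, the rows stay open and the connection is held until it is garbage-collected. A minimal defensive sketch, reusing the names from the question's Insert:

rows, err := db.Conn.Query(q, values...)
if err != nil {
	return id, err
}
defer rows.Close() // releases the connection even on an early return
for rows.Next() {
	if err := rows.Scan(&id); err != nil {
		return id, err
	}
}
return id, rows.Err() // surfaces any error that ended the iteration

rows.Close() is documented as safe to call even after the automatic close, so the defer costs nothing in the happy path.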

Concurrent index creation fails when done in a Go program

I am trying to create some concurrent indexes using the command CREATE INDEX CONCURRENTLY ... through migrations in my golang project. But whenever I run that particular migration, it just takes infinitely long and never executes.
I went and checked the logs of the Postgres DB and found this:
The weird thing is that it is only in migrations that I am unable to create concurrent indexes; if I just write code directly in my main.go to execute the query, it executes successfully, and I can even create an index concurrently from golang's DB query console.
Here is my migration package code:
func NewGorm(d *gorm.DB) *GORM {
	return &GORM{db: d}
}

func (g *GORM) Run(m Migrator, app, name, methods string, logger log.Logger) error {
	g.txn = g.db.Begin()
	ds := &datastore.DataStore{ORM: g.db}

	var err error
	if methods == UP {
		err = m.Up(ds, logger)
	} else {
		err = m.Down(ds, logger)
	}
	if err != nil {
		g.rollBack()
		return &errors.Response{Reason: "error encountered in running the migration", Detail: err}
	}
	g.commit()
	return nil
}
I know it has something to do with transactions, but I also tried disabling them by passing the flag SkipDefaultTransaction: true when initializing the connection with GORM; that didn't work either and the results were the same.
How can I create concurrent indexes in migrations using GORM?
Let me try to help you with the issue. First, you should update the question with all of the source code so we can understand better what's going on and help you more accurately. Anyway, I'll try to help with what we have so far. I was able to achieve your goal with the following code:
package main

import (
	"fmt"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

type Post struct {
	Id    int
	Title string `gorm:"index:idx_concurrent,option:CONCURRENTLY"`
}

type GORM struct {
	db *gorm.DB
}

func main() {
	dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable"
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	db.AutoMigrate(&Post{})

	m := db.Migrator()
	if idxFound := m.HasIndex(&Post{}, "idx_concurrent"); !idxFound {
		fmt.Println("idx missing")
		return
	}
	fmt.Println("idx already present")
}
To achieve what you need, it should be enough to add a gorm annotation next to the field you want to index (e.g. Title). In this annotation you can specify that the index should be created concurrently (option:CONCURRENTLY) to avoid locking the table. Then I used the gorm.Migrator to check for the index's existence.
If you already have the table, you can simply add the annotation to the model struct definition and gorm will take care of it when you run the AutoMigrate method.
Thanks to this you should be able to cover all of the possible scenarios you might face.
Let me know if this solves your issue or if you need something else, thanks!
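One caveat worth adding: Postgres refuses to run CREATE INDEX CONCURRENTLY inside a transaction block, and the Run method in the question wraps every migration in g.db.Begin(). If the statement has to live in a migration, a sketch like the following can issue the raw SQL on the root *gorm.DB, which gorm executes outside the migration's transaction (the table and index names here are invented for illustration):

// Hypothetical example: run the statement outside the transactional runner,
// on the root *gorm.DB handle (named db here) rather than on g.txn.
if err := db.Exec(
	"CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_posts_title ON posts (title)",
).Error; err != nil {
	return err
}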

Golang channel in select not receiving

I am currently working on a small script where I use channels, select, and goroutines, and I really don't understand why it doesn't run the way I think it should.
I have 2 channels that all my goroutines listen to.
I pass the channels to each goroutine, where a select must choose between the 2 depending on which one delivers data first.
The problem is that no goroutine ever falls into the second case. I can receive 100 jobs one after the other and I see everything in the log: the first case does what is requested and then sends the job into the second channel (when the condition holds), but after that I don't get any more logs.
I just don't understand why...
If someone can enlighten me :)
package main

func main() {
	wg := new(sync.WaitGroup)
	in := make(chan *Job)
	out := make(chan *Job)
	results := make(chan *Job)

	for i := 0; i < 50; i++ {
		go work(wg, in, out, results)
	}
	wg.Wait()

	// Finally we collect all the results of the work.
	for elem := range results {
		fmt.Println(elem)
	}
}

func Work(wg *sync.WaitGroup, in chan *Job, out chan *Job, results chan *Job) {
	wg.Add(1)
	defer wg.Done()
	for {
		select {
		case job := <-in:
			ticker := time.Tick(10 * time.Second)
			select {
			case <-ticker:
				// DO stuff
				if condition is true {
					out <- job
				}
			case <-time.After(5 * time.Minute):
				fmt.Println("Timeout")
			}
		case job := <-out:
			ticker := time.Tick(1 * time.Minute)
			select {
			case <-ticker:
				// DO stuff
				if condition is true {
					results <- job
				}
			case <-quitOut:
				fmt.Println("Job completed")
			}
		}
	}
}
I create a number of workers that listen on 2 channels and send the final results to a 3rd.
Each worker does something with the received job; if the job satisfies a given condition, it passes it to the next channel, and if it satisfies another condition, it passes it into the results channel.
So in my head I had a pipeline like this, for 5 workers for example: 3 jobs arrive on the IN channel and 3 workers take them immediately; if the 3 jobs satisfy the condition they are sent to the OUT channel, where 2 workers pick them up right away and the 3rd job is picked up by one of the first 3 workers...
Now I hope you have a better understanding of my first code. But in my code, I never get to the second case.
I think your solution might be a bit over-complicated. Here is a simplified version; bear in mind that there are numerous possible implementations. A good article to read:
https://medium.com/smsjunk/handling-1-million-requests-per-minute-with-golang-f70ac505fcaa
Or, even better, straight from Go by Example:
https://gobyexample.com/worker-pools (which I think may be what you were aiming for)
Anyway, the code below serves as a different type of example. There are a few ways to go about solving this problem.
package main

import (
	"context"
	"log"
	"os"
	"sync"
	"time"
)

type worker struct {
	wg   *sync.WaitGroup
	in   chan job
	quit context.Context
}

type job struct {
	message int
}

func main() {
	numberOfJobs := 50

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	w := worker{
		wg:   &sync.WaitGroup{},
		in:   make(chan job),
		quit: ctx,
	}

	for i := 0; i < numberOfJobs; i++ {
		go func(i int) {
			w.in <- job{message: i}
		}(i)
	}

	counter := 0
	for {
		select {
		case j := <-w.in:
			counter++
			log.Printf("Received job %+v\n", j)
			// DO SOMETHING WITH THE RECEIVED JOB
			// WORKING ON IT
			x := j.message * j.message
			log.Printf("job processed, result %d", x)
		case <-w.quit.Done():
			log.Printf("Received quit, timeout reached. Number of jobs queued: %d, Number of jobs complete: %d\n", numberOfJobs, counter)
			os.Exit(0)
		default:
			// TODO
		}
	}
}
Your quitIn and quitOut channels are basically useless: you create them and try to receive from them, which you cannot, as nobody can write to these channels because nobody even knows about their existence. I cannot say more, because I do not understand what the code is supposed to do.
Because your function is "Work" and you are calling "work".
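Putting those points together, here is a minimal sketch of the staged pipeline described in the question, under assumptions of mine (a placeholder condition, and two separate worker pools instead of one function listening on both channels). Each stage closes its output channel so the downstream range terminates, and wg.Add is called before the goroutine starts rather than inside it:

package main

import (
	"fmt"
	"sync"
)

type Job struct{ ID int }

// stage passes on every job that meets the (placeholder) condition.
func stage(wg *sync.WaitGroup, in <-chan *Job, out chan<- *Job) {
	defer wg.Done()
	for job := range in { // exits when in is closed and drained
		if job.ID%2 == 0 { // placeholder for "condition is true"
			out <- job
		}
	}
}

func main() {
	in := make(chan *Job)
	out := make(chan *Job)
	results := make(chan *Job)

	var wg1, wg2 sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg1.Add(1) // Add before go, so Wait cannot pass too early
		go stage(&wg1, in, out)
	}
	for i := 0; i < 2; i++ {
		wg2.Add(1)
		go stage(&wg2, out, results)
	}

	go func() {
		for i := 0; i < 10; i++ {
			in <- &Job{ID: i}
		}
		close(in) // stage-1 workers drain and exit
		wg1.Wait()
		close(out) // stage-2 workers drain and exit
		wg2.Wait()
		close(results) // lets the range below terminate
	}()

	for job := range results {
		fmt.Println("result:", job.ID)
	}
}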

Go: Create an io.Writer interface for logging to a MongoDB database

Using go (golang):
Is there a way to create a logger that outputs to a database?
Or more precisely, can I implement some kind of io.Writer interface that I can pass as the first argument to log.New()?
E.g. (dbLogger would receive the output of the log and write it to the database):
logger := log.New(dbLogger, "dbLog: ", log.Lshortfile)
logger.Print("This message will be stored in the database")
I would assume that I should just create my own database logging function, but I was curious to see if there is already a way of doing this using the existing tools in the language.
For some context, I'm using mgo.v2 to handle my mongodb database, but I don't see any io.Writer interfaces there other than in GridFS, which I think solves a different problem.
I'm also still getting my head around the language, so I may have used some terms above incorrectly. Any corrections are very welcome.
This is easily doable, because the log.Logger type guarantees that each log message is delivered to the destination io.Writer with a single Writer.Write() call:
Each logging operation makes a single call to the Writer's Write method. A Logger can be used simultaneously from multiple goroutines; it guarantees to serialize access to the Writer.
So basically you just need to create a type which implements io.Writer, and whose Write() method creates a new document with the contents of the byte slice, and saves it in the MongoDB.
Here's a simple implementation which does that:
type MongoWriter struct {
	sess *mgo.Session
}

func (mw *MongoWriter) Write(p []byte) (n int, err error) {
	c := mw.sess.DB("").C("log")
	err = c.Insert(bson.M{
		"created": time.Now(),
		"msg":     string(p),
	})
	if err != nil {
		return
	}
	return len(p), nil
}
Using it:
sess := ... // Get a MongoDB session
mw := &MongoWriter{sess}
log.SetOutput(mw)
// Now the default Logger of the log package uses our MongoWriter.
// Generate a log message that will be inserted into MongoDB:
log.Println("I'm the first log message.")
log.Println("I'm multi-line,\nbut will still be in a single log message.")
Obviously if you're using another log.Logger instance, set the MongoWriter to that, e.g.:
mylogger := log.New(mw, "", 0)
mylogger.Println("Custom logger")
Note that the log messages end with a newline, as log.Logger appends one even if the log message itself does not end with a newline. If you don't want to log the ending newline, you may simply cut it, e.g.:
func (mw *MongoWriter) Write(p []byte) (n int, err error) {
	origLen := len(p)
	if len(p) > 0 && p[len(p)-1] == '\n' {
		p = p[:len(p)-1] // Cut terminating newline
	}
	c := mw.sess.DB("").C("log")
	// ... the rest is the same
	return origLen, nil // Must return original length (we resliced p)
}

Are golang net.UDPConn and net.TCPConn thread-safe? Can I read or write a single UDPConn object from multiple threads?

1. Can we call send from one thread and recv from another on the same net.UDPConn or net.TCPConn object?
2. Can we call multiple sends in parallel from different threads on the same net.UDPConn or net.TCPConn object?
I am unable to find good documentation on this either. Is the golang socket API thread-safe?
I find it hard to test whether it is thread-safe.
Any pointers in this direction would be helpful.
My test code is below:
My test code is below:
package main

import (
	"fmt"
	"net"
	"sync"
)

func udpServer() {
	// Create the listener
	conn, err := net.ListenUDP("udp", &net.UDPAddr{
		IP:   net.IPv4(0, 0, 0, 0),
		Port: 8080,
	})
	if err != nil {
		fmt.Println("listen fail", err)
		return
	}
	defer conn.Close()

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(socket *net.UDPConn) {
			defer wg.Done()
			for {
				// Read data
				data := make([]byte, 4096)
				read, remoteAddr, err := socket.ReadFromUDP(data)
				if err != nil {
					fmt.Println("read data fail!", err)
					continue
				}
				fmt.Println(read, remoteAddr)
				fmt.Printf("%s\n\n", data)

				// Send data
				senddata := []byte("hello client!")
				_, err = socket.WriteToUDP(senddata, remoteAddr)
				if err != nil {
					fmt.Println("send data fail!", err)
					return
				}
			}
		}(conn)
	}
	wg.Wait()
}

func main() {
	udpServer()
}
Is this test code OK?
The documentation for net.Conn says:
Multiple goroutines may invoke methods on a Conn simultaneously.
Multiple goroutines may invoke methods on a Conn simultaneously.
My interpretation of the doc above is that nothing catastrophic will happen if you invoke Read and Write on a net.Conn from multiple goroutines, and that calls to Write on a net.Conn from multiple goroutines will be serialised, so that the bytes from 2 separate calls to Write will not be interleaved as they are written to the network.
The problem with the code you have presented is that there is no guarantee that Write will write the whole byte slice provided to it in one go. You are ignoring the indication of how many bytes have been written.
_, err = socket.WriteToUDP(senddata, remoteAddr)
So to make sure you write everything you would need to loop and call Write till all the senddata is sent. But net.Conn only ensures that data from a single call to Write is not interleaved. Given that you could be sending a single block of data with multiple calls to write there is no guarantee that the single block of data would reach its destination intact.
So for example 3 "hello client!" messages could arrive in the following form.
"hellohellohello client! client! client!"
So if you want reliable message writing on a net.Conn from multiple goroutines, you will need to synchronise those goroutines to ensure that single messages are written intact.
If I wanted to do this, as a first attempt I would have a single goroutine reading from one or many message channels and writing to the net.Conn, and then multiple goroutines could write to those message channels.
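A minimal sketch of that first attempt, under assumptions of mine (a TCP dial to an example address, and producers that send whole messages over a channel): a single goroutine owns the connection and loops until each message is fully written, so messages from different producers can never interleave.

package main

import (
	"fmt"
	"log"
	"net"
	"sync"
)

func main() {
	conn, err := net.Dial("tcp", "localhost:8080") // example address
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	msgs := make(chan []byte)
	done := make(chan struct{})

	// The single writer goroutine: the only place conn.Write is called.
	go func() {
		defer close(done)
		for m := range msgs {
			for len(m) > 0 { // loop until the whole message is written
				n, err := conn.Write(m)
				if err != nil {
					log.Println("write:", err)
					for range msgs { // drain so producers don't block forever
					}
					return
				}
				m = m[n:]
			}
		}
	}()

	// Producers never touch conn; they send complete messages instead.
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			msgs <- []byte(fmt.Sprintf("hello client %d!\n", i))
		}(i)
	}
	wg.Wait()
	close(msgs) // no more messages; the writer goroutine exits
	<-done
}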

Find out result of inserting object using mgo in Go

I would like to ask if there is a way to find out whether an insertion was successful when inserting a new object using collection.Insert(object), with a single operation.
What I mean is that I don't want to send another query to the DB to find out if the record is there or not. I need one single atomic operation (insert -> result (isSuccessful), pseudo-code).
The Insert method returns an error value that represents its success or failure. You need to set the safe mode of the session first to enable this behaviour.
session.SetSafe(&mgo.Safe{}) // <-- first set safe mode!

c := session.DB("test").C("people")
err = c.Insert(&Person{"Ale", "+55 53 8116 9639"})
if err != nil { // <-- then check the error after insert!
	fmt.Printf("There was an error: %v", err)
} else {
	fmt.Print("Success!")
}