Invalid memory address error when running postgres queries [duplicate] - postgresql

This question already has answers here:
How to use global var across files in a package? (3 answers)
Closed 3 years ago.
I keep getting this error when I run my Go code, which makes queries to my local Postgres database.
Error:
panic serving [::1]:56708: runtime error: invalid memory address or nil pointer dereference
goroutine 23 [running]:
net/http.func·011()
/usr/local/go/src/pkg/net/http/server.go:1100 +0xb7
runtime.panic(0x2ef0a0, 0x4d8ee4)
/usr/local/go/src/pkg/runtime/panic.c:248 +0x18d
database/sql.(*DB).conn(0x0, 0x277a1, 0x0, 0x0)
/usr/local/go/src/pkg/database/sql/sql.go:625 +0x751
database/sql.(*DB).Ping(0x0, 0x0, 0x0)
/usr/local/go/src/pkg/database/sql/sql.go:452 +0x39
main.firstHandler(0x58e9a8, 0xc208052320, 0xc2080284e0)
/Users/Tommy/Documents/gocode/server/server.go:122 +0x35
net/http.HandlerFunc.ServeHTTP(0x3c6be8, 0x58e9a8, 0xc208052320, 0xc2080284e0)
/usr/local/go/src/pkg/net/http/server.go:1235 +0x40
github.com/gorilla/mux.(*Router).ServeHTTP(0xc2080186e0, 0x58e9a8, 0xc208052320, 0xc2080284e0)
/Users/Audrey/gocode/src/github.com/gorilla/mux/mux.go:98 +0x292
net/http.(*ServeMux).ServeHTTP(0xc208022660, 0x58e9a8, 0xc208052320, 0xc2080284e0)
/usr/local/go/src/pkg/net/http/server.go:1511 +0x1a3
net/http.serverHandler.ServeHTTP(0xc208004660, 0x58e9a8, 0xc208052320, 0xc2080284e0)
/usr/local/go/src/pkg/net/http/server.go:1673 +0x19f
net/http.(*conn).serve(0xc208050500)
/usr/local/go/src/pkg/net/http/server.go:1174 +0xa7e
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1721 +0x313
Go:
func firstHandler(w http.ResponseWriter, r *http.Request) {
    err := db.Ping()
    if err != nil {
        log.Fatal(err)
    }
    rows, err := db.Query("SELECT id, created_at, updated_at FROM script WHERE updated_at = $1", 3)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    var created_at, updated_at, id int
    for rows.Next() {
        err := rows.Scan(&id, &created_at, &updated_at)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Fprintf(w, "%d %d %d", id, created_at, updated_at)
    }
}
var r = mux.NewRouter()
var db *sql.DB

func main() {
    db, err := sql.Open("postgres", "user=Tommy host=localhost dbname=dbgo sslmode=verify-full")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    r.HandleFunc("/ping", firstHandler)
    http.Handle("/", r)
    http.ListenAndServe(":8080", nil)
}
Help. What am I doing wrong? I also referred to this: https://gophercasts.io/lessons/4-postgres-basics.

Actually, you declare the connection with:
var db *sql.DB
but you open the connection with:
db, err := sql.Open("postgres", "user=Tommy host=localhost dbname=dbgo sslmode=verify-full")
Note the := (it combines a variable declaration with an assignment). It shadows the global db variable with a local one: the connection is opened but assigned to the local variable, so the global db remains nil.
When the firstHandler function is called, the global's value is still nil, which triggers the panic.
Replace the := with = (and declare the err variable beforehand).
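With err declared separately, the plain assignment populates the global variable:
var r = mux.NewRouter()
var db *sql.DB

func main() {
    var err error
    db, err = sql.Open("postgres", "user=Tommy host=localhost dbname=dbgo sslmode=verify-full")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    r.HandleFunc("/ping", firstHandler)
    http.Handle("/", r)
    http.ListenAndServe(":8080", nil)
}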

Related

Unit Testing Postgres db connection golang

I am expected to have 80% test coverage even when pushing the basic project structure. I am a bit confused about how to write unit tests for the following code, which connects to a Postgres database and pings it for a health check. Can someone help me, please?
var postgres *sql.DB

// ConnectToPostgres func to connect to postgres
func ConnectToPostgres(connStr string) (*sql.DB, error) {
    db, err := sql.Open("postgres", connStr)
    if err != nil {
        log.Println("postgres-client ", err)
        return nil, err
    }
    postgres = db
    return db, nil
}

// PostgresHealthCheck to ping database and check for errors
func PostgresHealthCheck() error {
    if err := postgres.Ping(); err != nil {
        return err
    }
    return nil
}

type PostgresRepo struct {
    db *sql.DB
}

// NewPostgresRepo constructor
func NewPostgresRepo(database *sql.DB) *PostgresRepo {
    return &PostgresRepo{
        db: database,
    }
}
You need to use this: https://github.com/DATA-DOG/go-sqlmock
It's very easy to use. Here is an example where a controller is tested against a mocked SQL connection:
Implementation
func (up UserProvider) GetUsers() ([]models.User, error) {
    var users = make([]models.User, 0, 10)
    rows, err := up.DatabaseProvider.Query("SELECT firstname, lastname, email, age FROM Users;")
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    for rows.Next() {
        var u models.User = models.User{}
        err := rows.Scan(&u.Name, &u.Lastname, &u.Email, &u.Age)
        if err != nil {
            return nil, err
        }
        users = append(users, u)
    }
    if err := rows.Err(); err != nil {
        return nil, err
    }
    return users, nil
}
Test
func TestGetUsersOk(t *testing.T) {
    db, mock := NewMock()
    mock.ExpectQuery("SELECT firstname, lastname, email, age FROM Users;").
        WillReturnRows(sqlmock.NewRows([]string{"firstname", "lastname", "email", "age"}).
            AddRow("pepe", "guerra", "pepe@gmail.com", 34))
    subject := UserProvider{
        DatabaseProvider: repositories.NewMockDBProvider(db, nil),
    }
    resp, err := subject.GetUsers()
    assert.Nil(t, err)
    assert.NotNil(t, resp)
    assert.Equal(t, 1, len(resp))
}

func NewMock() (*sql.DB, sqlmock.Sqlmock) {
    db, mock, err := sqlmock.New()
    if err != nil {
        log.Fatalf("an error '%s' was not expected when opening a stub database connection", err)
    }
    return db, mock
}
I find that writing tests against a live database makes for higher-quality tests. The challenge with Postgres is that there's no good in-memory fake you can substitute in.
What I came up with is standing up a Postgres Docker container and creating temporary databases in it. The PostgresContainer type in the github.com/bitcomplete/sqltestutil package does exactly this:
// Postgres version is "12"
ctx := context.Background()
pg, _ := sqltestutil.StartPostgresContainer(ctx, "12")
defer pg.Shutdown(ctx)
db, err := sql.Open("postgres", pg.ConnectionString())
// ... execute SQL
Per the docs, it's a good idea to set up your tests so that the container is only started once, as it can take a few seconds to start up (more if the image needs to be downloaded). The docs also suggest some approaches for mitigating that problem.
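For example, a TestMain can start the container once for the whole package (a sketch; the testDB variable and the error handling are illustrative, not from the package docs):
var testDB *sql.DB

func TestMain(m *testing.M) {
    ctx := context.Background()
    pg, err := sqltestutil.StartPostgresContainer(ctx, "12")
    if err != nil {
        log.Fatal(err)
    }
    testDB, err = sql.Open("postgres", pg.ConnectionString())
    if err != nil {
        pg.Shutdown(ctx)
        log.Fatal(err)
    }
    code := m.Run() // every test in the package shares testDB
    // defer does not run after os.Exit, so shut down explicitly
    pg.Shutdown(ctx)
    os.Exit(code)
}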

pgx in a goroutine reporting connection busy

My application uses pgx to run database queries in goroutines. However, I am getting connection busy errors.
func writeDb(dbconn *pgx.Conn) {
    sqlWritePost := `QUERY_HERE`
    _, err := dbconn.Exec(context.Background(), sqlWritePost, v.Url, v.Content, v.StrippedContent, v.Posthash)
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
}

func main() {
    var dbconn *pgx.Conn
    dbconn, err := pgx.Connect(context.Background(), os.Getenv("database_string"))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    ...
    go writeDb(dbconn)
    ...
}
I am receiving conn busy errors. Is there a way to structure my code to avoid this issue?
Thanks!
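A common way out, assuming the elided code shares that single connection across goroutines: a *pgx.Conn is not safe for concurrent use, so concurrent callers see conn busy, whereas a pool hands each call its own connection. A minimal sketch using pgxpool (pgx v4 assumed; QUERY_HERE stands in for the elided query and its arguments):
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/jackc/pgx/v4/pgxpool"
)

func writeDb(pool *pgxpool.Pool) {
    // Exec borrows a connection from the pool and returns it when done,
    // so concurrent goroutines no longer contend for one *pgx.Conn.
    _, err := pool.Exec(context.Background(), `QUERY_HERE`)
    if err != nil {
        fmt.Println(err)
    }
}

func main() {
    pool, err := pgxpool.Connect(context.Background(), os.Getenv("database_string"))
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    defer pool.Close()
    go writeDb(pool)
    // ... wait for the goroutine, as in the elided code
}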

Mongodb doesn't retrieve all documents in a collection with 2 million records using cursor

I have a collection of 2,000,000 records:
> db.events.count();
2000000
and I use the golang mongodb client to connect to the database:
package main

import (
    "context"
    "encoding/json"
    "log"
    "time"

    "github.com/Shopify/sarama"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27888").SetAuth(options.Credential{
        Username: "mongoadmin",
        Password: "secret",
    }))
    if err != nil {
        panic(err)
    }
    defer func() {
        if err = client.Disconnect(ctx); err != nil {
            panic(err)
        }
    }()
    collection := client.Database("test").Collection("events")
    var bs int32 = 10000
    var b = true
    cur, err := collection.Find(context.Background(), bson.D{}, &options.FindOptions{
        BatchSize: &bs, NoCursorTimeout: &b})
    if err != nil {
        log.Fatal(err)
    }
    defer cur.Close(ctx)
    // runningtime and asyncProducer are defined elsewhere in the program
    s, n := runningtime("retrive db from mongo and publish to kafka")
    count := 0
    for cur.Next(ctx) {
        var result bson.M
        err := cur.Decode(&result)
        if err != nil {
            log.Fatal(err)
        }
        bytes, err := json.Marshal(result)
        if err != nil {
            log.Fatal(err)
        }
        count++
        msg := &sarama.ProducerMessage{
            Topic: "hello",
            // Key: sarama.StringEncoder("aKey"),
            Value: sarama.ByteEncoder(bytes),
        }
        asyncProducer.Input() <- msg
    }
    // ... (the rest of main is elided)
}
But the program retrieves only about 600,000 records instead of 2,000,000 every time I run it.
$ go run main.go
done
count = 605426
nErrors = 0
2020/09/18 11:23:43 End: retrive db from mongo and publish to kafka took 10.080603336s
I don't know why. I want to retrieve all 2,000,000 records. Thanks for any help.
Your loop fetching the results may end early because you are using the same ctx context for iterating over the results, and that context has a 10-second timeout.
This means that if retrieving and processing the 2 million records (including connecting) takes more than 10 seconds, the context is cancelled and the cursor reports an error.
Note that setting FindOptions.NoCursorTimeout to true only prevents the cursor from timing out due to inactivity; it does not override the timeout of the context being used.
Use another context for executing the query and iterating over the results, one that does not have a timeout, e.g. context.Background().
Also note that you can use the helper methods to construct the find options, which makes it as simple and elegant as this:
options.Find().SetBatchSize(10000).SetNoCursorTimeout(true)
So the working code:
ctx2 := context.Background()
cur, err := collection.Find(ctx2, bson.D{},
    options.Find().SetBatchSize(10000).SetNoCursorTimeout(true))
// ...
for cur.Next(ctx2) {
    // ...
}
// Also check the error after the loop:
if err := cur.Err(); err != nil {
    log.Printf("Iterating over results failed: %v", err)
}

Temporary Postgres table gets lost prematurely

I use a temporary table to hold a range of IDs so I can use them in several other queries without adding a long list of IDs to every query.
I'm building this in Go, which is new for me. Creating the temporary table works, fetching the IDs succeeds, and adding those IDs to the temporary table is also successful. But when I use the temporary table I get this error:
pq: relation "temp_id_table" does not exist
This is my code (EDITED: added transaction):
// create context
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel()

// create database connection
psqlInfo := fmt.Sprintf("host=%s port=%s user=%s "+
    "password=%s dbname=%s sslmode=disable",
    c.Database.Host, c.Database.Port, c.Database.User, c.Database.Password, c.Database.DbName)
db, err := sql.Open("postgres", psqlInfo)
err = db.PingContext(ctx)
tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})

// create temporary table to store ids
_, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE temp_id_table (id int)")

// fetch all articles of set
newrows, err := db.QueryContext(ctx, "SELECT id FROM article WHERE setid = $1", SetId)
var tempid int
var ids []interface{}
for newrows.Next() {
    err := newrows.Scan(&tempid)
    ids = append(ids, tempid)
}

// adding found ids to temporary table so we can use it in other queries
var buffer bytes.Buffer
buffer.WriteString("INSERT INTO temp_id_table (id) VALUES ")
for i := 0; i < len(ids); i++ {
    if i > 0 {
        buffer.WriteString(",")
    }
    buffer.WriteString("($")
    buffer.WriteString(strconv.Itoa(i + 1))
    buffer.WriteString(")")
}
_, err = db.QueryContext(ctx, buffer.String(), ids...)

// fetching article codes
currrows, err := db.QueryContext(ctx, "SELECT code FROM article_code WHERE id IN (SELECT id FROM temp_id_table)")
(I simplified the code and removed all error handling to make the code more readable)
When I change it to a normal table everything works fine. What am I doing wrong?
EDIT 05-06-2019:
I created a simple test program to test new input from the comments below:
func main() {
    var codes []interface{}
    codes = append(codes, 111)
    codes = append(codes, 222)
    codes = append(codes, 333)
    config := config.GetConfig()

    // initialising variables
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel()

    // create database connection
    log.Printf("create database connection")
    db, err := connection.Create(config, ctx)
    defer db.Close()
    if err != nil {
        log.Fatal(err)
    }

    // create transaction
    log.Printf("create transaction")
    tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelReadUncommitted})
    if err != nil {
        log.Fatal(err)
    }

    // create temporary table to store IB codes
    log.Printf("create temporary table to store codes")
    _, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE tmp_codes (code int)")
    if err != nil {
        log.Fatal(err)
    }

    // adding found IB codes to temporary table so we can fetch the current articles
    log.Printf("adding codes to temporary table so we can fetch the current articles")
    _, err = tx.QueryContext(ctx, "INSERT INTO tmp_codes (code) VALUES ($1),($2),($3)", codes...)
    if err != nil {
        log.Fatal(err)
    }

    testcodes, err := tx.QueryContext(ctx, "SELECT * FROM tmp_codes")
    if err != nil {
        log.Fatal(err)
    }
    defer testcodes.Close()
    var testcount int
    for testcodes.Next() {
        testcount++
    }
    log.Printf(fmt.Sprintf("%d items in temporary table before commit, %d ibcodes added", testcount, len(codes)))

    // close transaction
    log.Printf("commit transaction")
    tx.Commit()
}
The problem is the connection pool. You're not guaranteed to use the same server connection for each query. To guarantee this, you can start a transaction with Begin or BeginTx.
The returned sql.Tx object is guaranteed to use the same connection for its lifetime.
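Applied to the first snippet above, where the temporary table is created via tx but the inserts and the final select go through db (and so may land on a different pooled connection where the table does not exist): route every statement through the same *sql.Tx. A minimal sketch, using ExecContext for the statements that return no rows:
tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
if err != nil {
    log.Fatal(err)
}
defer tx.Rollback() // a no-op once Commit has succeeded

// every statement goes through tx, so the temp table is always visible
_, err = tx.ExecContext(ctx, "CREATE TEMPORARY TABLE temp_id_table (id int)")
newrows, err := tx.QueryContext(ctx, "SELECT id FROM article WHERE setid = $1", SetId)
// ... scan the ids and build the INSERT as before ...
_, err = tx.ExecContext(ctx, buffer.String(), ids...)
currrows, err := tx.QueryContext(ctx, "SELECT code FROM article_code WHERE id IN (SELECT id FROM temp_id_table)")

if err := tx.Commit(); err != nil {
    log.Fatal(err)
}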
Related:
SQL Server Temp Tables and Connection Pooling

Golang postgres driver panics when using transactions

So... this was apparently an issue a few years ago; at least there are several bug reports with what I think is the same issue. In a method I have 2 transactions, one after the other (so they are not concurrent, they are sequential), and the second transaction always fails.
This is roughly the code:
import (
    "database/sql"

    _ "github.com/lib/pq"
)

var db *sql.DB

func InitStorage() {
    var err error
    db, err = sql.Open(os.Getenv("DB_CONNECTION_DRIVER"), os.Getenv("DB_CONNECTION_STRING"))
    if err != nil {
        glog.Error(err)
    }
    if db == nil {
        glog.Fatal(db)
    }
    tx, err := db.Begin()
    if err != nil {
        glog.Error(err)
    } else {
        glog.Info("ERROR ON BEGIN IS NiLL!!!")
    }
    _, err = tx.Query(`INSERT INTO urls(url_hash, url) VALUES($1, $2)`, `asdadsaaa`, `1313`)
    if err != nil {
        glog.Error(err)
        tx.Rollback()
    }
    tx.Commit()

    glog.Info("SECOND TRANSACTION!!!")
    tx1, err := db.Begin()
    if err != nil {
        glog.Error("SECOND TRANSACTION HAS ERR: ", err)
    } else {
        glog.Info("ERROR ON BEGIN IS NiLL!!!")
    }
    _, err = tx1.Query(`INSERT INTO urls(url_hash, url) VALUES($1, $2)`, `asdadsaaa`, `1313`)
    if err != nil {
        glog.Error(err)
        tx1.Rollback()
    }
    tx1.Commit()
}
And this is always the output (I have cut off the end, since it doesn't add useful information):
I0324 00:24:26.192256 11580 storage_rdb.go:33] ERROR ON BEGIN IS NiLL!!!
I0324 00:24:26.195134 11580 storage_rdb.go:42] SECOND TRANSACTION!!!
E0324 00:24:26.195197 11580 storage_rdb.go:45] SECOND TRANSACTION HAS ERRpq: unexpected transaction status idle in transaction
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x5460cb]
goroutine 1 [running]:
panic(0x7c41c0, 0xc820010150)
/usr/local/go/src/runtime/panic.go:464 +0x3e6
database/sql.(*Tx).Query(0x0, 0x8b2e00, 0x2e, 0xc82011fd50, 0x2, 0x2, 0xc82001a540, 0x0, 0x0)
/usr/local/go/src/database/sql/sql.go:1404 +0x3b
main.InitStorage()
I assume I am doing something wrong (since the bug was reported back in 2013), but I can't figure out what it is. Please help.
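Two things stand out in the code above, offered as a hedged reading since the thread ends here: Query returns a *sql.Rows that is never closed, which keeps the transaction's connection busy so it cannot be committed cleanly, and Exec is the call for an INSERT that returns no rows; also, when the second Begin fails, the code only logs the error and then continues with a nil tx1, which matches the nil pointer dereference in the panic. A minimal sketch of the repaired pattern:
tx, err := db.Begin()
if err != nil {
    glog.Error(err)
    return // tx is nil here; calling methods on it would panic
}
// Exec returns no rows, so the connection is left in a clean state
// and Commit can succeed.
if _, err = tx.Exec(`INSERT INTO urls(url_hash, url) VALUES($1, $2)`, `asdadsaaa`, `1313`); err != nil {
    glog.Error(err)
    tx.Rollback()
    return
}
if err = tx.Commit(); err != nil {
    glog.Error(err)
}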