I am slightly confused by the output I'm receiving from my Postgres database when querying it from Go.
Since I am very new to this, I have a hard time even forming the right question for this problem, so I'll just leave a code block here, along with the output I'm receiving and what I expected to happen. I hope this makes it more understandable.
The connection to the Postgres DB seems to work fine:
rows, err := db.Query("SELECT title FROM blogs;")
fmt.Println("output", rows)
However, this is the output I am receiving.
output &{0xc4200ea180 0x4c0e20 0xc42009a3c0 0x4b4f90 <nil> {{0 0} 0 0 0 0} false <nil> []}
As I said, I am new to Postgres and Go, and I have no idea what I am dealing with here.
I was expecting my entire table to be returned in a somewhat readable format.
It does not come back in a "readable" format; why would it?
Query returns an *sql.Rows value that you can use to iterate through the rows that matched the query.
Adapting the example in the docs to your case, and assuming your title field is a VARCHAR, something like this should work for you:
rows, err := db.Query("SELECT title FROM blogs;")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    var title string
    if err := rows.Scan(&title); err != nil {
        log.Fatal(err)
    }
    fmt.Println(title)
}
if err := rows.Err(); err != nil {
    log.Fatal(err)
}
Struggling with unmarshalling data in Golang from Mongo, maybe because I am new to this. I just started learning Golang with MongoDB.
Tried with map[string]interface{} to avoid any struct-related errors:
var data map[string]interface{}
filter := bson.M{"profile.username": username}
singleResult := u.getCollection(client).FindOne(u.ctx, filter)
err := singleResult.Decode(data)
This fails to unmarshal with the error cannot Decode to nil value.
Tried with the exact struct type too:
var result *models.UserData
filter := bson.M{"profile.username": username}
singleResult := u.getCollection(client).FindOne(u.ctx, filter)
err := singleResult.Decode(result)
Fails with the same error, cannot Decode to nil value.
Tried to find all with the exact struct type:
var result []models.UserData
cursor, _ := u.getCollection(client).Find(u.ctx, bson.M{})
err := cursor.All(u.ctx, &result)
Works perfectly as expected
Tried to find all with []map[string]interface{}:
var data []map[string]interface{}
cursor, _ := u.getCollection(client).Find(u.ctx, bson.M{})
err := cursor.All(u.ctx, &data)
Works perfectly as expected
Now I thought maybe I was not finding the data in Mongo at all, but then:
filter := bson.M{"profile.username": username}
singleResult := u.getCollection(client).FindOne(u.ctx, filter)
raw, _ := singleResult.DecodeBytes()
log.Print("\n\n" + raw.String() + "\n\n")
This prints the data as expected. One thing I noticed, though: all non-string values are formatted like {"$numberLong":"1"}. I still don't know whether that is correct or the cause of the issue.
In your first 2 examples that fail, the values passed to Decode() are both nil:
// data == nil
var data map[string]interface{}
// ...
// result == nil
var result *models.UserData
Try it like this:
var result = &models.UserData{} // init the pointer with a block of valid allocated memory
// ...
err := singleResult.Decode(result)
In order for Decode() to write the document(s) into the passed value, it must be a (non-nil) pointer. Passing any value creates a copy, and if you pass a non-pointer, only the copy can be modified. If you pass a pointer, a copy of the pointer is still made, but Decode() will modify the pointed value, not the pointer.
In your first 2 examples that fail, you pass a non-pointer (or a nil pointer):
err := singleResult.Decode(result)
Modify it to pass a (non-nil) pointer:
err := singleResult.Decode(&result)
Your last 2 examples work because you're already passing (non-nil) pointers:
err := cursor.All(u.ctx, &result)
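Putting both fixes together, a minimal sketch using the names from the question (u.getCollection, u.ctx, filter, and models.UserData are taken from your snippets):
// Map case: pass a pointer to the map; Decode allocates the map for you.
var data map[string]interface{}
if err := u.getCollection(client).FindOne(u.ctx, filter).Decode(&data); err != nil {
    log.Fatal(err)
}
// Struct case: declare a value (not a nil pointer) and pass its address.
var result models.UserData
if err := u.getCollection(client).FindOne(u.ctx, filter).Decode(&result); err != nil {
    log.Fatal(err)
}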
In my Golang project, which uses GORM as the ORM and Postgres as the database, in some situations when I begin a transaction that changes three tables and then commit, only one of the tables changes; the data in the other two tables does not change.
Any idea how this might happen?
You can see an example below:
tx := o.Begin() // o is a *gorm.DB

invoice.Number = 1
if err := tx.Save(&invoice).Error; err != nil {
    tx.Rollback()
    return err
}

receipt.Ref = "1331"
if err := tx.Save(&receipt).Error; err != nil {
    tx.Rollback()
    return err
}

payment.Status = "succeed"
if err := tx.Save(&payment).Error; err != nil {
    tx.Rollback()
    return err
}

if err := tx.Commit().Error; err != nil {
    tx.Rollback()
    return err
}
Only the payment data changed, and I'm not getting any error.
Apparently you are mistakenly using save points! In PostgreSQL we can have nested transactions; that is, defining save points splits the transaction into parts. I am not a Golang programmer and my primary language is not Go, but my guess is that the problem is tx.Save, which makes a save point and does not save the data into the database. A save point starts a new section of the transaction, and thus only the last table's changes are committed.
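For reference, explicit save points in GORM look like the sketch below (this is GORM v2's SavePoint/RollbackTo API; the question's code never calls it directly, so whether save points are actually involved depends on the GORM version and configuration):
tx := o.Begin()
tx.SavePoint("sp1")  // mark a point inside the open transaction
tx.Save(&invoice)
tx.RollbackTo("sp1") // undo everything after sp1; the transaction stays open
tx.Commit()          // commit whatever was not rolled back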
If you are familiar with Node.js, then any async callback receives an error as its first argument. In Go, we follow a similar norm: functions return an error value, and the caller checks it explicitly.
https://medium.com/rungo/error-handling-in-go-f0125de052f0
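A minimal illustration of the convention (doSomething is a hypothetical function returning a value and an error):
result, err := doSomething()
if err != nil {
    return err // handle or propagate; never silently ignore it
}
// use result only after the error check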
I have a small Go program which uses a PostgreSQL db. In it there is a query which can return no rows, and the code I'm using to deal with this isn't working correctly.
// Get the karma value for nick from the database.
func getKarma(nick string, db *sql.DB) string {
    var karma int
    err := db.QueryRow("SELECT SUM(delta) FROM karma WHERE nick = $1", nick).Scan(&karma)
    var karmaStr string
    switch {
    case err == sql.ErrNoRows:
        karmaStr = fmt.Sprintf("%s has no karma.", nick)
    case err != nil:
        log.Fatal(err)
    default:
        karmaStr = fmt.Sprintf("Karma for %s is %d.", nick, karma)
    }
    return karmaStr
}
This logic is taken directly from the Go documentation. When there are no rows corresponding to nick, the following error occurs:
2016/07/24 19:37:07 sql: Scan error on column index 0: converting driver.Value type <nil> ("<nil>") to a int: invalid syntax
I must be doing something stupid - clues appreciated.
I believe your issue is that you're getting a NULL value back from the database, which Go translates into nil. However, you're scanning into an integer, which has no concept of nil. One thing you can do is scan into a type that implements the sql.Scanner interface (and can handle NULL values), e.g., sql.NullInt64.
In the example code in the documentation, I'd assume they have a NOT NULL constraint on the username column. I think the reason is that they didn't want to lead people to believe that you have to use NULL-able types across the board.
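For example, a minimal sketch of the original function using sql.NullInt64 (same names as in the question, assuming the usual database/sql, fmt, and log imports):
// Get the karma value for nick, treating a NULL SUM as "no karma".
func getKarma(nick string, db *sql.DB) string {
    var karma sql.NullInt64
    err := db.QueryRow("SELECT SUM(delta) FROM karma WHERE nick = $1", nick).Scan(&karma)
    if err != nil {
        log.Fatal(err)
    }
    // SUM over zero rows yields one row containing NULL, not ErrNoRows,
    // so the "no rows" case shows up as an invalid (NULL) value instead.
    if !karma.Valid {
        return fmt.Sprintf("%s has no karma.", nick)
    }
    return fmt.Sprintf("Karma for %s is %d.", nick, karma.Int64)
}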
I reworked the code to get the results I wanted:
// Get the karma value for nick from the database.
func getKarma(nick string, db *sql.DB) string {
    var karma int
    rows, err := db.Query("SELECT SUM(delta) FROM karma WHERE nick = $1", nick)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    karmaStr := fmt.Sprintf("%s has no karma.", nick)
    if rows.Next() {
        // Scan fails when SUM returns NULL (no matching rows),
        // which leaves the default "no karma" message in place.
        if err := rows.Scan(&karma); err == nil {
            karmaStr = fmt.Sprintf("Karma for %s is %d.", nick, karma)
        }
    }
    return karmaStr
}
Tempted to submit a documentation patch of some sort to the database/sql package.
I'm just getting started with golang and I'm attempting to read several rows from a Postgres users table and store the result as an array of User structs that model the row.
type User struct {
    Id    int
    Title string
}

func Find_users(db *sql.DB) {
    // Query the DB
    rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
    if err != nil {
        log.Fatal(err)
    }
    // Initialize a slice to hold all users. What size do I use here?
    // I don't know the number of results beforehand.
    var users = make([]User, ????)
    // Loop through each result record, creating a User struct for each row
    defer rows.Close()
    for i := 0; rows.Next(); i++ {
        var id int
        var title string
        err := rows.Scan(&id, &title)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(id, title)
        users[i] = User{Id: id, Title: title}
    }
    // .... do stuff
}
As you can see, my problem is that I want to initialize an array or slice beforehand to store all the DB records, but I don't know ahead of time how many records there are going to be.
I was weighing a few different approaches, and wanted to find out which of the following was most used in the golang community -
Create a really large array beforehand (e.g. 10,000 elements). Seems wasteful
Count the rows explicitly beforehand. This could work, but I need to run 2 queries - one to count and one to get the results. If my query is complex (not shown here), that's duplicating that logic in 2 places. Alternatively I can run the same query twice, but first loop through it and count the rows. All this would work, but it just seems unclean.
I've seen examples of expanding slices. I don't quite understand slices well enough to see how it could be adapted here. Also if I'm constantly expanding a slice 10k times, it definitely seems wasteful.
Go has a built-in append function for exactly this purpose. It takes a slice and one or more elements and appends those elements to the slice, returning the new slice. Additionally, the zero value of a slice (nil) is a slice of length zero, so if you append to a nil slice, it will work. Thus, you can do:
type User struct {
    Id    int
    Title string
}

func Find_users(db *sql.DB) {
    // Query the DB
    rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    var users []User
    for rows.Next() {
        var id int
        var title string
        err := rows.Scan(&id, &title)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(id, title)
        users = append(users, User{Id: id, Title: title})
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
    // ...
}
Using append to build a slice:
type DeviceInfo struct {
    DeviceName     string
    DeviceID       string
    DeviceUsername string
    Token          string
}

func QueryMultiple(db *sql.DB) {
    var device DeviceInfo
    sqlStatement := `SELECT "deviceName", "deviceID", "deviceUsername",
        token FROM devices LIMIT 10`
    rows, err := db.Query(sqlStatement)
    if err != nil {
        panic(err)
    }
    defer rows.Close()
    var deviceSlice []DeviceInfo
    for rows.Next() {
        // Scan targets must match the column order of the SELECT.
        if err := rows.Scan(&device.DeviceName, &device.DeviceID,
            &device.DeviceUsername, &device.Token); err != nil {
            panic(err)
        }
        deviceSlice = append(deviceSlice, device)
    }
    fmt.Println(deviceSlice)
}
I think that what you are looking for is the capacity.
The following allocates a slice whose backing array can hold 10,000 items:
users := make([]User, 0, 10_000)
but the slice itself is still empty (len(users) == 0).
Now you can add at least 10,000 items before the backing array needs to grow. For that, append() works as expected:
users = append(users, User{...})
Maps grow by doubling in size, starting at 1, so the size stays a power of two. I'm not sure whether slices are grown the same way (in powers of two). If so, then the size allocated for the request above would be
math.Pow(2, math.Ceil(math.Log(10_000)/math.Log(2)))
which is 2^14, i.e. 16,384.
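You can observe the actual growth policy empirically with a tiny sketch like the following (the growth factor is an implementation detail of the Go runtime and may vary between versions):
s := make([]User, 0, 4)
for i := 0; i < 40; i++ {
    s = append(s, User{Id: i})
    fmt.Println(len(s), cap(s)) // cap jumps whenever the backing array is reallocated
}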
Note: if your query uses a compatible INDEX, i.e. the WHERE clause matches the index exactly, then an extra SELECT COUNT(*) ... is cheap, since the number of matching rows can be computed from the index without scanning all the rows of your tables.
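If you go that route, a sketch of the count-then-allocate approach (the COUNT query must repeat the main query's WHERE clause, which is empty here):
var n int
if err := db.QueryRow(`SELECT COUNT(*) FROM users u`).Scan(&n); err != nil {
    log.Fatal(err)
}
users := make([]User, 0, n) // capacity n, length 0; fill with append as above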
An mgo and Golang question.
I've run into a problem again. I try to update a record in the database, but running the simple command visitors.UpdateId(v.Id, bson.M{"$set": zscore}), where zscore is a variable of type Zscore, does not work. However, if I manually convert zscore to a bson.M structure, everything works fine.
Does anybody know how to update a record in MongoDB using mgo, without manually dumping the structure's values into bson.M?
Example:
type Zscore struct {
    a float64 `bson:"a,omitempty" json:"a"`
    b float64 `bson:"b,omitempty" json:"b"`
    c float64 `bson:"c,omitempty" json:"c"`
}

v := Visitor{}
zscore := Zscore{}
visitors := updater.C("visitors")
for result.Next(&v) {
    zscore.a = 1
    zscore.b = 2
    zscore.c = 0

    // does not work
    if err := visitors.UpdateId(v.Id, bson.M{"$set": zscore}); err != nil {
        log.Printf("Got error while updating visitor: %v\n", err)
    }

    // works
    set := bson.M{
        "zscore.a": zscore.a,
        "zscore.b": zscore.b,
        "zscore.c": zscore.c,
    }
    if err := visitors.UpdateId(v.Id, bson.M{"$set": set}); err != nil {
        log.Printf("Got error while updating visitor: %v\n", err)
    }
}
All Go marshaling packages I'm aware of, including the bson package, will not marshal fields that are private (start with a lowercase letter). To fix the issue, just export the relevant fields by uppercasing the first letter of their name.
Also note that, besides the issue mentioned above, the first part of your example will not marshal in a way equivalent to the second part. bson.M{"$set": zscore} is equivalent to bson.M{"$set": bson.M{"a": ..., "b": ..., "c": ...}}, which sets top-level fields a, b, and c rather than the dotted "zscore.a"-style keys your working version uses.
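A sketch of the exported struct (same bson tags as in the question, which keep the lowercase key names in the database):
type Zscore struct {
    A float64 `bson:"a,omitempty" json:"a"`
    B float64 `bson:"b,omitempty" json:"b"`
    C float64 `bson:"c,omitempty" json:"c"`
}
One caveat to be aware of: with omitempty, zero-valued fields (like c = 0 in your loop) are dropped from the $set document and therefore not updated; remove omitempty if you want zeros written as well.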