I am using PostgreSQL as the database for my service, with a Go backend.
After querying the database, I can access the rows with rows.Next() and rows.Scan(...):
func All(receptacle ...interface{}) error {
	[...]
	for rows.Next() {
		err = rows.Scan(receptacle...)
		if err != nil {
			tx.Rollback()
			return err
		}
	}
	rows.Close()
	[...]
}
func main() {
	var myInt1 int
	var myInt2 int
	var myString string

	All(&myInt1, &myInt2, &myString)
	// all 3 variables are correctly set
}
Here, receptacle is variadic.
If my query returns two ints and one string, instead of doing rows.Scan(&myInt1, &myInt2, &myString), I pass the function's arguments straight through: rows.Scan(receptacle...).
This works perfectly when one, and only one, row is returned by the DB.
What I would like to do is use the same process to store multiple rows:
func main() {
	var myInts1 []int
	var myInts2 []int
	var myStrings []string

	All(&myInts1, &myInts2, &myStrings)
	// myInts1 == []int{firstRowInt1, secondRowInt1, ...}
	// myInts2 == []int{firstRowInt2, secondRowInt2, ...}
	// myStrings == []string{firstRowString, secondRowString, ...}
}
Is there a way to tell a function to use index X of a variadic argument, something like this: rows.Scan(receptacle[0]...)?
My guess (and my attempts so far) says no, but I wanted to confirm that hypothesis.
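One workaround I have been considering (a rough sketch only, assuming every receptacle is a pointer to a slice such as *[]int or *[]string, and requiring the reflect package): scan each row into freshly allocated temporaries of the slices' element types, then append the scanned values to the caller's slices via reflection.

for rows.Next() {
	// Allocate one temporary per column, typed after the target slice's
	// element type (e.g. a *int for a *[]int receptacle).
	tmp := make([]interface{}, len(receptacle))
	for i, r := range receptacle {
		elemType := reflect.TypeOf(r).Elem().Elem()
		tmp[i] = reflect.New(elemType).Interface()
	}
	if err := rows.Scan(tmp...); err != nil {
		tx.Rollback()
		return err
	}
	// Append each scanned value to the corresponding caller-provided slice.
	for i, r := range receptacle {
		slice := reflect.ValueOf(r).Elem()
		slice.Set(reflect.Append(slice, reflect.ValueOf(tmp[i]).Elem()))
	}
}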
Could you please help me with my problem? I'm trying to use column encryption for PostgreSQL, and there is one small question: how can I transform a column value (e.g. "test_value") into PGP_SYM_ENCRYPT('test_value', 'KEY') in an INSERT SQL query?
As I understand it, custom types may be the solution for me, but some things aren't clear... Maybe someone has an example for my case?
(I found these AWS docs about using pgcrypto: https://docs.aws.amazon.com/dms/latest/sql-server-to-aurora-postgresql-migration-playbook/chap-sql-server-aurora-pg.security.columnencryption.html)
What I did:
type sstring struct {
	string
}

var _ types.ValueAppender = (*sstring)(nil)

func (tm sstring) AppendValue(b []byte, flags int) ([]byte, error) {
	if flags == 1 {
		b = append(b, '\'')
	}
	b = []byte("PGP_SYM_ENCRYPT('123456', 'AES_KEY')")
	if flags == 1 {
		b = append(b, '\'')
	}
	return b, nil
}

var _ types.ValueScanner = (*sstring)(nil)

func (tm *sstring) ScanValue(rd types.Reader, n int) error {
	if n <= 0 {
		tm.string = ""
		return nil
	}
	tmp, err := rd.ReadFullTemp()
	if err != nil {
		return err
	}
	tm.string = string(tmp)
	return nil
}

type model struct {
	ID     uint    `pg:"id"`
	Name   string  `pg:"name"`
	Crypto sstring `pg:"crypto,type:sstring"`

	tableName struct{} `pg:"models"`
}
----------
_, err := r.h.ModelContext(ctx, model).Insert()
And... the process just does nothing. It doesn't respond, doesn't crash, doesn't create a row in the SQL table... Nothing.
Anyway, my question is: how do I wrap a column in a SQL function using the go-pg ORM?
I tried to use https://github.com/go-pg/pg/blob/v10/example_custom_test.go#L13-L49 as a custom type handler example... but something went wrong. =(
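One detail that stands out in the snippet above, offered as a guess rather than a verified fix: AppendValue assigns a brand-new slice to b instead of appending to it, throwing away whatever go-pg has already written into the buffer, and the surrounding quotes would make the whole expression a string literal instead of a function call. An appending version might look roughly like this (types.AppendString and the flag handling are assumptions on my part):

func (tm sstring) AppendValue(b []byte, flags int) ([]byte, error) {
	// Append to b instead of replacing it, and emit an unquoted SQL
	// expression so PGP_SYM_ENCRYPT(...) runs as a function call rather
	// than being sent as a quoted string literal.
	b = append(b, "PGP_SYM_ENCRYPT("...)
	b = types.AppendString(b, tm.string, 1) // quote only the plaintext value
	b = append(b, ", 'AES_KEY')"...)
	return b, nil
}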
I have a function that compares two structs and produces a bson document to be used as input to MongoDB's updateOne().
Example struct format
type event struct {
	...
	Name      string
	StartTime int32
	...
}
The diff function (please ignore that I haven't handled the no-difference case yet):
func diffEvent(e event, u event) (bson.M, error) {
	newValues := bson.M{}
	if e.Name != u.Name {
		newValues["name"] = u.Name
	}
	if e.StartTime != u.StartTime {
		newValues["starttime"] = u.StartTime
	}
	...
	return bson.M{"$set": newValues}, nil
}
Then I generated a test function like so:
func Test_diffEvent(t *testing.T) {
	type args struct {
		e event
		u event
	}
	tests := []struct {
		name    string
		args    args
		want    bson.M
		wantErr bool
	}{
		{
			name: "update startime",
			args: args{
				e: event{StartTime: 1},
				u: event{StartTime: 2},
			},
			want:    bson.M{"$set": bson.M{"starttime": 2}},
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := diffEvent(tt.args.e, tt.args.u)
			if (err != nil) != tt.wantErr {
				t.Errorf("diffEvent() error = %v, wantErr %v", err, tt.wantErr)
				return
			}
			if !reflect.DeepEqual(got, tt.want) {
				t.Errorf("diffEvent() = %v, want %v", got, tt.want)
			}
		})
	}
}
This fails with:
--- FAIL: Test_diffEvent/update_startime (0.00s)
models_test.go:582: diffEvent() = map[$set:map[starttime:2]], want map[$set:map[starttime:2]]
To me these look the same. I have played around with this, and bool fields, string fields, enum fields, and fields that are structs or slices of structs all work fine with DeepEqual, but it fails for int32 fields.
As a Go beginner, what am I missing here? I would assume that if bool/string work, then int32 would too.
This:
bson.M{"starttime": 2}
sets the "starttime" key to the value of the literal 2. 2 is an untyped integer constant, and since no type is provided, its default type will be used, which is int.
Two values stored in interface values are only equal if the dynamic values stored in them have identical types and values. So a value 2 of type int cannot be equal to a value 2 of type int32.
Use an explicit type to specify a value of type int32:
bson.M{"starttime": int32(2)}
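To illustrate, here is a minimal, self-contained example using plain maps (bson.M is just a map[string]interface{} underneath):

package main

import (
	"fmt"
	"reflect"
)

func main() {
	got := map[string]interface{}{"starttime": int32(2)} // int32, as diffEvent stores it
	want := map[string]interface{}{"starttime": 2}       // untyped 2 defaults to int
	fmt.Println(reflect.DeepEqual(got, want))            // false: dynamic types differ

	want = map[string]interface{}{"starttime": int32(2)}
	fmt.Println(reflect.DeepEqual(got, want)) // true: same dynamic type and value
}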
I have a struct referencing a *big.Int. When storing this struct naively into MongoDB (using the official driver), the field turns out to be nil when fetching the struct back. What is the proper/best way to store a big.Int in MongoDB?
type MyStruct struct {
	Number *big.Int
}

nb := MyStruct{Number: big.NewInt(42)}
r, _ := db.Collection("test").InsertOne(context.TODO(), nb)

result := &MyStruct{}
db.Collection("test").FindOne(context.TODO(), bson.D{{"_id", r.InsertedID}}).Decode(result)
fmt.Println(result) // <== Number will be 0 here
My best idea so far would be to create a wrapper around big.Int that implements MarshalBSON and UnmarshalBSON (which, to be honest, I am not even sure how to do properly). But that would be quite inconvenient.
Here's a possible implementation I came up with that stores the big.Int as plain text in MongoDB. It is also possible to store it as a byte array by using big.Int's Bytes and SetBytes methods instead of MarshalText/UnmarshalText.
package common

import (
	"fmt"
	"math/big"

	"go.mongodb.org/mongo-driver/bson"
)

type BigInt struct {
	i *big.Int
}

func NewBigInt(bigint *big.Int) *BigInt {
	return &BigInt{i: bigint}
}

func (bi *BigInt) Int() *big.Int {
	return bi.i
}

func (bi *BigInt) MarshalBSON() ([]byte, error) {
	txt, err := bi.i.MarshalText()
	if err != nil {
		return nil, err
	}
	a, err := bson.Marshal(map[string]string{"i": string(txt)})
	return a, err
}

func (bi *BigInt) UnmarshalBSON(data []byte) error {
	var d bson.D
	err := bson.Unmarshal(data, &d)
	if err != nil {
		return err
	}
	if v, ok := d.Map()["i"]; ok {
		bi.i = big.NewInt(0)
		return bi.i.UnmarshalText([]byte(v.(string)))
	}
	return fmt.Errorf("key 'i' missing")
}
the field turns out to be nil when fetching the struct back
The reason it returns 0 is that there is no BSON mapping available for big.Int. If you check the document inserted into the MongoDB collection, you should see something similar to this:
{
	"_id": ObjectId("..."),
	"number": {}
}
where no value is stored in the number field.
What is the proper/best way to store a big.Int in MongoDB?
Let's first understand what a big integer is. The big integer data type is intended for use when integer values might exceed the range supported by the int data type. The range is -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807), with a storage size of 8 bytes. Generally this type is found in SQL databases.
In Go, you can use more precise integer types. The available built-in types int8, int16, int32, and int64 (and their unsigned counterparts) are best suited for data. The Go counterpart of the SQL big integer is int64, with a range of -9,223,372,036,854,775,808 through 9,223,372,036,854,775,807 and a storage size of 8 bytes.
Using mongo-go-driver, you can just use int64, which will be converted to a BSON 64-bit integer (RawValue.Int64). For example:
type MyStruct struct {
	Number int64
}

collection := db.Collection("tests")

nb := MyStruct{Number: int64(42)}
r, _ := collection.InsertOne(context.TODO(), nb)

var result MyStruct
collection.FindOne(context.TODO(), bson.D{{"_id", r.InsertedID}}).Decode(&result)
If I have a table that returns something like:
id: 1
names: {Jim, Bob, Sam}
names is a varchar array.
How do I scan that back into a []string in Go?
I'm using lib/pq.
Right now I have something like:
rows, err := models.Db.Query("SELECT pKey, names FROM foo")

for rows.Next() {
	var pKey int
	var names []string
	err = rows.Scan(&pKey, &names)
}
I keep getting:
panic: sql: Scan error on column index 1: unsupported Scan, storing driver.Value type []uint8 into type *[]string
It looks like I need to use StringArray:
https://godoc.org/github.com/lib/pq#StringArray
But I think I'm too new to Go to understand exactly how to use:
func (a *StringArray) Scan(src interface{})
You are right, you can use StringArray, but you don't need to call the
func (a *StringArray) Scan(src interface{})
method yourself; it will be called automatically by rows.Scan when you pass it anything that implements the Scanner interface.
So what you need to do is convert the pointer to your []string into a *StringArray and pass that to rows.Scan, like so:
rows, err := models.Db.Query("SELECT pKey, names FROM foo")

for rows.Next() {
	var pKey int
	var names []string
	err = rows.Scan(&pKey, (*pq.StringArray)(&names))
}
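This conversion is legal because pq.StringArray is declared as type StringArray []string, so *[]string and *pq.StringArray share the same underlying type; the conversion changes only the static type, not the data in memory.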
Long story short, use it like this to convert a PostgreSQL array into a Go slice; here the 5th column comes back as an array:
var _temp3 []string

for rows.Next() {
	// scan a row into the temp variables; pq.Array receives the array column
	err := rows.Scan(&_temp, &_temp0, &_temp1, &_temp2, pq.Array(&_temp3))
}
In detail:
To insert a row that contains an array value, use the pq.Array function like this:
// "ins" is the SQL insert statement
ins := "INSERT INTO posts (title, tags) VALUES ($1, $2)"
// "tags" is the list of tags, as a string slice
tags := []string{"go", "goroutines", "queues"}
// the pq.Array function is the secret sauce
_, err = db.Exec(ins, "Job Queues in Go", pq.Array(tags))
To read a Postgres array value into a Go slice, use:
func getTags(db *sql.DB, title string) (tags []string) {
	// the select query, returning 1 column of array type
	sel := "SELECT tags FROM posts WHERE title=$1"

	// wrap the output parameter in pq.Array for receiving into it
	if err := db.QueryRow(sel, title).Scan(pq.Array(&tags)); err != nil {
		log.Fatal(err)
	}

	return
}
Note that in lib/pq, only slices of certain Go types may be passed to pq.Array().
Here is another example, in which a varchar array is generated by PostgreSQL at runtime as the 5th column, like:
--> predefined_allow false admin iam.create {secrets,configMap}
I converted this as follows:
Q := "SELECT ar.policy_name, ar.allow, ar.role_name, pro.operation_name, ARRAY_AGG(pro.resource_id) as resources FROM iam.authorization_rules ar LEFT JOIN iam.policy_rules_by_operation pro ON pro.id = ar.operation_id GROUP BY ar.policy_name, ar.allow, ar.role_name, pro.operation_name;"

tx := g.db.Raw(Q)
rows, _ := tx.Rows()
defer rows.Close()

var _temp string
var _temp0 bool
var _temp1 string
var _temp2 string
var _temp3 []string

for rows.Next() {
	// scan a row into the temp variables; pq.Array receives the array column
	err := rows.Scan(&_temp, &_temp0, &_temp1, &_temp2, pq.Array(&_temp3))
	if err != nil {
		return nil, err
	}
	fmt.Println("Query Executed...........\n", _temp, _temp0, _temp1, _temp2, _temp3)
}
Output:
Query Executed...........
predefined_allow false admin iam.create [secrets configMap]
func GetFromDB(tableName string, m *bson.M) interface{} {
	var (
		__session *mgo.Session = getSession()
	)
	// if the query arg is nil, give it the empty query
	if m == nil {
		m = &bson.M{}
	}
	__result := []interface{}{}
	__cs_Group := __session.DB(T_dbName).C(tableName)
	__cs_Group.Find(m).All(&__result)
	return __result
}
Calling it like:
GetFromDB(T_cs_GroupName, &bson.M{"Name": "Alex"}).([]CS_Group)
the runtime gives me a panic:
panic: interface conversion: interface is []interface {}, not []mydbs.CS_Group
How do I convert the return value to my struct type?
You can't automatically convert between slices of two different types, and that includes []interface{} to []CS_Group. In every case, you need to convert each element individually:
s := GetFromDB(T_cs_GroupName, &bson.M{"Name": "Alex"}).([]interface{})
g := make([]CS_Group, 0, len(s))
for _, i := range s {
	g = append(g, i.(CS_Group))
}
You need to convert the entire hierarchy of objects:
rawResult := GetFromDB(T_cs_GroupName, &bson.M{"Name": "Alex"}).([]interface{})

var result []CS_Group
for _, v := range rawResult {
	m := v.(bson.M) // each element decoded by mgo is a bson.M
	result = append(result,
		CS_Group{
			SomeField:    m["somefield"].(typeOfSomeField),
			AnotherField: m["anotherfield"].(typeOfAnotherField),
		})
}
This code is for the simple case where the types returned from mgo match the types of your struct fields. You may need to sprinkle in some type conversions and type switches on the bson.M value types.
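For example, a hedged sketch of such a type switch (the "count" field and the int64 target are hypothetical, not taken from the question):

// BSON numbers may decode as int, int64, or float64 depending on how
// they were stored, so normalize before assigning to the struct field.
var count int64
switch n := m["count"].(type) {
case int:
	count = int64(n)
case int64:
	count = n
case float64:
	count = int64(n)
}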
An alternate approach is to take advantage of mgo's decoder by passing the output slice as an argument:
func GetFromDB(tableName string, m *bson.M, result interface{}) error {
	var (
		__session *mgo.Session = getSession()
	)
	// if the query arg is nil, give it the empty query
	if m == nil {
		m = &bson.M{}
	}
	__cs_Group := __session.DB(T_dbName).C(tableName)
	return __cs_Group.Find(m).All(result)
}
With this change, you can fetch directly into your own type:
var result []CS_Group
err := GetFromDB(T_cs_GroupName, &bson.M{"Name": "Alex"}, &result)
See also: FAQ: Can I convert a []T to an []interface{}?