I am trying to open a connection to a postgres database using pgx and I am getting the following error:
./dbservice.go:12:26: too many arguments in call to "github.com/jackc/pgx".Connect
have (context.Context, string)
want ("github.com/jackc/pgx".ConnConfig)
./dbservice.go:13:18: too many arguments in call to conn.Close
have (context.Context)
want ()
./dbservice.go:21:44: cannot use context.Background() (type context.Context) as type string in argument to conn.Query
I am not sure what the error is asking me to do here. pgx.Connect works when I call it from the main file but here it doesn't work. Here's the code:
func initNodes(nodes *[]Node, searchNodes *[]SearchNode, storageNodes *[]StorageNode) error {
    conn, err := pgx.Connect(context.Background(), DATABATE_URL)
    defer conn.Close(context.Background())
    if err != nil {
        fmt.Printf("Connection failed: %v\n", err)
        os.Exit(-1)
    }
    ...
func main() {
    a := Arbiter{}
    a.init()
}
Any ideas?
Most likely you are importing the v3 pgx API in dbservice.go, but the v4 API in your “main file”. The Connect function in github.com/jackc/pgx/v4 accepts the two arguments you're passing; the Connect function in v3 accepts a pgx.ConnConfig instead.
So, check your import statements in dbservice.go: if you intend to use the v4 API, import it as "github.com/jackc/pgx/v4".
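For reference, a minimal sketch of the v4 call shape (the connect helper and databaseURL parameter are illustrative, not from the original code):

```go
import (
    "context"

    "github.com/jackc/pgx/v4" // v4: Connect takes (ctx, connString)
)

func connect(databaseURL string) (*pgx.Conn, error) {
    conn, err := pgx.Connect(context.Background(), databaseURL)
    if err != nil {
        return nil, err
    }
    // Note: defer conn.Close(ctx) only after the error check;
    // conn is nil when Connect fails.
    return conn, nil
}
```

In v3, by contrast, you build a pgx.ConnConfig and pass that to Connect, which is why the compiler reports want ("github.com/jackc/pgx".ConnConfig).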
I'm new to golang and pgx and I'm running into an issue when I try to run a simple query. I have the following table in postgres.
CREATE TABLE users (
    user_id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
    phone_number TEXT NOT NULL UNIQUE
);
When I try to run the following query:
func sampleFunc(db dbClient) {
    number := "+111111111"
    rows, err := db.Query(context.Background(), "SELECT user_id FROM users WHERE phone_number=$1::TEXT;", number)
    if err != nil {
        log.Println(err)
        return false, err
    }
}
I get the following error: cannot convert [+111111111] to Text.
EDIT 03/05/2022
I should note that I'm wrapping the pgxpool.Pool in my own struct, and for some reason there's actually no issue when I use the pgxpool directly.
type dbClient interface {
    Query(ctx context.Context, query string, args ...interface{}) (pgx.Rows, error)
}

type SQLClient struct {
    Conn *pgxpool.Pool
}

func (db *SQLClient) Query(ctx context.Context, query string, args ...interface{}) (pgx.Rows, error) {
    return db.Conn.Query(ctx, query, args)
}
I'm assuming that then something is happening with the type information, but still can't figure out how to go about fixing it.
When I print the args type info in SQLClient.Query, by using fmt.Println(reflect.TypeOf(args)), I get the following: []interface {}
You forgot to add ... to the args argument passed to db.Conn.Query().
func (db *SQLClient) Query(ctx context.Context, query string, args ...interface{}) (pgx.Rows, error) {
    return db.Conn.Query(ctx, query, args...)
}
See the Go spec on passing arguments to ... parameters.
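To make the difference concrete, here is a small self-contained sketch (the describe helper is illustrative): forwarding args without ... passes the whole slice as a single argument, which is why pgx saw [+111111111] instead of a string.

```go
package main

import "fmt"

// describe reports how many arguments the variadic function actually
// received, and the dynamic type of the first one.
func describe(args ...interface{}) string {
    return fmt.Sprintf("%d arg(s), first has type %T", len(args), args[0])
}

func main() {
    args := []interface{}{"+111111111"}

    // Without ..., the whole slice is passed as a single argument:
    // the callee sees one element of type []interface{}.
    fmt.Println(describe(args)) // 1 arg(s), first has type []interface {}

    // With ..., the slice is unpacked into individual arguments.
    fmt.Println(describe(args...)) // 1 arg(s), first has type string
}
```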
I saw the same problem here
I think this is caused by the combination of pgx internally using prepared statements and PostgreSQL not being able to determine the type of the placeholders. The simplest solution is to include type information with your placeholders. e.g. $1::int. Alternatively, you could configure pgx to use the simple protocol instead of prepared statements.
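For the second option, a configuration sketch, assuming pgx v4 via pgxpool (the connection string is a placeholder):

```go
config, err := pgxpool.ParseConfig("postgres://user:pass@localhost:5432/db")
if err != nil {
    log.Fatal(err)
}
// Use the simple query protocol instead of prepared statements, so
// PostgreSQL can infer placeholder types from the query text.
config.ConnConfig.PreferSimpleProtocol = true
pool, err := pgxpool.ConnectConfig(context.Background(), config)
```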
I'm writing a RESTful API in Golang, which also has a gRPC api. The API connects to a MongoDB database, and uses structs to map out entities. I also have a .proto definition which matches like for like the struct I'm using for MongoDB.
I just wondered if there was a way to share, or re-use, the .proto-defined code for the MongoDB calls as well. I've noticed the structs protoc generates have json tags for each field, but obviously there are no bson tags, etc.
I have something like...
// Menu -
type Menu struct {
    ID          bson.ObjectId      `json:"id" bson:"_id"`
    Name        string             `json:"name" bson:"name"`
    Description string             `json:"description" bson:"description"`
    Mixers      []mixers.Mixer     `json:"mixers" bson:"mixers"`
    Sections    []sections.Section `json:"sections" bson:"sections"`
}
But then I also have protoc generated code...
type Menu struct {
    Id          string     `protobuf:"bytes,1,opt,name=id" json:"id,omitempty"`
    Name        string     `protobuf:"bytes,2,opt,name=name" json:"name,omitempty"`
    Description string     `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
    Mixers      []*Mixer   `protobuf:"bytes,4,rep,name=mixers" json:"mixers,omitempty"`
    Sections    []*Section `protobuf:"bytes,5,rep,name=sections" json:"sections,omitempty"`
}
Currently I'm having to convert between the two structs depending on what I'm doing, which is tedious and probably a considerable performance hit. So is there a better way of converting between the two, or of re-using one of them for both tasks?
Having lived with this same issue, there are a couple of methods for solving it. They fall into two general approaches:
Use the same data type
Use two different struct types and map between them
If you want to use the same data type, you'll have to modify the code generation
You can use something like gogoprotobuf which has an extension to add tags. This should give you bson tags in your structs.
You could also post-process your generated files, either with regular expressions or something more complicated involving the go abstract syntax tree.
If you choose to map between them:
Use reflection. You can write a package that takes two structs and tries to copy the values from one into the other. You'll have to deal with edge cases (slight naming differences, which types are equivalent, etc.), but you'll have better control over them if they ever come up.
Use JSON as an intermediary. As long as the generated json tags match, this will be a quick coding exercise and the performance hit of serializing and deserializing might be acceptable if this isn't in a tight loop in your code.
Hand-write or codegen mapping functions. Depending on how many structs you have, you could write out a bunch of functions that translate between the two.
At my workplace, we ended up doing a bit of all of them: forking the protoc generator to emit custom tags, a reflection-based struct-overlay package for mapping between arbitrary structs, and some hand-written functions for the more performance-sensitive or less automatable mappings.
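The JSON-intermediary option above can be sketched with simplified stand-ins for the two Menu structs (trimmed to three fields; the real ones have more, but the trick is the same: the matching json tags drive the mapping):

```go
package main

import (
    "encoding/json"
    "fmt"
)

// DBMenu stands in for the hand-written MongoDB struct.
type DBMenu struct {
    ID          string `json:"id" bson:"_id"`
    Name        string `json:"name" bson:"name"`
    Description string `json:"description" bson:"description"`
}

// ProtoMenu stands in for the protoc-generated struct.
type ProtoMenu struct {
    Id          string `json:"id,omitempty"`
    Name        string `json:"name,omitempty"`
    Description string `json:"description,omitempty"`
}

// protoToDB converts via a JSON round-trip: marshal one struct and
// unmarshal into the other. Not free, but quick to write and maintain.
func protoToDB(in *ProtoMenu) (*DBMenu, error) {
    raw, err := json.Marshal(in)
    if err != nil {
        return nil, err
    }
    out := &DBMenu{}
    if err := json.Unmarshal(raw, out); err != nil {
        return nil, err
    }
    return out, nil
}

func main() {
    m, err := protoToDB(&ProtoMenu{Id: "42", Name: "drinks", Description: "cocktail menu"})
    if err != nil {
        panic(err)
    }
    fmt.Println(m.ID, m.Name) // 42 drinks
}
```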
I have played with it and have a working example with:
github.com/gogo/protobuf v1.3.1
go.mongodb.org/mongo-driver v1.4.0
google.golang.org/grpc v1.31.0
First of all I would like to share my proto/contract/example.proto file:
syntax = "proto2";
package protobson;
import "gogoproto/gogo.proto";
option (gogoproto.sizer_all) = true;
option (gogoproto.marshaler_all) = true;
option (gogoproto.unmarshaler_all) = true;
option go_package = "gitlab.com/8bitlife/proto/go/protobson";
service Service {
rpc SayHi(Hi) returns (Hi) {}
}
message Hi {
required bytes id = 1 [(gogoproto.customtype) = "gitlab.com/8bitlife/protobson/custom.BSONObjectID", (gogoproto.nullable) = false, (gogoproto.moretags) = "bson:\"_id\""] ;
required int64 limit = 2 [(gogoproto.nullable) = false, (gogoproto.moretags) = "bson:\"limit\""] ;
}
It contains a simple gRPC service, Service, with a SayHi method and request type Hi. It includes a set of options: gogoproto.sizer_all, gogoproto.marshaler_all, and gogoproto.unmarshaler_all; their meaning can be found on the gogoprotobuf extensions page. Hi itself contains two fields:
id that has additional options specified: gogoproto.customtype and gogoproto.moretags
limit with only gogoproto.moretags option
BSONObjectID used in gogoproto.customtype for id field is a custom type that I defined as custom/objectid.go:
package custom
import (
"go.mongodb.org/mongo-driver/bson/bsontype"
"go.mongodb.org/mongo-driver/bson/primitive"
)
type BSONObjectID primitive.ObjectID
func (u BSONObjectID) Marshal() ([]byte, error) {
return u[:], nil
}
func (u BSONObjectID) MarshalTo(data []byte) (int, error) {
return copy(data, (u)[:]), nil
}
func (u *BSONObjectID) Unmarshal(d []byte) error {
copy((*u)[:], d)
return nil
}
func (u *BSONObjectID) Size() int {
return len(*u)
}
func (u *BSONObjectID) UnmarshalBSONValue(t bsontype.Type, d []byte) error {
copy(u[:], d)
return nil
}
func (u BSONObjectID) MarshalBSONValue() (bsontype.Type, []byte, error) {
return bsontype.ObjectID, u[:], nil
}
It is needed because we have to define custom marshaling and unmarshaling methods for both protocol buffers and the mongodb driver. This allows us to use the type as an object identifier in mongodb. To "explain" it to the mongodb driver, I marked the field with a bson tag via the (gogoproto.moretags) = "bson:\"_id\"" option in the proto file.
To generate source code from the proto file I used:
protoc \
    --plugin=/Users/pstrokov/go/bin/protoc-gen-gogo \
    --plugin=/Users/pstrokov/go/bin/protoc-gen-go \
    -I=/Users/pstrokov/Workspace/protobson/proto/contract \
    -I=/Users/pstrokov/go/pkg/mod/github.com/gogo/protobuf@v1.3.1 \
    --gogo_out=plugins=grpc:. \
    example.proto
I tested it on macOS with a running MongoDB instance (docker run --name mongo -d -p 27017:27017 mongo):
package main

import (
    "context"
    "log"
    "net"
    "time"

    "gitlab.com/8bitlife/protobson/gitlab.com/8bitlife/proto/go/protobson"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    "google.golang.org/grpc"
)

type hiServer struct {
    mgoClient *mongo.Client
}

func (s *hiServer) SayHi(ctx context.Context, hi *protobson.Hi) (*protobson.Hi, error) {
    collection := s.mgoClient.Database("local").Collection("bonjourno")
    res, err := collection.InsertOne(ctx, bson.M{"limit": hi.Limit})
    if err != nil {
        panic(err)
    }
    log.Println("generated _id", res.InsertedID)
    out := &protobson.Hi{}
    if err := collection.FindOne(ctx, bson.M{"_id": res.InsertedID}).Decode(out); err != nil {
        return nil, err
    }
    log.Println("found", out.String())
    return out, nil
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    lis, err := net.Listen("tcp", "localhost:0")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    clientOptions := options.Client().ApplyURI("mongodb://localhost:27017")
    clientOptions.SetServerSelectionTimeout(time.Second)
    client, err := mongo.Connect(ctx, clientOptions)
    if err != nil {
        log.Fatal(err)
    }
    if err := client.Ping(ctx, nil); err != nil {
        log.Fatal(err)
    }
    grpcServer := grpc.NewServer()
    protobson.RegisterServiceServer(grpcServer, &hiServer{mgoClient: client})
    go grpcServer.Serve(lis)
    defer grpcServer.Stop()
    conn, err := grpc.Dial(lis.Addr().String(), grpc.WithInsecure())
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    hiClient := protobson.NewServiceClient(conn)
    response, err := hiClient.SayHi(ctx, &protobson.Hi{Limit: 99})
    if err != nil {
        log.Fatal(err)
    }
    if response.Limit != 99 {
        log.Fatal("unexpected limit", response.Limit)
    }
    if response.Id.Size() == 0 {
        log.Fatal("expected a valid ID of the new entity")
    }
    log.Println(response.String())
}
I hope this helps.
I'm in the process of testing and may provide code shortly (ping me if you don't see it and you want it), but https://godoc.org/go.mongodb.org/mongo-driver/bson/bsoncodec looks like the ticket. protoc will generate your structs, and you don't have to mess with customizing them. Then you can customize the mongo-driver to do the mapping of certain types for you, and their library for this looks pretty good.
This is great because, if I use the protoc-generated structs, I'd like them to live in my application core / domain layer, and I don't want to be concerned about MongoDB compatibility over there.
So right now, it seems to me that @Liyan Chang's answer saying
If you want to use the same data type, you'll have to modify the code generation
doesn't necessarily have to be the case, because you can opt to use one data type.
You can use one generated type and account for seemingly whatever you need to in terms of getting and setting data to the DB with this codec system.
See https://stackoverflow.com/a/59561699/8546258 - the bson struct tags are not the end-all-be-all; it looks like codecs can totally help with this.
See https://stackoverflow.com/a/58985629/8546258 for a nice write-up about codecs in general.
Please keep in mind these codecs were released in v1.3 of the MongoDB Go driver. I found this post, which directed me there: https://developer.mongodb.com/community/forums/t/mgo-setbson-to-mongo-golang-driver/2340/2?u=yehuda_makarov
In the current project we use Go and MongoDB via mgo driver.
For every entity we have to implement DAO for CRUD operations, and it's basically copy-paste, e.g.
func (thisDao ClusterDao) FindAll() ([]*entity.User, error) {
    session, collection := thisDao.getCollection()
    defer session.Close()
    result := []*entity.User{} // create a new empty slice to return
    q := bson.M{}
    err := collection.Find(q).All(&result)
    return result, err
}
For every other entity it's all the same except the result type.
Since Go has no generics, how could we avoid code duplication?
I've tried to pass the result interface{} param instead of creating it in the method, and call the method like this:
dao.FindAll([]*entity.User{})
but the collection.Find().All() method needs a slice address as input, not just an interface:
[restful] recover from panic situation: - result argument must be a slice address
/usr/local/go/src/runtime/asm_amd64.s:514
/usr/local/go/src/runtime/panic.go:489
/home/dds/gopath/src/gopkg.in/mgo.v2/session.go:3791
/home/dds/gopath/src/gopkg.in/mgo.v2/session.go:3818
Then I tried to make this param result []interface{}, but in that case it's impossible to pass []*entity.User{}:
cannot use []*entity.User literal (type []*entity.User) as type []interface {} in argument to thisDao.GenericDao.FindAll
Any idea how could I implement generic DAO in Go?
You should be able to pass a result interface{} to your FindAll function and just pass it along to mgo's Query.All method, since the arguments would have the same type.
func (thisDao ClusterDao) FindAll(result interface{}) error {
    session, collection := thisDao.getCollection()
    defer session.Close()
    q := bson.M{}
    // Just pass result as is; don't do & here, because that would be
    // a pointer to the interface, not to the underlying slice, which
    // mgo probably doesn't like.
    return collection.Find(q).All(result)
}

// ...

users := []*entity.User{}
if err := dao.FindAll(&users); err != nil { // pass a pointer to the slice here
    panic(err)
}
log.Println(users)
I will be running my Go program with the following command:
go run app.go 3001-3005
This is supposed to run my Go REST API on ports 3001 to 3005.
This is the part of my main function that handles the argument:
func main() {
    ipfile := os.Args[1:]
    s := strings.Split(ipfile, "-")
    mux := routes.New()
    mux.Put("/:key1/:value1", PutData)
    mux.Get("/profile/:key1", GetSingleData)
    mux.Get("/profile", GetData)
    http.Handle("/", mux)
    // Here I will run a for loop and replace the first argument with s[i].
    http.ListenAndServe(":3000", nil)
}
I get the following output:
cannot use ipfile (type []string) as type string in argument to strings.Split
What datatype does os.Args return?
I tried converting it to a string and then splitting; that does not work.
Please let me know what is wrong.
Like the error says, ipfile is a []string. The [1:] slice operation is going to return a slice, even if there's only 1 element.
After checking that os.Args has enough elements, use:
ipfile := os.Args[1]
s := strings.Split(ipfile, "-")
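Putting it together, a runnable sketch (the parsePortRange helper is illustrative, not from the original program):

```go
package main

import (
    "fmt"
    "strconv"
    "strings"
)

// parsePortRange expands an argument like "3001-3005" into the list
// of individual port numbers.
func parsePortRange(arg string) ([]int, error) {
    parts := strings.Split(arg, "-")
    if len(parts) != 2 {
        return nil, fmt.Errorf("expected LOW-HIGH, got %q", arg)
    }
    low, err := strconv.Atoi(parts[0])
    if err != nil {
        return nil, err
    }
    high, err := strconv.Atoi(parts[1])
    if err != nil {
        return nil, err
    }
    var ports []int
    for p := low; p <= high; p++ {
        ports = append(ports, p)
    }
    return ports, nil
}

func main() {
    ports, err := parsePortRange("3001-3005")
    if err != nil {
        panic(err)
    }
    fmt.Println(ports) // [3001 3002 3003 3004 3005]
    // In the real program you would start one server per port, e.g.:
    // for _, p := range ports {
    //     go http.ListenAndServe(fmt.Sprintf(":%d", p), mux)
    // }
}
```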
I am using go-couchbase to update data in couchbase, but I'm having trouble with how to use the callback function.
The Update function requires me to pass a callback of type UpdateFunc:
func (b *Bucket) Update(k string, exp int, callback UpdateFunc) error
So that's what I have done
First, I declared a type UpdateFunc:
type UpdateFunc func(current []byte) (updated []byte, err error)
Then in the code, I add the following lines:
fn := UpdateFunc{func(0){}}
and then call the Update function:
bucket.Update("12345", 0, fn()}
But Go returns the following error for the line fn := UpdateFunc{func(0){}}:
syntax error: unexpected literal 0, expecting )
So what am I doing wrong? How can I make the callback function work?
Additional information
Thanks for all of your suggestions. Now I can run the callback function as follows:
myfunc := func(current []byte)(updated []byte, err error) {return updated, err }
myb.Update("key123", 1, myfunc)
However, when I ran the Update function on the bucket and checked the couch database, the document with the key "key123" had disappeared. It seems Update does not update the value but deletes it. What happened?
You need to create a function that matches the couchbase.UpdateFunc signature then pass it to bucket.Update.
For example:
fn := func(current []byte) (updated []byte, err error) {
    updated = make([]byte, len(current))
    copy(updated, current)
    // modify updated
    return
}

// ...

bucket.Update("12345", 0, fn)
Notice that to pass a function you just pass fn, not fn(); the latter would actually call the function right away and pass its return value.
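A tiny self-contained illustration of the difference (runUpdate is a stand-in for bucket.Update, not the real couchbase API):

```go
package main

import "fmt"

// UpdateFunc mirrors the callback signature from the question.
type UpdateFunc func(current []byte) (updated []byte, err error)

// runUpdate is a stand-in for bucket.Update: it decides when to
// invoke the callback, passing in the document's current value.
func runUpdate(current []byte, cb UpdateFunc) ([]byte, error) {
    return cb(current)
}

func main() {
    fn := func(current []byte) (updated []byte, err error) {
        updated = append([]byte("updated: "), current...)
        return updated, nil
    }

    // Pass fn itself, not fn(): the callee calls it later.
    out, err := runUpdate([]byte("doc"), fn)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out)) // updated: doc
}
```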
I highly recommend stopping everything you're doing and reading Effective Go and all the posts on Go's blog starting with First Class Functions in Go.
Same answer I posted on the go-nuts list, just in case:
You don't have to define the UpdateFunc type by yourself, it's already defined here:
https://github.com/couchbaselabs/go-couchbase/blob/master/client.go#L608
Just define the function normally and pass it as an argument, or pass an anonymous function, as in other languages.
here's a simple example:
http://play.golang.org/p/YOKxRtQqmU