I have a struct type Stat which has a one-to-many association with Score:
type Stat struct {
	gorm.Model
	StatID string  `json:"id" gorm:"uniqueIndex:idx_stat"`
	Scores []Score `json:"scores"`
}
and Score is
type Score struct {
	gorm.Model
	Name        string `json:"name" gorm:"uniqueIndex:idx_score"`
	Score       int    `json:"score,string"`
	ScoreStatID uint   `json:"score_stat_id" gorm:"uniqueIndex:idx_score"`
}
A stat won't change, so it has a unique constraint on its stat ID (different from its index key). However, the associated scores may/will change over time. I am running a cron job to retrieve the latest scores for the stat objects.
What I would like my code to do is update the Score objects associated with a Stat object each time, rather than create new ones. I thought that a uniqueIndex on the Score fields (as you can see above) would handle this, but to no avail.
My latest attempt is to do this
for _, v := range stats {
	if err := db.Where("stat_id = ?", v.StatID).First(&v).Error; errors.Is(err, gorm.ErrRecordNotFound) {
		if err := db.Create(&v).Error; err != nil {
			return err
		}
		continue
	}
	if err := db.Model(&v).Updates(Stat{Scores: v.Scores}).Error; err != nil {
		fmt.Println("saving updated data failed", err)
	}
}
i.e. check/create the stat object and then attempt to update all the scores associated with it. However, with these constraints I get the error:
ERROR: duplicate key value violates unique constraint "idx_score" (SQLSTATE 23505)
So when I attempt to do this without the constraint, I get duplicated entries within the table.
How can I update or create the array of associations without creating duplicated records and in effect just updating the score values to the new ones (if they have changed)?
EDIT:
I thought I had it with the following code:
for _, v := range stats {
	tmpScores := v.Scores
	if err := db.Where("stat_id = ?", v.StatID).Preload("Scores").First(&v).Error; errors.Is(err, gorm.ErrRecordNotFound) {
		if err := db.Create(&v).Error; err != nil {
			return err
		}
		continue
	}
	v.Scores = tmpScores
	if err := db.Session(&gorm.Session{FullSaveAssociations: true}).Updates(&v).Error; err != nil {
		fmt.Println("saving updated data failed", err)
	}
}
However, this isn't quite right. The issue is that, done this way, v.Scores holds the values from the database rather than the updated ones, so I save the scores into tmpScores and then reset them; however, that doesn't work either, because now the associated ID isn't correct on the association and it creates new entries...
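One direction that might work (an untested sketch, assuming GORM v2 with gorm.io/gorm/clause and that the pair (name, score_stat_id) from the uniqueIndex above identifies a score) is to skip FullSaveAssociations and upsert the scores explicitly after the Stat row is in place:

import "gorm.io/gorm/clause"

for _, v := range stats {
	// find-or-create the Stat row first
	stat := Stat{StatID: v.StatID}
	if err := db.Where("stat_id = ?", v.StatID).FirstOrCreate(&stat).Error; err != nil {
		return err
	}
	if len(v.Scores) == 0 {
		continue
	}
	// point the incoming scores at the existing Stat primary key
	for i := range v.Scores {
		v.Scores[i].ScoreStatID = stat.ID
	}
	// on a (name, score_stat_id) conflict, only update the score column
	if err := db.Clauses(clause.OnConflict{
		Columns:   []clause.Column{{Name: "name"}, {Name: "score_stat_id"}},
		DoUpdates: clause.AssignmentColumns([]string{"score"}),
	}).Create(&v.Scores).Error; err != nil {
		fmt.Println("upserting scores failed", err)
	}
}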
Related
I'm trying to create registration in my Telegram bot with Golang and Postgres. When the user writes "register", the bot has to check whether the user's uuid already exists in the DB, and if not, create a row with their uuid.
Here is my function to check if uuid already exists in DB:
func IsUserInDB(uuid int64) (bool, error) {
	var exists bool
	query := fmt.Sprintf("SELECT EXISTS(SELECT 1 FROM users WHERE uuid = %d);", uuid)
	err := Db.QueryRow(query).Scan(&exists)
	return exists, err
}
Here is my function for adding user's uuid to DB:
func AddUserToDB(column string, row interface{}) error {
	query := fmt.Sprintf("INSERT INTO users (%s) VALUES (%v);", column, row)
	_, err := Db.Exec(query)
	return err
}
And the logic for the bot:
func (b *Bot) handleMessages(message *tgbotapi.Message) error {
	switch message.Text {
	case "register":
		exists, err := data.IsUserInDB(message.From.ID)
		if err != nil {
			return err
		}
		if !exists {
			err := data.AddUserToDB("uuid", message.From.ID)
			return err
		}
		return nil
	default:
		msg := tgbotapi.NewMessage(message.Chat.ID, "unknown message...")
		_, err := b.bot.Send(msg)
		return err
	}
}
The first time I send "register", the bot successfully adds the user's id to the db, but the problem happens if I send "register" one more time: IsUserInDB() returns false and the bot adds one more row with the same uuid. So I think the problem is with my IsUserInDB() function.
Why not just a unique index on your users table?
CREATE UNIQUE INDEX unq_uuid ON users (uuid);
Then you don't have to check, you just try to insert and it will return an error if it already exists.
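If you'd rather not treat the duplicate as an error at all, a minimal sketch (assuming the unique index above and the question's database/sql handle Db) lets Postgres ignore the duplicate with ON CONFLICT DO NOTHING and uses a placeholder instead of Sprintf:

// AddUserToDB inserts the uuid, silently ignoring duplicates thanks to the
// unique index unq_uuid; the $1 placeholder avoids building SQL by hand.
func AddUserToDB(uuid int64) error {
	_, err := Db.Exec("INSERT INTO users (uuid) VALUES ($1) ON CONFLICT (uuid) DO NOTHING;", uuid)
	return err
}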
I use Go with PostgreSQL via github.com/lib/pq and am able to successfully fetch records when my structure is known.
Now my question is: how do I fetch records when my structure changes dynamically?
Using rows.Columns I am able to fetch the column names, but could you help me with fetching the values of those columns for all the rows? I referred to this link, answered by @Luke, but there the person has a structure defined:
Is it possible to retrieve a column value by name using GoLang database/sql
type Person struct {
	Id   int
	Name string
}
Meanwhile, I do not have a fixed structure, so how will I iterate through all the columns, and then again through all the rows? My approach would be a pointer to loop through all the columns first, then another one to move to the next row.
I'm still not able to code this. Could you please help me with how to proceed and get the values?
Since you don't know the structure up front, you can return the rows as a two-dimensional slice of empty interfaces. However, for the row scan to work you'll need to pre-allocate the values to the appropriate types, and to do this you can use the ColumnTypes method and the reflect package. Keep in mind that not every driver provides access to the columns' types, so make sure the one you use does.
rows, err := db.Query("select * from foobar")
if err != nil {
	return err
}
defer rows.Close()

// get column type info
columnTypes, err := rows.ColumnTypes()
if err != nil {
	return err
}

// used for allocation & dereferencing
rowValues := make([]reflect.Value, len(columnTypes))
for i := 0; i < len(columnTypes); i++ {
	// allocate reflect.Value representing a **T value
	rowValues[i] = reflect.New(reflect.PtrTo(columnTypes[i].ScanType()))
}

resultList := [][]interface{}{}

for rows.Next() {
	// initially will hold pointers for Scan, after scanning the
	// pointers will be dereferenced so that the slice holds actual values
	rowResult := make([]interface{}, len(columnTypes))
	for i := 0; i < len(columnTypes); i++ {
		// get the **T value from the reflect.Value
		rowResult[i] = rowValues[i].Interface()
	}

	// scan each column value into the corresponding **T value
	if err := rows.Scan(rowResult...); err != nil {
		return err
	}

	// dereference pointers
	for i := 0; i < len(rowValues); i++ {
		// first pointer deref to get reflect.Value representing a *T value,
		// if rv.IsNil it means the column value was NULL
		if rv := rowValues[i].Elem(); rv.IsNil() {
			rowResult[i] = nil
		} else {
			// second deref to get reflect.Value representing the T value
			// and call Interface to get the T value from the reflect.Value
			rowResult[i] = rv.Elem().Interface()
		}
	}

	resultList = append(resultList, rowResult)
}
if err := rows.Err(); err != nil {
	return err
}

fmt.Println(resultList)
This function prints the result of a query without knowing anything about the column types and count. It is a variant of the previous answer without using the reflect package.
func printQueryResult(db *sql.DB, query string) error {
	rows, err := db.Query(query)
	if err != nil {
		return fmt.Errorf("cannot run query %s: %w", query, err)
	}
	defer rows.Close()

	cols, _ := rows.Columns()

	row := make([]interface{}, len(cols))
	rowPtr := make([]interface{}, len(cols))
	for i := range row {
		rowPtr[i] = &row[i]
	}

	fmt.Println(cols)
	for rows.Next() {
		err = rows.Scan(rowPtr...)
		if err != nil {
			fmt.Println("cannot scan row:", err)
		}
		fmt.Println(row...)
	}
	return rows.Err()
}
The trick is that rows.Scan can scan values into *interface{}, but you have to wrap each pointer in an interface{} to be able to pass it to Scan via the ... variadic argument.
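For completeness, a hypothetical call site (the driver import, DSN, and table name are illustrative assumptions, not taken from the question):

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // register the postgres driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := printQueryResult(db, "SELECT * FROM foobar"); err != nil {
		log.Fatal(err)
	}
}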
I have a postgres db that I would like to generate tables for and write to using Gorp. However, I get an error message when I try to insert, due to the slices contained within my structs: "sql: converting argument $4 type: unsupported type []core.EmbeddedStruct, a slice of struct".
My structs look as follows:
type Struct1 struct {
	ID             string
	Name           string
	Location       string
	EmbeddedStruct []EmbeddedStruct
}

type EmbeddedStruct struct {
	ID              string
	Name            string
	struct1Id       string
	EmbeddedStruct2 []EmbeddedStruct2
}

type EmbeddedStruct2 struct {
	ID               string
	Name             string
	embeddedStructId string
}
func (repo *PgStruct1Repo) Write(t *core.Struct1) error {
	trans, err := createTransaction(repo.dbMap)
	defer closeTransaction(trans)
	if err != nil {
		return err
	}

	// Check to see if struct1 item already exists
	exists, err := repo.exists(t.ID, trans)
	if err != nil {
		return err
	}

	if !exists {
		log.Debugf("saving new struct1 with ID %s", t.ID)
		err = trans.Insert(t)
		if err != nil {
			return err
		}
		return nil
	}

	return nil
}
Does anyone have any experience with/or know if Gorp supports inserting slices? From what I've read it seems to only support slices for SELECT statements
Gorp's Insert takes a variadic number of records, so if you have a slice records (of type []interface{} holding pointers to your rows), you can do:
err = db.Insert(records...)
However, from your question it seems you want to save a single record that has a slice struct field.
https://github.com/go-gorp/gorp
gorp doesn't know anything about the relationships between your structs (at least not yet).
So, you have to handle the relationship yourself. The way I personally would solve this issue is to have Gorp ignore the slice on the parent:
type Struct1 struct {
	ID             string
	Name           string
	Location       string
	EmbeddedStruct []EmbeddedStruct `db:"-"`
}
And then use the PostInsert hook to save the EmbeddedStruct (side note, this is a poor name as it is not actually an embedded struct)
func (s *Struct1) PostInsert(sql gorp.SqlExecutor) error {
	for i := range s.EmbeddedStruct {
		s.EmbeddedStruct[i].struct1Id = s.ID
	}
	// gorp's Insert takes ...interface{}, so pass a pointer to each element
	rows := make([]interface{}, len(s.EmbeddedStruct))
	for i := range s.EmbeddedStruct {
		rows[i] = &s.EmbeddedStruct[i]
	}
	return sql.Insert(rows...)
}
And then repeat the process on EmbeddedStruct2.
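A sketch of that repetition, reusing the field names from the question's structs (untested, same caveats as above):

// Have gorp ignore the nested slice on EmbeddedStruct as well.
type EmbeddedStruct struct {
	ID              string
	Name            string
	struct1Id       string
	EmbeddedStruct2 []EmbeddedStruct2 `db:"-"`
}

func (e *EmbeddedStruct) PostInsert(sql gorp.SqlExecutor) error {
	rows := make([]interface{}, len(e.EmbeddedStruct2))
	for i := range e.EmbeddedStruct2 {
		e.EmbeddedStruct2[i].embeddedStructId = e.ID
		rows[i] = &e.EmbeddedStruct2[i]
	}
	return sql.Insert(rows...)
}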
Take care to set up the relationships properly on the DB side to ensure referential integrity (e.g. ON DELETE CASCADE / RESTRICT), and it would probably be a good idea to wrap the whole thing in a transaction.
I am trying to read, write, and delete data from a Go application with the official MongoDB driver for Go (go.mongodb.org/mongo-driver).
Here is my struct I want to use:
Contact struct {
	ID      xid.ID `json:"contact_id" bson:"contact_id"`
	SurName string `json:"surname" bson:"surname"`
	PreName string `json:"prename" bson:"prename"`
}

// xid is https://github.com/rs/xid
I omit the code for adding to the collection, as this is working fine.
I can get a list of contacts with a specific contact_id using the following code (abbreviated):
filter := bson.D{}
cur, err := contactCollection.Find(nil, filter)
for cur.Next(context.TODO()) {
	...
}
This works and returns the documents. I thought about doing the same for delete or a matched get:
// delete - abbreviated
filter := bson.M{"contact_id": id}
result, err := contactCollection.DeleteMany(nil, filter)
// result.DeletedCount is always 0, err is nil
if err != nil {
	sendError(c, err) // helper function
	return
}
c.JSON(200, gin.H{
	"ok":      true,
	"message": fmt.Sprintf("deleted %d patients", result.DeletedCount),
}) // will be called, it is part of a webservice done with gin
// get complete
func Get(c *gin.Context) {
	defer c.Done()
	id := c.Param("id")
	filter := bson.M{"contact_id": id}
	cur, err := contactCollection.Find(nil, filter)
	if err != nil {
		sendError(c, err) // helper function
		return
	} // no error

	contacts := make([]types.Contact, 0)
	for cur.Next(context.TODO()) { // nothing returned
		// create a value into which the single document can be decoded
		var elem types.Contact
		err := cur.Decode(&elem)
		if err != nil {
			sendError(c, err) // helper function
			return
		}
		contacts = append(contacts, elem)
	}
	c.JSON(200, contacts)
}
Why does the same filter not work for delete?
Edit: Insert code looks like this:
_, _ = contactCollection.InsertOne(context.TODO(), Contact{
	ID:      "abcdefg",
	SurName: "Demo",
	PreName: "on stackoverflow",
})
Contact.ID is of type xid.ID, which is a byte array:
type ID [rawLen]byte
So the insert code you provided, where you use a string literal to specify the value for the ID field, would be a compile-time error:
_, _ = contactCollection.InsertOne(context.TODO(), Contact{
	ID:      "abcdefg",
	SurName: "Demo",
	PreName: "on stackoverflow",
})
Later in your comments you clarified that the above insert code was just an example, and not how you actually do it. In your real code you unmarshal the contact (or its ID field) from a request.
xid.ID has its own unmarshaling logic, which might interpret the input data differently, and might result in an ID representing a different string value than your input. ID.UnmarshalJSON() defines how the string ID will be converted to xid.ID:
func (id *ID) UnmarshalJSON(b []byte) error {
	s := string(b)
	if s == "null" {
		*id = nilID
		return nil
	}
	return id.UnmarshalText(b[1 : len(b)-1])
}
As you can see, the first and last bytes (the JSON quotes) are cut off, and ID.UnmarshalText() does even more "magic" on the rest (check the source if you're interested).
All in all, to avoid such "transformations" happening in the background without your knowledge, use a simple string type for your ID, and do the necessary conversions yourself wherever you need to store / transmit your ID.
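A minimal sketch of that suggestion (assuming the rest of the question's setup stays the same; the literal ID value is just an example):

type Contact struct {
	ID      string `json:"contact_id" bson:"contact_id"`
	SurName string `json:"surname" bson:"surname"`
	PreName string `json:"prename" bson:"prename"`
}

// insert and delete now work with the exact same filter value
_, _ = contactCollection.InsertOne(context.TODO(), Contact{ID: "abcdefg", SurName: "Demo", PreName: "on stackoverflow"})
res, err := contactCollection.DeleteMany(context.TODO(), bson.M{"contact_id": "abcdefg"})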
For the ID field, you should use the primitive.ObjectID provided by the bson package:
"go.mongodb.org/mongo-driver/bson/primitive"
ID primitive.ObjectID `json:"_id" bson:"_id"`
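For illustration, a hypothetical snippet using that type (the values are placeholders, not from the question):

type Contact struct {
	ID      primitive.ObjectID `json:"_id" bson:"_id"`
	SurName string             `json:"surname" bson:"surname"`
	PreName string             `json:"prename" bson:"prename"`
}

id := primitive.NewObjectID()
_, _ = contactCollection.InsertOne(context.TODO(), Contact{ID: id, SurName: "Demo", PreName: "on stackoverflow"})
res, _ := contactCollection.DeleteMany(context.TODO(), bson.M{"_id": id})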
I have a MongoDB collection with an example document like this:
What I want to do (as you can see from the actual code) is to update the role field in members.x.role where members.x.id equals a given ID (the ID is a UUID, so it's unique; this part of the code works without problems), and then return that members.x. The problem is that it always returns the first member instead of the one that has just been updated. I've tried some methods of mgo and found Distinct() to be the closest to what I need, but it doesn't work the way I want.
My question is: how can I return the member embedded document with the specified ID?
I've already looked at this and this, but they didn't help me.
func (r MongoRepository) UpdateMemberRole(id string, role int8) (*Member, error) {
	memberQuery := &bson.M{"members": &bson.M{"$elemMatch": &bson.M{"id": id}}}
	change := &bson.M{"members.$.role": role}
	err := r.db.C("groups").Update(memberQuery, &bson.M{"$set": &change})
	if err == mgo.ErrNotFound {
		return nil, fmt.Errorf("member with ID '%s' does not exist", id)
	}

	// FIXME: Retrieve this member from query below. THIS ALWAYS RETURNS FIRST MEMBER!!!
	var member []Member
	r.db.C("groups").Find(&bson.M{"members.id": id}).Distinct("members.0", &member)

	return &member[0], nil
}
I found a workaround. It's not strictly a Mongo query that returns this embedded document, but this code is IMO clearer and more understandable than some fancy Mongo query that fetches the whole document anyway.
func (r MongoRepository) UpdateMemberRole(id string, role int8) (*Member, error) {
	change := mgo.Change{
		Update:    bson.M{"$set": bson.M{"members.$.role": role}},
		ReturnNew: true,
	}

	var updatedGroup Group
	_, err := r.db.C("groups").Find(bson.M{"members": bson.M{"$elemMatch": bson.M{"id": id}}}).Apply(change, &updatedGroup)
	if err == mgo.ErrNotFound {
		return nil, fmt.Errorf("member with ID '%s' does not exist", id)
	} else if err != nil {
		return nil, err
	}

	for _, member := range updatedGroup.Members {
		if member.Id == id {
			return &member, nil
		}
	}
	return nil, fmt.Errorf("weird error, Id cannot be found")
}