Golang mgo result into simple slice - mongodb

I'm fairly new to both Go and MongoDB. I'm trying to select a single field from the DB and save it in an int slice, to no avail.
userIDs := []int64{}
coll.Find(bson.M{"isdeleted": false}).Select(bson.M{"userid": 1}).All(&userIDs)
The above prints an empty slice. However, if I create a struct with a single int64 ID field and the appropriate bson tag, it works fine.
All I'm trying to do is work with a simple slice containing the IDs I need, instead of a struct with a single field. All help is appreciated.

Because mgo queries return documents, a few lines of code are required to accomplish the goal:
var result []struct {
    UserID int64 `bson:"userid"`
}
err := coll.Find(bson.M{"isdeleted": false}).Select(bson.M{"userid": 1}).All(&result)
if err != nil {
    // handle error
}
userIDs := make([]int64, len(result))
for i := range result {
    userIDs[i] = result[i].UserID
}

Related

Is there a way to find one document and clone it with a changed id/value in MongoDB with Go

suppose I have code like this:
var result bson.M
err := coll.FindOne(context.TODO(), filter).Decode(&result)
if err != nil {
    panic(err)
}
// somehow I can change the _id before I do insertOne
if _, err := coll.InsertOne(context.TODO(), result); err != nil {
    panic(err)
}
Is there a way I can InsertOne without knowing the data structure? I just have to change the _id before I insert it.
The ID field in MongoDB is always _id, so you may simply do:
result["_id"] = ... // Assign new ID value
Also please note that bson.M is a map, and as such does not retain order. You may simply change only the ID and insert the clone, but field order may change. This usually isn't a problem.
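A runnable sketch of the map-based approach: since bson.M is defined as a map[string]interface{}, a plain map stands in for it here (the helper name withNewID is mine, for illustration):

```go
package main

import "fmt"

// withNewID assigns a new _id to a decoded document before re-inserting it.
// A plain map[string]interface{} stands in for bson.M, since bson.M is
// defined as exactly that type.
func withNewID(doc map[string]interface{}, id interface{}) map[string]interface{} {
	doc["_id"] = id // overwrite the old _id in place
	return doc
}

func main() {
	doc := map[string]interface{}{"_id": "old-id", "name": "clone-me"}
	withNewID(doc, "new-id")
	fmt.Println(doc["_id"]) // new-id
}
```

All other fields of the decoded document are left untouched, so the subsequent InsertOne writes a clone that differs only in its _id.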
If order is also important, use bson.D instead of bson.M; bson.D retains field order, but finding the _id is a little more complex: you have to use a loop, as bson.D is not a map but a slice.
This is how it could look using bson.D:
var result bson.D
// Query...
// Change ID:
for i := range result {
    if result[i].Key == "_id" {
        result[i].Value = ... // new id value
        break
    }
}
// Now you can insert result
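The same loop, as a self-contained sketch: the local elem type below mirrors bson.D's element type (bson.D is a slice of Key/Value pairs), and the helper name setID is mine, for illustration:

```go
package main

import "fmt"

// elem mirrors bson.D's element type: an ordered key/value pair.
type elem struct {
	Key   string
	Value interface{}
}

// setID replaces the value of the "_id" element in an ordered document,
// preserving field order; it reports whether an _id element was found.
func setID(doc []elem, id interface{}) bool {
	for i := range doc {
		if doc[i].Key == "_id" {
			doc[i].Value = id
			return true
		}
	}
	return false
}

func main() {
	doc := []elem{{"_id", "old"}, {"name", "clone-me"}}
	setID(doc, "new")
	fmt.Println(doc)
}
```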

Get a slice of json string from mongo using golang

I am trying to get a slice of json text from mongo using the below code in golang
var a []string
err := col.Find(nil).Select(bson.M{"_id": 0}).All(&a)
I get the error Unsupported document type for unmarshalling: string
May I know the right way to do this?
When you select all but _id, the return will be a document containing only the remaining fields. You can do:
type fieldDoc struct {
    Field string `bson:"name"`
}
var a []fieldDoc
err := col.Find(nil).Select(bson.M{"_id": 0}).All(&a)
If you don't know the underlying structure:
var a []bson.M
err := col.Find(nil).Select(bson.M{"_id": 0}).All(&a)
That should give you the documents decoded as bson.M values. bson.M is a map[string]interface{}, so you can marshal it with encoding/json if you want JSON output:
jsonDocs, err := json.Marshal(a)
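A runnable sketch of that last step: plain map[string]interface{} values stand in for bson.M (which is that exact type), and the helper name marshalDocs is mine, for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalDocs turns decoded documents (plain maps standing in for bson.M)
// into a single JSON array string.
func marshalDocs(docs []map[string]interface{}) (string, error) {
	b, err := json.Marshal(docs)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	docs := []map[string]interface{}{{"name": "a"}, {"name": "b"}}
	s, _ := marshalDocs(docs)
	fmt.Println(s) // [{"name":"a"},{"name":"b"}]
}
```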

Printing MongoDB Collection Data - GoLang, results not as expected

I have MongoDB in a Docker container. I can connect to and update the DB just fine, and I can see the results in Compass. However, when it comes to grabbing a collection and printing the results, they don't print as I expect them to.
This is a snippet of my code:
db := client.Database("maccaption")
collection := client.Database("maccaption").Collection("JobBacklog")
res, err := collection.InsertOne(context.Background(), bson.M{"hello": "world"})
if err != nil {
    log.Fatal(err)
}
result := struct {
    Foo string
    Bar string
}{}
filter := bson.D{{"hello", "world"}}
err = collection.FindOne(context.Background(), filter).Decode(&result)
if err != nil {
    log.Fatal(err)
}
fmt.Println("Results", result)
I'm using the official mongo-go-driver, following the examples here: https://godoc.org/github.com/mongodb/mongo-go-driver/mongo
I know the DB is connected, I can see the update when I add to the DB and then it shows up in Compass when I run the code, but the collection.FindOne returns Results {0} when I expect it to return hello: world.
Can anyone help me with this?
Thanks!
You've inserted a document with a field hello with value "world". You're then trying to unpack that document into a struct with fields Foo and Bar. Neither of those are named Hello and neither has a bson tag, so there is nowhere it should unmarshal your hello field to. If you define instead:
result := struct {
    Hello string
}{}
It should unmarshal as desired.
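The same field-matching rule can be seen with the standard library's encoding/json (shown here as an analogy, since bson decoding follows similar name-matching rules); the helper names decodeWrong and decodeRight are mine, for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeWrong decodes into a struct whose fields (Foo, Bar) match nothing
// in the document, so both stay at their zero value "".
func decodeWrong(data []byte) (string, string) {
	var v struct{ Foo, Bar string }
	json.Unmarshal(data, &v) // error ignored in this sketch
	return v.Foo, v.Bar
}

// decodeRight decodes into a struct with a Hello field; the name matches
// the "hello" key (matching is case-insensitive), so it gets filled in.
func decodeRight(data []byte) string {
	var v struct{ Hello string }
	json.Unmarshal(data, &v) // error ignored in this sketch
	return v.Hello
}

func main() {
	data := []byte(`{"hello": "world"}`)
	foo, bar := decodeWrong(data)
	fmt.Printf("%q %q\n", foo, bar) // "" ""
	fmt.Println(decodeRight(data))  // world
}
```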

Cannot Upsert only one value on struct interface from Mongo record [mgo]:golang

Basically, I want to update one value in a MongoDB document by passing the full struct as the change parameter to collection.Upsert(selector, change). How do I do this without the other values being reset to empty? The other fields (type, category, reportedby, createon, info) should keep their existing values; only plant and location should be updated (to PLANT07 and BAR).
NOTE: I want to use the complete ServiceNotification struct to do this.
DatabaseName:WO
CollectionName:SERVICE_NOTIFICATIONS
package models

//models.ServiceNotification
type ServiceNotification struct {
    NotificationNo string                 `json:"notification_no" bson:"notification_no"`
    Type           string                 `json:"type" bson:"type"`
    Category       string                 `json:"category" bson:"category"`
    Plant          string                 `json:"plant" bson:"plant"`
    Location       string                 `json:"location" bson:"location"`
    ReportedBy     string                 `json:"reportedby" bson:"reportedby"`
    Info           map[string]interface{} `json:"info" bson:"info"`
    SAPInfo        SAPNotificationInfo    `json:"sapinfo" bson:"sapinfo"`
    CreateOn       string                 `json:"createon" bson:"createon"`
    UpdateOn       string                 `json:"updateon" bson:"updateon"`
}
package main

func main() {
    input := models.ServiceNotification{
        NotificationNo: "000120",
        Plant:          "Plant07",
        Location:       "BAR",
    }
    Change_ServiceNotification(input)
}
I want to update plant and location by passing the complete struct to the mongo Upsert function, because I want to decide dynamically what should be updated. But when I update plant and location, the other values in the mongo record are LOST.
func Change_ServiceNotification(notification models.ServiceNotification) error {
    session, err := commons.GetMongoSession()
    if err != nil {
        return errors.New("Cannot create mongodb session: " + err.Error())
    }
    defer session.Close()
    col := session.DB(WO).C(SERVICE_NOTIFICATIONS)
    selector := bson.M{"notification_no": notification.NotificationNo}
    _, err = col.Upsert(selector, notification)
    if err != nil {
        return errors.New("Cannot update service notification: " + err.Error())
    }
    return nil
}
Appreciate your help
Thanks in advance
You cannot do it this way, but you can use MongoDB's $set operator (skipping error checking):
input := bson.M{
    "$set": bson.M{
        "plant": "Plant07",
        // Further fields here...
    },
}
selector := bson.M{"notification_no": notification.NotificationNo}
col.Upsert(selector, input)
This will update only the provided fields.
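To decide dynamically which fields go into the $set document, you can build it from only the non-empty values. A self-contained sketch, where plain maps stand in for bson.M (which is a map[string]interface{}) and the helper name buildSet is mine, for illustration:

```go
package main

import "fmt"

// buildSet assembles a partial-update document containing only the
// non-empty fields, so an Upsert with it leaves other fields untouched.
func buildSet(fields map[string]string) map[string]interface{} {
	set := map[string]interface{}{}
	for k, v := range fields {
		if v != "" { // skip zero values so they are not overwritten
			set[k] = v
		}
	}
	return map[string]interface{}{"$set": set}
}

func main() {
	update := buildSet(map[string]string{
		"plant":    "Plant07",
		"location": "BAR",
		"category": "", // empty: left out of the update
	})
	fmt.Println(update)
}
```

Passing the result as the change document to Upsert then updates only the fields that were actually set.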

Creating an array/slice to store DB Query results in Golang

I'm just getting started with golang and I'm attempting to read several rows from a Postgres users table and store the result as an array of User structs that model the row.
type User struct {
    Id    int
    Title string
}

func Find_users(db *sql.DB) {
    // Query the DB
    rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
    if err != nil { log.Fatal(err) }

    // Initialize array slice of all users. What size do I use here?
    // I don't know the number of results beforehand
    var users = make([]User, ????)

    // Loop through each result record, creating a User struct for each row
    defer rows.Close()
    for i := 0; rows.Next(); i++ {
        err := rows.Scan(&id, &title)
        if err != nil { log.Fatal(err) }
        log.Println(id, title)
        users[i] = User{Id: id, Title: title}
    }

    // .... do stuff
}
As you can see, my problem is that I want to initialize an array or slice beforehand to store all the DB records, but I don't know ahead of time how many records there are going to be.
I was weighing a few different approaches, and wanted to find out which of the following is most used in the Go community:
Create a really large array beforehand (e.g. 10,000 elements). Seems wasteful
Count the rows explicitly beforehand. This could work, but I need to run 2 queries - one to count and one to get the results. If my query is complex (not shown here), that's duplicating that logic in 2 places. Alternatively I can run the same query twice, but first loop through it and count the rows. All this would work, but it just seems unclean.
I've seen examples of expanding slices. I don't quite understand slices well enough to see how it could be adapted here. Also if I'm constantly expanding a slice 10k times, it definitely seems wasteful.
Go has a built-in append function for exactly this purpose. It takes a slice and one or more elements and appends those elements to the slice, returning the new slice. Additionally, the zero value of a slice (nil) is a slice of length zero, so if you append to a nil slice, it will work. Thus, you can do:
type User struct {
    Id    int
    Title string
}

func Find_users(db *sql.DB) {
    // Query the DB
    rows, err := db.Query(`SELECT u.id, u.title FROM users u;`)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    var users []User
    for rows.Next() {
        var id int
        var title string
        err := rows.Scan(&id, &title)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(id, title)
        users = append(users, User{Id: id, Title: title})
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
    // ...
}
Using append on a slice:
type DeviceInfo struct {
    DeviceName     string
    DeviceID       string
    DeviceUsername string
    Token          string
}

func QueryMultiple(db *sql.DB) {
    var device DeviceInfo
    sqlStatement := `SELECT "deviceName", "deviceID", "deviceUsername",
        token FROM devices LIMIT 10`
    rows, err := db.Query(sqlStatement)
    if err != nil {
        panic(err)
    }
    defer rows.Close()
    var deviceSlice []DeviceInfo
    for rows.Next() {
        // Scan targets must be in the same order as the selected columns.
        if err := rows.Scan(&device.DeviceName, &device.DeviceID,
            &device.DeviceUsername, &device.Token); err != nil {
            panic(err)
        }
        deviceSlice = append(deviceSlice, device)
    }
    fmt.Println(deviceSlice)
}
I think that what you are looking for is the capacity.
The following allocates a slice that can hold 10,000 items:
users := make([]User, 0, 10_000)
but the slice itself is still empty (len(users) == 0).
Now you can add at least 10,000 items before the backing array needs to grow. For that purpose, append() works as expected:
users = append(users, User{...})
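A runnable sketch of preallocating capacity and then appending (using int elements for brevity; the helper name grow is mine, for illustration):

```go
package main

import "fmt"

// grow appends n items to a slice preallocated with capacity n and returns
// the final length and capacity; since the appends never exceed the
// capacity, the backing array is never reallocated.
func grow(n int) (length, capacity int) {
	users := make([]int, 0, n) // len 0, cap n
	for i := 0; i < n; i++ {
		users = append(users, i)
	}
	return len(users), cap(users)
}

func main() {
	l, c := grow(10000)
	fmt.Println(l, c) // 10000 10000
}
```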
Slices are grown automatically when append() runs out of capacity. The Go runtime roughly doubles the backing array while the slice is small and uses a smaller growth factor for large slices; the exact policy is an implementation detail. If the size were always doubled, the allocation for 10,000 elements would be:
math.Pow(2, math.Ceil(math.Log2(10_000)))
which is 2^14, i.e. 16,384.
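That round-up can be computed without floating point. A small sketch under the strict-doubling assumption above (the helper name nextPow2 is mine, for illustration):

```go
package main

import (
	"fmt"
	"math/bits"
)

// nextPow2 rounds n up to the next power of two, which is what a
// strictly-doubling growth policy would end up allocating.
func nextPow2(n uint) uint {
	if n <= 1 {
		return 1
	}
	// bits.Len(n-1) is the number of bits needed for n-1;
	// shifting 1 by that amount gives the next power of two >= n.
	return 1 << bits.Len(n-1)
}

func main() {
	fmt.Println(nextPow2(10000)) // 16384
}
```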
Note: if your query uses a covering INDEX, i.e. the WHERE clause matches the index one to one, then an extra SELECT COUNT(*) ... is relatively cheap, since the count can be answered from the index without scanning the rows of your tables.