I have logic that queries data from MongoDB and produces Kafka messages. I want to exclude a specific key from the MongoDB results (e.g. key name: def). Please help.
//1. search mongodb
connectionMongodbInfo := common.MongodbIpInfo()
clientOptions := options.Client().ApplyURI(connectionMongodbInfo)
client, err := mongo.Connect(context.TODO(), clientOptions)
if err != nil {return result, common.CCErrServiceUnavailable}
defer client.Disconnect(context.TODO())
backupCollection := client.Database(common.MongodbName()).Collection(common.MongodbCollection())
In the logic below, it seems that a statement excluding the key def needs to be added.
cursor, err := backupCollection.Find(context.TODO(), bson.D{{"abc", abc}})
if err != nil {return result, common.CCErrServiceUnavailable}
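One way to leave def out of the returned documents (a sketch, not part of the original code, assuming the v1.x driver's SetProjection helper and that def is a top-level field) is to pass a projection to Find:
findOptions := options.Find().SetProjection(bson.D{{"def", 0}}) // 0 excludes the field from the result documents
cursor, err := backupCollection.Find(context.TODO(), bson.D{{"abc", abc}}, findOptions)
if err != nil {return result, common.CCErrServiceUnavailable}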
Below is the logic that receives the MongoDB data and produces the Kafka message.
//2. produce kafka message
for cursor.Next(context.TODO()) {
var elem bson.M
err := cursor.Decode(&elem)
if err != nil {return result, common.CCErrServiceUnavailable}
JsonData, jsonErr := json.Marshal(elem)
if jsonErr != nil {return result, common.CCErrServiceUnavailable}
var data map[string]interface{}
json.Unmarshal([]byte(JsonData), &data)
str := fmt.Sprint(data["abc"])
common.SendMessageAsync(topic, str, string(JsonData))
}
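Alternatively, since each document is decoded into a bson.M map, the key can simply be removed from the map before marshaling (again just a sketch, reusing the names from the loop above):
// inside the cursor loop, after decoding into elem:
delete(elem, "def")                     // drop the def key from the decoded document
JsonData, jsonErr := json.Marshal(elem) // marshal the remaining fields as before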
Related
I have a collection of 2,000,000 records
> db.events.count();
2000000
and I use the Go MongoDB client to connect to the database
package main
import (
"context"
"encoding/json"
"log"
"time"
"github.com/Shopify/sarama"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27888").SetAuth(options.Credential{
Username: "mongoadmin",
Password: "secret",
}))
if err != nil {
panic(err)
}
defer func() {
if err = client.Disconnect(ctx); err != nil {
panic(err)
}
}()
collection := client.Database("test").Collection("events")
var bs int32 = 10000
var b = true
cur, err := collection.Find(context.Background(), bson.D{}, &options.FindOptions{
BatchSize: &bs, NoCursorTimeout: &b})
if err != nil {
log.Fatal(err)
}
defer cur.Close(ctx)
s, n := runningtime("retrive db from mongo and publish to kafka")
count := 0
for cur.Next(ctx) {
var result bson.M
err := cur.Decode(&result)
if err != nil {
log.Fatal(err)
}
bytes, err := json.Marshal(result)
if err != nil {
log.Fatal(err)
}
count++
msg := &sarama.ProducerMessage{
Topic: "hello",
// Key: sarama.StringEncoder("aKey"),
Value: sarama.ByteEncoder(bytes),
}
asyncProducer.Input() <- msg
}
But the program only retrieves about 600,000 records instead of 2,000,000 every time I run it.
$ go run main.go
done
count = 605426
nErrors = 0
2020/09/18 11:23:43 End: retrive db from mongo and publish to kafka took 10.080603336s
I don't know why. I want to retrieve all 2,000,000 records. Thanks for any help.
Your loop fetching the results may end early because you are using the same ctx context, which has a 10-second timeout, for iterating over the results.
This means that if retrieving and processing the 2 million records (including connecting) takes more than 10 seconds, the context will be cancelled, and thus the cursor will also report an error.
Note that setting FindOptions.NoCursorTimeout to true is only to prevent cursor timeout for inactivity, it does not override the used context's timeout.
Use another context for executing the query and iterating over the results, one that does not have a timeout, e.g. context.Background().
Also note that when constructing the options for Find, you can use the helper methods, so it can look as simple and elegant as this:
options.Find().SetBatchSize(10000).SetNoCursorTimeout(true)
So the working code:
ctx2 := context.Background()
cur, err := collection.Find(ctx2, bson.D{},
options.Find().SetBatchSize(10000).SetNoCursorTimeout(true))
// ...
for cur.Next(ctx2) {
// ...
}
// Also check error after the loop:
if err := cur.Err(); err != nil {
log.Printf("Iterating over results failed: %v", err)
}
I'm trying to add some JSON data from an API to a database but get this error when trying:
cannot transform type bson.Raw to a BSON Document: length read exceeds number of bytes available. length=259839 bytes=1919951
I know the JSON is well below MongoDB's 16 MB limit; I've even tried importing just some small data from this API but get the same error. I was able to import a test struct to see that it was working, but my API data doesn't seem to be going through. Is there some type of conversion I need to do with my API data? Here is my Go code:
func main() {
url := "http://api.open-notify.org/astros.json"
resp, err := http.Get(url)
if err != nil {
log.Fatalln(err)
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatalln(err)
}
// _ = body
log.Println(string(body))
clientOptions := options.Client().ApplyURI("mongodb+srv://username:password@cluster0-slmxe.mongodb.net/dbtest?retryWrites=true&w=majority")
// Connect to MongoDB
client, err := mongo.Connect(context.TODO(), clientOptions)
if err != nil {
log.Fatal(err)
}
err = client.Ping(context.TODO(), nil)
if err != nil {
log.Fatal(err)
}
fmt.Println("Connected to database")
collection := client.Database("dbtest").Collection("test")
insertResult, err := collection.InsertOne(context.TODO(), body)
if err != nil {
log.Fatal(err)
}
fmt.Println("Inserted", insertResult.InsertedID)
}
You need to wrap your JSON with bson.D to be able to send the data to MongoDB. This builds a BSON representation from native Go types. Example below:
// insert the document {name: "Alice"}
res, err := coll.InsertOne(context.TODO(), bson.D{{"name", "Alice"}})
if err != nil {
log.Fatal(err)
}
Please refer to the following documentation:
https://pkg.go.dev/go.mongodb.org/mongo-driver@v1.3.4/mongo?tab=doc#Collection.InsertOne
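Applied to the question's code, one option (a sketch, assuming the API response body is a single JSON object) is to unmarshal the raw bytes into a bson.M and insert that instead of the []byte value:
var doc bson.M
if err := json.Unmarshal(body, &doc); err != nil {
	log.Fatal(err)
}
insertResult, err := collection.InsertOne(context.TODO(), doc)
if err != nil {
	log.Fatal(err)
}
fmt.Println("Inserted", insertResult.InsertedID)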
I checked out the answer here but this uses the old and unmaintained mgo. How can I find all documents in a collection using the mongo-go-driver?
I tried passing a nil filter, but this does not return any documents and instead returns nil. I also checked the documentation but did not see any mention of returning all documents. Here is what I've tried, with the aforementioned result.
client, err := mongo.Connect(context.TODO(), "mongodb://localhost:27017")
coll := client.Database("test").Collection("albums")
if err != nil { fmt.Println(err) }
// we can assume we're connected...right?
fmt.Println("connected to mongodb")
var results []*Album
findOptions := options.Find()
cursor, err := coll.Find(context.TODO(), nil, findOptions)
if err != nil {
fmt.Println(err) // prints 'document is nil'
}
Also, I'm a bit confused about why I need to specify findOptions when I've already called the Find() function on the collection (or do I not need to specify it?).
Here is what I came up with using the official MongoDB driver for golang. I am using godotenv (https://github.com/joho/godotenv) to pass the database parameters.
//Find multiple documents
func FindRecords() {
err := godotenv.Load()
if err != nil {
fmt.Println(err)
}
//Get database settings from env file
//dbUser := os.Getenv("db_username")
//dbPass := os.Getenv("db_pass")
dbName := os.Getenv("db_name")
docCollection := "retailMembers"
dbHost := os.Getenv("db_host")
dbPort := os.Getenv("db_port")
dbEngine := os.Getenv("db_type")
//set client options
clientOptions := options.Client().ApplyURI("mongodb://" + dbHost + ":" + dbPort)
//connect to MongoDB
client, err := mongo.Connect(context.TODO(), clientOptions)
if err != nil {
log.Fatal(err)
}
//check the connection
err = client.Ping(context.TODO(), nil)
if err != nil {
log.Fatal(err)
}
fmt.Println("Connected to " + dbEngine)
db := client.Database(dbName).Collection(docCollection)
//find records
//pass these options to the Find method
findOptions := options.Find()
//Set the limit of the number of record to find
findOptions.SetLimit(5)
//Define an array in which you can store the decoded documents
var results []Member
//Passing the bson.D{{}} as the filter matches documents in the collection
cur, err := db.Find(context.TODO(), bson.D{{}}, findOptions)
if err != nil {
log.Fatal(err)
}
//Finding multiple documents returns a cursor
//Iterate through the cursor allows us to decode documents one at a time
for cur.Next(context.TODO()) {
//Create a value into which the single document can be decoded
var elem Member
err := cur.Decode(&elem)
if err != nil {
log.Fatal(err)
}
results = append(results, elem)
}
if err := cur.Err(); err != nil {
log.Fatal(err)
}
//Close the cursor once finished
cur.Close(context.TODO())
fmt.Printf("Found multiple documents: %+v\n", results)
}
Try passing an empty bson.D instead of nil:
cursor, err := coll.Find(context.TODO(), bson.D{})
Also, FindOptions is optional.
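Putting that together with the Album type from the question, a minimal sketch (assuming a driver version that provides Cursor.All, available since v1.1):
cursor, err := coll.Find(context.TODO(), bson.D{})
if err != nil {
	log.Fatal(err)
}
var results []*Album
if err := cursor.All(context.TODO(), &results); err != nil {
	log.Fatal(err)
}
fmt.Printf("found %d albums\n", len(results))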
Disclaimer: I've never used the official driver, but there are a few examples at https://godoc.org/go.mongodb.org/mongo-driver/mongo
Seems like their tutorial is outdated :/
I need to build a query using comparison operators, the equivalent of db.inventory.find( { qty: { $gt: 20 } } ), using the official driver. Any idea how to do that?
Connecting to a server is something like:
client, err := mongo.NewClient("mongodb://foo:bar@localhost:27017")
if err != nil { log.Fatal(err) }
err = client.Connect(context.TODO())
if err != nil { log.Fatal(err) }
Then obtain the inventory mongo.Collection like:
coll := client.Database("baz").Collection("inventory")
Then you can execute your query using Collection.Find() like:
ctx := context.Background()
cursor, err := coll.Find(ctx,
bson.NewDocument(
bson.EC.SubDocumentFromElements("qty",
bson.EC.Int32("$gt", 20),
),
),
)
defer cursor.Close(ctx) // Make sure you close the cursor!
Reading the results using the mongo.Cursor:
doc := bson.NewDocument()
for cursor.Next(ctx) {
doc.Reset()
if err := cursor.Decode(doc); err != nil {
// Handle error
log.Printf("cursor.Decode failed: %v", err)
return
}
// Do something with doc:
log.Printf("Result: %v", doc)
}
if err := cursor.Err(); err != nil {
log.Printf("cursor.Err: %v", err)
}
Note: I used a single bson.Document value to read all documents, and called its Document.Reset() method at the beginning of each iteration to clear it and "prepare" it for reading a new document into it. If you want to store the documents (e.g. in a slice), you obviously cannot do this. In that case, just create a new doc in each iteration, like doc := bson.NewDocument().
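For reference, with the stable v1.x driver the same query can be written with bson.D / bson.M literals instead of the old bson.NewDocument builder; a sketch assuming the current go.mongodb.org/mongo-driver packages:
cursor, err := coll.Find(ctx, bson.D{{"qty", bson.D{{"$gt", 20}}}})
if err != nil {
	log.Fatal(err)
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
	var doc bson.M
	if err := cursor.Decode(&doc); err != nil {
		log.Printf("cursor.Decode failed: %v", err)
		return
	}
	log.Printf("Result: %v", doc)
}
if err := cursor.Err(); err != nil {
	log.Printf("cursor.Err: %v", err)
}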
I have tried to put a form from a request (I don't know the structure of the data I get, for the moment) into a Mongo database.
Here is my code :
fmt.Println(r.Form)
for key, values := range r.Form { // range over map
for _, value := range values { // range over []string
fmt.Println(key, value)
}
}
fmt.Println(r.Form)
decoder := json.NewDecoder(r.Body)
session, err := mgo.Dial("127.0.0.1")
if err != nil {
panic(err)
}
defer session.Close()
// Optional. Switch the session to a monotonic behavior.
session.SetMode(mgo.Monotonic, true)
c2 := session.DB("finger_bag").C("finger")
data, err := bson.Marshal(decoder)
err2 := c2.Insert(data)
if (err2 != nil){
Info.Println("error")
Info.Println(err2)
}
Does anyone have any idea how to do it?
If you want to store the contents of r.Form, then store r.Form, rather than trying to unmarshal and remarshal the request body:
c2.Insert(r.Form)
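A slightly fuller sketch of that idea, assuming the code runs in an HTTP handler and reusing the session, c2, and Info names from the question (r.Form is a url.Values, i.e. a map[string][]string, which mgo's bson encoder can store directly, provided the form keys are valid MongoDB field names):
// Parse the form before reading r.Form.
if err := r.ParseForm(); err != nil {
	Info.Println(err)
	return
}
// Insert the parsed form values as one document; each key maps to an array of strings.
if err := c2.Insert(r.Form); err != nil {
	Info.Println(err)
}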