Currently building an API in Go using Postgres as the data source. For pooled connections I'm using pgxpool from jackc/pgx/v4. I have a route similar to /api/player/abilities/{id} that should respond with an array of abilities. The function that queries, scans, and returns the array of abilities works perfectly 9 times out of 10. However, at random, the function returns nil with no error for the same input data (id).
This is what the function looks like:
func (r *player) GetAbilitiesByID(id uint64) ([]Ability, error) {
    var abilities []Ability

    query := `SELECT * FROM x_abilities WHERE id = $1 ORDER BY slot ASC;`

    rows, err := r.conn.Query(
        context.Background(),
        query,
        id,
    )
    if err != nil {
        if err == pgx.ErrNoRows {
            return nil, errors.New("no rows returned")
        }
        return nil, err
    }

    for rows.Next() {
        var ability Ability
        err = rows.Scan(
            &ability.ID,
            &ability.Name,
            &ability.Description,
            &ability.IsSpecial,
            &ability.Slot,
            &ability.SpecialWeight,
            &ability.Created,
            &ability.Updated,
            &ability.Deleted,
        )
        if err != nil {
            return nil, err
        }
        abilities = append(abilities, ability)
    }

    if err := rows.Err(); err != nil {
        return nil, err
    }

    return abilities, nil
}
It's being called in the controller as such:
repo := abilitiesrepo.NewAbilitiesRepo(app.connection)

n, err := repo.GetAbilitiesByID(id)
if err != nil {
    app.serverError(w, err)
    return
}
I added logging at different stages of the function to see whether it wasn't firing or was throwing an unknown error, but it just looks like the query returns 0 rows.
This is what the log looks like when hitting the API 3 times (/api/players/abilities/1):
INFO 2022/11/10 15:42:12 Attempting Abilities Retrieval
INFO 2022/11/10 15:42:12 Running Abilities Query
INFO 2022/11/10 15:42:12 Scanned 4 abilities
INFO 2022/11/10 15:42:12 No errors, returning
INFO 2022/11/10 15:42:12 Abilities Retrieved!
INFO 2022/11/10 15:42:13 Attempting Abilities Retrieval
INFO 2022/11/10 15:42:13 Running Abilities Query
INFO 2022/11/10 15:42:13 Scanned 4 abilities
INFO 2022/11/10 15:42:13 No errors, returning
INFO 2022/11/10 15:42:13 Abilities Retrieved!
INFO 2022/11/10 15:42:14 Attempting Abilities Retrieval
INFO 2022/11/10 15:42:14 Running Abilities Query
INFO 2022/11/10 15:42:14 Scanned 0 abilities
INFO 2022/11/10 15:42:14 No errors, returning
INFO 2022/11/10 15:42:14 Abilities Retrieved!
Not really sure if I'm missing something.
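One purely diagnostic suggestion (not from the original post): log which database and server the pool is actually talking to, to rule out the pool occasionally reaching a different instance or database. A minimal sketch, assuming r.conn is the *pgxpool.Pool from the code above and that a standard log package is available:

// Hypothetical diagnostic only: each call on the pool may use a different
// underlying connection, so this shows what the pool can reach, not which
// exact connection the next Query will use.
var dbName, addr string
err := r.conn.QueryRow(
    context.Background(),
    `SELECT current_database(), COALESCE(inet_server_addr()::text, 'local socket');`,
).Scan(&dbName, &addr)
if err == nil {
    log.Printf("abilities query targeting database %q on %s", dbName, addr)
}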
Related
I'm trying to get the most recent user message for each chat room, but in my version I just get an array of all the messages that have been sent across all the chats.
func (r *Mongo) findLastMessages(ctx context.Context, chatIds []string) ([]*Message, error) {
    if len(chatIds) == 0 {
        return nil, nil
    }

    query := bson.M{"chat_id": bson.M{"$in": chatIds}}

    cursor, err := r.colMessage.Find(ctx, query, nil)
    if err != nil {
        return nil, err
    }

    var messages []*Message
    if err = cursor.All(ctx, &messages); err != nil {
        return nil, err
    }

    err = cursor.Close(ctx)
    if err != nil {
        return nil, ErrInternal
    }
    return messages, err
}
Is there any way I can filter the results so that I get only the last message for each chat?
And
Should I perhaps use aggregations for this purpose? If so, is it better to loop over Find calls or to use an aggregation?
Assuming that by "last" you mean the one with the most recent timestamp, I can think of 2 ways to do it.
Both will perform better if there is an index on {chat_id: 1, timestamp: 1}.
1. Find matching a single chat_id, sort by timestamp descending, with a limit of 1. This requires loading only the desired document, for a 1:1 scanned:returned ratio. Repeat for each chat.
2. Aggregation to match an array of chats at once, sort by timestamp, and then group by chat_id, selecting only the first message from each. This requires loading many messages from each chat, but it returns all of the documents in a single operation with a single network round trip. (A rough sketch of this pipeline follows the list of considerations below.)
Which method is better for you would depend on:
how expensive is the network round trip
how much delay will there be due to the resource overhead of scanning all of the extra documents
how often the query will run
how many instances of the query will be run simultaneously
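For the aggregation option, here is a rough sketch of what the pipeline could look like with the Go driver, reusing the Mongo and Message types from the question. The "timestamp" field name and the findLastMessagesAgg name are assumptions for illustration, and the sketch assumes the chat_id/timestamp index mentioned above exists; it uses the same mongo and bson packages as the question's code.

func (r *Mongo) findLastMessagesAgg(ctx context.Context, chatIds []string) (map[string]*Message, error) {
    if len(chatIds) == 0 {
        return nil, nil
    }

    pipeline := mongo.Pipeline{
        // keep only messages from the requested chats
        {{Key: "$match", Value: bson.M{"chat_id": bson.M{"$in": chatIds}}}},
        // newest first within each chat (assumes a "timestamp" field)
        {{Key: "$sort", Value: bson.D{{Key: "chat_id", Value: 1}, {Key: "timestamp", Value: -1}}}},
        // keep only the first (newest) document per chat
        {{Key: "$group", Value: bson.M{
            "_id":  "$chat_id",
            "last": bson.M{"$first": "$$ROOT"},
        }}},
    }

    cursor, err := r.colMessage.Aggregate(ctx, pipeline)
    if err != nil {
        return nil, err
    }

    var rows []struct {
        ChatID string   `bson:"_id"`
        Last   *Message `bson:"last"`
    }
    if err := cursor.All(ctx, &rows); err != nil {
        return nil, err
    }

    result := make(map[string]*Message, len(rows))
    for _, row := range rows {
        result[row.ChatID] = row.Last
    }
    return result, nil
}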
My question is: should I connect to MongoDB in each of my handlers (in their own goroutines) and then disconnect, or should I connect to MongoDB once when the app starts and keep that connection alive for use in my handlers?
Which is the better approach?
I would be thankful if you could explain the advantages and disadvantages.
The latter is better: connect to MongoDB when the app starts, keep the connection alive, and use that connection in your handlers.
It saves you from having to connect to the database every time you need to interact with it, and from dealing with cases where the connection is inconsistent; the former approach can lead to a lot of complexity.
Conventionally, connecting to your db should occur once (probably in your main.go file) and you can reference the connection in other parts of the project.
Here is my approach, which has proven to be quite performant.
Define an internal data package like so:
package data

import (
    "context"
    "os"
    "time"

    "github.com/joho/godotenv"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// MongoConnect reads the connection string from .env and returns a connected client.
func MongoConnect() (*mongo.Client, error) {
    godotenv.Load(".env")
    str := os.Getenv("CONNECTION_STRING")

    client, err := mongo.NewClient(options.Client().ApplyURI(str))
    if err != nil {
        return nil, err
    }

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    if err := client.Connect(ctx); err != nil {
        return nil, err
    }
    return client, nil
}

// ConnectToCollection dials a client and returns a handle to the named collection.
func ConnectToCollection(collectionName string) (*mongo.Collection, error) {
    client, err := MongoConnect()
    if err != nil {
        return nil, err
    }
    return client.Database("DATABASE_NAME").Collection(collectionName), nil
}
Once this is done, you can use ConnectToCollection in every endpoint group (package) that needs it, assigned to a package-level variable, e.g.:
package xxx

import "app_name/whatever/data"

var coll, _ = data.ConnectToCollection("xxx_collection_name")

func myFunction() {
    coll.InsertOne(...)
}
I am facing a compile error with the code below:
stmt, err2 := db.Prepare("SELECT COUNT(*) FROM xyz WHERE product_id=? and chart_number=?")
rows, err2 := stmt.Query(bidStatusReqVal.ProductId, bidStatusReqVal.ChartNumber).Scan(&count)
Query(...).Scan(...) is not valid because Query returns two values, and chaining calls requires that the previous call return only one value. Either call Scan on the returned rows, or use QueryRow(...).Scan(...), which leaves only err as the return value.
rows, err := stmt.Query(bidStatusReqVal.ProductId, bidStatusReqVal.ChartNumber)
if err != nil {
    return err
}
defer rows.Close()

for rows.Next() {
    if err := rows.Scan(&count); err != nil {
        return err
    }
}
if err := rows.Err(); err != nil {
    return err
}
// ...
In cases where the query returns only a single row, e.g. SELECT ... LIMIT 1, or SELECT COUNT(*) ... like in your case, it is much more convenient to use QueryRow.
err := stmt.QueryRow(bidStatusReqVal.ProductId, bidStatusReqVal.ChartNumber).Scan(&count)
if err != nil {
    return err
}
// ...
This question already has answers here:
Scan function by reference or by value
(1 answer)
I want to check if record exist and if not exist then i want to insert that record to database using golang
(4 answers)
I currently have:
func foo(w http.ResponseWriter, req *http.Request) {
    chekr := `SELECT FROM public."Users" WHERE email=$1`
    err = db.QueryRow(chekr, usr.Email).Scan()
    if err != sql.ErrNoRows {
        data, err := json.Marshal("There is already a user with this email")
        if err != nil {
            w.Write(data)
        }
    }
    // code that should run if email isn't found
}
However, I find it never works and always falls into the if block.
As the above comment stated, I forgot the * (or a 1) after SELECT. QueryRow works; I just had another error somewhere. As others have stated, there are other errors as well; this is just one case, for testing.
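For completeness, a hedged sketch of what the corrected check might look like (the exact query and the handling around it are assumptions, not the code from the post): select a constant column and treat sql.ErrNoRows as "email not taken".

var one int
err := db.QueryRow(`SELECT 1 FROM public."Users" WHERE email=$1`, usr.Email).Scan(&one)
switch {
case err == sql.ErrNoRows:
    // no user with this email; continue with the insert below
case err != nil:
    http.Error(w, err.Error(), http.StatusInternalServerError)
    return
default:
    http.Error(w, "There is already a user with this email", http.StatusConflict)
    return
}
// code that should run if email isn't found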
So I'm having some trouble figuring out best practices for using concurrency with MongoDB in Go. My first implementation of getting a session looked like this:
var globalSession *mgo.Session

func getSession() (*mgo.Session, error) {
    // Establish our database connection
    if globalSession == nil {
        var err error
        globalSession, err = mgo.Dial(":27017")
        if err != nil {
            return nil, err
        }
        // Optional. Switch the session to a monotonic behavior.
        globalSession.SetMode(mgo.Monotonic, true)
    }
    return globalSession.Copy(), nil
}
This works great. The trouble I'm running into is that Mongo has a limit of 204 connections, after which it starts refusing them (connection refused because too many open connections: 204). However, since I'm calling session.Copy(), it only returns a session and not an error, so even though the connection was refused, my program never throws an error.
Now, what I thought about doing is just having one session and using that instead of a copy, so I have access to a connection error, like so:
var session *mgo.Session = nil

func NewSession() (*mgo.Session, error) {
    if session == nil {
        session, err = mgo.Dial(url)
        if err != nil {
            return nil, err
        }
    }
    return session, nil
}
Now the problem I have with this is that I don't know what would happen if I try to use that same session concurrently.
The key is to duplicate the session and then close it when you've finished with it.
func GetMyData() []myMongoDoc {
    sessionCopy, _ := getSession() // from the question above
    defer sessionCopy.Close()      // this is the important bit
    results := make([]myMongoDoc, 0)
    sessionCopy.DB("myDB").C("myCollection").Find(nil).All(&results)
    return results
}
Having said that, it looks like mgo doesn't actually expose control over the underlying connections (see the comment from Gustavo Niemeyer, who maintains the library). A session pretty much equates to a connection, but even if you call Close() on a session, mgo keeps the connection alive. From reading around, it seems that Clone() might be the way to go, as it reuses the underlying socket and so avoids the three-way handshake of creating a new one (see here for more discussion on the difference).
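A minimal sketch of the Clone() variant just described, assuming the globalSession from the question has already been initialised (the function and collection names are illustrative, not from the original answer):

func CountMyDocs() (int, error) {
    // Clone reuses the underlying socket instead of opening a new one;
    // Close still returns the session's resources when we're done.
    s := globalSession.Clone()
    defer s.Close()
    return s.DB("myDB").C("myCollection").Count()
}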
Also see this SO answer describing a standard pattern to handle sessions.