Can I use the Query() func twice in the pgx Golang lib? - postgresql

I use Query() to find all the DB data, and one of the tables needs to be an array, so I integrated a loop.
Every time the error is conn is busy. It happens inside
for rows.Next() {
var ord Order
err = rows.Scan(
after one more Query() call, which is used to find all Items with ord.OrderUID. So the question is: what is wrong in my code, and how should I do this? Here is my FindAll func:
rows, err := r.client.Query(ctx, q)
if err != nil {
return nil, err
}
defer rows.Close()
orders := make([]Order, 0)
for rows.Next() {
var ord Order
err = rows.Scan(
&ord.OrderUID,
&ord.TrackNumber,
&ord.Entry,
&ord.Delivery.Name,
&ord.Delivery.Phone,
&ord.Delivery.Zip,
&ord.Delivery.City,
&ord.Delivery.Address,
&ord.Delivery.Region,
&ord.Delivery.Email,
&ord.Payment.Transaction,
&ord.Payment.RequestID,
&ord.Payment.Currency,
&ord.Payment.Provider,
&ord.Payment.Amount,
&ord.Payment.PaymentDT,
&ord.Payment.Bank,
&ord.Payment.DeliveryCost,
&ord.Payment.GoodsTotal,
&ord.Payment.CustomFee,
&ord.Locale,
&ord.InternalSignature,
&ord.CustomerID,
&ord.DeliveryService,
&ord.ShardKey,
&ord.SmID,
&ord.DateCreated,
&ord.OofShard,
)
if err != nil {
return nil, err
}
iq := ` ... `
itemRows, err := r.client.Query(ctx, iq, ord.OrderUID)
if err != nil {
return nil, err
}
items := make([]item.Item, 0)
for itemRows.Next() {
var item item.Item
err = itemRows.Scan(
&item.ID,
&item.ChrtID,
&item.TrackNumber,
&item.Price,
&item.Rid,
&item.Name,
&item.Sale,
&item.Size,
&item.TotalPrice,
&item.NmID,
&item.Brand,
&item.Status,
)
if err != nil {
return nil, err
}
items = append(items, item)
}
ord.Items = items
orders = append(orders, ord)
}
I tried rows.Close(), but then I can't use rows anymore.
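A single pgx connection cannot execute two queries at the same time, which is what the nested Query() inside for rows.Next() attempts, and why pgx reports conn is busy. A minimal sketch of one workaround, assuming r.client is a single *pgx.Conn: drain and close the first result set before running the per-order item queries (the Scan arguments are elided here; they are the same as above).
orders := make([]Order, 0)
rows, err := r.client.Query(ctx, q)
if err != nil {
	return nil, err
}
for rows.Next() {
	var ord Order
	if err := rows.Scan( /* &ord.OrderUID, ... as above */ ); err != nil {
		rows.Close()
		return nil, err
	}
	orders = append(orders, ord)
}
rows.Close() // the connection is free again from this point on
if err := rows.Err(); err != nil {
	return nil, err
}
for i := range orders {
	itemRows, err := r.client.Query(ctx, iq, orders[i].OrderUID)
	if err != nil {
		return nil, err
	}
	items := make([]item.Item, 0)
	for itemRows.Next() {
		var it item.Item
		if err := itemRows.Scan( /* &it.ID, ... as above */ ); err != nil {
			itemRows.Close()
			return nil, err
		}
		items = append(items, it)
	}
	itemRows.Close()
	if err := itemRows.Err(); err != nil {
		return nil, err
	}
	orders[i].Items = items
}
Alternatively, if r.client can be a pgxpool.Pool instead of a single connection, the nested calls work as originally written, because the pool checks out a separate connection for each Query.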

Related

Receiving error (*errors.errorString) *{s: "pq: unexpected DataRow in simple query execution"}

The error
(*errors.errorString) *{s: "pq: unexpected DataRow in simple query execution"}
appears after the line with the comment in the code below. I didn't find any solution online. Since Stack Overflow asks for more details: this is an update query that is supposed to update a todo and a list of subtasks in the database. The exact error is in the question title. Here is the complete code for the function that returns the error.
func (t *TodoTable) UpdateTodo(ctx context.Context, todo *Todo, t_id int) error {
tx, err := t.sqlxdb.BeginTxx(ctx, &sql.TxOptions{})
if err != nil {
return err
}
rollback_err := func(err error) error {
if err2 := tx.Rollback(); err2 != nil {
return fmt.Errorf("%v; %v", err, err2)
}
return err
}
row := tx.QueryRowxContext(ctx, "UPDATE todos SET todo_name=$1, deadline=$2, updated_at=$3 WHERE todo_id=$4 returning todo_id", todo.TodoName, todo.Deadline, todo.UpdatedAt, t_id)
if row.Err() != nil {
return rollback_err(err)
}
var subs_ids []int
// Getting subs ids from database
query := fmt.Sprintf("SELECT sub_id FROM subs WHERE todo_id=%d", t_id)
// THE ERROR COMES AFTER EXECUTING THE LINE BELOW
rows, err := tx.Query(query)
if err != nil {
rollback_err(err)
}
if rows != nil {
for rows.Next() {
var sub_id int
err = rows.Scan(&sub_id)
if err != nil {
rollback_err(err)
}
subs_ids = append(subs_ids, sub_id)
}
if err := tx.Commit(); err != nil {
return rollback_err(err)
}
}
// Updating subs
for i, sub := range todo.Subs {
_, err = tx.ExecContext(ctx, fmt.Sprintf("UPDATE subs SET sub_name='%s' WHERE sub_id=%d", sub.Sub_name, subs_ids[i]))
if err != nil {
return rollback_err(err)
}
}
return nil
}
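One likely cause, judging from the code above (an assumption, not verified against your schema): the row returned by QueryRowxContext(... RETURNING todo_id) is never scanned, so its DataRow is still pending on the transaction's connection when tx.Query runs. A minimal sketch of consuming that row first:
var todoID int
err = tx.QueryRowxContext(ctx,
	"UPDATE todos SET todo_name=$1, deadline=$2, updated_at=$3 WHERE todo_id=$4 RETURNING todo_id",
	todo.TodoName, todo.Deadline, todo.UpdatedAt, t_id,
).Scan(&todoID) // consume the pending row before issuing the next query
if err != nil {
	return rollback_err(err)
}
Two related issues worth noting: tx.Commit() is called inside the if rows != nil block, so the later tx.ExecContext calls run on an already-finished transaction, and the bare rollback_err(err) calls discard their result instead of returning it.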

Postgres pgx driver hangs on commit

I have a function that updates records in a table, issuing the request through the pgx Postgres driver. This function hangs on commit. Any ideas why this happens? Why can't I use a transaction in this case?
Of course, since the query is atomic, I can remove the transaction. But it is still unclear why this happens and what to do if I do need a transaction.
func (r *Repository) GetUpdatedItems(ctx context.Context, filters []string) ([]Item, error) {
conn, err := r.pool.Acquire(ctx)
// error handling
defer conn.Release()
tx, err := conn.Begin(ctx)
// error handling
defer func() {
closeCtx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
_ = tx.Rollback(closeCtx)
}()
query := fmt.Sprintf(`UPDATE %s
SET fieldOne = $1, fieldTwo = $2
WHERE otheField = '' OR otheField IS NULL
RETURNING fieldOne, fieldTwo, otheField, someMoreField;`,
r.tableName)
rows, err := conn.Query(ctx, query, filters[0], filters[1])
// error handling
defer rows.Close()
var retItems []Item
for rows.Next() {
var fieldOne string
var fieldTwo string
var otheField string
var someMoreField string
if err := rows.Scan(&fieldOne, &fieldTwo, &otheField, &someMoreField); err != nil {
return nil, fmt.Errorf("failed to scan item: %w", err)
}
item := Item{
FieldOne: fieldOne,
FieldTwo: fieldTwo,
OtheField: otheField,
SomeMoreField: someMoreField,
}
retItems = append(retItems, item)
}
if err := tx.Commit(ctx); err != nil {
return nil, fmt.Errorf("failed to commit transaction: %w", err)
}
return retItems, nil
}
tx.Query must be used instead of conn.Query.
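In a sketch, the only change is routing the query through the transaction, so it executes on the transaction's connection instead of competing with it:
rows, err := tx.Query(ctx, query, filters[0], filters[1])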
Thanks for the help!

Is there a way with GRPC to notify the stream of CRUD operations to give realtime updates to the client?

I am new to gRPC and I am trying to implement basic CRUD + listing. I use unary RPCs for the CRUD and a server stream for the listing. What I would like to do, however, is update the client whenever someone changes a record in the database that it is listing.
So, for example, user A is listing 10 companies, and user B updates one of those companies. I want user A's client to be updated once the update RPC is called.
This is what I have for now:
func RegisterCompanyServer(l hclog.Logger, gs *grpc.Server) {
r := postgres.NewPostgresCompanyRepository()
cs := NewCompanyServer(l, r)
pb.RegisterCompanyServiceServer(gs, cs)
}
type CompanyServer struct {
logger hclog.Logger
repo repo.CompanyRepository
pb.UnimplementedCompanyServiceServer
}
func NewCompanyServer(l hclog.Logger, r repo.CompanyRepository) *CompanyServer {
return &CompanyServer{
logger: l,
repo: r,
}
}
func (c *CompanyServer) ListCompany(req *pb.CompanyListRequest, stream pb.CompanyService_ListCompanyServer) error {
//Somehow listen to CreateCompany() and update the client
companies, err := c.repo.List(req.Query)
if err != nil {
return err
}
for _, c := range companies {
bytes, err := json.Marshal(c)
if err != nil {
return err
}
out := &pb.Company{}
if err = jsonEnc.Unmarshal(bytes, out); err != nil {
return err
}
res := &pb.CompanyListResponse{
Company: out,
}
err = stream.Send(res)
if err != nil {
return err
}
}
return nil
}
func (c *CompanyServer) CreateCompany(context context.Context, req *pb.CompanyCreateRequest) (*pb.CompanyCreateResponse, error) {
input := req.GetCompany()
if input == nil {
return nil, errors.New("Parsing Error")
}
bytes, err := jsonEnc.Marshal(input)
if err != nil {
return nil, err
}
company := &myCompany.Company{}
if err = json.Unmarshal(bytes, company); err != nil {
return nil, err
}
result, err := c.repo.Create(company)
if err != nil {
return nil, err
}
res := &pb.CompanyCreateResponse{
Id: result,
}
//Somehow notify the stream that a company was created
return res, nil
}
Is this even feasible with gRPC? What techniques are out there to do this? I am currently working with a PostgreSQL database.
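It is feasible. A common technique for a single server process is an in-process publish/subscribe hub: the unary CRUD handlers publish change events, and every open listing stream subscribes and forwards them to its client. The sketch below is hypothetical (the Broadcaster type, and the broadcast field it implies on CompanyServer, are not part of the original code):
type Broadcaster struct {
	mu   sync.Mutex
	subs map[chan *pb.Company]struct{}
}

func NewBroadcaster() *Broadcaster {
	return &Broadcaster{subs: make(map[chan *pb.Company]struct{})}
}

func (b *Broadcaster) Subscribe() chan *pb.Company {
	ch := make(chan *pb.Company, 16)
	b.mu.Lock()
	b.subs[ch] = struct{}{}
	b.mu.Unlock()
	return ch
}

func (b *Broadcaster) Unsubscribe(ch chan *pb.Company) {
	b.mu.Lock()
	delete(b.subs, ch)
	b.mu.Unlock()
}

func (b *Broadcaster) Publish(c *pb.Company) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ch := range b.subs {
		select {
		case ch <- c: // deliver to this stream
		default: // drop rather than block the publisher on a slow stream
		}
	}
}
ListCompany would then keep the stream open after sending the initial list:
ch := c.broadcast.Subscribe()
defer c.broadcast.Unsubscribe(ch)
for {
	select {
	case company := <-ch:
		if err := stream.Send(&pb.CompanyListResponse{Company: company}); err != nil {
			return err
		}
	case <-stream.Context().Done():
		return nil
	}
}
and CreateCompany would call c.broadcast.Publish(input) after c.repo.Create succeeds. If several server processes share the database, the event has to travel between processes instead, e.g. via a message broker or PostgreSQL's LISTEN/NOTIFY.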

Deserialize cursor into array with mongo-go-driver and interface

I am creating an API using Go, and I would like to write some functional tests. For that I created an interface to abstract my database, but I need to be able to convert the cursor to an array without knowing the type.
func (self *KeyController) GetKey(c echo.Context) (err error) {
var res []dto.Key
err = db.Keys.Find(bson.M{}, 10, 0, &res)
if err != nil {
fmt.Println(err)
return c.String(http.StatusInternalServerError, "internal error")
}
c.JSON(http.StatusOK, res)
return
}
//THE FIND FUNCTION ON THE DB PACKAGE
func (s MongoCollection) Find(filter bson.M, limit int, offset int, res interface{}) (err error) {
ctx := context.Background()
var cursor *mongo.Cursor
l := int64(limit)
o := int64(offset)
objectType := reflect.TypeOf(res).Elem()
cursor, err = s.c.Find(ctx, filter, &options.FindOptions{
Limit: &l,
Skip: &o,
})
if err != nil {
return
}
defer cursor.Close(ctx)
for cursor.Next(ctx) {
result := reflect.New(objectType).Interface()
err := cursor.Decode(&result)
if err != nil {
panic(err)
}
res = append(res.([]interface{}), result)
}
return
}
Does someone have an idea?
You can call the All method directly:
ctx := context.Background()
err = cursor.All(ctx, res)
if err != nil {
fmt.Println(err.Error())
}
For reference:
https://godoc.org/go.mongodb.org/mongo-driver/mongo#Cursor.All
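Applied to the Find method above, a minimal sketch (same signature; All decodes every document into the slice that res points to and closes the cursor):
func (s MongoCollection) Find(filter bson.M, limit int, offset int, res interface{}) (err error) {
	ctx := context.Background()
	l := int64(limit)
	o := int64(offset)
	cursor, err := s.c.Find(ctx, filter, &options.FindOptions{
		Limit: &l,
		Skip:  &o,
	})
	if err != nil {
		return
	}
	// All expects a pointer to a slice, e.g. the *[]dto.Key passed in
	// from GetKey, so no reflection is needed here.
	return cursor.All(ctx, res)
}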
I think you want to encapsulate the Find method for Mongo queries.
Using the reflect package, I have improved your code by adding an additional parameter that serves as a template for instantiating new slice items.
func (m *MongoDbModel) FindAll(database string, colname string, obj interface{}, parameter map[string]interface{}) ([]interface{}, error) {
var list = make([]interface{}, 0)
collection, err := m.Client.Database(database).Collection(colname).Clone()
objectType := reflect.TypeOf(obj).Elem()
fmt.Println("objectype", objectType)
if err != nil {
log.Println(err)
return nil, err
}
filter := bson.M{}
filter["$and"] = []bson.M{}
for key, value := range parameter {
filter["$and"] = append(filter["$and"].([]bson.M), bson.M{key: value})
}
cur, err := collection.Find(context.Background(), filter)
if err != nil {
log.Fatal(err)
}
defer cur.Close(context.Background())
for cur.Next(context.Background()) {
result := reflect.New(objectType).Interface()
err := cur.Decode(result)
if err != nil {
log.Println(err)
return nil, err
}
list = append(list, result)
}
if err := cur.Err(); err != nil {
return nil, err
}
return list, nil
}
The difference is that the FindAll method returns []interface{}, and err := cur.Decode(result) directly consumes a pointer such as the result variable created by reflect.New.
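A hypothetical usage sketch (the database, collection, and filter values here are invented): pass a pointer to the element type as the template, then assert each returned element back to that pointer type.
keys, err := m.FindAll("mydb", "keys", &dto.Key{}, map[string]interface{}{"active": true})
if err != nil {
	log.Println(err)
	return
}
for _, k := range keys {
	key := k.(*dto.Key) // each element was created by reflect.New, so it is a *dto.Key
	fmt.Println(key)
}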

Why does each transaction count as a client?

I am processing a bunch of files and then dumping the results into PostgreSQL. I would like to run many workers at the same time, but I keep getting the error "pq: sorry, too many clients already". This seems to happen whenever workers is > 100 or so. (For simplicity, the code below demonstrates the process, but instead of processing a file I simply insert 1M rows into each table.)
Since I am reusing the same *sql.DB, why am I getting this error? Does each transaction count as a client, or am I doing something wrong?
package main
import (
"database/sql"
"flag"
"fmt"
"log"
"sync"
"github.com/lib/pq"
)
func process(db *sql.DB, table string) error {
if _, err := db.Exec(fmt.Sprintf(`DROP TABLE IF EXISTS %v;`, table)); err != nil {
return err
}
col := "age"
s := fmt.Sprintf(`
CREATE TABLE %v (
pk serial PRIMARY KEY,
%v int NOT NULL
)`, table, col)
_, err := db.Exec(s)
if err != nil {
return err
}
tx, err := db.Begin()
if err != nil {
return err
}
defer func() {
if err != nil {
tx.Rollback()
return
}
err = tx.Commit()
}()
stmt, err := tx.Prepare(pq.CopyIn(table, col))
if err != nil {
return err
}
defer func() {
err = stmt.Close()
}()
for i := 0; i < 1e6; i++ {
if _, err = stmt.Exec(i); err != nil {
return err
}
}
return err
}
func main() {
var u string
flag.StringVar(&u, "user", "", "user")
var pass string
flag.StringVar(&pass, "pass", "", "pass")
var host string
flag.StringVar(&host, "host", "", "host")
var database string
flag.StringVar(&database, "database", "", "database")
var workers int
flag.IntVar(&workers, "workers", 10, "workers")
flag.Parse()
db, err := sql.Open("postgres",
fmt.Sprintf(
"user=%s password=%s host=%s database=%s sslmode=require",
u, pass, host, database,
),
)
if err != nil {
log.Fatalln(err)
}
defer db.Close()
db.SetMaxIdleConns(0)
var wg sync.WaitGroup
ch := make(chan int)
for i := 0; i < workers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for i := range ch {
table := fmt.Sprintf("_table%d", i)
log.Println(table)
if err := process(db, table); err != nil {
log.Fatalln(err)
}
}
}()
}
for i := 0; i < 300; i++ {
ch <- i
}
close(ch)
wg.Wait()
}
I realize I can simply increase the PostgreSQL settings, but I would like to understand what is going on (related: How to increase the max connections in postgres?).
Since I am reusing the same *sql.DB, why am I getting this error?
I suspect the Postgres driver is using a separate connection for each of your workers, which is a smart decision for most cases.
Does each transaction count as a client, or am I doing something wrong?
In your case, yes, each transaction counts as a client, because you are calling process() from goroutines. You are creating as many concurrent transactions as there are workers. Since each of your transactions is long-running, all of them probably hold an individual connection to the database at the same time, and hence you hit the limit.
go func() {
defer wg.Done()
for i := range ch {
table := fmt.Sprintf("_table%d", i)
log.Println(table)
if err := process(db, table); err != nil {
log.Fatalln(err)
}
}
}()
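Since each long transaction pins one connection, and database/sql opens new connections without bound by default, a minimal sketch of staying under the server's limit (assuming the PostgreSQL default max_connections of 100) is to cap the pool so workers queue for a free connection:
db.SetMaxOpenConns(50) // never more than 50 concurrent connections
db.SetMaxIdleConns(50) // reuse released connections instead of closing them
With a cap in place, the db.SetMaxIdleConns(0) call in main is counterproductive: every released connection would be closed and a new one dialed for the next transaction.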