I've found the way to make a bulk insert with classic Postgres types from this post, and it works like a charm. But for whatever reason, I struggle to make it work when trying to insert geometry points:
Using pgx.CopyFromRows:
rows := [][]interface{}{
    {"John", "ST_SetSRID(ST_MakePoint(1.23,2.34), 4326)"},
    {"Jane", "ST_SetSRID(ST_MakePoint(1.10,2.12), 4326)"},
}

// GetSession returns a *pgxpool.Pool
copyCount, err := postgres.GetSession().CopyFrom(
    context.Background(),
    pgx.Identifier{"test_db"},
    []string{"first_name", "location"},
    pgx.CopyFromRows(rows),
)
This gives me back:
"ERROR: Invalid endian flag value encountered. (SQLSTATE XX000)"
Using pgx.Batch:
batch := &pgx.Batch{}
batch.Queue("insert into people(first_name, location) values($1, $2)", "Bob", "ST_SetSRID(ST_MakePoint(1.23,1.34), 4326)")
batch.Queue("insert into people(first_name, location) values($1, $2)", "John", "ST_SetSRID(ST_MakePoint(1.23,1.34), 4326)")

batchResult := postgres.GetSession().SendBatch(context.Background(), batch)
_, err := batchResult.Exec()
if err != nil {
    return rest_errors.NewInternalServerError("Error processing batch insert", err)
}
batchResult.Close()
I get this error:
Error processing batch insert - parse error - invalid geometry (SQLSTATE XX000)
DB creation script:
CREATE TABLE IF NOT EXISTS public.test_db
(
    first_name text COLLATE pg_catalog."default",
    location geometry
)
Thank you so much
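A likely cause (my reading of the two errors above, not confirmed against your setup): query parameters are sent as values, never executed as SQL, so the string "ST_SetSRID(ST_MakePoint(...), 4326)" reaches Postgres literally. In the Batch case, Postgres tries to parse it with geometry's text input function, hence "parse error - invalid geometry"; CopyFrom uses the binary format, where the first byte of a geometry value is an endian flag, hence "Invalid endian flag value encountered". One sketch of a workaround for the Batch case is to bind points as EWKT, which geometry's text input does accept; ewktPoint is a hypothetical helper, not part of pgx:

```go
package main

import "fmt"

// ewktPoint formats a lon/lat pair as an EWKT literal, e.g.
// "SRID=4326;POINT(1.23 2.34)", which PostGIS can parse directly.
func ewktPoint(lon, lat float64) string {
	return fmt.Sprintf("SRID=4326;POINT(%g %g)", lon, lat)
}

func main() {
	// With pgx.Batch, the EWKT string binds as an ordinary parameter:
	//   batch.Queue("insert into test_db(first_name, location) values($1, $2)",
	//       "Bob", ewktPoint(1.23, 2.34))
	fmt.Println(ewktPoint(1.23, 2.34))
}
```

For CopyFrom, which speaks the binary protocol, a text literal won't help; you would need to encode the value as EWKB (for example via a third-party geometry package) or fall back to batched inserts.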
These are the tables in my database
CREATE TABLE vehicles
(
    id VARCHAR PRIMARY KEY,
    make VARCHAR NOT NULL,
    model VARCHAR NOT NULL
)
CREATE TABLE collisions
(
    id VARCHAR PRIMARY KEY,
    longitude FLOAT NOT NULL,
    latitude FLOAT NOT NULL
)
CREATE TABLE vehicle_collisions
(
    vehicle_id VARCHAR NOT NULL,
    collision_id VARCHAR NOT NULL,
    PRIMARY KEY (vehicle_id, collision_id)
)
So I need to find the list of vehicles with a particular collision_id. I am using GORM. I tried to implement it this way:
var vehicles []entities.Vehicles
err := r.db.
    Joins("JOIN vehicles as vh on vh.id=vehicle_collisions.vehicle_id").
    Where("vehicle_collisions.collision_id=?", id).
    Find(&vehicles).Error
if err != nil {
    fmt.Println(err)
}
But it is throwing this error:
ERROR: missing FROM-clause entry for table "vehicle_collisions" (SQLSTATE 42P01)
Any help would really be appreciated.
Thank you mkopriva. As pointed out, when you pass &vehicles (which is []entities.Vehicles) to Find, the generated query would be as below:
SELECT *
FROM vehicles
JOIN vehicles vh ON vh.id = vehicle_collisions.vehicle_id
WHERE vehicle_collisions.collision_id = 1
which won't be correct. To solve the problem, modify the query as:
err := r.db.
    Joins("JOIN vehicle_collisions AS vc ON vc.vehicle_id=vehicles.id").
    Where("vc.collision_id = ?", id).
    Find(&vehicles).Error
As the question lacks some details, I tried to guess them. I hope that the answer provided is relevant to you! Let me present the code that is working on my side:
package main

import (
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

type Vehicle struct {
    Id         string
    Make       string
    Model      string
    Collisions []Collision `gorm:"many2many:vehicle_collisions"`
}

type Collision struct {
    Id        string
    Longitude float64
    Latitude  float64
}

func main() {
    dsn := "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable"
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    if err != nil {
        panic(err)
    }
    db.AutoMigrate(&Vehicle{})
    db.AutoMigrate(&Collision{})

    // add dummy data to the db
    collision := &Collision{"1", 4.4, 4.5}
    db.Create(collision)
    db.Create(&Collision{"2", 1.1, 1.5})
    db.Create(&Vehicle{Id: "1", Make: "ford", Model: "fiesta", Collisions: []Collision{*collision}})
    db.Create(&Vehicle{Id: "2", Make: "fiat", Model: "punto", Collisions: []Collision{*collision}})

    // get all vehicles for collision 1
    var vehicles []Vehicle
    db.Debug().
        Joins("inner join vehicle_collisions vc on vehicles.id = vc.vehicle_id").
        Find(&vehicles, "vc.collision_id = ?", "1")
}
The code starts with the structs' definitions.
Please note the Gorm annotation on the field Collisions.
After adding some data, the query should be pretty straightforward: we use the Joins method to load data from the table vehicle_collisions and in the Find method we filter out only records with collision_id equal to "1".
Let me know if this helps you or you need something else!
I have a table with a unique index called user_unique_connection, and I perform an 'upsert' on that table; however, intermittently I get the error pq: duplicate key value violates unique constraint "user_unique_connection"
I have tried recreating the issue locally by running concurrent requests for both insert and update scenarios, and also by executing the statements directly on the db with many records seeded, but I cannot recreate it.
From what I see in the logs, we receive two requests microseconds apart; one is successful and one fails with the above error.
Example Data -
user_id = 592c70b4-48b0-11ec-81d3-0242ac130003
location = EU
group_id = 592c70b4-48b0-11ec-81d3-0242ac131111
In the database there are no duplicates and all the data looks correct.
Below is the table and Go code
PostgreSQL 10.14 - table around 200k rows
create table user_conn (
    user_id uuid not null,
    location text not null,
    group_id uuid not null,
    created_at timestamp with time zone not null default current_timestamp,
    connected_at timestamp with time zone not null default current_timestamp,
    disconnected_at timestamp with time zone,
    primary key (user_id, group_id, location)
);

create unique index user_unique_connection
    on user_conn (location, user_id, group_id, coalesce(disconnected_at, '1970-01-01'));

alter table user_conn add column unlinked_at timestamp with time zone default null;
Go 1.16
"database/sql"
"github.com/lib/pq" // v1.10.3
"github.com/jmoiron/sqlx" // v1.3.4
func (pg *PG) Upsert(userID *uuid.UUID, location string, groupID *uuid.UUID) error {
    db := pg.DB() // returns *sqlx.DB
    tx, err := db.Beginx()
    if err != nil {
        return err
    }
    defer func() {
        if err != nil {
            tx.Rollback()
        } else {
            err = tx.Commit()
        }
    }()

    stmt := `
        insert into user_conn (
            user_id, location, group_id
        ) values (
            $1, $2, $3
        )
        on conflict (user_id, location, group_id)
        do update set disconnected_at=null, unlinked_at=null, connected_at=now()
        returning *
    `
    err = tx.Get(cl, stmt, userID, location, groupID) // cl: destination struct (elided)
    if err != nil {
        return err
    }

    _, err = tx.ExecContext( /* another statement on another table */ )
    if err != nil {
        return err
    }
    return nil
}
What could be causing this issue?
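One guess, based only on the schema shown: ON CONFLICT arbitrates only on the constraint you name. Your target (user_id, location, group_id) is inferred as the primary key, so only PK conflicts take the DO UPDATE path; a concurrent insert that collides on the separate expression index user_unique_connection (which, on these columns, is redundant with the PK anyway: any pair of rows violating it also violates the PK) can still surface as a raw duplicate-key error. A sketch of one fix, assuming the expression can be matched exactly, is to make that index the arbiter, or simply drop the redundant index:

```go
package main

import "fmt"

// upsertStmt builds a revised upsert whose conflict target mirrors the
// expression index user_unique_connection instead of the primary key,
// so concurrent inserts colliding on it take the DO UPDATE path.
func upsertStmt() string {
	return `insert into user_conn (user_id, location, group_id)
values ($1, $2, $3)
on conflict (location, user_id, group_id, coalesce(disconnected_at, '1970-01-01'))
do update set disconnected_at = null, unlinked_at = null, connected_at = now()
returning *`
}

func main() {
	fmt.Println(upsertStmt())
}
```

Note that for Postgres to infer the index, the conflict target has to reproduce the index expression exactly (including any casts); if it won't match, dropping user_unique_connection is the simpler experiment, since the primary key already enforces the same uniqueness.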
I've created an upsert like this in go-pg:
db.Model(myModel).Returning("id").
    OnConflict("(fieldA) DO UPDATE set fieldB=EXCLUDED.fieldB").
    Insert()
and now I'd like to read the returned id. How would I do that? All the examples I've seen ignore the result returned by the insert/update queries.
Judging from the example, the ID will be in myModel.
myModel := &MyModel{
    FieldA: `Something something something`,
}

_, err := db.Model(myModel).
    OnConflict("(fieldA) DO UPDATE").
    Set("fieldB = EXCLUDED.fieldB").
    Insert()
if err != nil {
    panic(err)
}

fmt.Println(myModel.Id)
Looking at the Postgres log, it is doing insert into ... returning "id" to get the ID.
I am trying to fetch some data from a Postgres table using prepared statements.
If I try with database.Get(), everything is returned.
Table:
create table accounts
(
    id bigserial not null
        constraint accounts_pkey
            primary key,
    identificator text not null,
    password text not null,
    salt text not null,
    type smallint not null,
    level smallint not null,
    created_at timestamp not null,
    updated timestamp not null,
    expiry_date timestamp,
    qr_key text
);
Account struct:
type Account struct {
    ID            string         `db:"id"`
    Identificator string         `db:"identificator"`
    Password      string         `db:"password"`
    Salt          string         `db:"salt"`
    Type          int            `db:"type"`
    Level         int            `db:"level"`
    ExpiryDate    time.Time      `db:"expiry_date"`
    CreatedAt     time.Time      `db:"created_at"`
    UpdateAt      time.Time      `db:"updated_at"`
    QrKey         sql.NullString `db:"qr_key"`
}
BTW, I tried using ? instead of $1 and $2.
stmt, err := database.Preparex(`SELECT * FROM accounts where identificator = $1 and type = $2`)
if err != nil {
    panic(err)
}

accounts := []account.Account{}
err = stmt.Get(&accounts, "asd", 123)
if err != nil {
    panic(err)
}
The error I get is:
"errorMessage": "scannable dest type slice with \u003e1 columns (10) in result",
There are no records in the table. I tried removing all fields except the ID from the Account struct; however, it does not work.
Documentation for sqlx described Get and Select as:
Get and Select use rows.Scan on scannable types and rows.StructScan on
non-scannable types. They are roughly analogous to QueryRow and Query,
where Get is useful for fetching a single result and scanning it, and
Select is useful for fetching a slice of results:
For fetching a single record use Get.
stmt, err := database.Preparex(`SELECT * FROM accounts where identificator = $1 and type = $2`)
var account Account
err = stmt.Get(&account, "asd", 123)
If your query returns more than a single record use Select with statement as:
stmt, err := database.Preparex(`SELECT * FROM accounts where identificator = $1 and type = $2`)
var accounts []Account
err = stmt.Select(&accounts, "asd", 123)
In your case, if you use stmt.Select instead of stmt.Get, it will work.
According to the documentation (http://godoc.org/launchpad.net/mgo/v2) you can obtain the ID of your "Upserted" document if you use the Upsert method.
There is also an Insert method that does not provide this functionality.
Why is that? What if I want to perform an Insert instead of an Upsert? (Or would there never be any valid reason to want to do that? I'm starting to wonder.)
You use bson.NewObjectId() to generate an ID to be inserted.
This is how you'd insert a new document:
i := bson.NewObjectId()
c.Insert(bson.M{"_id": i, "foo": "bar"})
Since you don't know if you're going to insert or update when you issue an Upsert, it would be superfluous to generate an ID just to drop it right after the query (in case an update happens). That's why it's generated db-side and returned to you when applicable.
This should not happen at all; mgo should insert and return the Id. If we generated the ObjectId from the application itself and the application were restarted, the ObjectId generator would start from the beginning, generating the same IDs again and again and thus updating existing records in the database.
That is wrong; mgo should rely on the database to generate those IDs and update the object or return the ObjectId of the inserted object immediately, like other language bindings to MongoDB do, such as in Python or Java.
You can always try the Upsert function to get the generated ID.
db := service.ConnectDb()
sessionCopy := db.Copy()
defer sessionCopy.Close() // clean up
collection := sessionCopy.DB(service.MongoDB.DTB).C(MessageCol.tbl)

log.Println("before to write: ", msg)

// Upsert inserts the record, creating an ID if one wasn't set
info, err := collection.Upsert(nil, msg)
if err != nil {
    log.Println("Error write message upsert collection: ", err)
    return MessageMgo{}, err
}
if info.UpsertedId != nil {
    msg.Id = info.UpsertedId.(bson.ObjectId)
}

// gets the room from mongo
room, err := GetRoom(msg.Rid)
if err != nil {
    return msg, err
}

// increments the msgcount and updates it
room.MsgCount = room.MsgCount + 1
err = UpdateRoom(room)
if err != nil {
    return msg, err
}
return msg, err
This is sample code I have, and it works fine.