How to extract postgres timestamp range with Go? - postgresql

I have a calendar database that uses Postgres's tsrange type. It lets me store multiple appointments with time ranges such as:
["2018-11-08 10:00:00","2018-11-08 10:45:00"]
How do I store this value in a Go variable?
I tried
var tsrange []string
And when I log tsrange[0] it is empty. What is the proper type for it?
More code:
rows, err := db.Query("SELECT * FROM appointments")
utils.CheckErr(err)
var id int
var userID int
var tsrange []string
rows.Next()
err = rows.Scan(&id, &userID, &tsrange)
fmt.Println(tsrange[0])
When I replace var tsrange []string with var tsrange string the log is ["2018-11-08 10:00:00","2018-11-08 10:45:00"].

You should be able to retrieve the individual bounds of the range at the SQL level:
// replace tsrange_col with the name of your tsrange column
rows, err := db.Query("SELECT id, user_id, lower(tsrange_col), upper(tsrange_col) FROM appointments")
utils.CheckErr(err)
var id int
var userID int
var tsrange [2]time.Time
rows.Next()
err = rows.Scan(&id, &userID, &tsrange[0], &tsrange[1])
fmt.Println(tsrange[0]) // from
fmt.Println(tsrange[1]) // to
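Alternatively, if you'd rather keep SELECT * and scan the whole range in one piece, you could implement sql.Scanner on a small custom type and parse the text form yourself. This is only a minimal sketch (it needs fmt, strings and time imported, and it assumes the driver hands the range over as text in the same shape as the log above; the Timespan name is illustrative, not part of lib/pq):
// Timespan holds the two bounds of a tsrange column.
type Timespan struct {
    From, To time.Time
}

// Scan parses the textual tsrange representation, e.g.
// ["2018-11-08 10:00:00","2018-11-08 10:45:00").
func (ts *Timespan) Scan(src interface{}) error {
    var s string
    switch v := src.(type) {
    case []byte:
        s = string(v)
    case string:
        s = v
    default:
        return fmt.Errorf("unsupported tsrange source %T", src)
    }
    parts := strings.Split(strings.Trim(s, "[]()"), ",")
    if len(parts) != 2 {
        return fmt.Errorf("unexpected tsrange value %q", s)
    }
    const layout = "2006-01-02 15:04:05"
    var err error
    if ts.From, err = time.Parse(layout, strings.Trim(parts[0], `"`)); err != nil {
        return err
    }
    ts.To, err = time.Parse(layout, strings.Trim(parts[1], `"`))
    return err
}
With that in place, var span Timespan; err = rows.Scan(&id, &userID, &span) would fill both bounds from the original SELECT * query.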

Related

Scan a Sql Query sqlx.Rows into a Nested Structure in golang

I have the following three tables:
create table A (
a_id varchar(256) not null unique,
a_name varchar(256)
);
create table B (
b_id varchar(256) not null,
b_a_id varchar(256) not null,
b_name varchar(256),
FOREIGN KEY (b_a_id) REFERENCES a (a_id)
);
create table C (
c_id varchar(256) not null,
c_a_id varchar(256) not null,
c_name varchar(256),
FOREIGN KEY (c_a_id) REFERENCES a (a_id)
);
insert into A(a_id, a_name) values('1234', 'a_name_1');
insert into B(b_id, b_a_id, b_name) values('B1','1234', 'b_name_1');
insert into B(b_id, b_a_id, b_name) values('B2','1234', 'b_name_2');
insert into C(c_id, c_a_id, c_name) values('C1','1234', 'c_name_1');
insert into C(c_id, c_a_id, c_name) values('C2','1234', 'c_name_2');
insert into C(c_id, c_a_id, c_name) values('C3','1234', 'c_name_3');
I have the following structs in Golang:
type A struct {
a_id string `db:"a_id"`
a_name string `db:"a_name"`
b *B `db:"b"`
c *C `db:"c"`
}
type B struct {
b_id string `db:"b_id"`
b_name string `db:"b_name"`
b_a_id string `db:"b_a_id"`
}
type C struct {
c_id string `db:"c_id"`
c_name string `db:"c_name"`
c_a_id string `db:"c_a_id"`
}
I want to scan the rows I get from executing the join query:
SELECT * from A INNER JOIN B ON a_id=b_a_id inner join C on c_a_id=a_id;
into struct A using rows.StructScan() in Golang, but I am not able to do that. How do I scan a join query result into a nested struct? I don't want to scan each column individually, since the join produces a lot of columns and will produce even more going forward.
After some investigation, I came up with a solution that should work for you too. First, let me present the working code, and then I'll explain the relevant parts:
package main
import (
"fmt"
"github.com/jmoiron/sqlx"
_ "github.com/lib/pq"
)
type A struct {
Id string `db:"a_id"`
Name string `db:"a_name"`
B
C
}
type B struct {
Id string `db:"b_id"`
AId string `db:"b_a_id"`
Name string `db:"b_name"`
}
type C struct {
Id string `db:"c_id"`
AId string `db:"c_a_id"`
Name string `db:"c_name"`
}
func main() {
db, err := sqlx.Open("postgres", "host=localhost user=postgres password=postgres dbname=postgres port=5432 sslmode=disable")
if err != nil {
panic(err)
}
defer db.Close()
a := []A{}
rows, err := db.Queryx("SELECT * from A INNER JOIN B ON a_id=b_a_id inner join C on c_a_id=a_id;")
if err != nil {
panic(err)
}
defer rows.Close()
for rows.Next() {
var record A
if err := rows.StructScan(&record); err != nil {
panic(err)
}
a = append(a, record)
}
if err := rows.Err(); err != nil {
panic(err)
}
for _, v := range a {
fmt.Println(v)
}
}
Structs definition
Here, I fixed the struct definitions by embedding B and C in the A struct. I also exported (and shortened) the field names, which is required for StructScan to see them; thanks to the db tags they don't collide with one another (e.g. it's completely safe to have an Id field in all three structs).
Fetching data
The other relevant part is fetching the data from Postgres. Here you have to use the Queryx method and pass it the SQL query you've already written correctly.
When dealing with multiple rows (as in our case), you then work with the methods familiar from the database/sql package. Next and Err are self-explanatory, so I won't spend any time on them.
StructScan is the method that does the trick: it scans the current row into a loop-scoped variable called record of type A (our parent struct).
If you give this code a try it should work for you too; if not, let me know!

How to get row value(s) back after db insert?

I am using Golang to insert data into a DB. Basically my query looks like the one below:
var cols = "(col1, col2, col3)"
var values = "($1, $2, $3)"
var query = fmt.Sprintf("INSERT INTO %s %s VALUES %s", myTable, cols, values)
res, err := db.Exec(query, thing.val1, thing.val2, thing.val3)
The only things available from res are LastInsertId and the number of rows affected. But what I need is the inserted row back, in particular its id. The reason is that I insert data into a Postgres database which has an auto-increment id column, so I want the data back with that id.
For example, with Java Hibernate I can do what this answer explains: I don't have to re-query the DB for the ID.
EDIT: I tried to use the LastInsertId method and got this error:
LastInsertId is not supported by this driver
Assuming you just want the auto-incremented value(s) in a column called id, and that this is an insert with the pq driver:
var cols = "(col1, col2, col3)"
var values = "($1, $2, $3)"
var query = fmt.Sprintf(
"INSERT INTO %s %s VALUES %s RETURNING id",
myTable, cols, values,
)
var id int
if err := db.QueryRow(
query,
thing.val1, thing.val2, thing.val3,
).Scan(&id); err != nil {
panic(err)
}
fmt.Println("ID: ", id)
For multiple inserts:
var cols = "(col1, col2, col3)"
var values = "($1, $2, $3),($4, $5, $6)"
var query = fmt.Sprintf(
"INSERT INTO %s %s VALUES %s RETURNING id",
myTable, cols, values,
)
var ids []int
rows, err := db.Query(
query,
thing.val1, thing.val2, thing.val3,
thing.val4, thing.val5, thing.val6,
)
if err != nil {
panic(err)
}
defer rows.Close()
for rows.Next() {
var id int
if err := rows.Scan(&id); err != nil {
panic(err)
}
ids = append(ids, id)
}
fmt.Println("IDs: ", ids)
res.LastInsertId() is not supported by the Postgres driver. However, it is supported by the MySQL driver.
db.Exec() won't give you the last inserted id here, but db.QueryRow() with a RETURNING clause will.
For a better understanding you can refer to this link.
Here is an example which might help you:
var id int
err := db.QueryRow("INSERT INTO users (name) VALUES ('John') RETURNING id").Scan(&id)
if err != nil {
...
}
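For contrast, here is roughly what the unsupported LastInsertId path would look like on MySQL; a short sketch assuming a go-sql-driver/mysql connection and a users table with an auto-increment id (both assumptions, not from the question):
res, err := db.Exec("INSERT INTO users (name) VALUES (?)", "John")
if err != nil {
    panic(err)
}
id, err := res.LastInsertId() // supported by the MySQL driver
if err != nil {
    panic(err)
}
fmt.Println("ID: ", id)
With the pq driver, stick to the RETURNING approach shown above.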

mongodb get items inserted in the last 30 min

I'm looking to check whether an item was added in the last 30 minutes, in Golang with MongoDB.
This is my model type:
type PayCoin struct {
ID bson.ObjectId `json:"id" bson:"_id"`
OwnerID bson.ObjectId `json:"owner_id" bson:"owner_id"`
PublicKey string `json:"public_key" bson:"public_key"`
PrivateKey string `json:"-" bson:"private_key"`
QrCode string `json:"qrcode" bson:"-"`
ExchangeRate uint64 `json:"exchange_rate" bson:"exchange_rate"`
DepositAmount float32 `json:"deposit_amount" bson:"deposit_amount"`
Received uint64 `json:"received" bson:"received"`
Completed bool `json:"-" bson:"completed"`
CreatedAt time.Time `json:"created_at" bson:"created_at"`
UpdatedAt time.Time `json:"updated_at" bson:"updated_at"`
}
This is my current function:
func (s *Storage) CoinPayExistOperation(ownerID bson.ObjectId) (*models.PayCoin, error) {
collection := s.getCoinPay()
var lt models.PayCoin
timeFormat := "2006-01-02 15:04:05"
now := time.Now()
after := now.Add(-30*time.Minute)
nowFormated := after.Format(timeFormat)
err := collection.Find(bson.M{"owner_id": ownerID, "created_at": nowFormated}).One(&lt)
return &lt, err
}
I want to check whether there are items in the database that were added in the last 30 minutes. My current code doesn't return any item, even though such items exist in the database. How can I do this?
You have two small things to fix here.
If you want to fetch multiple records, you should change One to All.
You are filtering for documents whose time is greater than a value; for that you have to use the comparison query operator $gt.
Here is an example of how your query should look. Pass the time.Time value (after) directly rather than the formatted string, since mgo stores created_at as a BSON date and a string comparison will never match:
collection.Find(bson.M{"owner_id": ownerID, "created_at": bson.M{"$gt": after}}).All(&lt)
Note: as this will return multiple records, remember to change lt to a slice.
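Putting it together, a minimal sketch of the corrected function, reusing the names from the question (the function name is mine):
func (s *Storage) PayCoinsLast30Min(ownerID bson.ObjectId) ([]models.PayCoin, error) {
    collection := s.getCoinPay()
    var results []models.PayCoin
    after := time.Now().Add(-30 * time.Minute)
    // Compare against the time.Time itself; mgo encodes it as a BSON date,
    // so $gt performs a real timestamp comparison.
    err := collection.Find(bson.M{
        "owner_id":   ownerID,
        "created_at": bson.M{"$gt": after},
    }).All(&results)
    return results, err
}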

Golang Postgresql Array

If I have a table that returns something like:
id: 1
names: {Jim, Bob, Sam}
names is a varchar array.
How do I scan that back into a []string in Go?
I'm using lib/pq.
Right now I have something like
rows, err := models.Db.Query("SELECT pKey, names FROM foo")
for rows.Next() {
var pKey int
var names []string
err = rows.Scan(&pKey, &names)
}
I keep getting:
panic: sql: Scan error on column index 1: unsupported Scan, storing driver.Value type []uint8 into type *[]string
It looks like I need to use StringArray
https://godoc.org/github.com/lib/pq#StringArray
But, I think I'm too new to Go to understand exactly how to use:
func (a *StringArray) Scan(src interface{})
You are right, you can use StringArray, but you don't need to call the
func (a *StringArray) Scan(src interface{})
method yourself; it will be called automatically by rows.Scan when you pass it anything that implements the Scanner interface.
So what you need to do is convert your []string to a *StringArray and pass that to rows.Scan, like so:
rows, err := models.Db.Query("SELECT pKey, names FROM foo")
for rows.Next() {
var pKey int
var names []string
err = rows.Scan(&pKey, (*pq.StringArray)(&names))
}
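Equivalently, you could declare names as a pq.StringArray from the start and pass its address straight to Scan; a short sketch:
var pKey int
var names pq.StringArray // implements sql.Scanner, so &names can be passed to Scan directly
err = rows.Scan(&pKey, &names)
// names can then be used like a []string, e.g. names[0]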
Long story short, use pq.Array like this to convert a PostgreSQL array into a Go slice; here the 5th column comes back as an array:
var _temp3 []string
for rows.Next() {
// Scan the row; wrap the array column with pq.Array so the driver can decode it
err := rows.Scan(&_temp, &_temp0, &_temp1, &_temp2, pq.Array(&_temp3))
}
In detail:
To insert a row that contains an array value, use the pq.Array function like this:
// "ins" is the SQL insert statement
ins := "INSERT INTO posts (title, tags) VALUES ($1, $2)"
// "tags" is the list of tags, as a string slice
tags := []string{"go", "goroutines", "queues"}
// the pq.Array function is the secret sauce
_, err = db.Exec(ins, "Job Queues in Go", pq.Array(tags))
To read a Postgres array value into a Go slice, use:
func getTags(db *sql.DB, title string) (tags []string) {
// the select query, returning 1 column of array type
sel := "SELECT tags FROM posts WHERE title=$1"
// wrap the output parameter in pq.Array for receiving into it
if err := db.QueryRow(sel, title).Scan(pq.Array(&tags)); err != nil {
log.Fatal(err)
}
return
}
Note that in lib/pq, only slices of certain Go types may be passed to pq.Array().
Another example, in which the varchar array in the 5th column is generated at runtime (via ARRAY_AGG), e.g.:
--> predefined_allow false admin iam.create {secrets,configMap}
I handled this as follows:
Q := "SELECT ar.policy_name, ar.allow, ar.role_name, pro.operation_name, ARRAY_AGG(pro.resource_id) as resources FROM iam.authorization_rules ar LEFT JOIN iam.policy_rules_by_operation pro ON pro.id = ar.operation_id GROUP BY ar.policy_name, ar.allow, ar.role_name, pro.operation_name;"
tx := g.db.Raw(Q)
rows, err := tx.Rows()
if err != nil {
return nil, err
}
defer rows.Close()
var _temp string
var _temp0 bool
var _temp1 string
var _temp2 string
var _temp3 []string
for rows.Next() {
// Scan the row; wrap the array column with pq.Array so the driver can decode it
err := rows.Scan(&_temp, &_temp0, &_temp1, &_temp2, pq.Array(&_temp3))
if err != nil {
return nil, err
}
fmt.Println("Query Executed...........\n", _temp, _temp0, _temp1, _temp2, _temp3)
}
Output :
Query Executed...........
predefined_allow false admin iam.create [secrets configMap]

Golang gorm time data type conversion

Situation:
I'm using a postgres database and have the following struct:
type Building struct {
ID int `json:"id,omitempty"`
Name string `gorm:"size:255" json:"name,omitempty"`
Lon string `gorm:"size:64" json:"lon,omitempty"`
Lat string `gorm:"size:64" json:"lat,omitempty"`
StartTime time.Time `gorm:"type:time" json:"start_time,omitempty"`
EndTime time.Time `gorm:"type:time" json:"end_time,omitempty"`
}
Problem:
However, when I try to insert this struct into the database, the following error occurs:
parsing time ""10:00:00"" as ""2006-01-02T15:04:05Z07:00"": cannot
parse "0:00"" as "2006""}.
Probably it doesn't recognize the StartTime and EndTime fields as the TIME type and uses timestamp instead. How can I specify that these fields are of type TIME?
Additional information
The following code snippet shows my Building creation:
if err = db.Create(&building).Error; err != nil {
return database.InsertResult{}, err
}
The SQL code of the Building table is as follows:
DROP TABLE IF EXISTS building CASCADE;
CREATE TABLE building(
id SERIAL,
name VARCHAR(255) NOT NULL ,
lon VARCHAR(31) NOT NULL ,
lat VARCHAR(31) NOT NULL ,
start_time TIME NOT NULL ,
end_time TIME NOT NULL ,
PRIMARY KEY (id)
);
While gorm does not support the TIME type directly, you can always create your own type that implements the sql.Scanner and driver.Valuer interfaces so you can write time values to, and read them from, the database.
Here's an example implementation which reuses/aliases time.Time, but doesn't use the day, month, year data:
const MyTimeFormat = "15:04:05"
type MyTime time.Time
func NewMyTime(hour, min, sec int) MyTime {
t := time.Date(0, time.January, 1, hour, min, sec, 0, time.UTC)
return MyTime(t)
}
func (t *MyTime) Scan(value interface{}) error {
switch v := value.(type) {
case []byte:
return t.UnmarshalText(string(v))
case string:
return t.UnmarshalText(v)
case time.Time:
*t = MyTime(v)
case nil:
*t = MyTime{}
default:
return fmt.Errorf("cannot sql.Scan() MyTime from: %#v", v)
}
return nil
}
func (t MyTime) Value() (driver.Value, error) {
return driver.Value(time.Time(t).Format(MyTimeFormat)), nil
}
func (t *MyTime) UnmarshalText(value string) error {
dd, err := time.Parse(MyTimeFormat, value)
if err != nil {
return err
}
*t = MyTime(dd)
return nil
}
func (MyTime) GormDataType() string {
return "TIME"
}
You can use it like:
type Building struct {
ID int `json:"id,omitempty"`
Name string `gorm:"size:255" json:"name,omitempty"`
Lon string `gorm:"size:64" json:"lon,omitempty"`
Lat string `gorm:"size:64" json:"lat,omitempty"`
StartTime MyTime `json:"start_time,omitempty"`
EndTime MyTime `json:"end_time,omitempty"`
}
b := Building{
Name: "test",
StartTime: NewMyTime(10, 23, 59),
}
For proper JSON support you'll need to add implementations for json.Marshaler/json.Unmarshaler, which is left as an exercise for the reader 😉
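For the curious, one possible way to do that exercise, building on the UnmarshalText helper above (this is an illustrative sketch; it needs encoding/json imported):
// MarshalJSON renders MyTime as a "15:04:05" JSON string.
func (t MyTime) MarshalJSON() ([]byte, error) {
    return json.Marshal(time.Time(t).Format(MyTimeFormat))
}

// UnmarshalJSON accepts a "15:04:05" JSON string and delegates to UnmarshalText.
func (t *MyTime) UnmarshalJSON(data []byte) error {
    var s string
    if err := json.Unmarshal(data, &s); err != nil {
        return err
    }
    return t.UnmarshalText(s)
}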
As mentioned in "How to save time in the database in Go when using GORM and Postgresql?"
Currently, there's no support in GORM for any Date/Time types except timestamp with time zone.
So you might need to parse a time as a date:
time.Parse("2006-01-02 3:04PM", "1970-01-01 9:00PM")
I have come across the same error. It seems like there is a mismatch between the type of the column in the database and the GORM model.
Probably the type of the column in the database is text, which you might have set earlier, and then you changed the column type in the gorm model.