I have a table with timestamp TIMESTAMP and data TEXT columns. I have a failing test because I can't get a timestamp value out of PostgreSQL without a time zone annotation. Here's an abridged version of what I've done in my Go application:
type Datapoint struct {
Timestamp string
Data sql.NullString
}
var testData = Datapoint{Timestamp: "2018-12-31 00:00:00", Data: sql.NullString{String: "test", Valid: true}}
db.Exec("CREATE TABLE mytable (id SERIAL, timestamp TIMESTAMP, data TEXT);")
db.Exec("INSERT INTO mytable(timestamp, data) VALUES ($1, $2);", testData.Timestamp, testData.Data)
rows, err := db.Query("SELECT timestamp::TIMESTAMP WITHOUT TIME ZONE, data FROM mytable;")
The trouble is that this query (after about 20 lines of error checking and row.Scan; golang's a bit verbose like that...) gives me:
expected 2018-12-31 00:00:00, received 2018-12-31T00:00:00Z
I requested it without a time zone (and the query succeeds in psql), so why am I getting the extra T and Z in the string?
Scan into a value of time.Time instead of string; then you can format the time as desired.
package main
import (
"database/sql"
"fmt"
"log"
"time"
)
type Datapoint struct {
Timestamp time.Time
Data sql.NullString
}
func main() {
// Assumes db has already been opened with sql.Open and is non-nil.
var db *sql.DB
var dp Datapoint
err := db.QueryRow("SELECT timestamp, data FROM mytable").Scan(
&dp.Timestamp, &dp.Data,
)
switch {
case err == sql.ErrNoRows:
log.Fatal("No rows")
case err != nil:
log.Fatal(err)
default:
fmt.Println(dp.Timestamp.Format("2006-01-02 15:04:05"))
}
}
What you are receiving is an ISO 8601 representation of time.
T is the time designator that precedes the time components of the representation.
Z is used to represent that it is in UTC time, with Z representing zero offset.
In a way you are getting something without a time zone, but it can be confusing, especially as you haven't localised your time at any point. I would suggest you consider sticking with ISO 8601 times, or you could convert your time to a string like this (where t is a time.Time value):
s := fmt.Sprintf("%d-%02d-%02d %02d:%02d:%02d\n",
t.Year(), t.Month(), t.Day(),
t.Hour(), t.Minute(), t.Second())
Related
I am trying to find the time difference between the current time and the created_at column in the database.
I return a row from the database; the created_at column has the following time format:
{"id":1,"email":"fUPvyBA#FApYije.ru","timezone":"pacific time","created_at":"2022-01-23T02:45:01.241589Z","updated_at":"2022-01-23T02:46:01.241591Z"}
so created_at = 2022-01-23T02:45:01.241589Z
and
time.Now() = 2022-01-24 03:24:56.215573343 +0000 UTC m=+1325.103447033
I tested with the following
import (
"fmt"
"time"
)
func TestTime() {
timestart := "2022-01-23T02:45:01.241589Z"
timeend := time.Now()
timediff := timeend.Sub(timestart)
fmt.Println("time diff is: ", timediff.Seconds())
}
TestTime()
but I get the following error:
cannot use <string> as <time.Time> in argument to timeend.Sub
How do I subtract to get the difference between the time stored in the created_at column and the current time, time.Now()?
The error means that you are trying to use a string where a time.Time parameter is expected.
Try this:
import (
"fmt"
"time"
)
func TestTime() {
layout := time.RFC3339 // "2006-01-02T15:04:05Z07:00"
timestart, err := time.Parse(layout, "2022-01-23T02:45:01.241589Z")
if err != nil {
panic(err)
}
timeend := time.Now()
timediff := timeend.Sub(timestart)
fmt.Println("time diff is:", ltimediff.Seconds())
}
TestTime()
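As a small aside (not part of the original answer), time.Since is a shorthand for the same subtraction once the string has been parsed:
// time.Since(t) is equivalent to time.Now().Sub(t).
timediff := time.Since(timestart)
fmt.Println("time diff is:", timediff.Seconds())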
I am getting this error when I try to retrieve data from a database:
thread 'main' panicked at 'error retrieving column 2: error deserializing column 2: cannot convert between the Rust type `alloc::string::String` and the Postgres type `timestamp`'
Db structure:
CREATE TABLE IF NOT EXISTS table_(
id SERIAL PRIMARY KEY,
data VARCHAR NOT NULL,
date_saved TIMESTAMP
)
struct MyType{
local_id: i32,
data: String,
date_saved: String
}
let records = client.query("SELECT id,data,date_saved FROM table_",&[])?;
let mut the_records : Vec<MyType> = vec![];
for record in records {
let saved_data = MyType {
local_id: record.get(0),
data: record.get(1),
date_saved: record.get(2),
};
println!("{:?}",saved_data.data);
the_records.push(saved_data);
}
I found out that there is no possible conversion between the Postgres timestamp type and String, according to https://docs.rs/postgres/0.17.5/postgres/types/trait.FromSql.html; instead we need to use std::time::SystemTime.
So MyType will be:
struct MyType{
local_id: i32,
data: String,
date_saved: std::time::SystemTime
}
And I can manipulate time from there.
The above answer is good, but if you want a quicker solution (e.g. you just want to print to screen, log, etc.), just cast the timestamp to TEXT within Postgres and then Rust won't complain.
For example, this would throw an error:
SELECT now(); -- Will throw error
But this wouldn't
SELECT now()::TEXT; -- Will work fine
I am looking to check, in Go with MongoDB, whether an item was added in the last 30 minutes.
This is my model type:
type PayCoin struct {
ID bson.ObjectId `json:"id" bson:"_id"`
OwnerID bson.ObjectId `json:"owner_id" bson:"owner_id"`
PublicKey string `json:"public_key" bson:"public_key"`
PrivateKey string `json:"-" bson:"private_key"`
QrCode string `json:"qrcode" bson:"-"`
ExchangeRate uint64 `json:"exchange_rate" bson:"exchange_rate"`
DepositAmount float32 `json:"deposit_amount" bson:"deposit_amount"`
Received uint64 `json:"received" bson:"received"`
Completed bool `json:"-" bson:"completed"`
CreatedAt time.Time `json:"created_at" bson:"created_at"`
UpdatedAt time.Time `json:"updated_at" bson:"updated_at"`
}
This is my current function:
func (s *Storage) CoinPayExistOperation(ownerID bson.ObjectId) (*models.PayCoin, error) {
collection := s.getCoinPay()
var lt models.PayCoin
timeFormat := "2006-01-02 15:04:05"
now := time.Now()
after := now.Add(-30*time.Minute)
nowFormated := after.Format(timeFormat)
err := collection.Find(bson.M{"owner_id": ownerID, "created_at": nowFormated}).One(&lt)
return &lt, err
}
I want to check whether there are items in the database that were added in the last 30 minutes. My current code does not return any items, even though they exist in the database. How can I do this?
You have two small things to fix here.
If you want to fetch multiple records, you should change One to All.
You are filtering for documents whose time is greater than a value; for that you have to use the comparison query operator $gt.
Here is an example of how your query should look:
collection.Find(bson.M{"owner_id": ownerID, "created_at": bson.M{"$gt": nowFormated}}).All(&lt)
Note: as this will return multiple records, remember to change lt to a slice.
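Putting both fixes together, a minimal sketch of the corrected function might look like this (it assumes the mgo/bson API from the question; comparing against the time.Time value directly, rather than a formatted string, is an additional assumption since created_at is stored as a BSON date):
func (s *Storage) CoinPayExistOperation(ownerID bson.ObjectId) ([]models.PayCoin, error) {
	collection := s.getCoinPay()

	// Items created after this instant were added within the last 30 minutes.
	after := time.Now().Add(-30 * time.Minute)

	var results []models.PayCoin
	err := collection.Find(bson.M{
		"owner_id":   ownerID,
		"created_at": bson.M{"$gt": after},
	}).All(&results)
	return results, err
}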
Situation:
I'm using a postgres database and have the following struct:
type Building struct {
ID int `json:"id,omitempty"`
Name string `gorm:"size:255" json:"name,omitempty"`
Lon string `gorm:"size:64" json:"lon,omitempty"`
Lat string `gorm:"size:64" json:"lat,omitempty"`
StartTime time.Time `gorm:"type:time" json:"start_time,omitempty"`
EndTime time.Time `gorm:"type:time" json:"end_time,omitempty"`
}
Problem:
However, when I try to insert this struct into the database, the following error occurs:
parsing time ""10:00:00"" as ""2006-01-02T15:04:05Z07:00"": cannot
parse "0:00"" as "2006""}.
Probably, it doesn't recognize the StartTime and EndTime fields as Time type and uses Timestamp instead. How can I specify that these fields are of the type Time?
Additional information
The following code snippet shows my Building creation:
if err = db.Create(&building).Error; err != nil {
return database.InsertResult{}, err
}
The SQL code of the Building table is as follows:
DROP TABLE IF EXISTS building CASCADE;
CREATE TABLE building(
id SERIAL,
name VARCHAR(255) NOT NULL ,
lon VARCHAR(31) NOT NULL ,
lat VARCHAR(31) NOT NULL ,
start_time TIME NOT NULL ,
end_time TIME NOT NULL ,
PRIMARY KEY (id)
);
While gorm does not support the TIME type directly, you can always create your own type that implements the sql.Scanner and driver.Valuer interfaces to be able to put in and take out time values from the database.
Here's an example implementation which reuses/aliases time.Time, but doesn't use the day, month, year data:
import (
	"database/sql/driver"
	"fmt"
	"time"
)

const MyTimeFormat = "15:04:05"
type MyTime time.Time
func NewMyTime(hour, min, sec int) MyTime {
t := time.Date(0, time.January, 1, hour, min, sec, 0, time.UTC)
return MyTime(t)
}
func (t *MyTime) Scan(value interface{}) error {
switch v := value.(type) {
case []byte:
return t.UnmarshalText(string(v))
case string:
return t.UnmarshalText(v)
case time.Time:
*t = MyTime(v)
case nil:
*t = MyTime{}
default:
return fmt.Errorf("cannot sql.Scan() MyTime from: %#v", v)
}
return nil
}
func (t MyTime) Value() (driver.Value, error) {
return driver.Value(time.Time(t).Format(MyTimeFormat)), nil
}
func (t *MyTime) UnmarshalText(value string) error {
dd, err := time.Parse(MyTimeFormat, value)
if err != nil {
return err
}
*t = MyTime(dd)
return nil
}
func (MyTime) GormDataType() string {
return "TIME"
}
You can use it like:
type Building struct {
ID int `json:"id,omitempty"`
Name string `gorm:"size:255" json:"name,omitempty"`
Lon string `gorm:"size:64" json:"lon,omitempty"`
Lat string `gorm:"size:64" json:"lat,omitempty"`
StartTime MyTime `json:"start_time,omitempty"`
EndTime MyTime `json:"end_time,omitempty"`
}
b := Building{
Name: "test",
StartTime: NewMyTime(10, 23, 59),
}
For proper JSON support you'll need to add implementations for json.Marshaler/json.Unmarshaler, which is left as an exercise for the reader 😉
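A minimal sketch of what those JSON implementations might look like (this is an assumption, not part of the original answer; it reuses the UnmarshalText method above and needs the strings import):
// MarshalJSON encodes MyTime as a quoted "15:04:05" string.
func (t MyTime) MarshalJSON() ([]byte, error) {
	return []byte(`"` + time.Time(t).Format(MyTimeFormat) + `"`), nil
}

// UnmarshalJSON parses a quoted "15:04:05" string back into MyTime.
func (t *MyTime) UnmarshalJSON(data []byte) error {
	return t.UnmarshalText(strings.Trim(string(data), `"`))
}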
As mentioned in "How to save time in the database in Go when using GORM and Postgresql?"
Currently, there's no support in GORM for any Date/Time types except timestamp with time zone.
So you might need to parse a time as a date:
time.Parse("2006-01-02 3:04PM", "1970-01-01 9:00PM")
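For the "10:00:00" values from the question, such a parse might look like this sketch (the "15:04:05" layout is an assumption based on the TIME column format):
t, err := time.Parse("15:04:05", "10:00:00")
if err != nil {
	log.Fatal(err)
}
fmt.Println(t) // 0000-01-01 10:00:00 +0000 UTC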
I have come across the same error. It seems like there is a mismatch between the type of the column in the database and the GORM model.
Probably the type of the column in the database is text, which you might have set earlier before changing the column type in the GORM model.
How to represent PostgreSQL interval in Go?
My struct looks like this:
type Product struct {
Id int
Name string
Type int
Price float64
Execution_time ????
}
The execution_time field on my database is interval.
The best answer I've come across is to use bigint in your schema, and implement Value & Scan on a wrapper type for time.Duration.
// Duration lets us convert between a bigint in Postgres and time.Duration
// in Go
type Duration time.Duration
// Value converts Duration to a primitive value ready to written to a database.
func (d Duration) Value() (driver.Value, error) {
return driver.Value(int64(d)), nil
}
// Scan reads a Duration value from database driver type.
func (d *Duration) Scan(raw interface{}) error {
switch v := raw.(type) {
case int64:
*d = Duration(v)
case nil:
*d = Duration(0)
default:
return fmt.Errorf("cannot sql.Scan() strfmt.Duration from: %#v", v)
}
return nil
}
Unfortunately, you'll sacrifice the ability to do interval arithmetic inside queries - unless some clever fellow wants to post the type conversion for bigint => interval.
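For illustration, a rough sketch of how the wrapper might be used with database/sql (the products table and execution_time BIGINT column are assumptions based on the question's struct):
// Assumes a schema like: CREATE TABLE products (id SERIAL, execution_time BIGINT);
var d Duration
if err := db.QueryRow("SELECT execution_time FROM products WHERE id = $1", 1).Scan(&d); err != nil {
	log.Fatal(err)
}
fmt.Println(time.Duration(d)) // e.g. 1h30m0s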
If you're OK with conforming to the time.Duration limits and you need only seconds accuracy you could:
Create the table with a SECOND precision
...
someInterval INTERVAL SECOND(0),
...
Convert INTERVAL into seconds:
SELECT EXTRACT(EPOCH FROM someInterval) FROM someTable;
Use time.Duration's Seconds method when inserting data via prepared statements.
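A rough sketch of that flow in Go (someTable, someInterval, and the use of make_interval are assumptions; any expression that turns seconds into an interval would do):
d := 90 * time.Minute

// Store the duration as seconds in the INTERVAL SECOND(0) column.
if _, err := db.Exec("INSERT INTO someTable(someInterval) VALUES (make_interval(secs => $1))", d.Seconds()); err != nil {
	log.Fatal(err)
}

// Read it back as seconds and rebuild a time.Duration.
var secs float64
if err := db.QueryRow("SELECT EXTRACT(EPOCH FROM someInterval) FROM someTable").Scan(&secs); err != nil {
	log.Fatal(err)
}
got := time.Duration(secs * float64(time.Second))
fmt.Println(got) // 1h30m0s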
One solution is to wrap the time.Duration type in a wrapper type, and on it provide implementations of sql.Scanner and driver.Valuer.
// PgDuration wraps a time.Duration to provide implementations of
// sql.Scanner and driver.Valuer for reading/writing from/to a DB.
type PgDuration time.Duration
Postgres appears to be quite flexible with the format provided when inserting into an INTERVAL column. The default format returned by calling String() on a duration is accepted, so for the implementation of driver.Valuer, simply call it:
// Value converts the PgDuration into a string.
func (d PgDuration) Value() (driver.Value, error) {
return time.Duration(d).String(), nil
}
When retrieving an INTERVAL value from Postgres, it returns it in a format that is not so easily parsed by Go (ex. "2 days 05:00:30.000250"), so we need to do some manual parsing in our implementation of sql.Scanner. In my case, I only care about supporting hours, minutes, and seconds, so I implemented it as follows:
// Scan converts the received string in the format hh:mm:ss into a PgDuration.
func (d *PgDuration) Scan(value interface{}) error {
switch v := value.(type) {
case string:
// Convert format of hh:mm:ss into format parseable by time.ParseDuration()
v = strings.Replace(v, ":", "h", 1)
v = strings.Replace(v, ":", "m", 1)
v += "s"
dur, err := time.ParseDuration(v)
if err != nil {
return err
}
*d = PgDuration(dur)
return nil
default:
return fmt.Errorf("cannot sql.Scan() PgDuration from: %#v", v)
}
}
If you need to support other duration units, you can start with a similar approach, and handle the additional units appropriately.
Also, if you happen to be using the GORM library to automatically migrate your table, you will want to provide an implementation of GORM's migrator.GormDataTypeInterface:
// GormDataType tells GORM to use the INTERVAL data type for a PgDuration column.
func (PgDuration) GormDataType() string {
return "INTERVAL"
}
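As a usage sketch, the Product struct from the original question might then look like this (the field is renamed to the idiomatic ExecutionTime, which GORM maps to an execution_time column by default; the INTERVAL column type comes from GormDataType above):
type Product struct {
	Id            int
	Name          string
	Type          int
	Price         float64
	ExecutionTime PgDuration
}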