Let us say we have two tables:
CREATE TABLE IF NOT EXISTS tech_time(
ms_since_epoch BIGINT
);
CREATE TABLE IF NOT EXISTS readable_time(
ts timestamp without time zone
);
Let us say tech_time has data and we would like to populate readable_time.
So in Postgres you could use to_timestamp(double precision) and do something like
INSERT INTO readable_time(ts)
SELECT DISTINCT to_timestamp(ms_since_epoch::float / 1000) AS ts
FROM tech_time;
No such function seems to exist in Amazon Redshift:
function to_timestamp(double precision) does not exist
My question is: how do I properly populate readable_time, while losing the least amount of precision?
We can try using DATEADD and add the ms_since_epoch to January 1, 1970:
INSERT INTO readable_time (ts)
SELECT DATEADD(ms, ms_since_epoch, 'epoch')
FROM tech_time;
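If DATEADD overflows or rejects very large bigint millisecond counts (worth verifying on your cluster), plain interval arithmetic is a common alternative; a sketch, dividing by 1000.0 to keep millisecond precision:
INSERT INTO readable_time (ts)
SELECT TIMESTAMP 'epoch' + (ms_since_epoch / 1000.0) * INTERVAL '1 second'
FROM tech_time;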
I have a table and I want to calculate the difference (in time) between two columns of my table.
My columns are scheduled_arrival_time (timestamptz) and scheduled_departure_time (timestamptz), and I want to get the difference between them as "scheduled_duration"
(scheduled_duration = scheduled_arrival_time - scheduled_departure_time)
I tried this:
scheduled_departure_time TIMESTAMPTZ NOT NULL,
scheduled_arrival_time TIMESTAMPTZ NOT NULL,
scheduled_duration numeric(4,2) NOT NULL
generated always as
( extract(epoch from (scheduled_arrival_time - scheduled_departure_time))/3600 )
stored
but I got the error when I tried to insert data:
ERROR: cannot insert a non-DEFAULT value into column "scheduled_duration" DETAIL: Column "scheduled_duration" is a generated column. SQL state: 428C9
You can do this with EXTRACT, for example:
SELECT
EXTRACT(EPOCH FROM '2022-06-16 15:00:00.00000'::TIMESTAMP - '2022-06-15 15:00:00.00000'::TIMESTAMP)
You get the difference in seconds.
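The error itself is about the INSERT, not the expression: a generated column must be left out of the column list (or given DEFAULT). A minimal sketch, assuming a hypothetical table named flights with the columns from the question:
INSERT INTO flights (scheduled_departure_time, scheduled_arrival_time)
VALUES ('2022-06-15 15:00:00+00', '2022-06-16 15:00:00+00');
-- scheduled_duration is computed automatically (24.00 hours here)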
I have a big table in a Postgres DB with unit locations. Now I need to retrieve a location for every 60 seconds.
In MySQL, this is a piece of cake: select * from location_table where unit_id = '123' GROUP BY round(timestamp / 60)
But in Postgres this seems to be a very hard problem. I also have the timestamps in date format rather than in epoch format.
Here is an example of how the table looks
CREATE TABLE location_table (
unit_id int,
"timestamp" timestamp(3) without time zone NOT NULL,
lat double precision,
lng double precision
);
Use date_trunc() to make sets per minute:
SELECT * -- most likely not what you want
FROM location_table
WHERE unit_id = 123 -- numbers don't need quotes
GROUP BY date_trunc('minute', "timestamp");
The * is of course wrong, but I don't know what you want to know about the GROUP so I can't come up with something better.
Edit:
When you need a random result from your table, DISTINCT ON () could do the job:
SELECT DISTINCT ON (date_trunc('minute', "timestamp"))
* -- your columns
FROM location_table;
There are other (standard SQL) solutions as well, like using row_number().
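A sketch of the row_number() approach against the table above, picking the earliest row in each minute (adjust the ORDER BY if another row per minute is wanted):
SELECT unit_id, "timestamp", lat, lng
FROM (
    SELECT *,
           row_number() OVER (PARTITION BY date_trunc('minute', "timestamp")
                              ORDER BY "timestamp") AS rn
    FROM location_table
    WHERE unit_id = 123
) sub
WHERE rn = 1;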
I have 2 tables in my PostgreSQL TimescaleDB database (version 12.06) that I try to query with an inner join.
Tables' structure:
CREATE TABLE currency(
id serial PRIMARY KEY,
symbol TEXT NOT NULL,
name TEXT NOT NULL,
quote_asset TEXT
);
CREATE TABLE currency_price (
currency_id integer NOT NULL,
dt timestamp WITHOUT time ZONE NOT NULL,
open NUMERIC NOT NULL,
high NUMERIC NOT NULL,
low NUMERIC NOT NULL,
close NUMERIC,
volume NUMERIC NOT NULL,
PRIMARY KEY (
currency_id,
dt
),
CONSTRAINT fk_currency FOREIGN KEY (currency_id) REFERENCES currency(id)
);
The query I'm trying to make is:
SELECT currency_id AS id, symbol, MAX(close) AS close, DATE(dt) AS date
FROM currency_price
JOIN currency ON
currency.id = currency_price.currency_id
GROUP BY currency_id, symbol, date
LIMIT 100;
Basically, it returns all the rows that exist in the currency_price table. I know that Postgres doesn't allow selecting columns that are neither aggregated nor included in the GROUP BY clause. So if I don't include the dt column in my select query, I receive the expected results, but if I include it, the output shows rows for every single day of each currency, while I only want the max value for each currency so I can filter by date afterwards.
I'm very inexperienced with SQL in general.
Any suggestions to solve this would be very appreciated.
There are several ways to do it; the easiest that comes to mind is using window functions.
select *
from (
    SELECT currency_id, symbol, close, dt,
           row_number() over (partition by currency_id, symbol
                              order by close desc, dt desc) as rr
    FROM currency_price
    JOIN currency ON currency.id = currency_price.currency_id
    where dt::date = '2021-06-07'
) q1
where rr = 1
General window functions:
https://www.postgresql.org/docs/9.5/functions-window.html
These also work with standard aggregate functions like SUM, AVG, MAX, and MIN.
Some examples: https://www.postgresqltutorial.com/postgresql-window-function/
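For instance, MAX() as a window function tags each row with the per-currency maximum without collapsing the result; a sketch against the tables above:
SELECT DISTINCT currency_id, symbol,
       MAX(close) OVER (PARTITION BY currency_id) AS max_close
FROM currency_price
JOIN currency ON currency.id = currency_price.currency_id
WHERE dt::date = '2021-06-07';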
So I have a string time column in a table, and I want to change it to a datetime type and then query data for selected dates.
Is there a direct way to do so? One way I could think of is:
1) add a new column
2) insert values into it with converted date
3) Query using the new column
I am stuck on the 2nd step, the INSERT, so I need help with that:
ALTER TABLE "nds"."unacast_sample_august_2018"
ADD COLUMN new_date timestamp
-- Need correction in select statement that I don't understand
INSERT INTO "nds"."unacast_sample_august_2018" (new_date)
(SELECT new_date from_iso8601_date(substr(timestamp,1,10))
Could someone help me with the correction and, if possible, a better way of doing it?
I tried another way to do it in a single step, but it gives the error "Column does not exist: new_date":
SELECT *
FROM (SELECT from_iso8601_date(substr(timestamp,1,10)) FROM "db_name"."table_name") AS new_date
WHERE new_date > from_iso8601('2018-08-26') limit 10;
AND
SELECT new_date = (SELECT from_iso8601_date(substr(timestamp,1,10)))
FROM "db_name"."table_name"
WHERE new_date > from_iso8601('2018-08-26') limit 10;
Could someone correct these queries?
You don't need those steps; just use a USING CAST clause on your ALTER TABLE:
CREATE TABLE foobar (my_timestamp) AS
VALUES ('2018-09-20 00:00:00');
ALTER TABLE foobar
ALTER COLUMN my_timestamp TYPE timestamp USING CAST(my_timestamp AS TIMESTAMP);
If your string timestamps are in a format Postgres can cast directly, this should be enough.
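If they are not directly castable, the USING expression can parse them explicitly instead (the format pattern below is an assumption; adjust it to match your data):
ALTER TABLE foobar
ALTER COLUMN my_timestamp TYPE timestamp
USING to_timestamp(my_timestamp, 'YYYY-MM-DD HH24:MI:SS')::timestamp;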
Solved as follows:
select *
from
(
SELECT from_iso8601_date(substr(timestamp,1,10)) as day,*
FROM "db"."table"
)
WHERE day > date_parse('2018-08-26', '%Y-%m-%d')
limit 10
I am using UUID version 1 as the primary key. I would like to sort on UUID v1 timestamp. Right now if I do something like this:
SELECT id, title
FROM table
ORDER BY id DESC;
PostgreSQL does not sort records by the UUID's timestamp but by its string representation, which produces an unexpected ordering in my case.
Am I missing something, or there is not a built in way to do this in PostgreSQL?
The timestamp is one of the parts of a v1 UUID. It is stored (in hex) as the number of 100-nanosecond intervals since 1582-10-15 00:00:00. This function extracts the timestamp:
create or replace function uuid_v1_timestamp (_uuid uuid)
returns timestamp with time zone as $$
select
  to_timestamp(
    (
      -- the reassembled 60-bit timestamp, read as a bigint
      ('x' || lpad(h, 16, '0'))::bit(64)::bigint::double precision -
      -- 100-ns intervals between 1582-10-15 and the Unix epoch
      122192928000000000
    ) / 10000000 -- 100-ns intervals per second
  )
from (
  select
    -- reorder the hex text into time_hi || time_mid || time_low,
    -- skipping the version nibble at position 15
    substring (u from 16 for 3) ||
    substring (u from 10 for 4) ||
    substring (u from 1 for 8) as h
  from (values (_uuid::text)) s (u)
) s
;
$$ language sql immutable;
select uuid_v1_timestamp(uuid_generate_v1()); -- uuid_generate_v1() requires the uuid-ossp extension
uuid_v1_timestamp
-------------------------------
2016-06-16 12:17:39.261338+00
122192928000000000 is the number of 100-nanosecond intervals between the start of the Gregorian calendar (1582-10-15) and the Unix epoch (1970-01-01).
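The constant can be sanity-checked: 122192928000000000 hundred-nanosecond intervals are 12219292800 seconds, which is exactly 141427 days:
SELECT date '1582-10-15' + 141427; -- 1970-01-01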
In your query:
select id, title
from t
order by uuid_v1_timestamp(id) desc
To improve performance, an index can be created on that expression:
create index uuid_timestamp_ndx on t (uuid_v1_timestamp(id));
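Note that the planner will only use this expression index when a query repeats the exact same expression, as the ORDER BY above does.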