Modelling blackout time ranges over a week? - postgresql

I'm trying to find an efficient way to store information about blackout time ranges over the course of a week. I'm storing everything like this today:
create table access_restriction (
    id serial primary key,
    usr integer references "user"(id),
    weekday integer not null, -- ISO-8601 week day
    start_time time not null,
    end_time time not null
);
So, whenever I have to check whether the user is allowed to do something, I select everything from there and do the logic in the application code. Is there a better way of doing this with PostgreSQL?

With your existing structure, you could place the logic in the sql:
SELECT CASE WHEN COUNT(1) > 0 THEN true ELSE false END AS is_blacked_out
FROM access_restriction a
WHERE usr = :input_user_id
  AND weekday = :input_weekday
  AND start_time = (SELECT MAX(start_time)
                    FROM access_restriction
                    WHERE usr = a.usr
                      AND weekday = a.weekday
                      AND start_time <= :time_to_check)
  AND end_time >= :time_to_check
with the three input parameters:
input_user_id
input_weekday
time_to_check
You could change that into a stored function. You would need to check things like boundary conditions (e.g. is end_time inclusive or not?) and assumptions such as no overlapping ranges configured for a user.
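A minimal sketch of such a function, assuming the table layout above (the function name and parameter names are illustrative):
create or replace function is_blacked_out(p_user_id integer,
                                          p_weekday integer,
                                          p_time    time) returns boolean
language sql stable as $$
    -- true when the latest restriction starting at or before p_time is still in effect
    select count(1) > 0
    from access_restriction a
    where a.usr = p_user_id
      and a.weekday = p_weekday
      and a.start_time = (select max(start_time)
                          from access_restriction
                          where usr = a.usr
                            and weekday = a.weekday
                            and start_time <= p_time)
      and a.end_time >= p_time;
$$;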
For other data structures, you could imagine a bitmap representing the times in a week and a function which converts the input time to a bitmap to be "anded" with the user's blackout bitmap. But that might be overkill and it would really depend on things like the number of users, the granularity of the blackout period, the frequency of change of the blackout times, etc.
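For illustration only, a rough sketch of that bitmap idea, using one bit per minute of the ISO week (7 * 24 * 60 = 10080 bits); the table and column names here are invented:
create table access_restriction_bitmap (
    usr      integer primary key,
    blackout bit(10080) not null
);

-- Compute the 1-based bit position for the weekday/time and test it.
select substring(b.blackout
                 from ((:input_weekday - 1) * 1440
                       + extract(hour from :time_to_check)::int * 60
                       + extract(minute from :time_to_check)::int
                       + 1)
                 for 1) = B'1' as is_blacked_out
from access_restriction_bitmap b
where b.usr = :input_user_id;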

Get truncated data from a table - PostgreSQL

I want to get truncated data over the last month. My time is in Unix timestamps and I need to get data from the last 30 days for each specific day.
The data is in the following form:
{
  "id": "648637",
  "exchange_name": "BYBIT",
  "exchange_icon_url": "https://cryptowisdom.com.au/wp-content/uploads/2022/01/Bybit-colored-logo.png",
  "trade_time": "1675262081986",
  "price_in_quote_asset": 23057.5,
  "price_in_usd": 1,
  "trade_value": 60180.075,
  "base_asset_icon": "https://assets.coingecko.com/coins/images/1/large/bitcoin.png?1547033579",
  "qty": 2.61,
  "quoteqty": 60180.075,
  "is_buyer_maker": true,
  "pair": "BTCUSDT",
  "base_asset_trade": "BTC",
  "quote_asset_trade": "USDT"
}
I need to truncate data based on trade_time
How do I write the query?
The secret sauce is the date_trunc function, which takes a timestamp with time zone and truncates it to a specific precision (hour, day, week, etc). You can then group based on this value.
In your case we need to convert these JavaScript-style Unix timestamps (milliseconds) to timestamp with time zone first, which we can do with to_timestamp, but it's still a fairly simple query.
SELECT
    date_trunc('day', to_timestamp(trade_time / 1000.0)),
    COUNT(1)
FROM pings_raw
GROUP BY date_trunc('day', to_timestamp(trade_time / 1000.0))
Another approach would be to leave everything as numbers, which might be marginally faster, though I find it less readable:
SELECT
    (trade_time / (1000*60*60*24))::bigint * (1000*60*60*24),
    COUNT(1)
FROM pings_raw
GROUP BY (trade_time / (1000*60*60*24))::bigint
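If you also want to restrict the results to the last 30 days, as the question asks, a WHERE clause on the converted timestamp can be added to the first form. A sketch, assuming trade_time really is in milliseconds:
SELECT
    date_trunc('day', to_timestamp(trade_time / 1000.0)) AS trade_day,
    COUNT(1)
FROM pings_raw
WHERE to_timestamp(trade_time / 1000.0) >= now() - interval '30 days'
GROUP BY trade_day
ORDER BY trade_day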

How to convert timestamp in Redshift so that I get seconds/milliseconds?

I have a dataframe like:
id      state  city  time
123.04  ny     1     01-10-2021 12:30
123.05  ny     2     01-10-2021 12:30
I want the id that is associated with the most recent time by state. So I do:
select a.id, a.state
from data a
join (select state, max(time) as most_recent
      from data group by 1) b
  on a.state = b.state and a.time = b.most_recent
However, I am running into issues where the timestamp is the same. I know that I can do another query to then get the max ID but I would ideally like to just go by timestamp. I know that the ID is assigned in sequential order so if I am able to get the seconds or milliseconds then I will be able to actually get the most recent ID.
Is there a way to get seconds/milliseconds or do I have to do another query?
Redshift timestamps have a 1 microsecond resolution - this is part of the data type definition. If you are not seeing the seconds and fractional seconds, it is because of how your workbench (or connection to Redshift) is presenting the data to you. The easiest way to see the fractional seconds is to format the timestamp as a string when viewing it, so the client cannot reformat it. For example:
select to_char(sysdate, 'HH24:MI:SS.US');
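Applied to the question's table, that could look something like this (assuming the column really is named time; it is quoted here to be safe):
select id, state, to_char("time", 'YYYY-MM-DD HH24:MI:SS.US') as time_with_us
from data
order by "time" desc;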

I need to add up lots of values between date ranges as quickly as possible using PostgreSQL, what's the best method?

Here's a simple example of what I'm trying to do:
CREATE TABLE daily_factors (
    factor_date date,
    factor_value numeric(3,1));

CREATE TABLE customer_date_ranges (
    customer_id int,
    date_from date,
    date_to date);

INSERT INTO
    daily_factors
SELECT
    t.factor_date,
    (random() * 10 + 30)::numeric(3,1)
FROM
    generate_series(timestamp '20170101', timestamp '20210211', interval '1 day') AS t(factor_date);

WITH customer_id AS (
    SELECT generate_series(1, 100000) AS customer_id),
date_from AS (
    SELECT
        customer_id,
        (timestamp '20170101' + random() * (timestamp '20201231' - timestamp '20170101'))::date AS date_from
    FROM
        customer_id)
INSERT INTO
    customer_date_ranges
SELECT
    d.customer_id,
    d.date_from,
    (d.date_from::timestamp + random() * (timestamp '20210211' - d.date_from::timestamp))::date AS date_to
FROM
    date_from d;
So I'm basically making two tables:
a list of daily factors, one for every day from 1st Jan 2017 until today's date;
a list of 100,000 "customers" all who have a date range between 1st Jan 2017 and today, some long, some short, basically random.
Then I want to add up the factors for each customer in their date range, and take the average value.
SELECT
    cd.customer_id,
    AVG(df.factor_value) AS average_value
FROM
    customer_date_ranges cd
    INNER JOIN daily_factors df ON df.factor_date BETWEEN cd.date_from AND cd.date_to
GROUP BY
    cd.customer_id;
Having a non-equi join on a date range is never going to be pretty, but is there any way to speed this up?
The only index I could think of was this one:
CREATE INDEX performance_idx ON daily_factors (factor_date);
It makes a tiny difference to the execution time. When I run this locally I'm seeing around 32 seconds with no index, and around 28s with the index.
I can see that this is a massive bottleneck in the system I'm building, but I can't think of any way to make things faster. The ideas I did have were:
instead of using daily factors I could largely get away with monthly ones, but now I have the added complexity of "whole months and partial months" to work with. It doesn't seem like it's going to be worth it for the added complexity, e.g. "take 7 whole months for Feb to Aug 2020, then 10/31 of Jan 2020 and 15/30 of September 2020";
I could pre-calculate every average I will ever need, but with 1,503 factors (and that will increase with each new day), that's already 1,128,753 numbers to store (assuming we ignore zero date ranges and that my maths is right). Also my real world system has an extra level of complexity, a second identifier with 20 possible values, so this would mean having c.20 million numbers to pre-calculate. Also, the number of values to store grows quadratically with each new day;
I could take this work out of the database, and do it in code (in memory), as it seems like a relational database might not be the best solution here?
Any other suggestions?
The classic way to deal with this is to store running sums of factor_value, not (or in addition to) individual values. Then you just look up the running sum at the two end points (actually at the end, and one before the start), and take the difference. And of course divide by the count, to turn it into an average. I've never done this inside a database, but there is no reason it can't be done there.
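A minimal sketch of that idea against the tables above, using a window function to build the running sums (the helper table name is made up):
-- Running sum and running count of the daily factors, one row per day.
CREATE TABLE daily_factor_cumsum AS
SELECT
    factor_date,
    SUM(factor_value) OVER (ORDER BY factor_date) AS cum_value,
    COUNT(*) OVER (ORDER BY factor_date) AS cum_count
FROM daily_factors;

-- Average over [date_from, date_to] = difference of running sums / number of days.
SELECT
    cd.customer_id,
    (s_to.cum_value - COALESCE(s_before.cum_value, 0))
        / (s_to.cum_count - COALESCE(s_before.cum_count, 0)) AS average_value
FROM customer_date_ranges cd
JOIN daily_factor_cumsum s_to
    ON s_to.factor_date = cd.date_to
LEFT JOIN daily_factor_cumsum s_before
    ON s_before.factor_date = cd.date_from - 1;
With an index on daily_factor_cumsum (factor_date), each customer needs only two lookups instead of a scan over the whole date range.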

Index for TIMESTAMPTZ and function immutability

We have a structure similar to the following:
create table company
(
id bigint not null,
tz text not null
);
create table company_data
(
company_id bigint not null,
ts_tz timestamp with time zone not null
);
The tables are simplified.
Fiddle with sample data here: SQL Fiddle
Every company has a fixed TZ. So, when we need to extract some information from company_data we use a query similar to the following:
select
cd.company_id,
cd.ts_tz at time zone c.tz
from company_data cd
join company c on c.id = cd.company_id;
We also have a function to get company tz:
create or replace function tz_company(f_company_id bigint) returns text
language plpgsql
as
$$
declare
f_tz text;
begin
select c.tz from company c where c.id = f_company_id into f_tz;
return f_tz;
end;
$$;
And another to transform a ts in a date applying a tz:
create or replace function tz_date(timestamp with time zone, text) returns date
language plpgsql
immutable strict
as
$$
begin
return ($1 at time zone $2) :: date;
end;
$$;
The problem we are having now is that company_data (and other similar tables) is a large and frequently used table. The majority of the SELECTs on that table perform filtering using a DATE.
For example:
select cd.company_id,
cd.ts_tz at time zone tz_company(cd.company_id)
from company_data cd
where tz_date(cd.ts_tz, tz_company(cd.company_id)) >= '2019-08-20'
and tz_date(cd.ts_tz, tz_company(cd.company_id)) <= '2019-08-22';
So, to speed up queries, we need to add an index on the company_data.ts_tz column. The only way we found to do this was the following:
create index idx_company_data_ts_tz on company_data
(((company_data.ts_tz at time zone tz_company(company_data.company_id))::date));
For this to work, we need to make the tz_company function immutable.
Some other problems (and ideas) emerged:
1 - The version of the query using the tz_date function does not use the index.
Does not use the index:
explain analyse
select cd.company_id,
cd.ts_tz at time zone tz_company(cd.company_id)
from company_data cd
where tz_date(cd.ts_tz, tz_company(cd.company_id)) >= '2019-08-20'
and tz_date(cd.ts_tz, tz_company(cd.company_id)) <= '2019-08-22';
Uses the index:
explain analyse
select cd.company_id,
cd.ts_tz at time zone tz_company(cd.company_id)
from company_data cd
where (cd.ts_tz at time zone tz_company(cd.company_id))::date >= '2019-08-20'
and (cd.ts_tz at time zone tz_company(cd.company_id))::date <= '2019-08-22';
Why does that happen?
2 - We know that, in theory, tz_company should be at most stable, not immutable. But the company tz is information that should never change. Yes, it could happen, but it is improbable; in the past three years we have never changed the tz of any company. So, is it still a problem for tz_company to be immutable? If it is, how could we rewrite the index? Note that a single SELECT could bring information from more than one company and mix different timezones.
3 - Because of the complexity of dealing with indexes on a timestamptz column, we are considering adding another column to every table that has a ts_tz. This new column would be a date with the tz already applied. Is this a good approach?
Besides, we need to apply the tz before casting because every client (company) selects only dates to filter, and these dates are locale-aware (tz-aware).
EDIT 1:
The queries shown are only for demonstration, but one important requirement is that the client sees the timestamps in the timezone where the event occurred. We deal with logistics operations in Brazil, and Brazil itself has four different timezones across the country.
A holding could own different companies and every company could be in a different timezone.
So a lot of queries deal with different companies in different timezones and apply some date filtering. Today, our backend returns all data ready to display, with the timezone applied, and this would be difficult to change.
What we want to achieve is an easy and performant way of dealing with those timestamptz columns: filtering by date (tz-aware) and using indexes to speed up queries.
1 - That's because tz_date is not marked as immutable. It is safe to mark it as immutable if Postgres lets you create an index on the same expression as in the body of the function -- it would only allow that for an immutable expression. Some Postgres date-time manipulation functions and type casts are immutable, some aren't. By the way, I'm not sure what happens to an index if the at time zone operator breaks its immutability contract when tzdata changes -- which happens quite often on a Postgres or OS upgrade, depending on the settings.
2 - That's a very dangerous approach: the index becomes corrupted if you change the data, and you may lose data. If you absolutely need this pseudo-immutable function, I would strongly recommend adding a trigger that disallows deletes, truncates, and updates of company.tz. If you ever need to change the time zone data, drop the index first.
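For what it's worth, a sketch of what such a guard could look like (names are illustrative; execute function needs PostgreSQL 11+, older versions use execute procedure):
create or replace function forbid_company_tz_change() returns trigger
language plpgsql as $$
begin
    raise exception 'company.tz is assumed immutable; drop the dependent indexes before changing it';
end;
$$;

create trigger company_tz_guard
    before update of tz or delete or truncate on company
    for each statement
    execute function forbid_company_tz_change();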
3 - The key question is whether you happen to query data across multiple companies.
a) If you do, the comparison only makes sense as notation. 2019-09-13 events from Niue (UTC-11) and 2019-09-13 events from New Zealand (UTC+13) can never happen at the same time; the only common property of these events is that they happened on Friday the 13th. That's only notation -- it was never 2019-09-13 in both countries at the same moment. So please make sure your queries really make sense. In this unlikely case, denormalizing the date notation into a separate timestamp without time zone column would make sense, as you're filtering by the notation of time, not by the moment in time. I would recommend a trigger to populate it (a sketch follows after item b).
b) All your queries are single-company. In this case I would create a plain index on the columns only, with no expressions, create a helper function, and make queries like this:
create index on company_data(company_id, ts_tz);
create function midnight_at_company(p_date date, p_company_id bigint)
    returns timestamp with time zone strict stable as $$
        select p_date::timestamp at time zone tz from company where id = p_company_id;
$$ language sql;
-- put your company id instead of $1
explain analyse
select cd.company_id,
cd.ts_tz at time zone tz_company(cd.company_id)
from company_data cd
where company_id = $1
and cd.ts_tz >= midnight_at_company('2019-08-20', $1)
and cd.ts_tz < midnight_at_company('2019-08-23', $1); --note exact `<`, not `<=`
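Returning to the denormalized column from item (a) (the question's idea 3), a populate-on-write trigger could look roughly like this, shown with a date column as the question proposed; the timestamp without time zone variant is analogous (names are illustrative):
alter table company_data add column local_date date;

create or replace function company_data_set_local_date() returns trigger
language plpgsql as $$
begin
    new.local_date := (new.ts_tz at time zone tz_company(new.company_id))::date;
    return new;
end;
$$;

create trigger set_local_date
    before insert or update of ts_tz, company_id on company_data
    for each row
    execute function company_data_set_local_date();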
I would standardize all the time zones into one, calling it database or server time. I understand that the companies are in different places, but that is not a good reason to have timezones all over your data. Using this method eliminates the need for a time zone reference table. When you pull the data for any one of these companies, write your code to take the server time zone into account so that it reads in your local time.
This will eliminate tons of potential confusion. This is a method used across the world, which is why timestamps in most APIs only have one timezone.
In response to Edit:
Hi @Luiz
Let me start with this: there is no right or wrong answer, it's whatever you think works best. In my case, I am of the opinion that the front-end view and the data should be managed somewhat separately. On the data side, as per this topic, I would handle all date stamps using server time. The need to view data one way or another is a front-end issue.
In the case of your requirement, I would either hard-code a JS switch like this:
switch (companyName) {
  case "CompanyA":
    return "America/New_York"; // illustrative value, e.g. EST
  case "CompanyB":
    return "America/Sao_Paulo"; // illustrative value
  default:
    return "UTC";
}
Or, if there are too many companies for hard-coding to handle, I would make a table with the "Company ID", "Company Name", and "Time Zone Code". Do not link this table to your data. You should add the "Company ID" to the main table with events that use the server time zone.
Use the table with the company time zone codes to populate the lookup filter that will be used to run your query. When your script's event handler reacts to the drop-down menu, it will save the time zone code associated with that company and use that value when displaying times in accordance with your requirement. I would also force your code to load data asynchronously (1,000 records or so every few milliseconds) instead of all at once. This will vastly improve perceived performance, and the user will not be able to tell that their data is still loading.
This effort will let you manipulate the time zone to meet the current and future requirements that might come up.
I think the current schema you are using for your application is not the best for such a problem.
You would have a lot of problems saving different timezones in the same table.
Use UTC, and only UTC, at the DB/schema level; you can also set that in the Postgres configuration.
Depending on the application, you could send back UTC dates and convert them to the current local time in JavaScript/server side. If that's not possible, have one place where the user specifies their current UTC offset, and then, right before you display the date/time, convert it to their time.
This is going to make your life much simpler, and you can achieve great performance at the query level because you would now have a performant DB schema; the SQL functions you have make little sense, as you can achieve much better performance just by using indexing in the DB.
So, as per your specific requirements, I would keep the schema you have with some additions: I would index the id of the company table and store all the timestamps in company_data in UTC.
When the company data is requested, we fetch the timezone (text) from the company table; using this, the backend code/JS can do the timezone-conversion magic.
We have a limited number of timezones, so you can ideally have those set in config to make the lookup easier and faster.

Postgres: using timestamps for pagination

I have a table with a created (timestamptz) column. Now I need to create pagination based on the timestamp, because while a user is viewing the first page, new items could be inserted into this table, which would make the data inconsistent if I used OFFSET for pagination.
So, the question is: should I keep created as timestamptz, or is it better to convert it to an integer (Unix time, e.g. 1472031802812)? If so, are there any disadvantages? Also, at the moment I have now() as the default value for created - is there an alternative function to create a Unix timestamp?
Let me move things from the comments into my answer. You want to use the timestamp type instead of an integer simply because that's exactly what it was designed for. Doing manual conversions between integer timestamps and timestamp objects is just a pain and you gain nothing. And you will need the type eventually for more complex datetime-based queries.
To answer a question about pagination. You simply do a query
SELECT *
FROM table_name
WHERE created < lastTimestamp
ORDER BY created DESC
LIMIT 30
If it is the first query then you set, say, lastTimestamp = '3000-01-01'. Otherwise you set lastTimestamp = last_query.last_row.created.
Optimization
Note that if the table is big then ORDER BY created DESC might not be efficient (especially if called in parallel with different ranges). In this case you can use a moving "time window", for example:
SELECT *
FROM table_name
WHERE
created < lastTimestamp
AND created >= lastTimestamp - interval '1 day'
The 1 day interval is picked arbitrarily (tune it to your needs). You can also sort the results in the app.
If the result is not empty then you update (in your app)
lastTimestamp = last_query.last_row.created
(assuming you've done the sorting; otherwise you take min(last_query.row.created))
If the result is empty then you repeat the query with lastTimestamp = lastTimestamp - interval '1 day' until you fetch something. Also you have to stop if lastTimestamp becomes too low, i.e. when it is lower than any other timestamp in the table (which has to be prefetched).
All of that is under some assumptions for inserts:
new_row.created >= any_row.created and
new_row.created ~ current_time
The distribution of new_row.created is more or less uniform
Assumption 1 ensures that pagination results in consistent data, while assumption 2 is only needed for the default 3000-01-01 date. Assumption 3 makes sure that you don't have big empty gaps that would force you to issue many empty queries.
You mean something like this?
select extract(epoch from now())::integer as unix_time
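If you do decide to store Unix seconds anyway, that same expression can serve as the column default. A sketch only; keeping timestamptz with now(), as recommended above, is simpler:
create table example_items (
    id      serial primary key,
    created bigint not null default extract(epoch from now())::bigint  -- bigint avoids integer's 2038 limit
);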