Store value history in PostgreSQL

Let's say we need to store readings from temperature sensors and keep a history, one entry per day. Each measurement is a tuple day: number, value: string (some days may be missing, which is why we need to store the day explicitly). There are hundreds of thousands of sensors.
Adding a new measurement should not require re-reading and re-writing the whole object; it should be a small incremental addition.
Also, there can be multiple readings for the same day. In that case only the latest measurement for that day should be kept.
What data structure should be used? I can see the following ways:
CREATE TABLE sensor_history (
    sensor_id integer,
    time integer[],
    value text[]
);
or
CREATE TABLE sensor_history (
    sensor_id integer,
    history json/jsonb/hstore
);

Why not just store one row per tuple? e.g.
CREATE TABLE sensor_history (
    sensor_id integer,
    time integer,
    value text
);
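For completeness, a minimal sketch of how this plain-rows variant can also enforce the "keep only the latest reading per day" requirement, assuming a primary key on (sensor_id, time) and PostgreSQL 9.5+ for ON CONFLICT (the sample values are made up):
CREATE TABLE sensor_history (
    sensor_id integer NOT NULL,
    time integer NOT NULL,          -- the day number from the question
    value text,
    PRIMARY KEY (sensor_id, time)   -- at most one row per sensor and day
);

-- Adding a measurement is a small incremental write; a repeated day simply
-- overwrites the earlier value for that sensor and day:
INSERT INTO sensor_history (sensor_id, time, value)
VALUES (42, 18750, '21.5')
ON CONFLICT (sensor_id, time) DO UPDATE SET value = EXCLUDED.value;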

PostgreSQL indexed columns choice

I have these two tables:
CREATE TABLE ref_dates(
    date_id SERIAL PRIMARY KEY,
    month int NOT NULL,
    year int NOT NULL,
    month_name CHAR(255)
);
CREATE TABLE util_kpi(
    kpi_id SERIAL PRIMARY KEY,
    kpi_description int NOT NULL,
    kpi_value float,
    date_id int NOT NULL,
    dInsertion timestamp default CURRENT_TIMESTAMP,
    CONSTRAINT fk_ref_kpi FOREIGN KEY (date_id) REFERENCES ref_dates(date_id)
);
Usually, the type of request I'd run is:
Selecting kpi_description and kpi_value for a specified month and year:
SELECT kpi_description, kpi_value FROM util_kpi u JOIN ref_dates r ON u.date_id = r.date_id WHERE month=X AND year=XXXX
Selecting kpi_description and kpi_value for a specified kpi_description, month and year:
SELECT kpi_description, kpi_value FROM util_kpi u JOIN ref_dates r ON u.date_id = r.date_id WHERE month=X AND year=XXXX AND kpi_description='XXXXXXXXXXX'
I thought about creating these indexes:
CREATE INDEX idx_ref_date_year_month ON ref_dates(year, month);
CREATE INDEX idx_util_kpi_date ON util_kpi(date_id);
First of all, I want to know if it's a good idea to create these indexes.
Second and finally, I was wondering if it's a good idea to add kpi_description to the indexes on the util_kpi table.
Can you give me your opinion?
Regards
It's not possible to give an exact answer without looking at the data, so I can only offer an opinion.
A. ref_dates
This table looks very similar to a date dimension in ROLAP schemas.
So the first thing I would do is change date_id from SERIAL to:
a DATE datatype,
or even a "smart integer": an integer datatype but in the form YYYYMMDD, e.g. 20210430. It may look strange, but such identifiers are not uncommon in date dimensions.
The main point of using such a form is that date_id in the fact tables becomes informative even without joining to the date dimension.
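As an illustration only (not from the original question), a minimal sketch of the "smart integer" variant, with the dimension populated once from generate_series; the column names are taken from the question's DDL and the date range is arbitrary:
CREATE TABLE ref_dates (
    date_id integer PRIMARY KEY,   -- e.g. 20210430 for 2021-04-30
    month int NOT NULL,
    year int NOT NULL,
    month_name varchar(255)
);

-- Populate the dimension once for a wide date range:
INSERT INTO ref_dates (date_id, month, year, month_name)
SELECT to_char(d, 'YYYYMMDD')::int,
       extract(month FROM d)::int,
       extract(year FROM d)::int,
       to_char(d, 'FMMonth')
FROM generate_series('2020-01-01'::date, '2049-12-31'::date, interval '1 day') AS s(d);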
B. util_kpi
I suppose that:
ref_dates is a date dimension, so it has roughly 365 * number-of-years rows. It could be populated once for 20-30 years into the future and still would not be really big.
util_kpi is a fact table, which should be big, as in "really big": millions of records and more.
For util_kpi I expected an id of a time dimension but did not find one, so no hourly stats are planned yet.
I do see util_kpi.dInsertion, which I suppose is intended to serve as the time dimension. I would consider extracting it into a time_id holding hours, minutes and seconds (if milliseconds are not needed).
C. Indexing
ref_dates: it does not matter much how you index ref_dates because it's a relatively small table. Maybe a unique index on date_id with an INCLUDE clause covering the remaining fields would be best. Don't create individual indexes on low-selectivity fields like year or month - they won't help much, though they won't hurt a lot either.
util_kpi: you need an index on date_id (as for any foreign key to dimension tables that may appear in the future).
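For reference, a rough sketch of the indexes described above (the index names are my own; INCLUDE requires PostgreSQL 11 or later, and the composite index is only worth adding if kpi_description turns out to be selective in your data):
-- ref_dates: the primary key already gives a unique index on date_id;
-- a covering variant could look like this:
CREATE UNIQUE INDEX idx_ref_dates_covering
    ON ref_dates (date_id) INCLUDE (year, month, month_name);

-- util_kpi: index the foreign key used in the joins:
CREATE INDEX idx_util_kpi_date ON util_kpi (date_id);

-- Optional, if kpi_description proves selective, for the second query shape:
CREATE INDEX idx_util_kpi_date_descr ON util_kpi (date_id, kpi_description);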
Those are my thoughts, based on the assumptions above.

How do I make a simple day dimension table for data warehousing star schema with postgresql?

How would I go about creating and populating a simple DAY dimension table for a star schema in PostgreSQL?
It is for an intro course on data warehousing, so it only has a few fields, but most of the examples online are very involved and seem too complicated for a beginner. This isn't for an assignment - it is for studying, because I am trying to make my own simple star schema with a fact table so I can start getting comfortable with it.
Can anyone give me a simple example of how I'd create the table with just a few fields (day_key as the surrogate key, a string describing the day, and some integer values representing the days or months, for example) so I can at least get started on understanding?
Here is a very simple DAY dimension table that should work for most versions of PostgreSQL (I am using 10.5). This is just something that should help someone newer to data warehousing build a basic day dimension when getting started.
Create a Day Table
CREATE TABLE day (
    day_key SERIAL PRIMARY KEY,  -- SERIAL is an integer that auto-increments as new rows are added
    description VARCHAR(40),     -- a 'string' for a description
    full_date DATE,              -- an actual date type
    month_number INTEGER,
    month_name VARCHAR(40),
    year INTEGER
);
Inserting Rows into the Day dimension
INSERT INTO day(description, full_date, month_number, month_name, year)
SELECT
    to_char(days.d, 'FMMonth DD, YYYY'),
    days.d::DATE,
    to_char(days.d, 'MM')::integer,
    to_char(days.d, 'FMMonth'),
    to_char(days.d, 'YYYY')::integer
FROM (
    SELECT generate_series(
        ('2019-01-01')::date,  -- 'start' date
        ('2019-12-31')::date,  -- 'end' date
        interval '1 day'       -- one row for each day between start and end
    )
) AS days(d);
Notes:
Basically you are just using the rows generated by the nested SELECT generate_series(...) to populate the day table.
I used FM twice above to remove some of the whitespace padding automatically added by some of these date format patterns.
I'd recommend removing the INSERT INTO day(...) line the first time you run this, just to make sure the format of each column is what you're after before inserting into your table.
This is just what I've seen commonly used - the PostgreSQL documentation has more thorough examples of ways to format date types and derive all kinds of useful dimension attributes.
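To see how the dimension gets used, here is a hypothetical fact table (the sales table and its columns are made up for illustration, not part of the question) joined to the day dimension in a typical star-schema query:
CREATE TABLE sales (
    sale_id SERIAL PRIMARY KEY,
    day_key INTEGER REFERENCES day (day_key),  -- foreign key to the dimension
    amount NUMERIC(10,2)
);

-- Total sales per month, resolved through the day dimension:
SELECT d.year, d.month_name, sum(s.amount) AS total_amount
FROM sales s
JOIN day d ON d.day_key = s.day_key
GROUP BY d.year, d.month_number, d.month_name
ORDER BY d.year, d.month_number;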

Calculate the difference between two hours on PostgreSQL 9.4

Given a table like the following:
create table meetings(
    id integer primary key,
    start_time varchar,
    end_time varchar
);
Considering that the strings stored in this table follow the 24-hour format 'HH:MM', is there a command in PostgreSQL 9.4 with which I can cast the fields to time, calculate the difference between them, and return a single result counting the full hours available?
e.g.: start_time: '08:00', end_time: '12:00'
Result must be 4.
In your particular case, assuming that you are working with clock values (both of them belonging to the same day), I would guess you can do this:
(clock_to::time - clock_from::time) as duration
Allow me to leave you a ready-to-run example:
with cte as (
    select '4:00'::varchar as clock_from, '14:00'::varchar as clock_to
)
select (clock_to::time - clock_from::time) as duration
from cte;
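Since the question asks for the count of full hours as a number rather than an interval, one way (a sketch, still assuming both values belong to the same day) is to convert the interval to seconds with extract(epoch ...) and divide:
with cte as (
    select '08:00'::varchar as start_time, '12:00'::varchar as end_time
)
select floor(extract(epoch from (end_time::time - start_time::time)) / 3600)::int as full_hours
from cte;
-- returns 4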

Handling of multiple queries as one result

Let's say I have this table:
CREATE TABLE device_data_by_year (
    year int,
    device_id uuid,
    sensor_id uuid,
    nano_since_epoch bigint,
    unit text,
    value double,
    source text,
    username text,
    PRIMARY KEY (year, device_id, nano_since_epoch, sensor_id)
) WITH CLUSTERING ORDER BY (device_id desc, nano_since_epoch desc);
I need to query data for a particular device and sensor over a period spanning 2017 and 2018. In this case 2 queries will be issued:
select * from device_data_by_year where year = 2017 AND device_id = ? AND sensor_id = ? AND nano_since_epoch >= ? AND nano_since_epoch <= ?
select * from device_data_by_year where year = 2018 AND device_id = ? AND sensor_id = ? AND nano_since_epoch >= ? AND nano_since_epoch <= ?
Currently I iterate over the result sets and build a List with all the results. I am aware that this could (and will) run into OOM problems some day. Is there a better approach to handling / merging the query results into one set?
Thanks
You can use IN to specify a list of years, but this is not a very optimal solution: because the year field is the partition key, the data will most probably sit on different machines, so one node will act as the "coordinator" and will have to ask the other machines for their results and aggregate the data. From a performance point of view, 2 async requests issued in parallel could be faster, with the merge then done on the client side.
P.S. Your data model has quite serious problems - you partition by year, which means:
Data isn't distributed very well across the cluster - only N=RF machines will hold the data;
These partitions will be very large, even if you have only hundreds of devices reporting one measurement per minute;
Only one partition will be "hot" - it receives all writes during the current year, while the other partitions are rarely touched.
You can use months, or even days, as the partition key to decrease the partition size, but that still won't solve the problem of the "hot" partition.
If I remember correctly, the Data Modelling course at DataStax Academy has an example of a data model for a sensor network.
I changed the table structure to:
CREATE TABLE device_data (
    week_first_day timestamp,
    device_id uuid,
    sensor_id uuid,
    nano_since_epoch bigint,
    unit text,
    value double,
    source text,
    username text,
    PRIMARY KEY ((week_first_day, device_id), nano_since_epoch, sensor_id)
) WITH CLUSTERING ORDER BY (nano_since_epoch desc, sensor_id desc);
following @AlexOtt's proposal. Some changes to the application logic are required - for example, findAllByYear now needs to iterate over the individual weeks.
Coming back to the original question: would you rather send 52 queries (getDataByYear, one query per week), or would you use the IN operator here?

Looping through unique dates in PostgreSQL

In Python (pandas) I read from my database and then use a pivot table to aggregate the data for each day. The raw data I am working with is about 2 million rows per day, one row per person per 30 minutes. I am aggregating it to daily granularity instead, so it is a lot smaller for visualization.
So in pandas, I would read each date into memory, aggregate it, and then load it into a fresh table in Postgres.
How can I do this directly in Postgres? Can I loop through each unique report_date in my table, group by it, and then append the result to another table? I am assuming doing it in Postgres would be fast compared to reading the data over the network in Python, writing a temporary .csv file, and then writing it back over the network.
Here's an example: Suppose that you have a table
CREATE TABLE post (
    posted_at timestamptz not null,
    user_id integer not null,
    score integer not null
);
representing the scores various users have earned from posts they made in an SO-like forum. Then the following query
SELECT user_id, posted_at::date AS day, sum(score) AS score
FROM post
GROUP BY user_id, posted_at::date;
will aggregate the scores per user per day.
Note that this considers the day to change at midnight in the session's time zone (00:00 UTC when TimeZone is set to UTC, which is how SO counts days). If you want a fixed cutoff regardless of the session setting, say midnight Paris time, you can do it like so:
SELECT user_id, (posted_at AT TIME ZONE 'Europe/Paris')::date AS day, sum(score) AS score
FROM post
GROUP BY user_id, (posted_at AT TIME ZONE 'Europe/Paris')::date;
To get good performance for the above queries, you might want to create an (expression) index on (user_id, posted_at::date), or similarly for the second case.
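A hedged sketch of such an index for the Paris-time variant (the index name is made up). Note that posted_at::date on its own is normally rejected in an index definition, because casting timestamptz to date depends on the session TimeZone setting and is therefore not immutable; pinning the zone with AT TIME ZONE makes the expression indexable:
-- Expression index matching the Paris-time grouping above:
CREATE INDEX post_user_day_idx
    ON post (user_id, ((posted_at AT TIME ZONE 'Europe/Paris')::date));
As for "appending it to another table" like in the original pandas workflow: the same aggregate SELECT can simply feed an INSERT INTO ... SELECT into a summary table, with no explicit loop over dates needed.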