I am currently modeling a table schema for PostgreSQL that has a lot of columns and is intended to hold a lot of rows. I don't know if it is faster to have more columns or to split the data into more rows.
The schema looks like this (shortened):
CREATE TABLE child_table (
PRIMARY KEY(id, position),
id bigint REFERENCES parent_table(id) ON DELETE CASCADE,
position integer,
account_id bigint REFERENCES accounts(account_id) ON DELETE CASCADE,
attribute_1 integer,
attribute_2 integer,
attribute_3 integer,
-- about 60 more columns
);
At most 10 rows of child_table are related to one row of parent_table. The order is given by the value in position, which ranges from 1 to 10. parent_table is intended to hold 650 million rows. With this schema I would end up with 6.5 billion rows in child_table.
Is it smart to do this? Or is it better to model it this way so that I only have 650 million rows:
CREATE TABLE child_table (
PRIMARY KEY(id),
id bigint,
parent_id bigint REFERENCES parent_table(id) ON DELETE CASCADE,
account_id_1 bigint REFERENCES accounts(account_id) ON DELETE CASCADE,
attribute_1_1 integer,
attribute_1_2 integer,
attribute_1_3 integer,
account_id_2 bigint REFERENCES accounts(account_id) ON DELETE CASCADE,
attribute_2_1 integer,
attribute_2_2 integer,
attribute_2_3 integer,
-- [...]
);
The number of columns and rows matters less than how well they are indexed. Indexes drastically reduce the number of rows that need to be searched; in a well-indexed table, the total number of rows is largely irrelevant. If you try to smash 10 rows into one row, you make indexing much harder, and you also make it harder to write efficient queries that use those indexes.
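To illustrate (a hypothetical index and query, assuming the first, normalized schema): a single index on account_id serves a lookup that the wide layout could only serve with ten per-column indexes and an OR chain across all of them.
-- Normalized layout: one index covers "find all child rows for an account".
CREATE INDEX child_table_account_id_idx ON child_table (account_id);
SELECT id, position, attribute_1
FROM child_table
WHERE account_id = 12345;
-- Wide layout: the same lookup would need indexes on account_id_1 .. account_id_10
-- and a WHERE account_id_1 = 12345 OR account_id_2 = 12345 OR ... chain.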
Postgres has many different types of indexes to cover many different types of data and searches. You can even write your own (though that shouldn't be necessary).
At most 10 rows of child_table are related to one row of parent_table.
Avoid encoding business logic in your schema. Business logic changes all the time, especially arbitrary numbers like 10.
One thing you might consider is reducing the number of attribute columns; 60 is a lot, especially if they are really named attribute_1, attribute_2, etc. Instead, if your attributes are not well defined, store them as a single JSON column of keys and values. Postgres' JSON operations are very efficient (provided you use the jsonb type) and provide a nice middle ground between a key/value store and a relational database.
Similarly, if any sets of attributes are simple lists (like address1, address2, address3), you can also consider using Postgres arrays.
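For example (a rough sketch with made-up names, not your exact schema), folding the loosely defined attributes into one jsonb column looks like this, and a GIN index supports containment queries on it:
CREATE TABLE child_table (
  id bigint REFERENCES parent_table(id) ON DELETE CASCADE,
  position integer,
  account_id bigint REFERENCES accounts(account_id) ON DELETE CASCADE,
  attributes jsonb,  -- replaces attribute_1 .. attribute_n
  PRIMARY KEY (id, position)
);
CREATE INDEX child_table_attributes_idx ON child_table USING GIN (attributes);
-- Find child rows whose attributes contain a given key/value pair:
SELECT id, position FROM child_table WHERE attributes @> '{"attribute_1": 42}';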
I can't give better advice than this without specifics.
Related
Could somebody tell me whether it is a good idea to use varchar as a PK? I mean, is it less efficient than, or equal to, int/uuid?
For example, a car VIN: I want to use it as the PK, but I'm not sure how well it will be indexed or how it will work as a FK, or whether there are pitfalls.
It depends on which kind of data you are going to store.
In some cases (I would say in most cases) it is better to use integer-based primary keys:
for instance, bigint needs only 8 bytes, while varchar can require more space. For this reason, a varchar comparison is often more costly than a bigint comparison.
while joining tables it is more efficient to join on integer-based values rather than strings
an integer-based key as a unique key is more appropriate for table relations, for instance if you are going to store this primary key in other tables as a separate column. Again, varchar will require more space in the other tables too (see point 1).
This post on stackexchange compares non-integer types of primary keys on a particular example.
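A common compromise (a sketch with invented table names, assuming Postgres 10+ identity columns): keep a surrogate bigint primary key for joins and foreign keys, and enforce the VIN with a UNIQUE constraint so it is still indexed and usable for lookups.
CREATE TABLE cars (
  car_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  vin varchar(17) NOT NULL UNIQUE  -- natural key, still indexed via the unique constraint
);
CREATE TABLE services (
  service_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  car_id bigint NOT NULL REFERENCES cars(car_id),  -- 8-byte FK instead of repeating the VIN string
  performed_on date
);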
I am testing the creation of a data warehouse for a relatively big dataset. Based on a ~10% sample of the data, I decided to partition some tables that are expected to exceed memory, which is currently 16 GB.
Based on the recommendation in the PostgreSQL docs:
These benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.
One particular table I am not sure how to partition. This table is frequently queried in 2 different ways, with a WHERE clause that may include the primary key OR another indexed column, so I figured I need a range partition on the existing primary key and the other column (with the other column added to the primary key).
Knowing that the order of columns matters, and given the information below, my question is:
What is the best order for primary key and range partitioning columns?
Original table:
CREATE TABLE items (
item_id BIGSERIAL NOT NULL, -- primary key
src_doc_id bigint NOT NULL, -- every item can exist in one src_doc only and a src_doc can have multiple items
item_name character varying(50) NOT NULL, -- used in `WHERE` clause with src_doc_id and guaranteed to be unique from source
attr_1 bool,
attr_2 bool, -- +15 other columns all bool or integer types
PRIMARY KEY (item_id)
);
CREATE INDEX index_items_src_doc_id ON items USING btree (src_doc_id);
CREATE INDEX index_items_item_name ON items USING hash (item_name);
Table size for 10% of the dataset is ~2 GB (result of pg_total_relation_size) with 3M+ rows. Loading and querying performance is excellent, but given that this table is expected to grow to 30M rows and ~20 GB, I do not know what to expect in terms of performance.
Partitioned table being considered:
CREATE TABLE items (
item_id BIGSERIAL NOT NULL,
src_doc_id bigint NOT NULL,
item_name character varying(50) NOT NULL,
attr_1 bool,
attr_2 bool,
PRIMARY KEY (item_id, src_doc_id) -- should the order be reversed?
) PARTITION BY RANGE (item_id, src_doc_id); -- should the order be reversed?
CREATE INDEX index_items_src_doc_id ON items USING btree (src_doc_id);
CREATE INDEX index_items_item_name ON items USING hash (item_name);
-- Ranges are not initially known, so MAXVALUE is used as the upper bound.
-- Once the upper bound is known, the partition is detached and reattached
-- using the known upper bound, and a new partition is added for the next range.
CREATE TABLE items_00 PARTITION OF items FOR VALUES FROM (MINVALUE, MINVALUE) TO (MAXVALUE, MAXVALUE);
Table usage
On loading data, the load process (a Python script) looks up existing items based on src_doc_id and item_name and stores item_id, so it does not reinsert existing items. item_id gets referenced in a lot of other tables; no foreign keys are used.
On querying for analytics, item information is always looked up based on item_id.
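In other words, the two access patterns look roughly like this (illustrative statements with parameter placeholders):
-- Load path: look up an existing item by source document and name.
SELECT item_id FROM items WHERE src_doc_id = $1 AND item_name = $2;
-- Analytics path: look up item details by primary key.
SELECT * FROM items WHERE item_id = $1;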
So I can't decide on a suitable order for the table's PRIMARY KEY and PARTITION BY RANGE columns:
Should it be (item_id, src_doc_id) or (src_doc_id, item_id)?
I have a PostgreSQL table which I am trying to convert to a TimescaleDB hypertable.
The table looks as follows:
CREATE TABLE public.data
(
event_time timestamp with time zone NOT NULL,
pair_id integer NOT NULL,
entry_id bigint NOT NULL,
event_data int NOT NULL,
CONSTRAINT con1 UNIQUE (pair_id, entry_id ),
CONSTRAINT pair_id_fkey FOREIGN KEY (pair_id)
REFERENCES public.pairs (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
);
When I attempt to convert this table to a TimescaleDB hypertable using the following command:
SELECT create_hypertable(
'data',
'event_time',
chunk_time_interval => INTERVAL '1 hour',
migrate_data => TRUE
);
I get the Error: ERROR: cannot create a unique index without the column "event_time" (used in partitioning)
Question 1: From this post How to convert a simple postgresql table to hypertable or timescale db table using created_at for indexing my understanding is that this is because I have specified a unique constraint (con1) which does not contain the column I am partitioning by - event_time. Is that correct?
Question 2: How should I change my table or hypertable to be able to convert this? I have added some data on how I plan to use the data and the structure of the data below.
Data Properties and usage:
There can be multiple entries with the same event_time - those entries would have entry_ids which are in sequence
This means that if I have 2 entries (event_time 2021-05-18::10:16, id 105, <some_data>) and (event_time 2021-05-18::10:16, id 107, <some_data>) then the entry with id 106 would also have event_time 2021-05-18::10:16
The entry_id is not generated by me and I use the unique constraint con1 to ensure that I am not inserting duplicate data
I will query the data mainly on event_time e.g. to create plots and perform other analysis
At this point the database contains around 4.6 Billion rows but should contain many more soon
I would like to take advantage of TimescaleDB's speed and good compression
I don't care too much about insert performance
Solutions I have been considering:
Pack all the events which have the same timestamp into an array somehow and keep them in one row. I think this would have downsides for compression and provide less flexibility for querying the data. I would also probably end up having to unpack the data on each query.
Remove the unique constraint con1 - then how do I ensure that I don't add the same row twice?
Expand unique constraint con1 to include event_time - wouldn't that decrease performance while at the same time opening up the possibility of accidentally inserting 2 rows with the same entry_id and pair_id but different event_time? (I doubt this is a likely thing to happen though)
You understand correctly that UNIQUE (pair_id, entry_id) doesn't allow creating a hypertable from the table, since unique constraints need to include the partition key, i.e., event_time in your case.
I don't follow how the first option, where records with the same timestamp are packed into a single record, will help with uniqueness.
Removing the unique constraint will allow you to create the hypertable, but, as you mentioned, you will lose the ability to enforce uniqueness.
Adding the time column, e.g., UNIQUE (pair_id, entry_id, event_time), is quite a common approach, but it allows inserting duplicates with different timestamps, as you mentioned. It will also perform worse than option 2 during inserts. You can replace the index on event_time (which you need, since you query on this column, and which is created automatically by TimescaleDB) with the unique index, so you save a little bit, e.g.:
CREATE UNIQUE INDEX indx ON data (event_time, pair_id, entry_id);
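For completeness, a minimal end-to-end sketch of option 3 (the index name is made up; assuming the table definition from the question):
ALTER TABLE public.data DROP CONSTRAINT con1;
SELECT create_hypertable(
  'data',
  'event_time',
  chunk_time_interval => INTERVAL '1 hour',
  migrate_data => TRUE
);
-- Unique index that includes the partitioning column; the plain event_time
-- index that TimescaleDB creates by default can then be dropped.
CREATE UNIQUE INDEX data_time_pair_entry_uq ON public.data (event_time, pair_id, entry_id);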
Another option is to manually create the unique constraint on each chunk table. This guarantees uniqueness within a chunk, but it is still possible to have duplicates in different chunks. The main drawback is that you will need to figure out how to create the constraint whenever a new chunk is created.
Unique constraints without the partition key are not supported in TimescaleDB, since checking them would require accessing all existing chunks, which would kill performance (or it would require a global index, which can be large). I don't think unique constraints are a common case for time-series data, as they are usually tied to artificially generated counter-based identifiers.
Suppose I have key/value/timerange tuples, e.g.:
CREATE TABLE historical_values(
key TEXT,
value NUMERIC,
from_time TIMESTAMPTZ,
to_time TIMESTAMPTZ
)
and would like to be able to efficiently query values (sorted descending) for a specific key and time, e.g.:
SELECT value
FROM historical_values
WHERE
key = [KEY]
AND from_time <= [TIME]
AND to_time >= [TIME]
ORDER BY value DESC
What kind of index/types should I use to get the best lookup performance? I suspect my solution will involve a tstzrange and a GiST index, but I'm not sure how to make that play well with the key matching and value ordering requirements.
Edit: Here's some more information about usage.
Ideally uses features available in Postgres v9.6.
Relation will contain approx. 1k keys and 5m values per key. Values are large integers (up to 32 bytes), mostly unique. Time ranges between few hours to a couple years. Time horizon is 5 years. No NULL values allowed, but some time ranges are open-ended (could either use NULL or a time far into the future for to_time).
The primary key is the key and time range (as there is only one historical value for a time range, per key).
Common operations are a) updating to_time to "close" a historical value, and b) inserting a new value with from_time = NOW.
All values may be queried. Partitioning is an option.
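For reference, the two common write operations would look roughly like this (placeholders in brackets; using 'infinity' as the far-future sentinel for an open-ended to_time, which is one of the options mentioned above):
-- a) "close" the currently open historical value for a key:
UPDATE historical_values
SET to_time = now()
WHERE key = [KEY]
AND to_time = 'infinity';
-- b) insert the new current value:
INSERT INTO historical_values (key, value, from_time, to_time)
VALUES ([KEY], [VALUE], now(), 'infinity');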
DB design
For a big table like that ("1k keys and 5m values per key") I would suggest optimizing storage like this:
CREATE TABLE hist_keys (
key_id serial PRIMARY KEY
, key text NOT NULL UNIQUE
);
CREATE TABLE hist_values (
hist_value_id bigserial PRIMARY KEY -- optional, see below!
, key_id int NOT NULL REFERENCES hist_keys
, value numeric
, from_time timestamptz NOT NULL
, to_time timestamptz NOT NULL
, CONSTRAINT range_valid CHECK (from_time <= to_time) -- or < ?
);
Also helps index performance.
And consider partitioning. List-partitioning on key_id. Maybe even add sub-partitioning (range partitioning this time) on from_time. Read the manual here.
With one partition per key_id, (and constraint exclusion enabled!) Postgres would only look at the small partition (and index) for the given key, instead of the whole big table. Major win.
But I would strongly suggest upgrading to at least Postgres 10 first, which added "declarative partitioning". It makes managing partitions a lot easier.
Better yet, skip forward to Postgres 11 (currently beta), which adds major improvements for partitioning (incl. performance improvements). Most notably, for your goal of getting the best lookup performance, quoting the chapter on partitioning in the release notes for Postgres 11:
Allow faster partition elimination during query processing (Amit Langote, David Rowley, Dilip Kumar)
This speeds access to partitioned tables with many partitions.
Allow partition elimination during query execution (David Rowley, Beena Emerson)
Previously partition elimination could only happen at planning time,
meaning many joins and prepared queries could not use partition elimination.
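A rough declarative list-partitioning sketch (assuming Postgres 11; the surrogate bigserial PK from above is left out here because a primary key on the partitioned parent would have to include the partition column key_id):
CREATE TABLE hist_values (
  key_id    int NOT NULL REFERENCES hist_keys
, value     numeric
, from_time timestamptz NOT NULL
, to_time   timestamptz NOT NULL
, CONSTRAINT range_valid CHECK (from_time <= to_time)
) PARTITION BY LIST (key_id);
-- One partition per key_id, created by script or on demand:
CREATE TABLE hist_values_1 PARTITION OF hist_values FOR VALUES IN (1);
CREATE TABLE hist_values_2 PARTITION OF hist_values FOR VALUES IN (2);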
Index
From the perspective of the value column, the small subset of selected rows is arbitrary for every new query. I don't expect you'll find a useful way to support ORDER BY value DESC with an index. I'd concentrate on the other columns. Maybe add value as last column to each index if you can get index-only scans out of it (possible for btree and GiST).
Without partitioning:
CREATE UNIQUE INDEX hist_btree_idx ON hist_values (key_id, from_time, to_time DESC);
UNIQUE is optional, but see below.
Note the importance of opposing sort orders for from_time and to_time. See (closely related!):
Optimizing queries on a range of timestamps (two columns)
This is almost the same index as the one implementing your PK on (key_id, from_time, to_time). Unfortunately, we cannot use it as PK index. Quoting the manual:
Also, it must be a b-tree index with default sort ordering.
So I added a bigserial as surrogate primary key in my suggested table design above and NOT NULL constraints plus the UNIQUE index to enforce your uniqueness rule.
In Postgres 10 or later consider an IDENTITY column instead:
Auto increment table column
You might even do without the PK constraint in this exceptional case, to avoid duplicating the index and keep the table at minimum size. It depends on the complete situation. You may need it for FK constraints or similar. See:
How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
A GiST index, as you already suspected, may be even faster. I suggest keeping your original timestamptz columns in the table (16 bytes instead of 32 bytes for a tstzrange) and adding key_id after installing the additional module btree_gist:
CREATE INDEX hist_gist_idx ON hist_values
USING GiST (key_id, tstzrange(from_time, to_time, '[]'));
The expression tstzrange(from_time, to_time, '[]') constructs a range including upper and lower bound. Read the manual here.
Your query needs to match the index:
SELECT value
FROM hist_values
WHERE key_id = [KEY_ID]
AND tstzrange(from_time, to_time, '[]') @> [TIME]::timestamptz
ORDER BY value DESC;
It's equivalent to your original.
@> being the range "contains" operator (here containing a single timestamp).
With list-partitioning on key_id
With a separate table for each key_id, we can omit key_id from the index, improving size and performance - especially for the GiST index - and we then also don't need the additional module btree_gist. This results in ~ 1000 partitions and the corresponding indexes:
CREATE INDEX hist999_gist_idx ON hist_values USING GiST (tstzrange(from_time, to_time, '[]'));
Related:
Store the day of the week and time?
Is there an "official benchmark" or a simple rule of thumb to decide when space or performance will be affected?
My table has many simple and indexed fields:
CREATE TABLE t (
id serial PRIMARY KEY,
name varchar(250) NOT NULL,
...
xcontent xml, -- do NULLs here use disk space? hurt performance?
...
UNIQUE(name)
);
and it is a kind of "sparse content": many xcontent values will be NULL... So, do these XML NULLs consume disk space?
Notes
I can normalize; table t would become nt:
CREATE TABLE nt (
id serial PRIMARY KEY,
name varchar(250) NOT NULL,
...
UNIQUE(name)
);
CREATE TABLE nt2 (
t_id int REFERENCES nt(id),
xcontent xml NOT NULL
);
CREATE VIEW nt_full AS
SELECT nt.*, nt2.xcontent FROM nt LEFT JOIN nt2 ON nt.id = nt2.t_id;
So, do I need this complexity? Would this new table arrangement consume less disk space? Consider the use of:
SELECT id, name FROM nt WHERE name>'john'; -- Q1A
SELECT id, name FROM nt_full WHERE name>'john'; -- Q1B
SELECT id, name FROM t WHERE name>'john'; -- Q1C
SELECT id, xcontent FROM nt_full WHERE name>'john'; -- Q2A
SELECT id, xcontent FROM t WHERE name>'john'; -- Q2B
So, in theory, will the performance of Q1A vs Q1B vs Q1C be the same?
And Q2A vs Q2B?
The answer to the question "how much space does a null value take" is: no space at all - at least not in the "data" area.
For each nullable column in the table there is one bit in the row header that marks the column value as null (or not null). So the "space" that the null value takes is already present in the row header - regardless of whether the column is null or not.
Thus the null "value" does not occupy any space in the data block storing the row.
This is documented in the manual: http://www.postgresql.org/docs/current/static/storage-page-layout.html
Postgres will not store long string values (xml, varchar, text, json, ...) in the actual data block if they exceed a certain threshold (about 2000 bytes). If a value is longer than that, it will be stored in a special storage area "away" from your actual data. So splitting the table into two tables with a 1:1 relationship doesn't really buy you that much. Unless you are storing a lot of rows (hundreds of millions), I doubt you will be able to notice the difference - but this also depends on your usage patterns.
The data that is stored "out-of-line" is also automatically compressed.
Details about this can be found in the manual: http://www.postgresql.org/docs/current/static/storage-toast.html
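If you want to see where the space actually goes, a quick check like this (using your table name t) separates the main heap from TOAST and indexes:
SELECT pg_size_pretty(pg_relation_size('t')) AS main_heap,
       pg_size_pretty(pg_table_size('t') - pg_relation_size('t')) AS toast_and_maps,
       pg_size_pretty(pg_indexes_size('t')) AS indexes;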
One reason why the separate table might be an advantage is the necessary "vacuum" cleanup. If you update the XML data a lot but the rest of the table hardly ever changes, then splitting this into two tables might improve overall performance, because the "XML table" will need less "maintenance" and the "main" table won't be changed at all.
In PostgreSQL, a varchar value carries a small header in addition to the content: 1 byte for short values, 4 bytes otherwise. The declared limit in varchar(250) costs nothing by itself.
So if you define it as varchar(250)
and put 10 characters in, it consumes roughly 11 bytes,
and 100 characters consume roughly 101 bytes.
A NULL consumes no space in the data area at all; it is just a bit in the row's null bitmap, as described above. No problem.
If you are in a situation where you need to store large amounts of XML data and end up using a (for instance) blob-like type, you should put that in another table and keep your primary table lean and fast.