I'm not trying to restart the UUID vs. serial integer key debate; I know there are valid points on either side. I'm using UUIDs as the primary key in several of my tables.
Column type: "uuidKey" text NOT NULL
Index: CREATE UNIQUE INDEX grand_pkey ON grand USING btree ("uuidKey")
Primary Key Constraint: ADD CONSTRAINT grand_pkey PRIMARY KEY ("uuidKey");
Here is my first question: with PostgreSQL 9.4, is there any performance benefit to setting the column type to UUID?
The documentation (http://www.postgresql.org/docs/9.4/static/datatype-uuid.html) describes UUIDs, but aside from type safety, is there any benefit to using this type instead of text? The character types documentation indicates that char(n) has no advantage over text in PostgreSQL:
Tip: There is no performance difference among these three types, apart
from increased storage space when using the blank-padded type, and a
few extra CPU cycles to check the length when storing into a
length-constrained column. While character(n) has performance
advantages in some other database systems, there is no such advantage
in PostgreSQL; in fact character(n) is usually the slowest of the
three because of its additional storage costs. In most situations text
or character varying should be used instead.
I'm not worried about disk space; I'm just wondering whether it's worth my time benchmarking UUID vs. text column types.
Second question: hash vs. b-tree indexes. There's no point in sorting UUID keys, so would a b-tree have any other advantage over a hash index?
We had a table with about 30k rows that (for a specific, unrelated architectural reason) had UUIDs stored in a text field and indexed. I noticed that query performance was slower than I'd have expected. I created a new UUID column, copied in the text UUID primary key, and compared the two plans below: 2.652 ms vs. 0.029 ms. Quite a difference!
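For reference, the comparison setup was roughly the following (a sketch, since the exact statements weren't given; the column and index names match the plans below):
-- Add a native uuid copy of the text primary key, index it, and refresh stats
ALTER TABLE tmptable ADD COLUMN uuidkey uuid;
UPDATE tmptable SET uuidkey = primarykey::uuid;
CREATE INDEX idx_tmptable ON tmptable (uuidkey);
ANALYZE tmptable;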
-- With text index
QUERY PLAN
Index Scan using tmptable_pkey on tmptable (cost=0.41..1024.34 rows=1 width=1797) (actual time=0.183..2.632 rows=1 loops=1)
Index Cond: (primarykey = '755ad490-9a34-4c9f-8027-45fa37632b04'::text)
Planning time: 0.121 ms
Execution time: 2.652 ms
-- With a uuid index
QUERY PLAN
Index Scan using idx_tmptable on tmptable (cost=0.29..2.51 rows=1 width=1797) (actual time=0.012..0.013 rows=1 loops=1)
Index Cond: (uuidkey = '755ad490-9a34-4c9f-8027-45fa37632b04'::uuid)
Planning time: 0.109 ms
Execution time: 0.029 ms
A UUID is a 16-byte value; the same value stored as text takes 32 bytes plus the varlena header. The storage sizes are:
select
pg_column_size('a0eebc999c0b4ef8bb6d6bb9bd380a11'::text) as text_size,
pg_column_size('a0eebc999c0b4ef8bb6d6bb9bd380a11'::uuid) as uuid_size;
text_size | uuid_size
-----------+-----------
36 | 16
Smaller tables lead to faster operations.
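If you decide the native type is worth it, an existing text column can be converted in place. A minimal sketch, using the table and column from the top of the question (note this rewrites the table and its indexes, taking an exclusive lock):
ALTER TABLE grand
  ALTER COLUMN "uuidKey" TYPE uuid
  USING "uuidKey"::uuid;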
Why would Postgres refuse to use a foreign key? Here is my query; the join is on a foreign key.
EXPLAIN
SELECT *
FROM listings_searchresult
JOIN listings_searchquery ON listings_searchresult.search_query_id = listings_searchquery.id
Hash Join
  Hash Cond: (listings_searchresult.search_query_id = listings_searchquery.id)
  ->  Seq Scan on listings_searchresult (cost=0.00..4898345.08 rows=83607008 width=1129)
  ->  Hash (cost=570499.88..570499.88 rows=20226788 width=109)
        ->  Seq Scan on listings_searchquery (cost=0.00..570499.88 rows=20226788 width=109)
Why would Postgres not use the foreign key? I deleted it and re-added it in case it was corrupted, but it still isn't used. Can I somehow force Postgres to use it?
Unless one of the tables is very small, reading both tables in full is the most efficient way to process such a query.
With a nested loop join, which is what you envision, PostgreSQL would have to scan an index on listings_searchresult 20 million times.
Using a hash join as it does, PostgreSQL builds a hash table in memory from the smaller table and probes that hash table for each row in the bigger table, which will perform better.
Joining two big tables without an additional WHERE condition is always going to be slow and can potentially produce a large result set.
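For illustration, once the query is restricted to a few rows, the planner can switch to a nested loop over an index on the join column. A hedged sketch (the index name and the WHERE value are made up):
-- An index on the foreign key column enables a nested loop join
-- when only a few listings_searchquery rows qualify:
CREATE INDEX idx_searchresult_query_id ON listings_searchresult (search_query_id);

SELECT *
FROM listings_searchquery q
JOIN listings_searchresult r ON r.search_query_id = q.id
WHERE q.id = 12345;  -- selective condition (illustrative value)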
So I have a (logged) table with two columns A, B, containing text.
They basically contain the same type of information, it's just two columns because of where the data came from.
I wanted a table of all unique values (so I made the column the primary key), not caring which column a value came from. But when I asked Postgres to run
insert into new_table(value) select A from old_table on conflict (value) do nothing; (and later the same for column B)
it used one CPU core and read from my SSD at only about 5 MB/s. I stopped it after a couple of hours.
I suspected the b-tree might be the bottleneck, so I added a hash index on the only attribute in my new table. But it still maxes out one core and reads from the SSD at only 5 MB/s. My Java program can fill a HashSet at 150 MB/s or more, so Postgres should be way faster than 5 MB/s, right? I've analyzed my old table, and I made my new table unlogged for faster inserts, yet it still uses one core and reads extremely slowly.
How to fix this?
EDIT: This is the EXPLAIN output for the above query. It seems like Postgres is using the b-tree it created for the primary key instead of my (much faster, isn't it??) hash index.
Insert on users (cost=0.00..28648717.24 rows=1340108416 width=14)
Conflict Resolution: NOTHING
Conflict Arbiter Indexes: users_pkey
-> Seq Scan on games (cost=0.00..28648717.24 rows=1340108416 width=14)
The ON CONFLICT mechanism is primarily for resolving concurrency-induced conflicts. You can use it in a "static" case like this, but other methods will be more efficient.
Just insert only distinct values in the first place. UNION (as opposed to UNION ALL) removes duplicates across both branches, so the result is already distinct:
insert into new_table(value)
select A from old_table
union
select B from old_table;
For increased performance, don't add the primary key until after the table is populated. And set work_mem to the largest value you credibly can.
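Putting that advice together, one possible sequence looks like this (a sketch; the work_mem value is illustrative):
set work_mem = '1GB';  -- generous memory for the UNION's deduplication
create unlogged table new_table (value text);

insert into new_table(value)
select A from old_table
union
select B from old_table;

alter table new_table add primary key (value);  -- build the index once, at the end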
My Java program can fill a HashSet at 150 MB/s or more,
That is working with the hashset entirely in memory. PostgreSQL indexes are disk-based structures. They do benefit from caching, but that only goes so far and depends on hardware and settings you haven't told us about.
It seems like Postgres is using the b-tree it created for the primary key instead of my (much faster, isn't it??) hash index.
It can only use the index that defines the constraint, which is the b-tree index; hash indexes cannot support primary key constraints. You could define an EXCLUDE constraint using a hash index, but that would just make it slower yet. And in general, hash indexes are not "much faster" than b-tree indexes in PostgreSQL.
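For completeness, this is what that (not recommended) EXCLUDE variant would look like; a sketch, with a made-up constraint name:
-- a hash index cannot back a PRIMARY KEY, but an exclusion
-- constraint can use hash (and is slower, as noted above):
alter table new_table
  add constraint new_table_value_excl
  exclude using hash (value with =);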
I have a relatively small table (~50k rows). When I select all records, it takes ~40 seconds. The table has 3 JSONB columns. When I select every column except for the JSONBs, the query takes ~700 ms.
If I add in just one of the JSONB fields, the query time jumps to nearly 10s.
I'm never using a where clause referencing something inside the JSONB, just selecting *. Even so, I tried adding GIN indexes because I saw them frequently mentioned as a performance booster for JSONB.
I've run a full VACUUM.
Postgres version 9.6
explain (analyze, buffers) select * from message;
Seq Scan on message (cost=0.00..5541.69 rows=52969 width=834) (actual time=1.736..116.183 rows=52969 loops=1)
Buffers: shared hit=64 read=4948
Planning time: 0.151 ms
Execution time: 133.555 ms
jsonb is a PostgreSQL varlena data type. That means that when a value is longer than about 2 KB, it is stored out of line in an auxiliary table (the TOAST table), and only a pointer to the TOAST table is stored in the main table. So as long as you don't touch the jsonb column, the value is not read.
A GIN index doesn't help in this case. It only helps with searching inside the values.
10 seconds for 50K values is a long time; maybe your jsonb values are pretty long, or your I/O system doesn't perform well. Please check the size of your table and the performance of your I/O. Cheap cloud machines usually have terrible I/O.
Another possible reason for the slowdown is the complexity of the jsonb data type. jsonb is a serialized tree of JSON sub-objects. If you don't need the special features of jsonb, use the json data type instead: json is internally just text (the format is checked on input only), so producing output requires no work, whereas jsonb must be serialized on output, which is more expensive.
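To check where the bytes live, something like the following may help (a sketch; "payload" stands in for one of your jsonb columns):
-- heap size vs. total size (the difference is mostly TOAST and indexes):
select pg_size_pretty(pg_relation_size('message')) as main_size,
       pg_size_pretty(pg_total_relation_size('message')) as total_size;

-- average stored size of one jsonb column:
select avg(pg_column_size(payload)) as avg_jsonb_bytes
from message;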
Suppose I have key/value/timerange tuples, e.g.:
CREATE TABLE historical_values(
key TEXT,
value NUMERIC,
from_time TIMESTAMPTZ,
to_time TIMESTAMPTZ
)
and would like to be able to efficiently query values (sorted descending) for a specific key and time, e.g.:
SELECT value
FROM historical_values
WHERE
key = [KEY]
AND from_time <= [TIME]
AND to_time >= [TIME]
ORDER BY value DESC
What kind of index/types should I use to get the best lookup performance? I suspect my solution will involve a tstzrange and a gist index, but I'm
not sure how to make that play well with the key matching and value ordering requirements.
Edit: Here's some more information about usage.
Ideally uses features available in Postgres v9.6.
Relation will contain approx. 1k keys and 5m values per key. Values are large integers (up to 32 bytes), mostly unique. Time ranges between few hours to a couple years. Time horizon is 5 years. No NULL values allowed, but some time ranges are open-ended (could either use NULL or a time far into the future for to_time).
The primary key is the key and time range (as there is only one historical value for a time range, per key).
Common operations are a) updating to_time to "close" a historical value, and b) inserting a new value with from_time = now(). A sketch of both follows below.
All values may be queried. Partitioning is an option.
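For concreteness, the two operations might look like this (a sketch with placeholder values, assuming open-ended ranges use 'infinity' for to_time):
-- a) "close" the current value for a key:
UPDATE historical_values
SET    to_time = now()
WHERE  key = 'some-key'
AND    to_time = 'infinity';

-- b) insert the new current value:
INSERT INTO historical_values (key, value, from_time, to_time)
VALUES ('some-key', 42, now(), 'infinity');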
DB design
For a big table like that ("1k keys and 5m values per key") I would suggest optimizing storage like this:
CREATE TABLE hist_keys (
key_id serial PRIMARY KEY
, key text NOT NULL UNIQUE
);
CREATE TABLE hist_values (
hist_value_id bigserial PRIMARY KEY -- optional, see below!
, key_id int NOT NULL REFERENCES hist_keys
, value numeric
, from_time timestamptz NOT NULL
, to_time timestamptz NOT NULL
, CONSTRAINT range_valid CHECK (from_time <= to_time) -- or < ?
);
This also helps index performance.
And consider partitioning: list-partitioning on key_id, maybe even with sub-partitioning (range partitioning this time) on from_time. Read the manual here.
With one partition per key_id (and constraint exclusion enabled!), Postgres would only look at the small partition (and its index) for the given key, instead of the whole big table. A major win.
But I would strongly suggest upgrading to at least Postgres 10 first, which added declarative partitioning (a minimal sketch follows the quote below). It makes managing partitions a lot easier.
Better yet, skip forward to Postgres 11 (currently beta), which adds major improvements for partitioning, including performance. Most notably for your goal of the best lookup performance, quoting the chapter on partitioning in the release notes:
Allow faster partition elimination during query processing (Amit Langote, David Rowley, Dilip Kumar)
This speeds access to partitioned tables with many partitions.
Allow partition elimination during query execution (David Rowley, Beena Emerson)
Previously partition elimination could only happen at planning time,
meaning many joins and prepared queries could not use partition elimination.
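As promised, a minimal sketch of declarative list-partitioning (Postgres 10+) following the hist_values design above; the surrogate PK is omitted because a partitioned table cannot have one in Postgres 10:
CREATE TABLE hist_values (
  key_id    int         NOT NULL
, value     numeric
, from_time timestamptz NOT NULL
, to_time   timestamptz NOT NULL
) PARTITION BY LIST (key_id);

CREATE TABLE hist_values_1 PARTITION OF hist_values FOR VALUES IN (1);
CREATE TABLE hist_values_2 PARTITION OF hist_values FOR VALUES IN (2);
-- ... one partition per key_id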
Index
From the perspective of the value column, the small subset of selected rows is arbitrary for every new query. I don't expect you'll find a useful way to support ORDER BY value DESC with an index. I'd concentrate on the other columns. Maybe add value as last column to each index if you can get index-only scans out of it (possible for btree and GiST).
Without partitioning:
CREATE UNIQUE INDEX hist_btree_idx ON hist_values (key_id, from_time, to_time DESC);
UNIQUE is optional, but see below.
Note the importance of opposing sort orders for from_time and to_time. See (closely related!):
Optimizing queries on a range of timestamps (two columns)
This is almost the same index as the one implementing your PK on (key_id, from_time, to_time). Unfortunately, we cannot use it as the PK index. Quoting the manual:
Also, it must be a b-tree index with default sort ordering.
So I added a bigserial as surrogate primary key in my suggested table design above, plus NOT NULL constraints and the UNIQUE index to enforce your uniqueness rule.
In Postgres 10 or later consider an IDENTITY column instead:
Auto increment table column
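A minimal sketch of that variant (Postgres 10+), with everything else as in the design above:
CREATE TABLE hist_values (
  hist_value_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
  -- ... remaining columns as above
);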
You might even do without the PK constraint in this exceptional case, to avoid duplicating the index and keep the table at minimum size. It depends on the complete situation; you may need the PK for FK constraints or similar. See:
How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
A GiST index, like you already suspected, may be even faster. I suggest keeping your original timestamptz columns in the table (16 bytes instead of 32 bytes for a tstzrange) and adding key_id after installing the additional module btree_gist:
CREATE INDEX hist_gist_idx ON hist_values
USING GiST (key_id, tstzrange(from_time, to_time, '[]'));
The expression tstzrange(from_time, to_time, '[]') constructs a range including upper and lower bound. Read the manual here.
Your query needs to match the index:
SELECT value
FROM hist_values
WHERE key_id = [KEY]
AND tstzrange(from_time, to_time, '[]') @> [TIME]::timestamptz
ORDER BY value DESC;
It's equivalent to your original.
@> being the range "contains element" operator.
With list-partitioning on key_id
With a separate table for each key_id, we can omit key_id from the index, improving size and performance, especially for the GiST index, for which we then also don't need the additional module btree_gist. This results in ~1000 partitions and the corresponding indexes:
CREATE INDEX hist999_gist_idx ON hist_values USING GiST (tstzrange(from_time, to_time, '[]'));
Related:
Store the day of the week and time?
I'm running Postgres 9.5 and am playing around with BRIN indexes. I have a fact table with about 150 million rows and I'm trying to get PG to use a BRIN index. My query is:
select sum(transaction_amt),
sum (total_amt)
from fact_transaction
where transaction_date_key between 20170101 and 20170201
I created both a BTREE index and a BRIN index (default pages_per_range value of 128) on column transaction_date_key (the above query covers January 1 to February 1, 2017). I would have thought that PG would choose the BRIN index, but it goes with the BTREE index. Here is the explain plan:
https://explain.depesz.com/s/uPI
I then deleted the BTREE index, ran VACUUM ANALYZE on the table, and re-ran the query. It did choose the BRIN index; however, the run time was considerably longer:
https://explain.depesz.com/s/5VXi
In fact, my tests were all faster when using the BTREE index rather than the BRIN index. I thought it was supposed to be the opposite?
I'd prefer to use the BRIN index because of its smaller size, but I can't seem to get PG to use it.
Note: I loaded the data in order, from January 2017 through June 2017 (defined via transaction_date_key), as I read that physical table ordering makes a difference with BRIN indexes.
Does anyone know why PG is choosing to use the BTREE index and why BRIN is so much slower in my case?
It seems like the BRIN index scan is not very selective – it returns 30 million rows, all of which have to be re-checked, which is where the time is spent.
That probably means that transaction_date_key is not well correlated with the physical location of the rows in the table.
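You can check that from the planner statistics: a correlation close to 1.0 means the column's values follow the physical row order, which is what BRIN needs:
SELECT attname, correlation
FROM   pg_stats
WHERE  tablename = 'fact_transaction'
AND    attname   = 'transaction_date_key';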
A BRIN index works by "lumping together" ranges of table blocks (how many can be configured with the storage parameter pages_per_range, whose default value is 128). The minimum and maximum of the indexed value for each range of blocks are stored.
So a lot of block ranges in your table contain transaction_date_key between 20170101 and 20170201, and all of these blocks have to be scanned to compute the query result.
I see two options to improve the situation:
Lower the pages_per_range storage parameter. That will make the index bigger, but it will reduce the number of “false positive” blocks.
Cluster the table on the transaction_date_key attribute. As you have found out, that requires (at least temporarily) a B-tree index on the column.
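Both options as a sketch (the index names are assumptions, and pages_per_range = 16 is just an example value to tune):
-- 1) rebuild the BRIN index with finer granularity:
DROP INDEX IF EXISTS fact_transaction_date_brin;
CREATE INDEX fact_transaction_date_brin
  ON fact_transaction USING brin (transaction_date_key)
  WITH (pages_per_range = 16);

-- 2) physically reorder the table, using a (possibly temporary) b-tree index:
CREATE INDEX fact_transaction_date_btree ON fact_transaction (transaction_date_key);
CLUSTER fact_transaction USING fact_transaction_date_btree;
ANALYZE fact_transaction;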