I'm writing a query to return all rows from a table except those in some list of values that is constant at query time, e.g. SELECT * FROM table WHERE id IN (%), where % is guaranteed to be a list of values, not a subquery. However, this list may be up to 1000 elements long in some cases. Should I limit it to a smaller sublist (50-100 elements is as low as I can go in this case), or would the performance gain be negligible?
I assume it's a large table, otherwise it wouldn't matter much.
Depending on the table size and the number of keys, this may turn into a sequential scan. If there are many IN keys, Postgres often chooses not to use an index for them; the more keys, the higher the chance of a sequential scan.
If you use another indexed column in WHERE, like:
select * from table where id in (%) and my_date > '2010-01-01';
It's likely to fetch all rows matching the indexed column (my_date) and then perform an in-memory scan on them.
Using a JOIN against a persistent or temporary table may help, but is not guaranteed to. It will still need to locate all the rows, either with a nested loop (unlikely for large data sets) or with a hash/merge join.
I would say the solution is:
Use as few IN keys as possible.
Use other criteria for indexing and querying whenever possible. If IN requires an in-memory scan of all rows, at least there will be fewer of them thanks to additional criteria.
Use a temporary table to JOIN against; it gives better performance and has no limits (see the sketch below). An IN() with 1000 arguments will give you problems in any database.
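A minimal sketch of the temporary-table approach (table and column names are hypothetical):
-- Load the key list into a temp table once, then join against it.
CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
INSERT INTO wanted_ids (id) VALUES (1), (2), (3); -- ... up to 1000 values
ANALYZE wanted_ids; -- give the planner row estimates for the temp table
SELECT t.* FROM big_table t JOIN wanted_ids w ON w.id = t.id;
-- For the "all rows except the list" case, use an anti-join instead:
SELECT t.* FROM big_table t WHERE NOT EXISTS (SELECT 1 FROM wanted_ids w WHERE w.id = t.id);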
I have a table with a geometry column.
I have 2 indexes on this column:
create index idg1 on tbl using gist(geom)
create index idg2 on tbl using gist(st_geomfromewkb((geom)::bytea))
I have a lot of queries using the geom (geometry) field.
Which index is used? (When and why?)
If there are two indexes on the same column (as shown here), can SELECT queries run slower than with just one index defined on the column?
The use of an index depends on how the index was defined and how the query is invoked. If you SELECT <cols> FROM tbl WHERE geom = <some_value>, then you will use the idg1 index. If you SELECT <cols> FROM tbl WHERE st_geomfromewkb((geom)::bytea) = <some_value>, then you will use the idg2 index.
A good way to know which index will be used for a particular query is to run the query with EXPLAIN (e.g., EXPLAIN SELECT <cols> FROM tbl WHERE geom = <some_value>); this will print out the query plan: which access methods, indexes, joins, etc. will be used.
For your question regarding performance, the SELECT queries could run slower because there are more indexes to consider in the query planning phase. In terms of executing a given query plan, a SELECT query will not run slower because by then the query plan has been established and the decision of which index to use has been made.
You will certainly experience performance impact upon INSERT/UPDATE/DELETE of the table, as all indexes will need to be updated with respect to the changes in the table. As such, there will be extra I/O activity on disk to propagate the changes, slowing down the database, especially at scale.
Which index is used depends on the query.
Any query that has
WHERE geom && '...'::geometry
or
WHERE st_intersects(geom, '...'::geometry)
or similar will use the first index.
The second index will only be used for queries that have the expression st_geomfromewkb((geom)::bytea) in them.
This is completely useless: it converts the geometry to EWKB format and back. You should find and rewrite all queries that contain this weird construct, then drop that index.
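For example, a hypothetical query of this shape would be rewritten to use the plain column, after which the redundant index can go:
-- before: pointless round-trip through EWKB
SELECT * FROM tbl WHERE st_geomfromewkb((geom)::bytea) && 'SRID=4326;POINT(1 2)'::geometry;
-- after: uses the plain column and the idg1 index
SELECT * FROM tbl WHERE geom && 'SRID=4326;POINT(1 2)'::geometry;
DROP INDEX idg2;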
Having two indexes on a single column does not slow down your queries significantly (planning will take a bit longer, but I doubt if you can measure that). You will have a performance penalty for every data modification though, which will take almost twice as long as with a single index.
How do you create an index in PostgreSQL 11 to speed up a specific query containing an ORDER BY?
I have a query that needs to get the first 100 records from a table containing 2M records, along with a few common filters like:
SELECT id, first_name, last_name
FROM users
WHERE active = true AND region IN (1,2,3)
ORDER BY last_active_timestamp DESC;
Without the ORDER BY clause, it returns in ~1 sec, almost instantly. However, with the clause, it takes an excruciating ~5 minutes.
So I tried creating a partial index like:
CREATE INDEX CONCURRENTLY my_user_index ON users (active, region, last_active_timestamp DESC NULLS LAST)
WHERE region IN (1, 2, 3) AND active = True;
but that had virtually no effect. The above query still takes several minutes. Is that just a limitation of ORDER BY in Postgres, or is there a different type of index I could use to speed it up?
Trying an index was the right idea, but you used the wrong one. Try this:
CREATE INDEX CONCURRENTLY my_user_index
ON users (last_active_timestamp DESC)
WHERE region IN (1, 2, 3)
AND active = true;
Your index was sorted by last_active_timestamp only after first being sorted by active and region, so the index could not simply be read in your query's sort order.
For some more speedup, you could also include the columns of your SELECT clause in the index using INCLUDE (id, first_name, last_name). Then your query can (if the planner chooses so, and I think it will) run on the index alone without touching the table data at all.
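A sketch of that covering variant (PostgreSQL 11+ syntax):
CREATE INDEX CONCURRENTLY my_user_covering_index
ON users (last_active_timestamp DESC)
INCLUDE (id, first_name, last_name)
WHERE region IN (1, 2, 3)
AND active = true;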
In order to use an index for the ORDER BY in your query, you need to index on all the relevant columns: last_active_timestamp, along with a condition to include only active = true and regions 1, 2, 3. This will essentially pull the data out in order for you.
Also, if you share your EXPLAIN ANALYZE output, you may see a line like Sort Method: external merge Disk: ####kB, indicating that the sort spilled to disk rather than staying in memory due to an insufficiently sized work_mem. The solution would then be to increase work_mem to a value of at least ####kB and try again.
Note that you can set work_mem on a per-session basis; a global change in work_mem could have negative side effects, such as running out of memory, because the work_mem configured in postgresql.conf can be allocated by every session (basically, it has a multiplicative effect).
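For example (the 256MB value here is an assumption; size it to your sort):
SET work_mem = '256MB'; -- affects only the current session
-- or, scoped to a single transaction:
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT id, first_name, last_name FROM users WHERE active = true AND region IN (1,2,3) ORDER BY last_active_timestamp DESC;
COMMIT;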
If the query is still slow after tuning up work_mem (i.e., it's all sorting in memory, and it's still slow), then your returned data set is simply too large to sort quickly.
When running a SELECT on one table l (no joins) with billions of rows, is it a good idea to run concurrent queries by splitting the query into multiple queries over distinct subsets/ranges of an indexed column, say the integer primary key id?
Or does Postgres internally do this already, leading to no significant gain in speed for the end user?
I have two use cases:
getting the total count of rows
getting the list of ids
Edit-1: The query has a conditional clause on several columns, where one column is not indexed and the others are indexed:
SELECT id
FROM l
WHERE indexed_column_1 = 'A'
AND indexed_column_2 = 'B'
AND not_indexed_column_1 = 'C';
Postgres has had parallel query support built in since version 9.6 (improved in later versions). It will be much more efficient than manually splitting a SELECT on a big table.
You can tune max_parallel_workers to your needs.
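A quick way to check whether a query gets a parallel plan (the worker count here is illustrative):
SET max_parallel_workers_per_gather = 4; -- session-local per-query worker cap
EXPLAIN SELECT count(*) FROM l; -- look for a Gather node with parallel workers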
Since you are only interested in the id column, it may help to have an index on (id) (which you already have if it's the PK) and to fulfill the prerequisites for an index-only scan.
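A sketch of those prerequisites (assuming the b-tree index on id already exists):
VACUUM l; -- refresh the visibility map so heap fetches can be skipped
EXPLAIN SELECT id FROM l; -- look for "Index Only Scan" in the plan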
In the case where you want to count the number of rows, you can just let PostgreSQL's internal query parallelization do the work. It will be faster, and the result will be consistent.
In the case where you want to get the list of primary keys, it depends on the WHERE conditions of the query. If you are selecting only a few rows, parallel query will do nicely.
If you want all ids of the table, PostgreSQL will probably not choose a parallel plan, because the cost of exchanging so many values between the worker processes will outweigh the advantages of parallelization. In that case, you may be faster with parallel sessions as you envision.
This 4-column composite index would probably be faster than using parallelism:
CREATE INDEX ON l (indexed_column_1, indexed_column_2, -- first, in either order
                   not_indexed_column_1, id);
We have a table with 10 million rows. We need to find the first few rows matching LIKE 'user%'.
This query is fast if it matches at least 2 rows (it returns results in 0.5 sec). If it finds fewer than 2 matching rows, it takes at least 10 sec. Ten seconds is huge for us: we use this for auto-suggestions, and users will not wait that long to see them.
Query: select distinct(name) from user_sessions where name like 'user%' limit 2;
In the above query, the name column is of type citext and it is indexed.
Whenever you're working on performance, start by EXPLAINing your query. That'll show the query optimizer's plan, and you can get a sense of how long it's spending on the various pieces. In particular, check for any full table scans, which mean the database is examining every row in the table.
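For example, using the query from the question:
EXPLAIN ANALYZE select distinct(name) from user_sessions where name like 'user%' limit 2;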
Since the query is fast when it finds something and slow when it doesn't, it sounds like you are indeed hitting a full table scan. I believe you that it's indexed, but since you're doing a LIKE, the standard string index can't be used efficiently. You'll want to check out varchar_pattern_ops (or text_pattern_ops, depending on the column type of name). You create such an index this way:
CREATE INDEX pattern_index_on_users_name ON users (name varchar_pattern_ops);
After creating the index, EXPLAIN your query again to make sure it's being used. text_pattern_ops doesn't work with the citext extension, so in this case you'll have to index and search on lower(name) to get good case-insensitive performance:
CREATE INDEX pattern_index_on_users_name ON users (lower(name) text_pattern_ops);
SELECT * FROM users WHERE lower(name) LIKE 'user%' LIMIT 2;
I have a table in PostgreSQL that contains an array column which is updated constantly.
In my application I need to get the number of rows for which a specific parameter is not present in that array column. My query looks like this:
select count(id)
from table
where not (ARRAY['parameter value'] <@ table.array_column)
But as the number of rows and the number of executions of that query grow (several times per second, possibly hundreds or thousands), performance decreases a lot; it seems to me that counting in PostgreSQL might run in linear time (I'm not completely sure of this).
Basically my question is:
Is there an existing pattern I'm not aware of that applies to this situation? What would be the best approach for this?
Any suggestion you could give me would be really appreciated.
PostgreSQL actually supports GIN indexes on array columns. Unfortunately, it doesn't seem to be usable for NOT ARRAY[...] <# indexed_col, and GIN indexes are unsuitable for frequently-updated tables anyway.
Demo:
CREATE TABLE arrtable (id integer primary key, array_column integer[]);
INSERT INTO arrtable (id, array_column) VALUES (1, ARRAY[1,2,3,4]);
CREATE INDEX arrtable_arraycolumn_gin_arr_idx
ON arrtable USING GIN(array_column);
-- Use the following *only* for testing whether Pg can use an index
-- Do not use it in production.
SET enable_seqscan = off;
explain (buffers, analyze) select count(id)
from arrtable
where not (ARRAY[1] <@ arrtable.array_column);
Unfortunately, this shows that as written we can't use the index. If you don't negate the condition it can be used, so you can search for and count rows that do contain the search element (by removing NOT).
You could use the index to count entries that do contain the target value, then subtract that result from a count of all entries. Since counting all rows in a table is quite slow in PostgreSQL (9.1 and older) and requires a sequential scan, this will actually be slower than your current query. It's possible that on 9.2 an index-only scan can be used to count the rows if you have a b-tree index on id, in which case this might actually be OK:
SELECT (
SELECT count(id) FROM arrtable
) - (
SELECT count(id) FROM arrtable
WHERE (ARRAY[1] <@ arrtable.array_column)
);
It's guaranteed to perform worse than your original version on Pg 9.1 and below, because in addition to the seqscan your original requires, it also needs a GIN index scan. I've now tested this on 9.2, and it does appear to use an index for the count, so it's worth exploring on 9.2. With some less trivial dummy data:
drop index arrtable_arraycolumn_gin_arr_idx ;
truncate table arrtable;
insert into arrtable (id, array_column)
select s, ARRAY[1,2,s,s*2,s*3,s/2,s/4] FROM generate_series(1,1000000) s;
CREATE INDEX arrtable_arraycolumn_gin_arr_idx
ON arrtable USING GIN(array_column);
Note that a GIN index like this will slow updates down a LOT, and is quite slow to create in the first place. It is not suitable for tables that get updated much at all - like your table.
Worse, the query using this index takes up to twice as long as your original query, and at best about half as long, on the same data set. It's worst for cases where the index is not very selective, like ARRAY[1]: 4s vs 2s for the original query. Where the index is highly selective (i.e. not many matches, like ARRAY[199]), it runs in about 1.2 seconds vs the original's 3s. This index simply isn't worth having for this query.
The lesson here? Sometimes, the right answer is just to do a sequential scan.
Since that won't do for your hit rates, either maintain a materialized count with a trigger as @debenhur suggests, or try to invert the array to be a list of parameters the entry does not have, so you can use a GiST index as @maniek suggests.
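A minimal sketch of the trigger-maintained count idea (all names are hypothetical; note that a single-row counter serializes concurrent writers):
CREATE TABLE param_missing_count (cnt bigint NOT NULL);
INSERT INTO param_missing_count
SELECT count(*) FROM arrtable WHERE NOT (ARRAY[1] <@ array_column);

CREATE FUNCTION maintain_param_missing_count() RETURNS trigger AS $$
BEGIN
  -- Undo the old row's contribution, then add the new row's.
  IF TG_OP IN ('UPDATE', 'DELETE') AND NOT (ARRAY[1] <@ OLD.array_column) THEN
    UPDATE param_missing_count SET cnt = cnt - 1;
  END IF;
  IF TG_OP IN ('INSERT', 'UPDATE') AND NOT (ARRAY[1] <@ NEW.array_column) THEN
    UPDATE param_missing_count SET cnt = cnt + 1;
  END IF;
  RETURN NULL; -- AFTER trigger, so the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_param_missing_count
AFTER INSERT OR UPDATE OR DELETE ON arrtable
FOR EACH ROW EXECUTE PROCEDURE maintain_param_missing_count();

-- The count is now a single-row read:
SELECT cnt FROM param_missing_count;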
Is there an existing pattern I'm not aware of that applies to this situation? What would be the best approach for this?
Your best bet in this situation might be to normalize your schema: split the array out into a separate table. Add a b-tree index on the table of properties, or order its primary key so it's efficiently searchable by property.
CREATE TABLE demo( id integer primary key );
INSERT INTO demo (id) SELECT id FROM arrtable;
CREATE TABLE properties (
demo_id integer not null references demo(id),
property integer not null,
primary key (demo_id, property)
);
CREATE INDEX properties_property_idx ON properties(property);
You can then query the properties:
SELECT count(id)
FROM demo
WHERE NOT EXISTS (
SELECT 1 FROM properties WHERE demo.id = properties.demo_id AND property = 1
)
I expected this to be a lot faster than the original query, but it's actually much the same with the same sample data; it runs in the same 2s to 3s range as your original query. It's the same issue where searching for what is not there is much slower than searching for what is there; if we're looking for rows containing a property we can avoid the seqscan of demo and just scan properties for matching IDs directly.
Again, a seq scan on the array-containing table does the job just as well.
I think that with your current data model you are out of luck. Try to think of an algorithm that the database would have to execute for your query: there is no way it could work without sequentially scanning the data.
Could you arrange the column so that it stores the inverse of the data (so that the query would be select count(id) from table where ARRAY['parameter value'] <@ table.array_column)? That query could use a GIN/GiST index.
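A sketch of that inversion (hypothetical names; assumes the set of possible parameters is known in advance):
-- Store the parameters each row does NOT have, instead of the ones it has.
ALTER TABLE arrtable ADD COLUMN missing_params integer[];
CREATE INDEX arrtable_missing_gin ON arrtable USING GIN (missing_params);
-- "Rows lacking parameter 1" becomes a positive, index-friendly containment test:
SELECT count(id) FROM arrtable WHERE ARRAY[1] <@ missing_params;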