Postgres using wrong index when querying a view of indexed expressions? - postgresql

When I run the following script with Postgres 9.3 (with enable_seqscan set to off), I expect the final query to use the "forms_string" partial index, but it uses the "forms_int" index instead, which doesn't make sense.
When I test this with real code that has JSON functions and indexes for more types, every query consistently seems to use whichever index was created last.
Adding more unrelated rows, so that the rows covered by the partial index are only a small percentage of the table, results in a "bitmap heap scan", but the plan still mentions the same incorrect index.
Any idea how I can get it to use the correct index?
CREATE EXTENSION IF NOT EXISTS plv8;
CREATE OR REPLACE FUNCTION
json_string(data json, key text) RETURNS TEXT AS $$
    var ret = data,
        keys = key.split('.'),
        len = keys.length;
    for (var i = 0; i < len; ++i) {
        if (ret) {
            ret = ret[keys[i]];
        }
    }
    if (typeof ret === "undefined") {
        ret = null;
    } else if (ret) {
        ret = ret.toString();
    }
    return ret;
$$ LANGUAGE plv8 IMMUTABLE STRICT;
CREATE OR REPLACE FUNCTION
json_int(data json, key text) RETURNS INT AS $$
    var ret = data,
        keys = key.split('.'),
        len = keys.length;
    for (var i = 0; i < len; ++i) {
        if (ret) {
            ret = ret[keys[i]];
        }
    }
    if (typeof ret === "undefined") {
        ret = null;
    } else {
        ret = parseInt(ret, 10);
        if (isNaN(ret)) {
            ret = null;
        }
    }
    return ret;
$$ LANGUAGE plv8 IMMUTABLE STRICT;
CREATE TABLE form_types (
id SERIAL NOT NULL,
name VARCHAR(200),
PRIMARY KEY (id)
);
CREATE TABLE tenants (
id SERIAL NOT NULL,
name VARCHAR(200),
PRIMARY KEY (id)
);
CREATE TABLE forms (
id SERIAL NOT NULL,
tenant_id INTEGER,
type_id INTEGER,
data JSON,
PRIMARY KEY (id),
FOREIGN KEY(tenant_id) REFERENCES tenants (id),
FOREIGN KEY(type_id) REFERENCES form_types (id)
);
CREATE INDEX ix_forms_type_id ON forms (type_id);
CREATE INDEX ix_forms_tenant_id ON forms (tenant_id);
INSERT INTO tenants (name) VALUES ('mike'), ('bob');
INSERT INTO form_types (name) VALUES ('type 1'), ('type 2');
INSERT INTO forms (tenant_id, type_id, data) VALUES
(1, 1, '{"string": "unicorns", "int": 1}'),
(1, 1, '{"string": "pythons", "int": 2}'),
(1, 1, '{"string": "pythons", "int": 8}'),
(1, 1, '{"string": "penguins"}');
CREATE OR REPLACE VIEW foo AS
SELECT forms.id AS forms_id,
json_string(forms.data, 'string') AS "data.string",
json_int(forms.data, 'int') AS "data.int"
FROM forms
WHERE forms.tenant_id = 1 AND forms.type_id = 1;
CREATE INDEX "forms_string" ON forms (json_string(data, 'string'))
WHERE tenant_id = 1 AND type_id = 1;
CREATE INDEX "forms_int" ON forms (json_int(data, 'int'))
WHERE tenant_id = 1 AND type_id = 1;
EXPLAIN ANALYZE VERBOSE SELECT "data.string" from foo;
Outputs:
Index Scan using forms_int on public.forms
(cost=0.13..8.40 rows=1 width=32) (actual time=0.085..0.239 rows=20 loops=1)
Output: json_string(forms.data, 'string'::text)
Total runtime: 0.282 ms
Without enable_seqscan=off:
Seq Scan on public.forms (cost=0.00..1.31 rows=1 width=32) (actual time=0.080..0.277 rows=28 loops=1)
Output: json_string(forms.data, 'string'::text)
Filter: ((forms.tenant_id = 1) AND (forms.type_id = 1))
Total runtime: 0.327 ms
\d forms prints
Table "public.forms"
Column | Type | Modifiers
-----------+---------+----------------------------------------------------
id | integer | not null default nextval('forms_id_seq'::regclass)
tenant_id | integer |
type_id | integer |
data | json |
Indexes:
"forms_pkey" PRIMARY KEY, btree (id)
"forms_int" btree (json_int(data, 'int'::text)) WHERE tenant_id = 1 AND type_id = 1
"forms_string" btree (json_string(data, 'string'::text)) WHERE tenant_id = 1 AND type_id = 1
"ix_forms_tenant_id" btree (tenant_id)
"ix_forms_type_id" btree (type_id)
Foreign-key constraints:
"forms_tenant_id_fkey" FOREIGN KEY (tenant_id) REFERENCES tenants(id)
"forms_type_id_fkey" FOREIGN KEY (type_id) REFERENCES form_types(id)

Index vs seqscan, costs
Looks like your random_page_cost is too high compared to the real performance of your machine. Random I/O is faster (costs less) than Pg thinks it is, so it's choosing a slightly less than ideal plan.
That's why the cost estimate for the indexscan is (cost=0.13..8.40 rows=1 width=32) and for the seqscan it's slightly lower at (cost=0.00..1.31 rows=1 width=32).
Lower random_page_cost: try SET random_page_cost = 2, then re-run the query.
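For example (session-level; 2 is only a starting value to experiment with):
SET random_page_cost = 2;
EXPLAIN ANALYZE VERBOSE SELECT "data.string" FROM foo;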
To learn more, read the documentation on PostgreSQL query planning, parameters, and tuning, and the relevant wiki pages.
Index selection
PostgreSQL appears to be picking an index scan on forms_int instead of forms_string because it'll be a more compact, smaller index, and both indexes exactly match the search criteria for the view: tenant_id = 1 AND type_id = 1.
If you disable or drop forms_int it'll probably use forms_string and go slightly slower.
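You can check that without permanently losing the index by dropping it inside a transaction (a sketch; DROP INDEX takes an exclusive lock, so only do this on a test database):
BEGIN;
DROP INDEX forms_int;
EXPLAIN ANALYZE VERBOSE SELECT "data.string" FROM foo;
ROLLBACK; -- restores forms_int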
The key thing to understand is that while the index does contain the value of interest, PostgreSQL isn't actually using it. It's scanning the index without an index condition, since every tuple in the index matches, to get tuples from the heap. It's then extracting the value from those heap tuples and outputting them.
This can be demonstrated with an expression-index on a constant:
CREATE INDEX "forms_novalue" ON forms((true)) WHERE tenant_id = 1 AND type_id = 1;
PostgreSQL is quite likely to select this index for the query:
regress=# EXPLAIN ANALYZE VERBOSE SELECT "data.string" from foo;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
Index Scan using forms_novalue on public.forms (cost=0.13..13.21 rows=4 width=32) (actual time=0.190..0.310 rows=4 loops=1)
Output: json_string(forms.data, 'string'::text)
Total runtime: 0.346 ms
(3 rows)
All the indexes are the same size because they're all so tiny they fit in the minimum allocation:
regress=# SELECT x.idxname, pg_relation_size(x.idxname) FROM (VALUES ('forms_novalue'),('forms_int'),('forms_string')) x(idxname);
idxname | pg_relation_size
---------------+------------------
forms_novalue | 16384
forms_int | 16384
forms_string | 16384
(3 rows)
but the stats for novalue will be somewhat more attractive due to a narrower row width.
Index scan vs index-only scan
It sounds like what you really expect is an index-only scan, where Pg never touches the table's heap and only uses the tuples in the index itself.
I would expect forms_string to satisfy this query's requirements, but I can't get Pg to pick an index-only scan plan for it.
It's not immediately clear to me why Pg is not using an index-only scan here, as it should be a candidate, but it doesn't seem to be able to plan one. If I set enable_indexscan = off, it picks an inferior bitmap index scan plan instead, and if I also disable enable_bitmapscan it falls back to a cost-penalized seqscan. This is true even after a VACUUM of the table(s) of interest.
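Those fallbacks can be reproduced with session-level settings (a sketch for experimenting, not a fix):
SET enable_indexscan = off;  -- planner now falls back to a bitmap index scan
EXPLAIN SELECT "data.string" FROM foo;
SET enable_bitmapscan = off; -- only the (cost-penalized) seqscan path remains
EXPLAIN SELECT "data.string" FROM foo;
VACUUM ANALYZE forms;        -- refreshes the visibility map; still no index-only scan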
That means an index-only scan must not even be generated as a candidate path in the query planner: Pg either doesn't know how to use an index-only scan for this query, or thinks it cannot do so for some reason.
It isn't an issue with view introspection either, as running the expanded view query directly produces the same plan.

Your table has insufficient data in it. In short, Postgres won't use an index when the table fits in a single disk page. Ever. When your table contains a few hundred or thousand rows, it'll become too big to fit, and then you'll see Postgres begin to use index scans when relevant.
Another point to consider is that you need to ANALYZE your tables after a large import. Without accurate stats on your actual data, Postgres may end up dismissing some index scans as too expensive, when in fact they'd be cheap.
Lastly, there are cases when it is cheaper to not use an index. In essence, whenever Postgres is about to visit most disk pages repeatedly and in a random order to retrieve a large number of rows, it'll seriously consider the cost of visiting most (bitmap index) or all (seq scan) disk pages once sequentially and filtering invalid rows out. The latter wins if you're selecting enough rows.
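A sketch of the first two points using the test schema from the question (the filler values are arbitrary):
-- Pad the table beyond a single page, then refresh the statistics
INSERT INTO forms (tenant_id, type_id, data)
SELECT 2, 2, '{"string": "filler"}'::json
FROM generate_series(1, 10000);
ANALYZE forms;
EXPLAIN ANALYZE VERBOSE SELECT "data.string" FROM foo;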

Related

Optimizing a query that compares a table to itself with millions of rows

I could use some help optimizing a query that compares rows in a single table with millions of entries. Here's the table's definition:
CREATE TABLE IF NOT EXISTS data.row_check (
id uuid NOT NULL DEFAULT NULL,
version int8 NOT NULL DEFAULT NULL,
row_hash int8 NOT NULL DEFAULT NULL,
table_name text NOT NULL DEFAULT NULL,
CONSTRAINT row_check_pkey
PRIMARY KEY (id, version)
);
I'm reworking our push code and have a test bed with millions of records across about 20 tables. I run my tests, get the row counts, and can spot when some of my insert code has changed. The next step is to checksum each row, and then compare the rows for differences between versions of my code. Something like this:
-- Run my test of "version 0" of the push code, the base code I'm refactoring.
-- Insert the ID and checksum for each pushed row.
INSERT INTO row_check (id,version,row_hash,table_name)
SELECT id, 0, hashtext(record_changes_log::text),'record_changes_log'
FROM record_changes_log
ON CONFLICT ON CONSTRAINT row_check_pkey DO UPDATE SET
row_hash = EXCLUDED.row_hash,
table_name = EXCLUDED.table_name;
truncate table record_changes_log;
-- Run my test of "version 1" of the push code, the new code I'm validating.
-- Insert the ID and checksum for each pushed row.
INSERT INTO row_check (id,version,row_hash,table_name)
SELECT id, 1, hashtext(record_changes_log::text),'record_changes_log'
FROM record_changes_log
ON CONFLICT ON CONSTRAINT row_check_pkey DO UPDATE SET
row_hash = EXCLUDED.row_hash,
table_name = EXCLUDED.table_name;
That gets two rows in row_check for every row in record_changes_log, or any other table I'm checking. For the two runs of record_changes_log, I end up with more than 8.6M rows in row_check. They look like this:
id version row_hash table_name
e6218751-ab78-4942-9734-f017839703f6 0 -142492569 record_changes_log
6c0a4111-2f52-4b8b-bfb6-e608087ea9c1 0 -1917959999 record_changes_log
7fac6424-9469-4d98-b887-cd04fee5377d 0 -323725113 record_changes_log
1935590c-8d22-4baf-85ba-00b563022983 0 -1428730186 record_changes_log
2e5488b6-5b97-4755-8a46-6a46317c1ae2 0 -1631086027 record_changes_log
7a645ffd-31c5-4000-ab66-a565e6dad7e0 0 1857654119 record_changes_log
I asked yesterday for some help on the comparison query, and it led to this:
select v0.table_name,
v0.id,
v0.row_hash as v0,
v1.row_hash as v1
from row_check v0
join row_check v1 on v0.id = v1.id and
v0.version = 0 and
v1.version = 1 and
v0.row_hash <> v1.row_hash
That works, but now I'm hoping to optimize it a bit. As an experiment, I clustered the data on version and then built a BRIN index, like this:
drop index if exists row_check_version_btree;
create index row_check_version_btree
on row_check
using btree(version);
cluster row_check using row_check_version_btree;
drop index row_check_version_btree; -- Eh? I want to see how the BRIN performs.
drop index if exists row_check_version_brin;
create index row_check_version_brin
on row_check
using brin(row_hash);
vacuum analyze row_check;
I ran the query through explain analyze and get this:
Merge Join (cost=1.12..559750.04 rows=4437567 width=51) (actual time=1511.987..14884.045 rows=10 loops=1)
Output: v0.table_name, v0.id, v0.row_hash, v1.row_hash
Inner Unique: true
Merge Cond: (v0.id = v1.id)
Join Filter: (v0.row_hash <> v1.row_hash)
Rows Removed by Join Filter: 4318290
Buffers: shared hit=8679005 read=42511
-> Index Scan using row_check_pkey on ascendco.row_check v0 (cost=0.56..239156.79 rows=4252416 width=43) (actual time=0.032..5548.180 rows=4318300 loops=1)
Output: v0.id, v0.version, v0.row_hash, v0.table_name
Index Cond: (v0.version = 0)
Buffers: shared hit=4360752
-> Index Scan using row_check_pkey on ascendco.row_check v1 (cost=0.56..240475.33 rows=4384270 width=24) (actual time=0.031..6070.790 rows=4318300 loops=1)
Output: v1.id, v1.version, v1.row_hash, v1.table_name
Index Cond: (v1.version = 1)
Buffers: shared hit=4318253 read=42511
Planning Time: 1.073 ms
Execution Time: 14884.121 ms
I didn't really get the right idea from that, so I ran it again with JSON output and fed the results into this wonderful plan visualizer:
http://tatiyants.com/pev/#/plans
The tips there are right: the top node's estimate is bad. The actual result is 10 rows, while the estimate is for about 4.4 million rows.
I'm hoping to learn more about optimizing this kind of thing, and this query seems like a good opportunity. I have a lot of notions about what might help:
-- CREATE STATISTICS? (sketched just after this list)
-- Rework the query to move the where comparison?
-- Use a better index? I did try a GIN index and a straight B-tree on version, but neither was superior.
-- Rework the row_check format to move the two hashes into the same row instead of splitting them over two rows, compare on insert/update, flag non-matches, and add a partial index for the non-matching values.
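For reference, the CREATE STATISTICS notion from the list above would look roughly like this (requires PostgreSQL 10+; the choice of id and version is only a guess at what might improve the estimates):
CREATE STATISTICS row_check_stats (ndistinct, dependencies)
    ON id, version FROM data.row_check;
ANALYZE data.row_check;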
Granted, it's funny to even try to index something where there are only two values (0 and 1 in the case above), so there's that. In fact, is there any sort of clever trick for Booleans? I'll always be comparing two versions, so "old" and "new", which I can express however makes life best. I understand that Postgres only has bitmap indexes internally at search/merge (?) time and that it does not have a bitmap type index. Would there be some kind of INTERSECT that might help? I don't know how Postgres implements set math operators internally.
Thanks for any suggestions on how to rethink this data or the query to make it faster for comparisons involving millions, or tens of millions, of rows.
I'm going to add an answer to my own question, but am still interested in what anyone else has to say. In the process of writing out my original question, I realized that maybe a redesign is in order. This hinges on my plan to only ever compare two versions at a time. That's a good solution here, but there are other cases where it wouldn't work. Anyway, here's a slightly different table design that folds the two results into a single row:
DROP TABLE IF EXISTS data.row_compare;
CREATE TABLE IF NOT EXISTS data.row_compare (
id uuid NOT NULL DEFAULT NULL,
hash_1 int8, -- Want NULL to defer calculating hash comparison until after both hashes are entered.
hash_2 int8, -- Ditto
hashes_match boolean, -- Likewise
table_name text NOT NULL DEFAULT NULL,
CONSTRAINT row_compare_pkey
PRIMARY KEY (id)
);
The following partial index should, hopefully, be very small, as I shouldn't have any non-matching entries:
CREATE INDEX row_compare_fail ON row_compare (hashes_match)
WHERE hashes_match = false;
The trigger below does the column calculation, once hash_1 and hash_2 are both provided:
-- Run this as a BEFORE INSERT or UPDATE ROW trigger.
CREATE OR REPLACE FUNCTION data.on_upsert_row_compare()
RETURNS trigger AS
$BODY$
BEGIN
    IF NEW.hash_1 IS NULL OR
       NEW.hash_2 IS NULL THEN
        RETURN NEW; -- Don't do the comparison, one of the hashes hasn't been populated yet.
    ELSE
        -- Do the comparison. The point of this is to avoid constantly thrashing the partial index.
        NEW.hashes_match := NEW.hash_1 = NEW.hash_2;
        RETURN NEW; -- important!
    END IF;
END;
$BODY$
LANGUAGE plpgsql;
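The attaching statement isn't shown above; it would look roughly like this (the trigger name is illustrative):
CREATE TRIGGER on_upsert_row_compare
    BEFORE INSERT OR UPDATE ON data.row_compare
    FOR EACH ROW
    EXECUTE PROCEDURE data.on_upsert_row_compare();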
This now adds 4.3M rows instead of 8.6M rows:
-- Add the first set of results and build out the row_compare records.
INSERT INTO row_compare (id,hash_1,table_name)
SELECT id, hashtext(record_changes_log::text),'record_changes_log'
FROM record_changes_log
ON CONFLICT ON CONSTRAINT row_compare_pkey DO UPDATE SET
hash_1 = EXCLUDED.hash_1,
table_name = EXCLUDED.table_name;
-- I'll truncate the record_changes_log and push my sample data again here.
-- Add the second set of results and update the row compare records.
-- This time, the hash is going into the hash_2 field for comparison
INSERT INTO row_compare (id,hash_2,table_name)
SELECT id, hashtext(record_changes_log::text),'record_changes_log'
FROM record_changes_log
ON CONFLICT ON CONSTRAINT row_compare_pkey DO UPDATE SET
hash_2 = EXCLUDED.hash_2,
table_name = EXCLUDED.table_name;
And now the results are simple to find:
select * from row_compare where hashes_match = false;
This changes the query time from around 17 seconds to around 24 milliseconds.

Is this the right way to create a partial index in Postgres?

We have a table with 4 million records, and for a particular, frequently used use-case we are only interested in records with a Salesforce userType of 'Standard', which are only about 10,000 out of the 4 million. The other userTypes that could exist are 'PowerPartner', 'CSPLitePortal', 'CustomerSuccess', 'PowerCustomerSuccess' and 'CsnOnly'.
So for this use case I thought creating a partial index would be better, as per the documentation.
So I am planning to create this partial index to speed up queries for records with a userType of 'Standard' and prevent the web request from timing out:
CREATE INDEX user_type_idx ON user_table(userType)
WHERE userType NOT IN
('PowerPartner', 'CSPLitePortal', 'CustomerSuccess', 'PowerCustomerSuccess', 'CsnOnly');
The lookup query will be
select * from user_table where userType='Standard';
Could you please confirm whether this is the right way to create the partial index? It would be of great help.
Postgres can use that index, but it does so in a way that is (slightly) less efficient than an index specifying where user_type = 'Standard'.
I created a small test table with 4 million rows, 10,000 of them having the user_type 'Standard'. The other values were randomly distributed using the following script:
create table user_table
(
id serial primary key,
some_date date not null,
user_type text not null,
some_ts timestamp not null,
some_number integer not null,
some_data text,
some_flag boolean
);
insert into user_table (some_date, user_type, some_ts, some_number, some_data, some_flag)
select current_date,
case (random() * 4 + 1)::int
when 1 then 'PowerPartner'
when 2 then 'CSPLitePortal'
when 3 then 'CustomerSuccess'
when 4 then 'PowerCustomerSuccess'
when 5 then 'CsnOnly'
end,
clock_timestamp(),
42,
rpad(md5(random()::text), (random() * 200 + 1)::int, md5(random()::text)),
(random() + 1)::int = 1
from generate_series(1,4e6 - 10000) as t(i)
union all
select current_date,
'Standard',
clock_timestamp(),
42,
rpad(md5(random()::text), (random() * 200 + 1)::int, md5(random()::text)),
(random() + 1)::int = 1
from generate_series(1,10000) as t(i);
(I create tables that have more than just a few columns as the planner's choices are also driven by the size and width of the tables)
The first test using the index with NOT IN:
create index ix_not_in on user_table(user_type)
where user_type not in ('PowerPartner', 'CSPLitePortal', 'CustomerSuccess', 'PowerCustomerSuccess', 'CsnOnly');
explain (analyze true, verbose true, buffers true)
select *
from user_table
where user_type = 'Standard'
Results in:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on stuff.user_table (cost=139.68..14631.83 rows=11598 width=139) (actual time=1.035..2.171 rows=10000 loops=1)
Output: id, some_date, user_type, some_ts, some_number, some_data, some_flag
Recheck Cond: (user_table.user_type = 'Standard'::text)
Buffers: shared hit=262
-> Bitmap Index Scan on ix_not_in (cost=0.00..136.79 rows=11598 width=0) (actual time=1.007..1.007 rows=10000 loops=1)
Index Cond: (user_table.user_type = 'Standard'::text)
Buffers: shared hit=40
Total runtime: 2.506 ms
(The above is a typical execution time after I ran the statement about 10 times to eliminate caching issues)
As you can see, the planner uses a Bitmap Index Scan, which is a "lossy" scan that needs an extra recheck step to filter out false positives.
When using the following index:
create index ix_standard on user_table(id)
where user_type = 'Standard';
This results in the following plan:
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
Index Scan using ix_standard on stuff.user_table (cost=0.29..443.16 rows=10267 width=139) (actual time=0.011..1.498 rows=10000 loops=1)
Output: id, some_date, user_type, some_ts, some_number, some_data, some_flag
Buffers: shared hit=313
Total runtime: 1.815 ms
Conclusion:
Your index is used, but an index on only the type you are interested in is a bit more efficient.
The runtime is not that much different. I executed each explain about 10 times, and the average for the ix_standard index was slightly below 2ms and the average of the ix_not_in index was slightly above 2ms - so not a real performance difference.
But in general the Index Scan will scale better with increasing table sizes than the Bitmap Index Scan will do. This is basically due to the "Recheck Condition" - especially if not enough work_mem is available to keep the bitmap in memory (for larger tables).
For the index to be used, the WHERE condition must be used in the query as you wrote it.
PostgreSQL has some ability to make deductions, but it won't be able to infer that userType = 'Standard' is equivalent to the condition in the index.
Use EXPLAIN to find out if your index can be used.
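For example, with the lookup query from the question:
EXPLAIN SELECT * FROM user_table WHERE userType = 'Standard';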

Limits of Postgres Query Optimization (Already using Index-Only Scans)

I have a Postgres query that has already been optimized, but we're hitting 100% CPU usage under peak load, so I wanted to see if there's more that can be done to optimize the database interactions. It already uses two index-only scans in the join, so my question is whether there's much more to be done on the Postgres side of things.
The database is an Amazon-hosted Postgres RDS db.m3.2xlarge instance (8 vCPUs and 30 GB of memory) running 9.4.1, and the results below are from a period with low CPU usage and minimal connections (around 15). Peak usage is around 300 simultaneous connections, and that's when we're maxing our CPU (which kills performance on everything).
Here's the query and the EXPLAIN:
Query:
EXPLAIN (ANALYZE, BUFFERS)
SELECT m.valdate, p.index_name, m.market_data_closing, m.available_date
FROM md.market_data_closing m
JOIN md.primitive p on (m.primitive_id = p.index_id)
where p.index_name = ?
order by valdate desc
;
Output:
Sort (cost=183.80..186.22 rows=967 width=44) (actual time=44.590..54.788 rows=11133 loops=1)
Sort Key: m.valdate
Sort Method: quicksort Memory: 1254kB
Buffers: shared hit=181
-> Nested Loop (cost=0.85..135.85 rows=967 width=44) (actual time=0.041..32.853 rows=11133 loops=1)
Buffers: shared hit=181
-> Index Only Scan using primitive_index_name_index_id_idx on primitive p (cost=0.29..4.30 rows=1 width=25) (actual time=0.018..0.019 rows=1 loops=1)
Index Cond: (index_name = '?'::text)
Heap Fetches: 0
Buffers: shared hit=3
-> Index Only Scan using market_data_closing_primitive_id_valdate_available_date_mar_idx on market_data_closing m (cost=0.56..109.22 rows=2233 width=27) (actual time=0.016..12.059 rows=11133 loops=1)
Index Cond: (primitive_id = p.index_id)
Heap Fetches: 42
Buffers: shared hit=178
Planning time: 0.261 ms
Execution time: 64.957 ms
Here are the table sizes:
md.primitive: 14283 rows
md.market_data_closing: 13544087 rows
For reference, here is the underlying spec for the tables and indices:
CREATE TABLE md.primitive(
index_id serial NOT NULL,
index_name text NOT NULL UNIQUE,
index_description text not NULL,
index_source_code text NOT NULL DEFAULT 'MAN',
index_source_spec json NOT NULL DEFAULT '{}',
frequency text NULL,
primitive_type text NULL,
is_maintained boolean NOT NULL default true,
create_dt timestamp NOT NULL,
create_user text NOT NULL,
update_dt timestamp not NULL,
update_user text not NULL,
PRIMARY KEY
(
index_id
)
) ;
CREATE INDEX ON md.primitive
(
index_name ASC,
index_id ASC
);
CREATE TABLE md.market_data_closing(
valdate timestamp NOT NULL,
primitive_id int references md.primitive,
market_data_closing decimal(28, 10) not NULL,
available_date timestamp NULL,
pricing_source text not NULL,
create_dt timestamp NOT NULL,
create_user text NOT NULL,
update_dt timestamp not NULL,
update_user text not NULL,
PRIMARY KEY
(
valdate,
primitive_id
)
) ;
CREATE INDEX ON md.market_data_closing
(
primitive_id ASC,
valdate DESC,
available_date DESC,
market_data_closing ASC
);
What else can be done?
It seems the nested loop is taking an absurd amount of time even though the primitive table returns only one row. You can try eliminating the nested loop with something like this:
SELECT m.valdate, m.market_data_closing, m.available_date
FROM md.market_data_closing m
WHERE m.primitive_id = (SELECT p.index_id
                        FROM md.primitive p
                        WHERE p.index_name = ?
                        OFFSET 0) -- probably not needed, try it
ORDER BY valdate DESC;
This does not return p.index_name but that can be easily fixed by selecting it as a const.
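That fix might look like this (a sketch; the ? placeholders stay as the application's bind parameters):
SELECT m.valdate, ? AS index_name, m.market_data_closing, m.available_date
FROM md.market_data_closing m
WHERE m.primitive_id = (SELECT p.index_id
                        FROM md.primitive p
                        WHERE p.index_name = ?)
ORDER BY m.valdate DESC;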
For future generations reading this: the problem seems to be with the index
md.market_data_closing(
...
PRIMARY KEY
(
valdate,
primitive_id
)
This column order seems incorrect. It should be:
md.market_data_closing(
...
PRIMARY KEY
(
primitive_id,
valdate
)
Here's why. This kind of query:
...
JOIN md.primitive p on (m.primitive_id = p.index_id)
...
will be efficient only if primitive_id is the first field in the index.
Also,
order by valdate
will be more efficient if valdate is second.
Why?
Because an index is a B-tree structure.
(
    valdate,
    primitive_id
)
results in
valdate1
    primitive_id1
    primitive_id2
    primitive_id3
valdate2
    primitive_id1
    primitive_id2
Using this tree you can't search by primitive_id1 effectively.
But
(
    primitive_id,
    valdate
)
results in
primitive_id1
    valdate1
    valdate2
    valdate3
primitive_id2
    valdate1
    valdate2
Which is efficient for looking up by primitive_id.
There is another solution to this problem: if you don't want to change the index, you can add a strict equality condition on valdate, say valdate = some_date; this will make your existing index effective.
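A sketch of that variant (the date and id values are hypothetical): with equality on valdate, the leading column of the (valdate, primitive_id) primary key, the existing index becomes usable.
SELECT m.market_data_closing, m.available_date
FROM md.market_data_closing m
WHERE m.valdate = '2015-06-30'::timestamp -- hypothetical value
  AND m.primitive_id = 42;                -- hypothetical value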

Postgres: Sorting by an immutable function index doesn't use index

I have a simple table.
CREATE TABLE posts
(
    id uuid NOT NULL,
    date_created timestamp, -- referenced by the index and queries below
    vote_up_count integer,
    vote_down_count integer,
    CONSTRAINT post_pkey PRIMARY KEY(id)
);
I have an IMMUTABLE function that does simple (but could be complex) arithmetic.
CREATE OR REPLACE FUNCTION score(
ups integer,
downs integer)
RETURNS integer AS
$BODY$
select $1 - $2
$BODY$
LANGUAGE sql IMMUTABLE
COST 100;
ALTER FUNCTION score(integer, integer)
OWNER TO postgres;
I create an index on the posts table that uses my function.
CREATE INDEX posts_score_index ON posts(score(vote_up_count, vote_down_count), date_created);
When I EXPLAIN the following query, it doesn't seem to be using the index.
SELECT * FROM posts ORDER BY score(vote_up_count, vote_down_count), date_created
Sort (cost=1.02..1.03 rows=1 width=310)
Output: id, date_created, last_edit_date, slug, sub_id, user_id, user_ip, type, title, content, url, domain, send_replies, vote_up_count, vote_down_count, verdict, approved_by, removed_by, verdict_message, number_of_reports, ignore_reports, number_of_com (...)"
Sort Key: ((posts.vote_up_count - posts.vote_down_count)), posts.date_created
-> Seq Scan on public.posts (cost=0.00..1.01 rows=1 width=310)
Output: id, date_created, last_edit_date, slug, sub_id, user_id, user_ip, type, title, content, url, domain, send_replies, vote_up_count, vote_down_count, verdict, approved_by, removed_by, verdict_message, number_of_reports, ignore_reports, number_ (...)
How do I get my ORDER BY to use an index from an IMMUTABLE function that could have some very complex arithmetic?
EDIT: Based on Егор Рогов's suggestions, I changed the query a bit to see if I could get it to use an index. Still no luck.
set enable_seqscan=off;
EXPLAIN VERBOSE select date_created from posts ORDER BY (hot(vote_up_count, vote_down_count, date_created),date_created);
Here is the output.
Sort (cost=10000000001.06..10000000001.06 rows=1 width=16)
Output: date_created, (ROW(round((((log((GREATEST(abs((vote_up_count - vote_down_count)), 1))::double precision) * sign(((vote_up_count - vote_down_count))::double precision)) + ((date_part('epoch'::text, date_created) - 1134028003::double precision) / 4 (...)
Sort Key: (ROW(round((((log((GREATEST(abs((posts.vote_up_count - posts.vote_down_count)), 1))::double precision) * sign(((posts.vote_up_count - posts.vote_down_count))::double precision)) + ((date_part('epoch'::text, posts.date_created) - 1134028003::dou (...)
-> Seq Scan on public.posts (cost=10000000000.00..10000000001.05 rows=1 width=16)
Output: date_created, ROW(round((((log((GREATEST(abs((vote_up_count - vote_down_count)), 1))::double precision) * sign(((vote_up_count - vote_down_count))::double precision)) + ((date_part('epoch'::text, date_created) - 1134028003::double precision (...)
EDIT2: It seems the index wasn't being used because of the second ORDER BY column, date_created.
I can see a couple of points that discourage the planner from using the index.
1.
Look at this line in the explain output:
Seq Scan on public.posts (cost=0.00..1.01 rows=1 width=310)
It says that the planner believes there is only one row in the table. In this case it makes no sense to use an index scan, because a sequential scan is faster.
Try adding more rows to the table, run ANALYZE, and try again. You can also test by temporarily disabling sequential scans with set enable_seqscan=off;.
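For example (a sketch: gen_random_uuid() assumes the pgcrypto extension, and the row count is arbitrary):
CREATE EXTENSION IF NOT EXISTS pgcrypto;
INSERT INTO posts (id, date_created, vote_up_count, vote_down_count)
SELECT gen_random_uuid(),
       now() - random() * interval '365 days',
       (random() * 100)::int,
       (random() * 100)::int
FROM generate_series(1, 100000);
ANALYZE posts;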
2.
You use the function to sort the results, so the planner may decide to use the index in order to get tuple ids in the correct order. But then it needs to fetch each tuple from the table to get the values of all columns (because of select *).
You can make the index more attractive to the planner by adding all necessary columns to it, which makes it possible to avoid the table scan entirely. This is called an index-only scan.
CREATE INDEX posts_score_index ON posts(
score(vote_up_count, vote_down_count),
date_created,
id, -- do you actually need it in result set?
vote_up_count, -- do you actually need it in result set?
vote_down_count -- do you actually need it in result set?
);
And make sure you run vacuum after inserting/updating/deleting rows to update the visibility map.
The downside is the increased index size, of course.

Redshift SELECT * performance versus COUNT(*) for non existent row

I am confused about what Redshift is doing when I run 2 seemingly similar queries. Neither should return a result (querying a profile that doesn't exist). Specifically:
SELECT * FROM profile WHERE id = 'id_that_doesnt_exist' and project_id = 1;
Execution time: 36.75s
versus
SELECT COUNT(*) FROM profile WHERE id = 'id_that_doesnt_exist' and project_id = 1;
Execution time: 0.2s
Given that the table is sorted by project_id then id, I would have thought this is just a key lookup. The SELECT COUNT(*) ... returns 0 results in 0.2 sec, which is about what I would expect. The SELECT * ... returns 0 results in 36.75 sec. That's a huge difference for the same result, and I don't understand why.
If it helps schema as follows:
CREATE TABLE profile (
project_id integer not null,
id varchar(256) not null,
created timestamp not null,
/* ... approx 50 other columns here */
)
DISTKEY(id)
SORTKEY(project_id, id);
Explain from SELECT COUNT(*) ...
XN Aggregate (cost=435.70..435.70 rows=1 width=0)
-> XN Seq Scan on profile (cost=0.00..435.70 rows=1 width=0)
Filter: (((id)::text = 'id_that_doesnt_exist'::text) AND (project_id = 1))
Explain from SELECT * ...
XN Seq Scan on profile (cost=0.00..435.70 rows=1 width=7356)
Filter: (((id)::text = 'id_that_doesnt_exist'::text) AND (project_id = 1))
Why is the non-count query so much slower? Surely Redshift knows the row doesn't exist?
The reason is that in many RDBMSs a count(*) query can often be answered without an actual data scan, just from an index or from table statistics. Redshift stores the minimum and maximum value for each block, which lets it answer exists/does-not-exist questions like the one described here. If the requested value falls within a block's min/max boundaries, a scan is performed, but only over the data of the filtering columns. If the requested value falls below or above the block boundaries, the answer comes much faster, straight from the stored statistics. With a SELECT * query, however, Redshift actually scans the data of all the columns the query asks for ("*"), even though it filters only on the columns in the WHERE clause.
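Given that explanation, one workaround is to avoid SELECT * and name only the columns you need, so Redshift scans only those columns' blocks (a sketch using columns from the schema above):
SELECT id, project_id, created
FROM profile
WHERE id = 'id_that_doesnt_exist' AND project_id = 1;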