Redshift XN Sort cost

I have a query which is performing very poorly.
It's too big to post here, but basically it selects several columns and checks which range each of them falls into, e.g.
CASE WHEN col < 3 THEN 'A' WHEN col BETWEEN 3 AND 5 THEN 'B' ... END
and then it counts the 'A', 'B', etc. buckets.
The execution time for 380M rows is 15 minutes.
It has no DISTINCT and no joins except a join to a small dimension table.
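A stripped-down sketch of the query shape (the metric column, bucket boundaries, and aliases here are made up for illustration; the real query has many more buckets and columns):
SELECT ps.start_date,
       COUNT(CASE WHEN ps.some_metric < 3             THEN 1 END) AS bucket_a,
       COUNT(CASE WHEN ps.some_metric BETWEEN 3 AND 5 THEN 1 END) AS bucket_b,
       COUNT(CASE WHEN ps.some_metric > 5             THEN 1 END) AS bucket_c
FROM programme_session ps
JOIN dim_date dd ON dd.tk = ps.start_date  -- small dimension table
WHERE dd."year" = 2019
  AND dd."month" = 6
GROUP BY ps.start_date
ORDER BY ps.start_date;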
When I run EXPLAIN, it shows the following query plan:
+-----------------------------------------------------------------------------------------------------------------+
| QUERY PLAN |
|-----------------------------------------------------------------------------------------------------------------|
| XN Merge (cost=1000011702317.03..1000011702317.16 rows=53 width=48) |
| Merge Key: programme_session.start_date |
| -> XN Network (cost=1000011702317.03..1000011702317.16 rows=53 width=48) |
| Send to leader |
| -> XN Sort (cost=1000011702317.03..1000011702317.16 rows=53 width=48) |
| Sort Key: programme_session.start_date |
| -> XN HashAggregate (cost=11702160.75..11702315.51 rows=53 width=48) |
| -> XN Hash Join DS_DIST_ALL_NONE (cost=105.08..11499929.95 rows=2186279 width=48) |
| Hash Cond: ("outer".start_date = "inner".tk) |
| -> XN Seq Scan on programme_session (cost=0.00..5101316.48 rows=510131648 width=48) |
| -> XN Hash (cost=105.00..105.00 rows=30 width=4) |
| -> XN Seq Scan on dim_date dd (cost=0.00..105.00 rows=30 width=4) |
| Filter: (("year" = 2019) AND ("month" = 6)) |
+-----------------------------------------------------------------------------------------------------------------+
Here I see the cost explode between XN HashAggregate and XN Sort.
What could be the reason?
The table is sorted by start_date, I ran VACUUM SORT ONLY, and the skew values are not very big:
+-------------------+-----------+----------------+------------+-----------------+-------------+
| table | encoded | diststyle | sortkey1 | skew_sortkey1 | skew_rows |
|-------------------+-----------+----------------+------------+-----------------+-------------|
| programme_session | Y | KEY(device_id) | start_date | 2.26 | 1.00 |
+-------------------+-----------+----------------+------------+-----------------+-------------+
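A query against svv_table_info along these lines gives the stats above:
SELECT "table", encoded, diststyle, sortkey1, skew_sortkey1, skew_rows
FROM svv_table_info
WHERE "table" = 'programme_session';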
How can I improve the performance of my query?

Check the data types of the columns in your join clause. I've seen Redshift broadcast the transaction table when they are not identical.
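One quick way to compare them (just a sketch; note that pg_table_def only shows tables in schemas that are on your search_path):
SELECT tablename, "column", type
FROM pg_table_def
WHERE tablename IN ('programme_session', 'dim_date')
  AND "column" IN ('start_date', 'tk');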


Postgresql index is not used for slow queries >30s

POSTGRESQL VERSION: 10
HARDWARE: 4 workers / 16 GB RAM / 50% used
I'm not a PostgreSQL expert; I have just read a lot of documentation and done a lot of tests.
I have some PostgreSQL queries which take a long time (> 30 s) because of the 10 million rows in a table.
Column | Type | Collation | Nullable | Default
------------------------------+--------------------------+-----------+----------+----------------------------------------------------------
id | integer | | not null |
cveid | character varying(50) | | |
summary | text | | not null |
published | timestamp with time zone | | |
modified | timestamp with time zone | | |
assigner | character varying(128) | | |
vulnerable_products | character varying(250)[] | | |
cvss | double precision | | |
cvss_time | timestamp with time zone | | |
cvss_vector | character varying(250) | | |
access | jsonb | | not null |
impact | jsonb | | not null |
score | integer | | not null |
is_exploitable | boolean | | not null |
is_confirmed | boolean | | not null |
is_in_the_news | boolean | | not null |
is_in_the_wild | boolean | | not null |
reflinks | jsonb | | not null |
reflinkids | jsonb | | not null |
created_at | timestamp with time zone | | |
history_id | integer | | not null | nextval('vulns_historicalvuln_history_id_seq'::regclass)
history_date | timestamp with time zone | | not null |
history_change_reason | character varying(100) | | |
history_type | character varying(1) | | not null |
Indexes:
"vulns_historicalvuln_pkey" PRIMARY KEY, btree (history_id)
"btree_varchar" btree (history_type varchar_pattern_ops)
"vulns_historicalvuln_cve_id_850876bb" btree (cve_id)
"vulns_historicalvuln_cwe_id_2013d697" btree (cwe_id)
"vulns_historicalvuln_history_user_id_9e25ebf5" btree (history_user_id)
"vulns_historicalvuln_id_773f2af7" btree (id)
--- TRUNCATE
Foreign-key constraints:
"vulns_historicalvuln_history_user_id_9e25ebf5_fk_custusers" FOREIGN KEY (history_user_id) REFERENCES custusers_user(id) DEFERRABLE INITIALLY DEFERRED
Example of queries:
SELECT * FROM vulns_historicalvuln WHERE history_type <> '+' order by id desc fetch first 10000 rows only; -> 30s without cache
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..31878.33 rows=10000 width=1736) (actual time=0.173..32839.474 rows=10000 loops=1)
-> Index Scan Backward using vulns_historicalvuln_id_773f2af7 on vulns_historicalvuln (cost=0.43..26346955.92 rows=8264960 width=1736) (actual time=0.172..32830.958 rows=10000 loops=1)
Filter: ((history_type)::text <> '+'::text)
Rows Removed by Filter: 296
Planning time: 19.514 ms
Execution time: 32845.015 ms
SELECT DISTINCT "vulns"."id", "vulns"."uuid", "vulns"."feedid", "vulns"."cve_id", "vulns"."cveid", "vulns"."summary", "vulns"."published", "vulns"."modified", "vulns"."assigner", "vulns"."cwe_id", "vulns"."vulnerable_packages_versions", "vulns"."vulnerable_products", "vulns"."vulnerable_product_versions", "vulns"."cvss", "vulns"."cvss_time", "vulns"."cvss_version", "vulns"."cvss_vector", "vulns"."cvss_metrics", "vulns"."access", "vulns"."impact", "vulns"."cvss3", "vulns"."cvss3_vector", "vulns"."cvss3_version", "vulns"."cvss3_metrics", "vulns"."score", "vulns"."is_exploitable", "vulns"."is_confirmed", "vulns"."is_in_the_news", "vulns"."is_in_the_wild", "vulns"."reflinks", "vulns"."reflinkids", "vulns"."created_at", "vulns"."updated_at", "vulns"."id" AS "exploit_count", false AS "monitored", '42' AS "org" FROM "vulns" WHERE ("vulns"."score" >= 0 AND "vulns"."score" <= 100) ORDER BY "vulns"."updated_at" DESC LIMIT 10
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=315191.32..315192.17 rows=10 width=1691) (actual time=3013.964..3013.990 rows=10 loops=1)
-> Unique (cost=315191.32..329642.42 rows=170013 width=1691) (actual time=3013.962..3013.986 rows=10 loops=1)
-> Sort (cost=315191.32..315616.35 rows=170013 width=1691) (actual time=3013.961..3013.970 rows=10 loops=1)
Sort Key: updated_at DESC, id, uuid, feedid, cve_id, cveid, summary, published, modified, assigner, cwe_id, vulnerable_packages_versions, vulnerable_products, vulnerable_product_versions, cvss, cvss_time, cvss_version, cvss_vector, cvss_metrics, access, impact, cvss3, cvss3_vector, cvss3_version, cvss3_metrics, score, is_exploitable, is_confirmed, is_in_the_news, is_in_the_wild, reflinks, reflinkids, created_at
Sort Method: external merge Disk: 277648kB
-> Seq Scan on vulns (cost=0.00..50542.19 rows=170013 width=1691) (actual time=0.044..836.597 rows=169846 loops=1)
Filter: ((score >= 0) AND (score <= 100))
Planning time: 3.183 ms
Execution time: 3070.346 ms
I have created the btree varchar index "btree_varchar" btree (history_type varchar_pattern_ops) like this:
CREATE INDEX CONCURRENTLY btree_varchar ON vulns_historicalvuln (history_type varchar_pattern_ops);
I have also created an index on the vulns score column for my second query:
CREATE INDEX CONCURRENTLY ON vulns (score);
I have read a lot of posts and documentation about slow queries and indexes. I'm sure an index is the usual solution for slow queries, but PostgreSQL's query plan doesn't use the index I have created. It estimates that a seq scan will be faster than using the index...
SELECT relname, indexrelname, idx_scan FROM pg_catalog.pg_stat_user_indexes;
relname | indexrelname | idx_scan
-------------------------------------+-----------------------------------------------------------------+------------
vulns_historicalvuln | btree_varchar | 0
Could you tell me if my index is well designed? How can I debug this? Feel free to ask for more information if needed.
Thanks
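(A quick check for this kind of planner decision is to disable sequential scans for the session and compare the plans and estimated costs; this is a diagnostic only, not something to leave enabled:)
-- diagnostic only: see what plan the planner produces when seq scans are penalized
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM vulns_historicalvuln
WHERE history_type <> '+'
ORDER BY id DESC
FETCH FIRST 10000 ROWS ONLY;
RESET enable_seqscan;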
After some research, I understand that an index is not the solution to my problem here.
The low cardinality (repeated values) of this field makes the index useless.
The query time in PostgreSQL is normal here because 30M rows match.
I'm closing this issue because there is no problem with the index here.

Evenly select Records on categorical column with Repeating Sequence and pagination in Postgres

Database: Postgres
I have a product(id, title, source, ...) table which contains almost 500K records.
An example of data is:
| Id | title | source |
|:---|---------:|:--------:|
| 1 | product1 | source1 |
| 2 | product2 | source1 |
| 3 | product3 | source1 |
| 4 | product4 | source1 |
| . | ........ | source1 |
| . | ........ | source2 |
| x | productx | source2 |
|x+n |productX+n| sourceN |
There are 5 distinct source values, and the source value of each record is essentially random.
I need to get paginated results in such a way that:
If I need to select 20 products, the result set should be equally distributed across sources and follow a repeating sequence: 2 products from each source up to the last source, then the next 2 products from each source, and so on.
For example:
| # | title | source |
|:---|---------:|:--------:|
| 1 | product1 | source1 |
| 2 | product2 | source1 |
| 3 | product3 | source2 |
| 4 | product4 | source2 |
| 5 | product5 | source3 |
| 6 | product6 | source3 |
| 7 | product7 | source4 |
| 8 | product8 | source4 |
| 9 | product9 | source5 |
| 10 |product10 | source5 |
| 11 | ........ | source1 |
| 12 | ........ | source1 |
| 13 | ........ | source2 |
| 14 | ........ | source2 |
| .. | ........ | ....... |
| 20 | ........ | source5 |
What is an optimized PostgreSQL query to achieve the above scenario, considering LIMIT, OFFSET, and that the number of sources can increase or decrease?
EDIT
As suggested by George S, the solution below works; however, it is not very performant. It takes almost 6 seconds to select only 20 records.
select id, title, source
, (row_number() over(partition by source order by last_modified DESC) - 1) / 2 as ordinal
-- order here can be by created time, id, title, etc
from product p
order by ordinal, source
limit 20
offset 2;
EXPLAIN ANALYZE of the above query on real data:
Limit (cost=147621.60..147621.65 rows=20 width=92) (actual time=5956.126..5956.138 rows=20 loops=1)
-> Sort (cost=147621.60..148813.72 rows=476848 width=92) (actual time=5956.123..5956.128 rows=22 loops=1)
Sort Key: (((row_number() OVER (?) - 1) / 2)), provider
Sort Method: top-N heapsort Memory: 28kB
-> WindowAgg (cost=122683.80..134605.00 rows=476848 width=92) (actual time=5099.059..5772.821 rows=477731 loops=1)
-> Sort (cost=122683.80..123875.92 rows=476848 width=84) (actual time=5098.873..5347.858 rows=477731 loops=1)
Sort Key: provider, last_modified DESC
Sort Method: external merge Disk: 46328kB
-> Seq Scan on product p (cost=0.00..54889.48 rows=476848 width=84) (actual time=0.012..4360.000 rows=477731 loops=1)
Planning Time: 0.354 ms
Execution Time: 5961.670 ms
This can be accomplished easily with a window function:
select id, title, source
, (row_number() over(partition by source order by id) - 1) / 2 as ordinal
--ordering here can be by created time, id, title, etc
from product p
order by ordinal, source
limit 10
offset 2;
As you noted, depending on your table size and the other filters being used, this may or may not be performant. The best way to tell is to run an EXPLAIN ANALYZE with the query on your actual data. If it isn't performant, you can also add the ordinal field to the table itself if it will always use the same value / ordering (a rough sketch of that is below). Sadly, you can't create an index using a window function (at least not in PG12).
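A sketch of that approach, assuming ordering by id and that the column is kept up to date by your application or a periodic job (the column name is just illustrative):
ALTER TABLE product ADD COLUMN ordinal integer;

-- backfill the ordinal once; new or changed rows need to be maintained separately
UPDATE product p
SET ordinal = sub.ordinal
FROM (
  SELECT id, (row_number() over(partition by source order by id) - 1) / 2 AS ordinal
  FROM product
) sub
WHERE p.id = sub.id;

CREATE INDEX ON product (ordinal, source);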
If you don't want to change the table itself, you can create a materialized view and then query that view so that the calculation only has to be done once:
CREATE MATERIALIZED VIEW ordered_product AS
select id, title, source
, (row_number() over(partition by source order by id) - 1) / 2 as ordinal
from product;
Afterwards, you can query the view like a normal table:
select * from ordered_product order by ordinal, source limit 10 offset 20;
You can also create indexes for it if necessary. Note that to refresh the view you'd run a command like:
REFRESH MATERIALIZED VIEW ordered_product;
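If you do add indexes, they would follow the query pattern above; for example (just a sketch):
CREATE INDEX ON ordered_product (ordinal, source);
-- a unique index is also required if you ever want to use
-- REFRESH MATERIALIZED VIEW CONCURRENTLY
CREATE UNIQUE INDEX ON ordered_product (id);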

Query planner behaviour degradation after PostgreSQL update (from 10.11 to 11.6)

After updating postgres, I noticed that one of the queries I was using became much slower. After running EXPLAIN ANALYZE I see that it is now using a different index on the same query.
Among other columns, my table has an applicationid column, which is a BIGINT foreign key, and an attributes column, which is a jsonb key/value map.
The description of the coupons table is (some irrelevant parts omitted):
+------------------------+--------------------------+-------------------------------------------------------+
| Column | Type | Modifiers |
|------------------------+--------------------------+-------------------------------------------------------|
| id | integer | not null default nextval('coupons_id_seq'::regclass) |
| created | timestamp with time zone | not null default now() |
| campaignid | bigint | |
| value | text | |
| expirydate | timestamp with time zone | |
| startdate | timestamp with time zone | |
| attributes | jsonb | not null default '{}'::jsonb |
| applicationid | bigint | |
| deleted | timestamp with time zone | |
| deleted_changelogid | bigint | not null default 0 |
| accountid | bigint | not null |
| recipientintegrationid | text | |
+------------------------+--------------------------+-------------------------------------------------------+
Indexes:
"coupons_applicationid_value_idx" UNIQUE, btree (applicationid, value) WHERE deleted IS NULL
"coupons_attrs_index" gin (attributes)
"coupons_recipientintegrationid_idx" btree (recipientintegrationid)
"coupons_value_trgm_idx" gin (value gin_trgm_ops)
The query I'm running is (some irrelevant parts were omitted):
EXPLAIN ANALYZE SELECT
*,
COUNT(*) OVER () AS total_rows
FROM
coupons
WHERE
deleted IS NULL
AND coupons.applicationid = 2
AND coupons.attributes #> '{"SessionId":"1070695459"}'
ORDER BY
id ASC
LIMIT 1000;
The filter on applicationid doesn't narrow things down much. The index that was previously used was coupons_attrs_index (over the attributes column), which produced very good results.
After the update, however, the query planner started preferring the index coupons_applicationid_value_idx for some reason!
Here is output from EXPLAIN ANALYZE (some irrelevant parts were omitted):
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| QUERY PLAN |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| -> Sort (cost=64.09..64.10 rows=1 width=237) (actual time=3068.996..3068.996 rows=0 loops=1) |
| Sort Key: coupons.id |
| Sort Method: quicksort Memory: 25kB |
| -> WindowAgg (cost=0.86..64.08 rows=1 width=237) (actual time=3068.988..3068.988 rows=0 loops=1) |
| -> Nested Loop (cost=0.86..64.07 rows=1 width=229) (actual time=3068.985..3068.985 rows=0 loops=1) |
| -> Index Scan using coupons_applicationid_value_idx on coupons (cost=0.43..61.61 rows=1 width=213) (actual time=3068.984..3068.984 rows=0 loops=1) |
| Index Cond: (applicationid = 2) |
| Filter: (attributes #> '{"SessionId": "1070695459"}'::jsonb) |
| Rows Removed by Filter: 2344013 |
| Planning Time: 0.531 ms |
| Execution Time: 3069.076 ms |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
EXPLAIN
Time: 3.159s (3 seconds), executed in: 3.102s (3 seconds)
Can anyone help me understand why the query planner uses a less efficient index (coupons_applicationid_value_idx instead of coupons_attrs_index) after the update?
After adding a mixed (BTREE + GIN) index on (applicationid, attributes), that index was selected, effectively solving the issue. I would still like to understand what happened so I can predict issues like this in the future.
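For reference, a combined index of that kind would presumably be created via the btree_gin extension, along these lines (the index name is illustrative):
-- btree_gin lets btree-indexable types such as bigint appear in a GIN index
CREATE EXTENSION IF NOT EXISTS btree_gin;
CREATE INDEX coupons_appid_attrs_idx ON coupons USING gin (applicationid, attributes);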
[EDIT 31-01-20 11:02]: The issue returned after 24 hours. Again the wrong index was chosen by the planner and the query became slow. Running a simple analyze solved it.
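(i.e., presumably, nothing more than a plain ANALYZE of the table:)
ANALYZE coupons;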
It is still very strange that it only started happening after the update to PG 11.

Redshift: Nested Loop Join in the query plan

I have a fact table t_session which, simplified, looks like this:
+------------+----------+-------------+
| start_hm | end_hm | device_id |
|------------+----------+-------------|
| 0 | 10 | 111 |
| 2 | 10 | 112 |
| 12 | 20 | 113 |
| 60 | 90 | 111 |
| 60 | 90 | 112 |
+------------+----------+-------------+
I also have a dimension table dim_time, which contains 1440 records with hours 0-23 and minutes 0-59 for every hour. So it contains every hour-minute combination of a day. tk is a sequential number in the range 0-1439.
+------+--------+----------+
| tk | hour | minute |
|------+--------+----------|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 2 | 0 | 2 |
............................
| 60 | 1 | 0 |
| 61 | 1 | 1 |
| 62 | 1 | 2 |
............................
| 120 | 2 | 0 |
| 121 | 2 | 1 |
| 122 | 2 | 2 |
............................
I want to count the number of active device_ids for every minute. In the real application there is another table dim_date and half a dozen other relations, but let's keep it simple for this question.
A device is active in every time slot between start_hm and end_hm. Both start_hm and end_hm have values between 0 and 1439.
select count(distinct device_id)
from t_session
join dim_time on tk between start_hm and end_hm
group by tk
order by tk;
This query is slow as hell. When I look at the execution plan, it complains about the nested loop.
+--------------------------------------------------------------------------------------------------------------------------------+
| QUERY PLAN |
|--------------------------------------------------------------------------------------------------------------------------------|
| XN Limit (cost=1000002000820.94..1000002000820.97 rows=10 width=8) |
| -> XN Merge (cost=1000002000820.94..1000002000821.44 rows=200 width=8) |
| Merge Key: tk |
| -> XN Network (cost=1000002000820.94..1000002000821.44 rows=200 width=8) |
| Send to leader |
| -> XN Sort (cost=1000002000820.94..1000002000821.44 rows=200 width=8) |
| Sort Key: tk |
| -> XN HashAggregate (cost=2000812.80..2000813.30 rows=200 width=8) |
| -> XN Subquery Scan volt_dt_0 (cost=2000764.80..2000796.80 rows=3200 width=8) |
| -> XN HashAggregate (cost=2000764.80..2000764.80 rows=3200 width=8) |
| -> XN Nested Loop DS_BCAST_INNER (cost=0.00..2000748.80 rows=3200 width=8) |
| Join Filter: (("outer".tk <= "inner".end_hm) AND ("outer".tk >= "inner".start_hm)) |
| -> XN Seq Scan on dim_time (cost=0.00..28.80 rows=2880 width=4) |
| -> XN Seq Scan on t_session (cost=0.00..0.10 rows=10 width=12) |
| ----- Nested Loop Join in the query plan - review the join predicates to avoid Cartesian products ----- |
+--------------------------------------------------------------------------------------------------------------------------------+
I understand where the nested loop comes from. It needs to loop over dim_time for every record in t_session.
Is it possible to modify my query to avoid the nested loop and improve the performance?
UPDATE: The same query runs blazingly fast on Postgres, and the execution plan does not have a Cartesian product.
+--------------------------------------------------------------------------------------------------------------+
| QUERY PLAN |
|--------------------------------------------------------------------------------------------------------------|
| Limit (cost=85822.07..85839.17 rows=10 width=12) |
| -> GroupAggregate (cost=85822.07..88284.47 rows=1440 width=12) |
| Group Key: dim_time.tk |
| -> Sort (cost=85822.07..86638.07 rows=326400 width=8) |
| Sort Key: dim_time.tk |
| -> Nested Loop (cost=0.00..51467.40 rows=326400 width=8) |
| Join Filter: ((dim_time.tk >= t_session.start_hm) AND (dim_time.tk <= t_session.end_hm)) |
| -> Seq Scan on t_session (cost=0.00..30.40 rows=2040 width=12) |
| -> Materialize (cost=0.00..32.60 rows=1440 width=4) |
| -> Seq Scan on dim_time (cost=0.00..25.40 rows=1440 width=4) |
+--------------------------------------------------------------------------------------------------------------+
UPDATE2:
The t_session table has the device_id column as DISTKEY and the start_date column (not shown in the simplified example) as SORTKEY: sessions are naturally ordered by start_date.
The dim_time table has tk as the SORTKEY and DISTSTYLE ALL.
The execution time for 40,000 sessions per day is 5-6 minutes on Redshift, and a couple of seconds on Postgres.
There are two dc2.large nodes in the Redshift cluster.

Postgresql very slow query on indexed column

I have a table with 50 million rows. One column named u_sphinx is very important; the available values are 1, 2, 3. Right now all rows have value 3, but when I check for new rows (u_sphinx = 1) the query is very slow. What could be wrong? Maybe the index is broken? Server: Debian, 8GB RAM, 4x Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
Table structure:
base=> \d u_user
Table "public.u_user"
Column | Type | Modifiers
u_ip | character varying |
u_agent | text |
u_agent_js | text |
u_resolution_id | integer |
u_os | character varying |
u_os_id | smallint |
u_platform | character varying |
u_language | character varying |
u_language_id | smallint |
u_language_js | character varying |
u_cookie | smallint |
u_java | smallint |
u_color_depth | integer |
u_flash | character varying |
u_charset | character varying |
u_doctype | character varying |
u_compat_mode | character varying |
u_sex | character varying |
u_age | character varying |
u_theme | character varying |
u_behave | character varying |
u_targeting | character varying |
u_resolution | character varying |
u_user_hash | bigint |
u_tech_hash | character varying |
u_last_target_data_time | integer |
u_last_target_prof_time | integer |
u_id | bigint | not null default nextval('u_user_u_id_seq'::regclass)
u_sphinx | smallint | not null default 1::smallint
Indexes:
"u_user_u_id_pk" PRIMARY KEY, btree (u_id)
"u_user_hash_index" btree (u_user_hash)
"u_user_u_sphinx_ind" btree (u_sphinx)
Slow query:
base=> explain analyze SELECT u_id FROM u_user WHERE u_sphinx = 1 LIMIT 1;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.15 rows=1 width=8) (actual time=485146.252..485146.252 rows=0 loops=1)
-> Seq Scan on u_user (cost=0.00..3023707.80 rows=19848860 width=8) (actual time=485146.249..485146.249 rows=0 loops=1)
Filter: (u_sphinx = 1)
Rows Removed by Filter: 23170476
Total runtime: 485160.241 ms
(5 rows)
Solved:
After adding a partial index:
base=> explain analyze SELECT u_id FROM u_user WHERE u_sphinx = 1 LIMIT 1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.27..4.28 rows=1 width=8) (actual time=0.063..0.063 rows=0 loops=1)
-> Index Scan using u_user_u_sphinx_index_1 on u_user (cost=0.27..4.28 rows=1 width=8) (actual time=0.061..0.061 rows=0 loops=1)
Index Cond: (u_sphinx = 1)
Total runtime: 0.106 ms
Thx to @Kouber Saparev
Try making a partial index.
CREATE INDEX u_user_u_sphinx_idx ON u_user (u_sphinx) WHERE u_sphinx = 1;
Your query plan looks like the DB is treating the query as if 1 were so common in the table that it would be better off digging into a disk page or two to identify a relevant row, instead of adding the overhead of plowing through an index and then fetching a row from a random disk page.
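One way to see what the planner actually believes about the column's value distribution (just a sanity check against pg_stats):
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'u_user' AND attname = 'u_sphinx';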
This could be an indication that you forgot to ANALYZE the table, so the planner doesn't have proper stats:
ANALYZE u_user;