POSTGRESQL VERSION: 10
HARDWARE: 4 workers / 16 GB RAM / 50% used
I'm not a PostgreSQL expert; I have just read a lot of documentation and run a lot of tests.
I have some PostgreSQL queries that take a long time (> 30 s) because one table holds about 10 million rows.
Column | Type | Collation | Nullable | Default
------------------------------+--------------------------+-----------+----------+----------------------------------------------------------
id | integer | | not null |
cveid | character varying(50) | | |
summary | text | | not null |
published | timestamp with time zone | | |
modified | timestamp with time zone | | |
assigner | character varying(128) | | |
vulnerable_products | character varying(250)[] | | |
cvss | double precision | | |
cvss_time | timestamp with time zone | | |
cvss_vector | character varying(250) | | |
access | jsonb | | not null |
impact | jsonb | | not null |
score | integer | | not null |
is_exploitable | boolean | | not null |
is_confirmed | boolean | | not null |
is_in_the_news | boolean | | not null |
is_in_the_wild | boolean | | not null |
reflinks | jsonb | | not null |
reflinkids | jsonb | | not null |
created_at | timestamp with time zone | | |
history_id | integer | | not null | nextval('vulns_historicalvuln_history_id_seq'::regclass)
history_date | timestamp with time zone | | not null |
history_change_reason | character varying(100) | | |
history_type | character varying(1) | | not null |
Indexes:
"vulns_historicalvuln_pkey" PRIMARY KEY, btree (history_id)
"btree_varchar" btree (history_type varchar_pattern_ops)
"vulns_historicalvuln_cve_id_850876bb" btree (cve_id)
"vulns_historicalvuln_cwe_id_2013d697" btree (cwe_id)
"vulns_historicalvuln_history_user_id_9e25ebf5" btree (history_user_id)
"vulns_historicalvuln_id_773f2af7" btree (id)
--- TRUNCATE
Foreign-key constraints:
"vulns_historicalvuln_history_user_id_9e25ebf5_fk_custusers" FOREIGN KEY (history_user_id) REFERENCES custusers_user(id) DEFERRABLE INITIALLY DEFERRED
Example of queries:
SELECT * FROM vulns_historicalvuln WHERE history_type <> '+' order by id desc fetch first 10000 rows only; -> 30s without cache
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..31878.33 rows=10000 width=1736) (actual time=0.173..32839.474 rows=10000 loops=1)
-> Index Scan Backward using vulns_historicalvuln_id_773f2af7 on vulns_historicalvuln (cost=0.43..26346955.92 rows=8264960 width=1736) (actual time=0.172..32830.958 rows=10000 loops=1)
Filter: ((history_type)::text <> '+'::text)
Rows Removed by Filter: 296
Planning time: 19.514 ms
Execution time: 32845.015 ms
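The plan above walks the id index backward and has to re-check every row against the history_type filter. One option worth testing (my own suggestion, not from the post; the index name is made up) is a partial index that contains only the rows the filter keeps, so the backward scan never touches rows it will discard:

```sql
-- Hypothetical partial index: only rows with history_type <> '+',
-- stored in id DESC order so FETCH FIRST ... ROWS ONLY can stop early.
CREATE INDEX CONCURRENTLY vulns_historicalvuln_not_plus_id_idx
    ON vulns_historicalvuln (id DESC)
    WHERE history_type <> '+';
```

Whether this pays off depends on how many rows actually have history_type = '+'; if almost none do, the partial index is nearly as large as the full one and changes little.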
SELECT DISTINCT "vulns"."id", "vulns"."uuid", "vulns"."feedid", "vulns"."cve_id", "vulns"."cveid", "vulns"."summary", "vulns"."published", "vulns"."modified", "vulns"."assigner", "vulns"."cwe_id", "vulns"."vulnerable_packages_versions", "vulns"."vulnerable_products", "vulns"."vulnerable_product_versions", "vulns"."cvss", "vulns"."cvss_time", "vulns"."cvss_version", "vulns"."cvss_vector", "vulns"."cvss_metrics", "vulns"."access", "vulns"."impact", "vulns"."cvss3", "vulns"."cvss3_vector", "vulns"."cvss3_version", "vulns"."cvss3_metrics", "vulns"."score", "vulns"."is_exploitable", "vulns"."is_confirmed", "vulns"."is_in_the_news", "vulns"."is_in_the_wild", "vulns"."reflinks", "vulns"."reflinkids", "vulns"."created_at", "vulns"."updated_at", "vulns"."id" AS "exploit_count", false AS "monitored", '42' AS "org" FROM "vulns" WHERE ("vulns"."score" >= 0 AND "vulns"."score" <= 100) ORDER BY "vulns"."updated_at" DESC LIMIT 10
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=315191.32..315192.17 rows=10 width=1691) (actual time=3013.964..3013.990 rows=10 loops=1)
-> Unique (cost=315191.32..329642.42 rows=170013 width=1691) (actual time=3013.962..3013.986 rows=10 loops=1)
-> Sort (cost=315191.32..315616.35 rows=170013 width=1691) (actual time=3013.961..3013.970 rows=10 loops=1)
Sort Key: updated_at DESC, id, uuid, feedid, cve_id, cveid, summary, published, modified, assigner, cwe_id, vulnerable_packages_versions, vulnerable_products, vulnerable_product_versions, cvss, cvss_time, cvss_version, cvss_vector, cvss_metrics, access, impact, cvss3, cvss3_vector, cvss3_version, cvss3_metrics, score, is_exploitable, is_confirmed, is_in_the_news, is_in_the_wild, reflinks, reflinkids, created_at
Sort Method: external merge Disk: 277648kB
-> Seq Scan on vulns (cost=0.00..50542.19 rows=170013 width=1691) (actual time=0.044..836.597 rows=169846 loops=1)
Filter: ((score >= 0) AND (score <= 100))
Planning time: 3.183 ms
Execution time: 3070.346 ms
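The dominant cost in this plan is the sort, which spills about 277 MB to disk before the Unique and LIMIT run. Two hedged ideas, both assumptions rather than anything stated in the post: if each id already appears only once, the DISTINCT is redundant and an index on updated_at can serve the ORDER BY ... LIMIT directly; independently, a larger work_mem can keep the sort in memory. A sketch (the index name is invented):

```sql
-- Session-level setting, sized as an example to keep this sort off disk.
SET work_mem = '512MB';

-- Lets ORDER BY updated_at DESC LIMIT 10 stop early, once DISTINCT is gone.
CREATE INDEX CONCURRENTLY vulns_updated_at_idx ON vulns (updated_at DESC);
```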
I have created a btree varchar index "btree_varchar" btree (history_type varchar_pattern_ops) like this:
CREATE INDEX CONCURRENTLY btree_varchar ON vulns_historicalvuln (history_type varchar_pattern_ops);
I have also created an index on the vulns score column for my second query:
CREATE INDEX CONCURRENTLY ON vulns (score);
I have read a lot of posts and documentation about slow queries and indexes. I was sure an index was the solution to the slow queries, but the PostgreSQL query planner doesn't use the indexes I created: it estimates that a sequential scan is faster than using the index...
SELECT relname, indexrelname, idx_scan FROM pg_catalog.pg_stat_user_indexes;
relname | indexrelname | idx_scan
-------------------------------------+-----------------------------------------------------------------+------------
vulns_historicalvuln | btree_varchar | 0
Could you tell me if my index is well designed? How can I debug this? Feel free to ask for more information if needed.
Thanks
After some research, I understand that an index is not the solution to my problem here.
The low cardinality (repeated values) of this field makes the index useless.
The query time here is normal given the ~30M rows matched.
I'm closing this issue because there is no problem with the index here.
Related
After updating postgres, I noticed that one of the queries I was using became much slower. After running EXPLAIN ANALYZE I see that it is now using a different index on the same query.
Among other columns, my table has an applicationid column, which is a BIGINT foreign key, and an attributes column, which is a jsonb key/value map.
The description of the coupons table is (some irrelevant parts omitted):
+------------------------+--------------------------+-------------------------------------------------------+
| Column | Type | Modifiers |
|------------------------+--------------------------+-------------------------------------------------------|
| id | integer | not null default nextval('coupons_id_seq'::regclass) |
| created | timestamp with time zone | not null default now() |
| campaignid | bigint | |
| value | text | |
| expirydate | timestamp with time zone | |
| startdate | timestamp with time zone | |
| attributes | jsonb | not null default '{}'::jsonb |
| applicationid | bigint | |
| deleted | timestamp with time zone | |
| deleted_changelogid | bigint | not null default 0 |
| accountid | bigint | not null |
| recipientintegrationid | text | |
+------------------------+--------------------------+-------------------------------------------------------+
Indexes:
"coupons_applicationid_value_idx" UNIQUE, btree (applicationid, value) WHERE deleted IS NULL
"coupons_attrs_index" gin (attributes)
"coupons_recipientintegrationid_idx" btree (recipientintegrationid)
"coupons_value_trgm_idx" gin (value gin_trgm_ops)
The query I'm running is (some irrelevant parts were omitted):
EXPLAIN ANALYZE SELECT
*,
COUNT(*) OVER () AS total_rows
FROM
coupons
WHERE
deleted IS NULL
AND coupons.applicationid = 2
AND coupons.attributes #> '{"SessionId":"1070695459"}'
ORDER BY
id ASC
LIMIT 1000;
The filter on applicationid doesn't narrow things down much. The index that was previously used was coupons_attrs_index (over the attributes column), which produced very good results.
After the update however, the query planner started preferring the index coupons_applicationid_value_idx for some reason!
Here is output from EXPLAIN ANALYZE (some irrelevant parts were omitted):
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| QUERY PLAN |
|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| -> Sort (cost=64.09..64.10 rows=1 width=237) (actual time=3068.996..3068.996 rows=0 loops=1) |
| Sort Key: coupons.id |
| Sort Method: quicksort Memory: 25kB |
| -> WindowAgg (cost=0.86..64.08 rows=1 width=237) (actual time=3068.988..3068.988 rows=0 loops=1) |
| -> Nested Loop (cost=0.86..64.07 rows=1 width=229) (actual time=3068.985..3068.985 rows=0 loops=1) |
| -> Index Scan using coupons_applicationid_value_idx on coupons (cost=0.43..61.61 rows=1 width=213) (actual time=3068.984..3068.984 rows=0 loops=1) |
| Index Cond: (applicationid = 2) |
| Filter: (attributes #> '{"SessionId": "1070695459"}'::jsonb) |
| Rows Removed by Filter: 2344013 |
| Planning Time: 0.531 ms |
| Execution Time: 3069.076 ms |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Can anyone help me understand why the query planner uses a less efficient index (coupons_applicationid_value_idx instead of coupons_attrs_index) after the update?
After adding a mixed (btree + GIN) index on (applicationid, attributes), that index was selected, effectively solving the issue. I would still like to understand what happened, so I can predict issues like this in the future.
[EDIT 31-01-20 11:02]: The issue returned after 24 hours. Again the wrong index was chosen by the planner and the query became slow. Running a simple ANALYZE solved it.
It is still very strange that it only started happening after the update to PG 11.
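For reference, a combined btree + GIN index like the one described requires the btree_gin extension. A sketch (the index name is my own), followed by an ANALYZE so the planner statistics are fresh, and an optional per-table autovacuum setting since a plain ANALYZE was what fixed the regression:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- Combined index over the scalar FK and the jsonb map.
CREATE INDEX CONCURRENTLY coupons_appid_attrs_idx
    ON coupons USING gin (applicationid, attributes);

ANALYZE coupons;

-- Optional: make autovacuum re-analyze this table more aggressively
-- (example threshold; tune to the table's churn rate).
ALTER TABLE coupons SET (autovacuum_analyze_scale_factor = 0.02);
```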
I'm looking to speed up a query (PostgreSQL 9.5), but I cannot change it, because it is executed by an application I cannot modify.
So I captured the query from the PostgreSQL logs; here it is:
SELECT Count(*)
FROM (SELECT ti.idturnosistemaexterno,
ti.historiaclinica_hp,
p.patientname,
CASE
WHEN ( ti.creationdate :: VARCHAR IS NOT NULL ) THEN
Date_trunc('SECOND', ti.creationdate) :: VARCHAR
ELSE 'NO EXISTE' :: VARCHAR
END AS creationdate,
CASE
WHEN ( st.idstudy :: VARCHAR IS NOT NULL ) THEN 'SI' :: VARCHAR
ELSE 'NO' :: VARCHAR
END AS idstudy,
st.institutionname,
CASE
WHEN ( st.created_time :: VARCHAR IS NOT NULL ) THEN
Date_trunc('SECOND', st.created_time) :: VARCHAR
ELSE 'NO EXISTE' :: VARCHAR
END AS created_time,
ti.enviado,
st.accessionnumber,
st.modality
FROM study st
right join turnointegracion ti
ON st.accessionnumber = ti.idturnosistemaexterno
left join patient p
ON st.idpatient = p.idpatient
ORDER BY ti.creationdate DESC) AS foo;
The explain analyze output is this:
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=231136.16..231136.17 rows=1 width=0) (actual time=32765.883..32765.883 rows=1 loops=1)
-> Sort (cost=230150.04..230314.39 rows=65741 width=8) (actual time=32754.992..32761.780 rows=64751 loops=1)
Sort Key: ti.creationdate DESC
Sort Method: external merge Disk: 1648kB
-> Hash Right Join (cost=219856.39..224889.28 rows=65741 width=8) (actual time=26653.007..32714.961 rows=64751 loops=1)
Hash Cond: ((st.accessionnumber)::text = (ti.idturnosistemaexterno)::text)
-> Seq Scan on study st (cost=0.00..4086.26 rows=77126 width=12) (actual time=12.983..6032.251 rows=77106 loops=1)
-> Hash (cost=219048.95..219048.95 rows=64595 width=16) (actual time=26639.722..26639.722 rows=64601 loops=1)
Buckets: 65536 Batches: 1 Memory Usage: 3602kB
-> Seq Scan on turnointegracion ti (cost=0.00..219048.95 rows=64595 width=16) (actual time=17.259..26611.806 rows=64601 loops=1)
Planning time: 25.519 ms
Execution time: 32766.710 ms
(12 rows)
Here are the related table definitions:
Table "public.turnointegracion"
Column | Type | Modifiers
---------------------------+-----------------------------+--------------------------------------------------------------------
idturnosistemaexterno | character varying(50) |
historiaclinica_hp | integer |
matriculaprofrealiza | character varying(10) |
matriculaprofinforma | character varying(10) |
obrasocial | character varying(20) |
planobrasocial | character varying(20) |
nroafiliado | character varying(20) |
nroautorizacion | character varying(20) |
matriculaprofprescribe | character varying(10) |
codigonomenclador | character varying(10) |
cantidadcodigonomenclador | integer |
importeunitariohonorarios | money |
importeunitarioderechos | money |
nrodefacturacion | character varying(15) |
informe | bytea |
titulodelestudio | character varying(250) |
fechaprescripcion | timestamp without time zone |
fechahora | timestamp without time zone |
enviado | boolean | not null default false
enviadofechahora | timestamp without time zone |
procesado_hp | timestamp without time zone |
modalidad | character varying(6) |
orden | integer | not null default nextval('turnointegracion_orden_seq'::regclass)
idturno | integer | not null default nextval('seq_turnointegracion_idturno'::regclass)
creationdate | timestamp without time zone | default now()
informetxt | text |
informedisponible | timestamp without time zone |
informeprocesado | timestamp without time zone |
Indexes:
"turnointegracion_pkey" PRIMARY KEY, btree (idturno)
"idx_fechahora" btree (fechahora)
"idx_historiaclinicahp" btree (historiaclinica_hp)
"idx_idturnosistemaexterno" btree (idturnosistemaexterno)
"idx_informedisponible" btree (informedisponible)
"idx_turnointegracion_creationdate" btree (creationdate DESC)
"idx_turnointegracion_idturnosistext_text" btree ((idturnosistemaexterno::text))
Table "public.study"
Column | Type | Modifiers
------------------------------+-----------------------------+---------------------------------------------------------
idstudy | integer | not null default nextval('study_idstudy_seq'::regclass)
studydate | date |
studytime | time without time zone |
studyid | character varying(20) |
studydescription | character varying(255) |
modality | character varying(2) |
modalityaetitle | character varying(50) |
nameofphysiciansreadingstudy | character varying(255) |
accessionnumber | character varying(20) |
performingphysiciansname | character varying(255) |
referringphysiciansname | character varying(255) |
studyinstanceuid | character varying(255) |
status | status_ |
institutionname | character varying(100) |
idpatient | integer |
created_time | timestamp without time zone |
Indexes:
"study_pkey" PRIMARY KEY, btree (idstudy)
"study_studyinstanceuid_key" UNIQUE CONSTRAINT, btree (studyinstanceuid)
"idx_study_accession_text" btree ((accessionnumber::text))
"idx_study_accessionnumber" btree (accessionnumber)
"idx_study_idstudy" btree (idstudy)
Foreign-key constraints:
"study_idpatient_fkey" FOREIGN KEY (idpatient) REFERENCES patient(idpatient)
Referenced by:
TABLE "series" CONSTRAINT "series_idstudy_fkey" FOREIGN KEY (idstudy) REFERENCES study(idstudy)
As you can see, I've added indexes on the affected columns, but the planner is still doing sequential scans. Is there a way to improve this?
There is no WHERE condition; due to this join:
right join turnointegracion ti
ON st.accessionnumber = ti.idturnosistemaexterno
you're reading all records from turnointegracion. Adding an index on creationdate can speed up the sort, but all records are still returned.
Filtering by creationdate would reduce the final time.
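One more thing worth checking (my assumption, not stated in the answer): 26 seconds of sequential scan for only ~64k rows of turnointegracion is a lot, which can indicate table bloat, plausible here given the wide bytea/text columns (informe, informetxt). A quick size check and a vacuum:

```sql
-- How big is the table really, including TOAST?
SELECT pg_size_pretty(pg_total_relation_size('turnointegracion'));

-- Reclaim dead space and refresh planner statistics in one pass.
VACUUM (VERBOSE, ANALYZE) turnointegracion;
```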
In a PostgreSQL 9.4.0 database I have a busy table with 22 indexes which are larger than the actual data in the table.
Since most of these indexes are for columns which are almost entirely NULL, I've been trying to replace some of them with partial indexes.
One of the columns is: auto_decline_at timestamp without time zone. This has 5453085 NULLS out of a total 5457088 rows.
The partial index replacement is being used, but according to the stats, the old index is also still in use, so I am afraid to drop it.
Selecting from pg_tables I see:
tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched
-----------+---------------------------------------+-------------+------------+------------+--------+-----------------+-------------+----------------
jobs | index_jobs_on_auto_decline_at | 5.45496e+06 | 1916 MB | 3123 MB | N | 17056009 | 26506058607 | 26232155810
jobs | index_jobs_on_auto_decline_at_partial | 5.45496e+06 | 1916 MB | 120 kB | N | 6677 | 26850779 | 26679802
And a few minutes later:
tablename | indexname | num_rows | table_size | index_size | unique | number_of_scans | tuples_read | tuples_fetched
-----------+---------------------------------------+-------------+------------+------------+--------+-----------------+-------------+----------------
jobs | index_jobs_on_auto_decline_at | 5.45496e+06 | 1916 MB | 3124 MB | N | 17056099 | 26506058697 | 26232155900
jobs | index_jobs_on_auto_decline_at_partial | 5.45496e+06 | 1916 MB | 120 kB | N | 6767 | 27210639 | 27039623
So number_of_scans is increasing for both of them.
The index definitions:
"index_jobs_on_auto_decline_at" btree (auto_decline_at)
"index_jobs_on_auto_decline_at_partial" btree (auto_decline_at) WHERE auto_decline_at IS NOT NULL
The only relevant query I can see in my logs follows this pattern:
SELECT "jobs".* FROM "jobs" WHERE (jobs.pending_destroy IS NULL OR jobs.pending_destroy = FALSE) AND "jobs"."deleted_at" IS NULL AND (state = 'assigned' AND auto_decline_at IS NOT NULL AND auto_decline_at < '2015-08-17 06:57:22.325324')
EXPLAIN ANALYSE gives me the following plan, which uses the partial index as expected:
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
Index Scan using index_jobs_on_auto_decline_at_partial on jobs (cost=0.28..12.27 rows=1 width=648) (actual time=22.143..22.143 rows=0 loops=1)
Index Cond: ((auto_decline_at IS NOT NULL) AND (auto_decline_at < '2015-08-17 06:57:22.325324'::timestamp without time zone))
Filter: (((pending_destroy IS NULL) OR (NOT pending_destroy)) AND (deleted_at IS NULL) AND ((state)::text = 'assigned'::text))
Rows Removed by Filter: 3982
Planning time: 2.731 ms
Execution time: 22.179 ms
(6 rows)
My questions:
Why is index_jobs_on_auto_decline_at still being used?
Could this same query sometimes use index_jobs_on_auto_decline_at, or is there likely to be another query I am missing?
Is there a way to log the queries which are using index_jobs_on_auto_decline_at?
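For the last question, one possible approach (not from the post) is the auto_explain contrib module, which logs the actual execution plan, and therefore the index used, for every statement slower than a threshold:

```sql
-- In postgresql.conf, to cover all backends:
--   shared_preload_libraries = 'auto_explain'
-- Or per session, for ad-hoc investigation:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '100ms';  -- example threshold
SET auto_explain.log_analyze = on;            -- log actual, not estimated, rows
```

Any logged plan that mentions index_jobs_on_auto_decline_at then identifies the query responsible for those scans.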
I have a table with 50 million rows. One column, named u_sphinx, is very important; its possible values are 1, 2 and 3. Right now all rows have value 3, but when I check for new rows (u_sphinx = 1) the query is very slow. What could be wrong? Maybe the index is broken? Server: Debian, 8 GB RAM, 4x Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
Table structure:
base=> \d u_user
Table "public.u_user"
Column | Type | Modifiers
u_ip | character varying |
u_agent | text |
u_agent_js | text |
u_resolution_id | integer |
u_os | character varying |
u_os_id | smallint |
u_platform | character varying |
u_language | character varying |
u_language_id | smallint |
u_language_js | character varying |
u_cookie | smallint |
u_java | smallint |
u_color_depth | integer |
u_flash | character varying |
u_charset | character varying |
u_doctype | character varying |
u_compat_mode | character varying |
u_sex | character varying |
u_age | character varying |
u_theme | character varying |
u_behave | character varying |
u_targeting | character varying |
u_resolution | character varying |
u_user_hash | bigint |
u_tech_hash | character varying |
u_last_target_data_time | integer |
u_last_target_prof_time | integer |
u_id | bigint | not null default nextval('u_user_u_id_seq'::regclass)
u_sphinx | smallint | not null default 1::smallint
Indexes:
"u_user_u_id_pk" PRIMARY KEY, btree (u_id)
"u_user_hash_index" btree (u_user_hash)
"u_user_u_sphinx_ind" btree (u_sphinx)
Slow query:
base=> explain analyze SELECT u_id FROM u_user WHERE u_sphinx = 1 LIMIT 1;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.00..0.15 rows=1 width=8) (actual time=485146.252..485146.252 rows=0 loops=1)
-> Seq Scan on u_user (cost=0.00..3023707.80 rows=19848860 width=8) (actual time=485146.249..485146.249 rows=0 loops=1)
Filter: (u_sphinx = 1)
Rows Removed by Filter: 23170476
Total runtime: 485160.241 ms
(5 rows)
Solved:
After adding partial index
base=> explain analyze SELECT u_id FROM u_user WHERE u_sphinx = 1 LIMIT 1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.27..4.28 rows=1 width=8) (actual time=0.063..0.063 rows=0 loops=1)
-> Index Scan using u_user_u_sphinx_index_1 on u_user (cost=0.27..4.28 rows=1 width=8) (actual time=0.061..0.061 rows=0 loops=1)
Index Cond: (u_sphinx = 1)
Total runtime: 0.106 ms
Thanks to @Kouber Saparev.
Try making a partial index.
CREATE INDEX u_user_u_sphinx_idx ON u_user (u_sphinx) WHERE u_sphinx = 1;
Your query plan looks like the DB is treating the query as if 1 were so common in the table that it would be better off digging into a disk page or two to find a relevant row, instead of adding the overhead of plowing through an index and then fetching a row from a random disk page.
This could be an indication that you forgot to run ANALYZE on the table, so the planner lacks proper stats:
ANALYZE u_user;
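To check whether the table's statistics really are stale before re-running ANALYZE, the last analyze timestamps are visible in pg_stat_user_tables (a sketch):

```sql
-- NULL or very old timestamps suggest the planner is working from stale stats.
SELECT relname, last_analyze, last_autoanalyze
FROM pg_catalog.pg_stat_user_tables
WHERE relname = 'u_user';
```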
This query ran instantly:
mydb=# SELECT reports.* FROM reports WHERE reports.id = 9988 ORDER BY time DESC LIMIT 1;
This query took 33 seconds to run (and I only selected a few unit_ids here; I will potentially have hundreds, if not thousands):
(UPDATE: These are the results of using EXPLAIN ANALYZE):
mydb=# EXPLAIN ANALYZE SELECT DISTINCT ON (unit_id) r.* FROM reports r WHERE r.unit_id IN (3007, 3011, 6193) ORDER BY unit_id, time DESC;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Unique (cost=1377569.23..1381106.10 rows=11 width=155) (actual time=97175.381..97710.369 rows=3 loops=1)
-> Sort (cost=1377569.23..1379337.66 rows=707375 width=155) (actual time=97175.379..97616.039 rows=764509 loops=1)
Sort Key: unit_id, "time"
Sort Method: external merge Disk: 92336kB
-> Bitmap Heap Scan on reports r (cost=20224.85..1142005.76 rows=707375 width=155) (actual time=12396.930..94097.890 rows=764509 loops=1)
Recheck Cond: (unit_id = ANY ('{3007,3011,6193}'::integer[]))
-> Bitmap Index Scan on index_reports_on_unit_id (cost=0.00..20048.01 rows=707375 width=0) (actual time=12382.176..12382.176 rows=764700 loops=1)
Index Cond: (unit_id = ANY ('{3007,3011,6193}'::integer[]))
Total runtime: 97982.363 ms
(9 rows)
The schema of reports table is as follows:
mydb=# \d+ reports
Table "public.reports"
Column | Type | Modifiers | Storage | Description
----------------+-----------------------------+------------------------------------------------------+----------+-------------
id | integer | not null default nextval('reports_id_seq'::regclass) | plain |
unit_id | integer | not null | plain |
time_secs | integer | not null | plain |
time | timestamp without time zone | | plain |
latitude | numeric(15,10) | not null | main |
longitude | numeric(15,10) | not null | main |
speed | integer | | plain |
io | integer | | plain |
msg_type | integer | | plain |
msg_code | integer | | plain |
signal | integer | | plain |
cellid | integer | | plain |
lac | integer | | plain |
processed | boolean | default false | plain |
created_at | timestamp without time zone | | plain |
updated_at | timestamp without time zone | | plain |
street | character varying(255) | | extended |
county | character varying(255) | | extended |
state | character varying(255) | | extended |
postal_code | character varying(255) | | extended |
country | character varying(255) | | extended |
distance | numeric | | main |
gps_valid | boolean | default true | plain |
city | character varying(255) | | extended |
street_number | character varying(255) | | extended |
address_source | integer | | plain |
source | integer | default 0 | plain |
driver_id | integer | | plain |
Indexes:
"reports_pkey" PRIMARY KEY, btree (id)
"reports_uniqueness_index" UNIQUE, btree (unit_id, "time", latitude, longitude)
"index_reports_on_address_source" btree (address_source DESC)
"index_reports_on_driver_id" btree (driver_id)
"index_reports_on_time" btree ("time")
"index_reports_on_time_secs" btree (time_secs)
"index_reports_on_unit_id" btree (unit_id)
Foreign-key constraints:
"reports_driver_id_fkey" FOREIGN KEY (driver_id) REFERENCES drivers(id)
"reports_unit_id_fkey" FOREIGN KEY (unit_id) REFERENCES units(id)
Referenced by:
TABLE "alerts" CONSTRAINT "alerts_report_id_fkey" FOREIGN KEY (report_id) REFERENCES reports(id)
TABLE "pressure_transmitters" CONSTRAINT "pressure_transmitters_report_id_fkey" FOREIGN KEY (report_id) REFERENCES reports(id)
TABLE "thermoking" CONSTRAINT "thermoking_report_id_fkey" FOREIGN KEY (report_id) REFERENCES reports(id)
Has OIDs: no
Why is SELECT DISTINCT ON running so slowly?
There is a relatively slow Bitmap Heap Scan with a Recheck, and the sort spills to disk; please try increasing work_mem.
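A sketch of both possible fixes (the index name is mine, not from the post): raising work_mem so the 92 MB external merge sort stays in memory, and an index whose order matches DISTINCT ON (unit_id) ... ORDER BY unit_id, time DESC so the sort can be skipped entirely:

```sql
-- Example value; the default work_mem is often only 4MB.
SET work_mem = '128MB';

-- Matches the DISTINCT ON / ORDER BY, letting the planner read rows
-- already grouped by unit_id with the newest time first.
CREATE INDEX CONCURRENTLY index_reports_on_unit_id_time
    ON reports (unit_id, "time" DESC);
```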