Really long-running query when using ORDER BY - postgresql

I have a major issue with one of my queries:
SELECT tpostime, gispoint
FROM mytable
WHERE idterminal = 233463
ORDER BY idpos DESC
When idterminal does not exist in 'mytable', this query runs forever until I'm presented with a timeout (the 'canceling statement due to user request' message, to be specific). When I remove the ORDER BY clause, everything seems fine. Now I'm wondering: idpos is the primary key of 'mytable', therefore it's indexed, so ordering by it should be fast, I guess.
And what's important: 'mytable' weighs 3 GB.
Table and index definitions:
CREATE TABLE mytable (
    idpos serial NOT NULL,
    tpostime timestamp(0) without time zone,
    idterminal integer DEFAULT 0,
    gispoint geometry,
    idtracks integer,
    CONSTRAINT mytable_pkey PRIMARY KEY (idpos),
    CONSTRAINT qwe FOREIGN KEY (idtracks) REFERENCES qwe (idtracks)
        MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE,
    CONSTRAINT abc FOREIGN KEY (idterminal) REFERENCES abc (idterminal)
        MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE,
    CONSTRAINT enforce_geotype_gispoint
        CHECK (geometrytype(gispoint) = 'POINT'::text OR gispoint IS NULL),
    CONSTRAINT enforce_srid_gispoint CHECK (srid(gispoint) = 4326)
) WITH OIDS;
CREATE INDEX idx_idterminal ON mytable USING btree (idterminal);
CREATE INDEX idx_idtracks ON mytable USING btree (idtracks);
CREATE INDEX idx_idtracks_idterminal ON mytable USING btree (idtracks, idterminal);

It looks to me like the planner estimates the selectivity of idterminal to be low enough that it chooses a full (backward) scan of mytable_pkey, filtering on idterminal as it goes, rather than paying the cost of fetching and sorting all the rows with idterminal = 233463. When the value doesn't exist at all, that scan has to read the entire index before returning nothing, which is why the query appears to hang.
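You can check which plan shape the planner picked with a plain EXPLAIN (without ANALYZE it does not execute the query, so it's safe even for the pathological terminal ID):
EXPLAIN
SELECT tpostime, gispoint
FROM mytable
WHERE idterminal = 233463
ORDER BY idpos DESC;
If this theory holds, you should see a backward index scan on mytable_pkey with a filter on idterminal, rather than a scan of idx_idterminal followed by a sort.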
I suggest:
CREATE INDEX idx_idterminal2 ON mytable USING btree (idterminal, idpos);
and perhaps:
DROP INDEX idx_idterminal;
You don't mention if this is a production database or not - if it is of course you will need to test the impact of the change first elsewhere.
If you prefer not to change the schema, you might like to try to trick the optimizer into the path you know is best with something like this (not tested) for 8.4 and above:
SELECT *
FROM ( SELECT tpostime, gispoint, idpos, row_number() over (order by 1)
       FROM mytable
       WHERE idterminal = 233463 ) AS sub
ORDER BY idpos DESC;
or perhaps just:
SELECT *
FROM ( SELECT tpostime, gispoint, idpos
       FROM mytable
       WHERE idterminal = 233463
       GROUP BY tpostime, gispoint, idpos ) AS sub
ORDER BY idpos DESC;
or even:
SELECT tpostime, gispoint
FROM mytable
WHERE idterminal = 233463
ORDER BY idpos*2 DESC
(the expression idpos*2 no longer matches the indexed column, so the planner cannot walk mytable_pkey to produce the ordering and has to sort the filtered rows instead).

Do you have an index on idterminal? Try adding a composite index on both columns, (idterminal, idpos). If you look at the explain plan, what is probably happening is that it orders by idpos first (walking the primary key index) and then scans to find idterminal.
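For reference, a sketch of that composite index (the name is mine; the DESC is optional, since Postgres can also scan a b-tree backward within the idterminal = ? range):
CREATE INDEX idx_idterminal_idpos ON mytable USING btree (idterminal, idpos DESC);
With idterminal leading, the planner can jump straight to the matching key range and read rows already ordered by idpos, so a nonexistent terminal ID returns immediately instead of scanning the whole primary key.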

Related

Syntax error on upsert PostgreSQL while using an insert into with on conflict [duplicate]

I'm getting the following error when doing the following type of insert:
Query:
INSERT INTO accounts (type, person_id) VALUES ('PersonAccount', 1)
ON CONFLICT (type, person_id) WHERE type = 'PersonAccount'
DO UPDATE SET updated_at = EXCLUDED.updated_at
RETURNING *
Error:
SQL execution failed (Reason: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification)
I also have a unique INDEX:
CREATE UNIQUE INDEX uniq_person_accounts ON accounts USING btree (type, person_id) WHERE ((type)::text = 'PersonAccount'::text);
The thing is that sometimes it works, but not every time. I randomly get
that exception, which is really strange. It seems that it can't access that
INDEX or it doesn't know it exists.
Any suggestion?
I'm using PostgreSQL 9.5.5.
Example while executing the code that tries to find or create an account:
INSERT INTO accounts (type, person_id, created_at, updated_at) VALUES ('PersonAccount', 69559, '2017-02-03 12:09:27.259', '2017-02-03 12:09:27.259') ON CONFLICT (type, person_id) WHERE type = 'PersonAccount' DO UPDATE SET updated_at = EXCLUDED.updated_at RETURNING *
SQL execution failed (Reason: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification)
In this case, I'm sure that the account does not exist. Furthermore, it never outputs the error when the person already has an account. The problem is that, in some cases, it also works when there is no account yet. The query is exactly the same.
Per the docs,
All table_name unique indexes that, without regard to order, contain exactly the
conflict_target-specified columns/expressions are inferred (chosen) as arbiter
indexes. If an index_predicate is specified, it must, as a further requirement
for inference, satisfy arbiter indexes.
The docs go on to say,
[index_predicate is u]sed to allow inference of partial unique indexes
In an understated way, the docs are saying that when using a partial index and
upserting with ON CONFLICT, the index_predicate must be specified. It is not
inferred for you. I learned this
here, and the following example demonstrates this.
CREATE TABLE test.accounts (
    id int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    type text,
    person_id int);
CREATE UNIQUE INDEX accounts_note_idx ON test.accounts (type, person_id) WHERE ((type)::text = 'PersonAccount'::text);
INSERT INTO test.accounts (type, person_id) VALUES ('PersonAccount', 10);
so that we have:
unutbu=# select * from test.accounts;
+----+---------------+-----------+
| id | type          | person_id |
+----+---------------+-----------+
|  1 | PersonAccount |        10 |
+----+---------------+-----------+
(1 row)
Without index_predicate we get an error:
INSERT INTO test.accounts (type, person_id) VALUES ('PersonAccount', 10) ON CONFLICT (type, person_id) DO NOTHING;
-- ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
But if instead you include the index_predicate, WHERE ((type)::text = 'PersonAccount'::text):
INSERT INTO test.accounts (type, person_id) VALUES ('PersonAccount', 10)
ON CONFLICT (type, person_id)
WHERE ((type)::text = 'PersonAccount'::text) DO NOTHING;
then there is no error and DO NOTHING is honored.
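The same inference carries over to DO UPDATE; a minimal sketch against the same test table (the SET target here is arbitrary, chosen only to show the shape):
INSERT INTO test.accounts (type, person_id) VALUES ('PersonAccount', 10)
ON CONFLICT (type, person_id) WHERE ((type)::text = 'PersonAccount'::text)
DO UPDATE SET person_id = EXCLUDED.person_id;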
A simple solution to this error
First of all, let's look at the cause of the error with a simple example. Here is a table mapping products to categories.
create table if not exists product_categories (
    product_id uuid references products(product_id) not null,
    category_id uuid references categories(category_id) not null,
    whitelist boolean default false
);
If we use this query:
INSERT INTO product_categories (product_id, category_id, whitelist)
VALUES ('123...', '456...', TRUE)
ON CONFLICT (product_id, category_id)
DO UPDATE SET whitelist=EXCLUDED.whitelist;
This will give you the error there is no unique or exclusion constraint matching the ON CONFLICT specification, because there is no unique constraint on product_id and category_id. Multiple rows could share the same combination of product and category IDs, so there can never be a conflict on them.
Solution:
Use a unique constraint on both product_id and category_id, like this:
create table if not exists product_categories (
    product_id uuid references products(product_id) not null,
    category_id uuid references categories(category_id) not null,
    whitelist boolean default false,
    primary key (product_id, category_id) -- This will solve the problem
    -- unique (product_id, category_id) -- OR this if you already have a primary key
);
Now you can use ON CONFLICT (product_id, category_id) for both columns without any error.
In short: whatever column(s) you use with ON CONFLICT must be covered by a unique constraint (or a unique index).
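If the table already exists, the same fix can be applied after the fact; a minimal sketch (the constraint name is illustrative):
ALTER TABLE product_categories
    ADD CONSTRAINT product_categories_uniq UNIQUE (product_id, category_id);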
The easy way to fix it is by setting the conflicting column(s) as UNIQUE.
I did not have a chance to play with UPSERT, but I think your case is covered in the docs:
Note that this means a non-partial unique index (a unique index
without a predicate) will be inferred (and thus used by ON CONFLICT)
if such an index satisfying every other criteria is available. If an
attempt at inference is unsuccessful, an error is raised.
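For instance, on the accounts table from the question, a non-partial unique index (the index name here is mine) would let the conflict target be inferred without any predicate:
CREATE UNIQUE INDEX uniq_person_accounts_plain ON accounts USING btree (type, person_id);
INSERT INTO accounts (type, person_id) VALUES ('PersonAccount', 1)
ON CONFLICT (type, person_id) DO NOTHING;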
I solved the same issue by creating one UNIQUE INDEX covering ALL the columns you want to include in the ON CONFLICT clause, rather than one UNIQUE INDEX per column.
CREATE TABLE table_name (
    element_id UUID NOT NULL DEFAULT gen_random_uuid(),
    timestamp TIMESTAMP NOT NULL DEFAULT now(),
    col1 UUID NOT NULL,
    col2 TEXT NOT NULL,
    col3 TEXT NOT NULL,
    CONSTRAINT "primary" PRIMARY KEY (element_id),
    UNIQUE (col1, col2, col3)
);
which allows a query like:
INSERT INTO table_name (timestamp, col1, col2, col3)
VALUES ('timestamp', 'uuid', 'string', 'string')
ON CONFLICT (col1, col2, col3)
DO UPDATE SET timestamp = EXCLUDED.timestamp, col1 = EXCLUDED.col1, col2 = EXCLUDED.col2, col3 = EXCLUDED.col3;

How can I bulk insert rows only if a compound primary key doesn't already exist? [AWS Redshift]

In Amazon Redshift I am trying to do a bulk insert into a table from a temp table.
However, I only want to insert the values where the compound primary key does not already exist in the table, to avoid adding duplicates.
Below is the DDL of the table:
• clusters_typologies table (the table I want to insert data into)
create table if not exists clusters.clusters_typologies
(
    cluster_id BIGINT,
    typology_id BIGINT,
    semantic_id BIGINT,
    primary key (cluster_id, typology_id, semantic_id)
);
The temp table is created with the query below, and all fields are correctly populated afterwards.
CREATE TEMPORARY TABLE temporary (
    cluster_id bigint,
    typology_name varchar(100),
    typology_id bigint,
    semantic_name varchar(100),
    semantic_id bigint
);
Now when I try to insert with this query:
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id,semantic_id)
(SELECT temp.cluster_id, temp.typology_id, temp.semantic_id
FROM temporary temp
WHERE NOT EXISTS(SELECT 1
FROM clusters_typologies
where cluster_id = temp.cluster_id
and typology_id = temp.typology_id
and semantic_id = temp.semantic_id));
I get this error and cannot figure out how to make it work:
Invalid operation: This type of correlated subquery pattern is not supported due to internal error;
Does anyone know how to fix this, or what the best way is to insert into a table with a compound key while avoiding duplicates?
Thanks.
To upsert, follow this guide:
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-upsert.html
and note that certain types of correlated subquery are not allowed in Redshift; that is the cause of your error. See:
https://docs.aws.amazon.com/redshift/latest/dg/r_correlated_subqueries.html
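For reference, the pattern in that guide is a staging-table merge in a single transaction; roughly like this for the tables in this question (an untested sketch; note it deletes and re-inserts matching keys, which coincides with insert-if-missing here only because the primary key spans every column):
BEGIN;
DELETE FROM clusters.clusters_typologies
USING temporary temp
WHERE clusters_typologies.cluster_id = temp.cluster_id
  AND clusters_typologies.typology_id = temp.typology_id
  AND clusters_typologies.semantic_id = temp.semantic_id;
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id, semantic_id)
SELECT cluster_id, typology_id, semantic_id FROM temporary;
COMMIT;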
After some attempts I figured out how to do the insert from a temp table while checking the compound primary key to avoid duplicates.
Basically, from the AWS documentation that Jon Scott sent, I understand that using the outer table in an inner select is not supported by Redshift.
I solved it using a left join and checking whether the joining columns are null.
Below is the query I use now.
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id, semantic_id)
(SELECT temp.cluster_id, temp.typology_id, temp.semantic_id
 FROM temporary temp
 LEFT JOIN clusters.clusters_typologies clu_typ ON temp.cluster_id = clu_typ.cluster_id AND
                                                   temp.typology_id = clu_typ.typology_id AND
                                                   temp.semantic_id = clu_typ.semantic_id
 WHERE clu_typ.cluster_id IS NULL
   AND clu_typ.typology_id IS NULL
   AND clu_typ.semantic_id IS NULL);
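One caveat worth adding: the LEFT JOIN guards against rows already in the target table, but not against duplicates inside the temp table itself. If those can occur, a SELECT DISTINCT covers that case too (a hedged variation on the query above):
INSERT INTO clusters.clusters_typologies (cluster_id, typology_id, semantic_id)
SELECT DISTINCT temp.cluster_id, temp.typology_id, temp.semantic_id
FROM temporary temp
LEFT JOIN clusters.clusters_typologies clu_typ ON temp.cluster_id = clu_typ.cluster_id
                                              AND temp.typology_id = clu_typ.typology_id
                                              AND temp.semantic_id = clu_typ.semantic_id
WHERE clu_typ.cluster_id IS NULL;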

Postgres: order of index

Does the order of an index (ASC or DESC) impact a SELECT with an ORDER BY in the opposite direction?
For example, suppose we have a table:
CREATE TABLE public."Comments"
(
    id integer NOT NULL DEFAULT nextval('"Comments_id_seq"'::regclass),
    child_count integer DEFAULT 0,
    comment text COLLATE pg_catalog."default",
    CONSTRAINT "Comments_pkey" PRIMARY KEY (id)
)
And an index on the child_count column:
CREATE INDEX child_count
ON public."Comments" USING btree
(child_count)
TABLESPACE pg_default;
(I.e., in ASC order by default.)
Then we run a statement:
SELECT id, child_count, comment
FROM public."Comments"
ORDER BY child_count DESC
OFFSET 0
LIMIT 100
Do we need to invert index direction?
You can adjust the ordering of a b-tree index by including the options ASC, DESC, NULLS FIRST, and/or NULLS LAST when creating the index; for example:
CREATE INDEX test2_info_nulls_low ON test2 (info NULLS FIRST);
CREATE INDEX test3_desc_index ON test3 (id DESC NULLS LAST);
via https://www.postgresql.org/docs/current/static/indexes-ordering.html
That said, you don't need to invert the direction for this query: PostgreSQL can scan a b-tree index backward, so the default ASC index already satisfies ORDER BY child_count DESC. Declared index ordering mainly matters for multicolumn indexes with mixed directions and for NULLS placement.
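If you do still want an index that matches the DESC ordering explicitly, a sketch for this table would be (the index name is mine; DESC defaults to NULLS FIRST, which is exactly what ORDER BY child_count DESC asks for):
CREATE INDEX child_count_desc
    ON public."Comments" USING btree
    (child_count DESC);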

postgresql simple select is slow

I have a table:
CREATE TABLE my_table
(
    id integer NOT NULL DEFAULT nextval('seq_my_table_id'::regclass),
    fk_id1 integer NOT NULL,
    fk_id2 smallint NOT NULL,
    name character varying(255) NOT NULL,
    description text,
    currency_name character varying(3) NOT NULL,
    created timestamp with time zone NOT NULL DEFAULT now(),
    updated timestamp with time zone NOT NULL DEFAULT now(),
    CONSTRAINT "PK_my_table_id" PRIMARY KEY (id),
    CONSTRAINT "FK_my_table_fk_id1" FOREIGN KEY (fk_id1)
        REFERENCES my_table2 (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED,
    CONSTRAINT "FK_my_table_fk_id2" FOREIGN KEY (fk_id2)
        REFERENCES my_table3 (id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED
)
WITH (
    OIDS=FALSE,
    autovacuum_enabled=true,
    autovacuum_vacuum_threshold=50,
    autovacuum_vacuum_scale_factor=0.2,
    autovacuum_analyze_threshold=50,
    autovacuum_analyze_scale_factor=0.1,
    autovacuum_vacuum_cost_delay=20,
    autovacuum_vacuum_cost_limit=200,
    autovacuum_freeze_min_age=50000000,
    autovacuum_freeze_max_age=200000000,
    autovacuum_freeze_table_age=150000000
);
ALTER TABLE my_table OWNER TO postgres;
CREATE INDEX my_table_fk_id1 ON my_table USING btree (fk_id1);
CREATE INDEX my_table_fk_id2 ON my_table USING btree (fk_id2);
Table record counts:
select count(id) from my_table;  -- 24061
select count(id) from my_table2; -- 24061
select count(id) from my_table3; -- 123
Execution time:
select * from my_table; -- ~17 sec
vacuum/analyze: no effect.
description: ~4000 characters in each row.
postgres.conf: standard settings.
Version: 9.1
Selecting all fields except description reduces execution time to ~1.5 sec.
How can I increase select speed with description included?
Update:
--explain analyze select * from my_table
"Seq Scan on my_table (cost=0.00..3425.79 rows=24079 width=1015) (actual time=0.019..17.238 rows=24079 loops=1)"
"Total runtime: 18.649 ms"
The question is how to make this fast. The issue is not on the server, since it takes 18 ms there. The simple solution is to select fewer columns, so that there is less to transfer over the network. My guess is that you have long descriptions on some rows. Leave that column off your select and try again.
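For example, based on the DDL above, either drop the wide column from the list or fetch only a prefix of it (left() is available from 9.1, the version in the question):
select id, fk_id1, fk_id2, name, currency_name, created, updated
from my_table; -- everything except description
select id, name, left(description, 100) as description_preview
from my_table; -- or just the first 100 characters of the text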

SELECT or INSERT a row in one command

I'm using PostgreSQL 9.0 and I have a table with just an artificial key (auto-incrementing sequence) and another unique key. (Yes, there is a reason for this table. :)) I want to look up an ID by the other key or, if it doesn't exist, insert it:
SELECT id
FROM mytable
WHERE other_key = 'SOMETHING'
Then, if no match:
INSERT INTO mytable (other_key)
VALUES ('SOMETHING')
RETURNING id
The question: is it possible to save a round-trip to the DB by doing both of these in one statement? I can insert the row if it doesn't exist like this:
INSERT INTO mytable (other_key)
SELECT 'SOMETHING'
WHERE NOT EXISTS (SELECT * FROM mytable WHERE other_key = 'SOMETHING')
RETURNING id
... but that doesn't give the ID of an existing row. Any ideas? There is a unique constraint on other_key, if that helps.
Have you tried to union it?
Edit - this requires Postgres 9.1:
create table mytable (id serial primary key, other_key varchar not null unique);
WITH new_row AS (
INSERT INTO mytable (other_key)
SELECT 'SOMETHING'
WHERE NOT EXISTS (SELECT * FROM mytable WHERE other_key = 'SOMETHING')
RETURNING *
)
SELECT * FROM new_row
UNION
SELECT * FROM mytable WHERE other_key = 'SOMETHING';
results in:
 id | other_key
----+-----------
  1 | SOMETHING
(1 row)
No, there is no special SQL syntax that allows you to do select-or-insert. You can do what Ilia mentions and create a sproc, which means it will not do a round trip from the client to the server, but it will still result in two queries (three actually, if you count the call to the sproc itself).
Using 9.5, I successfully tried this, based on Denis de Bernardy's answer:
• only 1 parameter
• no UNION
• no stored procedure
• atomic, thus no concurrency problems (I think...)
The Query:
WITH neworexisting AS (
    INSERT INTO mytable(other_key) VALUES('hello 1')
    ON CONFLICT(other_key) DO UPDATE SET existed=true -- need some update to return something
    RETURNING *
)
SELECT * FROM neworexisting
first call:
id|other_key|created |existed|
--|---------|-------------------|-------|
6|hello 1 |2019-09-11 11:39:29|false |
second call:
id|other_key|created |existed|
--|---------|-------------------|-------|
6|hello 1 |2019-09-11 11:39:29|true |
First create your table ;-)
CREATE TABLE mytable (
    id serial NOT NULL,
    other_key text NOT NULL,
    created timestamptz NOT NULL DEFAULT now(),
    existed bool NOT NULL DEFAULT false,
    CONSTRAINT mytable_pk PRIMARY KEY (id),
    CONSTRAINT mytable_uniq UNIQUE (other_key) -- needed for ON CONFLICT
);
you can use a stored procedure:
IF NOT EXISTS (SELECT 1 FROM mytable WHERE other_key = 'SOMETHING') THEN
    INSERT INTO mytable (other_key) VALUES ('SOMETHING');
END IF;
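That fragment is PL/pgSQL, so it has to live inside a function or DO block; a minimal runnable sketch (the function name and shape are mine, and the SELECT-then-INSERT is still racy under concurrency):
CREATE OR REPLACE FUNCTION select_or_insert(p_key varchar) RETURNS integer AS $$
DECLARE
    v_id integer;
BEGIN
    SELECT id INTO v_id FROM mytable WHERE other_key = p_key;
    IF v_id IS NULL THEN
        -- a concurrent insert between the SELECT and this INSERT can still
        -- raise a unique_violation on other_key
        INSERT INTO mytable (other_key) VALUES (p_key) RETURNING id INTO v_id;
    END IF;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;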
I have an alternative to Denis' answer that I think is less database-intensive, although a bit more complex:
create table mytable (id serial primary key, other_key varchar not null unique);
WITH table_sel AS (
SELECT id
FROM mytable
WHERE other_key = 'test'
UNION
SELECT NULL AS id
ORDER BY id NULLS LAST
LIMIT 1
), table_ins AS (
INSERT INTO mytable (id, other_key)
SELECT
COALESCE(id, NEXTVAL('mytable_id_seq'::REGCLASS)),
'test'
FROM table_sel
ON CONFLICT (id) DO NOTHING
RETURNING id
)
SELECT * FROM table_ins
UNION ALL
SELECT * FROM table_sel
WHERE id IS NOT NULL;
In the table_sel CTE I look for the right row. If I don't find it, I make sure table_sel still returns at least one row, via the UNION with SELECT NULL.
In the table_ins CTE I try to insert the same row I was looking for earlier. COALESCE(id, NEXTVAL('mytable_id_seq'::REGCLASS)) says: if id is defined, use it; if id is null, increment the sequence and use the new value to insert a row. The ON CONFLICT clause assures
that if id is already in mytable, nothing is inserted.
At the end I put everything together with a UNION between table_ins and table_sel, so that I'm sure to get my sweet id value and both CTEs are executed.
This query needs to search for the value of other_key only once, and it is a "search this value" lookup, not a "check that this value does not exist in the table" one, which is very heavy; in Denis' alternative, other_key is used in both kinds of searches. In my query the "check that a value does not exist" happens only on id, which is an integer primary key and, by construction, fast.
Minor tweak a decade late to Denis's excellent answer:
-- Create the table with a unique constraint
CREATE TABLE mytable (
id serial PRIMARY KEY
, other_key varchar NOT NULL UNIQUE
);
WITH new_row AS (
-- Only insert when we don't find anything, avoiding a table lock if
-- possible.
INSERT INTO mytable ( other_key )
SELECT 'SOMETHING'
WHERE NOT EXISTS (
SELECT *
FROM mytable
WHERE other_key = 'SOMETHING'
)
RETURNING *
)
(
-- This comes first in the UNION ALL since it'll almost certainly be
-- in the query cache. Marginally slower for the insert case, but also
-- marginally faster for the much more common read-only case.
SELECT *
FROM mytable
WHERE other_key = 'SOMETHING'
-- Don't check for duplicates to be removed
UNION ALL
-- If we reach this point in iteration, we needed to do the INSERT and
-- lock after all.
SELECT *
FROM new_row
) LIMIT 1 -- Just return whatever comes first in the results and allow
-- the query engine to cut processing short for the INSERT
-- calculation.
;
The UNION ALL tells the planner it doesn't have to collect results for de-duplication. The LIMIT 1 at the end allows the planner to short-circuit further processing/iteration once it knows there's an answer available.
NOTE: There is a race condition present here and in the original answer. If the entry does not already exist, two concurrent executions can each pass the NOT EXISTS check, and the loser's INSERT will then fail with a unique constraint violation. The error can be suppressed with ON CONFLICT DO NOTHING, but the query will return an empty set instead of the new row. This is a difficult problem because getting that info from another transaction would violate the I in ACID.
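A common mitigation (my sketch, not from the answers above) is to retry the statement from the client whenever it returns zero rows; under READ COMMITTED the retry takes a fresh snapshot and will see the row the competing transaction inserted:
WITH new_row AS (
    INSERT INTO mytable (other_key) VALUES ('SOMETHING')
    ON CONFLICT (other_key) DO NOTHING
    RETURNING id, other_key
)
SELECT id, other_key FROM new_row
UNION ALL
SELECT id, other_key FROM mytable WHERE other_key = 'SOMETHING'
LIMIT 1;
-- If this returns zero rows (the rare race where the conflicting row is not
-- yet visible to our snapshot), simply run the same statement again.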