I created a unique index for a materialized view as follows:
create unique index if not exists matview_key on
matview (some_group_id, some_description);
I can't tell if it has been created. How do I see the index?
Thank you!
Two ways to verify index creation:
-- In psql
\d matview
-- Using SQL
select *
from pg_indexes
where indexname = 'matview_key'
and tablename = 'matview';
More information on pg_indexes.
As has been commented, if the command finishes successfully and you don't get an error message, the index was created. Possible caveat: while the transaction is not committed, nobody else can see it (except that the name is now reserved), and it still might get rolled back. Check in a separate transaction to be sure.
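For example, a minimal sketch with two psql sessions (the index name is the one from the question):
-- Session 1:
begin;
create unique index if not exists matview_key on
matview (some_group_id, some_description);
-- Session 2, while session 1 is still open: returns no row yet
select indexname from pg_indexes where indexname = 'matview_key';
-- Session 1:
commit;  -- now session 2 sees the index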
To be absolutely sure:
SELECT pg_get_indexdef(oid)
FROM pg_catalog.pg_class
WHERE relname = 'matview_key'
AND relkind = 'i'
-- AND relnamespace = 'public'::regnamespace -- optional, to make sure of the schema, too
This way you see whether an index of the given name exists, and also its exact definition to rule out a different index with the same name. Pure SQL, works from any client. (There is nothing special about an index on materialized views.)
Also filter for the schema to be absolutely sure. In your case that would be the "default" schema (a.k.a. "current" schema), since you did not specify one at creation time. See:
How does the search_path influence identifier resolution and the "current schema"
Related:
Create index if it does not exist
How to check if a table exists in a given schema
In psql:
\di public.matview_key
This finds only indexes. Again, the schema qualification is optional, to narrow it down.
Progress Reporting
If creating an index takes a long time, you can look up its progress in pg_stat_progress_create_index (available since Postgres 12):
SELECT * FROM pg_stat_progress_create_index
-- WHERE relid = 'public.matview'::regclass -- optionally narrow down
An alternative to looking into pg_indexes is pg_matviews (for materialized views only):
select *
from pg_matviews
where matviewname = 'my_matview_name';
A trigger seems to be ignoring the WHEN condition in my definition, but I'm unsure why. I'm running the following:
create trigger trigger_update_candidate_location
after update on candidates
for each row
when (
OLD.address1 is distinct from NEW.address1
or
OLD.address2 is distinct from NEW.address2
or
OLD.city is distinct from NEW.city
or
OLD.state is distinct from NEW.state
or
OLD.zip is distinct from NEW.zip
or
OLD.country is distinct from NEW.country
)
execute procedure entities.tf_update_candidate_location();
But when I check back in on it, I get the following:
-- auto-generated definition
create trigger trigger_update_candidate_location
after update
on candidates
for each row
execute procedure tf_update_candidate_location();
This is problematic because the procedure I call ends up doing an update on the same table for different columns (lat/lng). Since the WHEN condition is ignored, this creates an infinite loop.
My intention is to watch for address changes and do a lookup on another table to get lat/lng values.
Postgresql version: 10.6
IDE: DataGrip 2018.1.3
How exactly do you create and "check back"? With DataGrip?
The WHEN condition was added with Postgres 9.0. Some old (or poor) clients may not be up to date. To be sure, check in psql with:
SELECT pg_get_triggerdef(oid, true)
FROM pg_trigger
WHERE tgrelid = 'candidates'::regclass -- schema-qualify name to be sure
AND NOT tgisinternal;
Any actual WHEN qualification is stored in internal format in pg_trigger.tgqual, btw. Details in the manual here.
Also what's your current search_path and what's the schema of table candidates?
It stands out that the table candidates is unqualified, while the trigger function entities.tf_update_candidate_location() has a schema-qualification ... You are not confusing tables of the same name in different DB schemas, are you?
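A quick sketch to check both (candidates is the table name from the question; adjust as needed):
SHOW search_path;

SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'candidates';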
Aside, you can simplify with this shorter, equivalent syntax:
create trigger trigger_update_candidate_location
after update on candidates -- schema-qualify??
for each row
when (
(OLD.address1, OLD.address2, OLD.city, OLD.state, OLD.zip, OLD.country)
IS DISTINCT FROM
(NEW.address1, NEW.address2, NEW.city, NEW.state, NEW.zip, NEW.country)
)
execute procedure entities.tf_update_candidate_location();
Unfortunately, that's an issue in DataGrip. Please follow the ticket to be notified when it's fixed.
https://youtrack.jetbrains.com/issue/DBE-7247
For example, the following is my query:
WITH stkpos as (
select * from mytbl
),
updt as (
update stkpos set field=(select sum(fieldn) from stkpos)
)
select * from stkpos
ERROR: relation "stkpos" does not exist
Unlike MS-SQL and some other DBs, PostgreSQL's CTE terms are not treated like a view. They're more like an implicit temp table - they get materialized, the planner can't push filters down into them or pull filters up out of them, etc.
One consequence of this is that you can't update them, because they're a copy of the original data, not just a view of it. You'll need to find another way to do what you want.
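For instance, if the intent of the query above is to write the sum back into the base table, a data-modifying statement against mytbl itself might look like this (a sketch, assuming field and fieldn are columns of mytbl):
with total as (
    select sum(fieldn) as s from mytbl
)
update mytbl
set field = total.s
from total;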
If you want help with that, you'll need to post a new question with a clear and self-contained example (with create table statements, etc.) showing a cut-down version of your real problem - enough to actually let us understand what you're trying to accomplish and why.
I am trying to implement a simple table queue in a DB2 database. What I need is to select and delete a row in the table at once, so that multiple clients will not get the same row from the queue twice. I was looking for similar questions, but they describe solutions for other databases or quite complicated approaches. I only need to select and delete the row at once.
UPDATE: I found a DB2 clause on the web that looks exactly like what I need - the select from delete. Example:
SELECT * FROM OLD TABLE (DELETE FROM example WHERE example_id = 1)
but I am wondering whether this statement is atomic, i.e. whether two concurrent requests can get the same result or delete the same row.
Something like this:
SELECT COL1, COL2, ... FROM TABLE WHERE TABLE_ID = value
WITH RR USE AND KEEP EXCLUSIVE LOCKS;
DELETE FROM TABLE WHERE TABLE_ID = value;
COMMIT;
If you're on DB2 for z/OS (Mainframe DB2) since version 9.1, or Linux/Unix/Windows (since at least 9.5, search for data-change-table-reference), you can use a SELECT FROM DELETE statement:
SELECT *
FROM OLD TABLE
(DELETE FROM TAB1
WHERE COL1 = 'asdf');
That would give you all the rows that were deleted.
Edit: Oops, just saw your edit about using this type of statement. As he said in the comment, whether two separate applications can get the same row depends on your isolation level.
Some SQL servers have a feature where INSERT is skipped if it would violate a primary/unique key constraint. For instance, MySQL has INSERT IGNORE.
What's the best way to emulate INSERT IGNORE and ON DUPLICATE KEY UPDATE with PostgreSQL?
With PostgreSQL 9.5, this is now native functionality (like MySQL has had for several years):
INSERT ... ON CONFLICT DO NOTHING/UPDATE ("UPSERT")
9.5 brings support for "UPSERT" operations.
INSERT is extended to accept an ON CONFLICT DO UPDATE/IGNORE clause. This clause specifies an alternative action to take in the event of a would-be duplicate violation.
...
Further example of new syntax:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username)
DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;
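The DO NOTHING variant works the same way; for illustration, with the same table:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username) DO NOTHING;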
Edit: in case you missed warren's answer, PG9.5 now has this natively; time to upgrade!
Building on Bill Karwin's answer, to spell out what a rule based approach would look like (transferring from another schema in the same DB, and with a multi-column primary key):
CREATE RULE "my_table_on_duplicate_ignore" AS ON INSERT TO "my_table"
WHERE EXISTS(SELECT 1 FROM my_table
WHERE (pk_col_1, pk_col_2)=(NEW.pk_col_1, NEW.pk_col_2))
DO INSTEAD NOTHING;
INSERT INTO my_table SELECT * FROM another_schema.my_table WHERE some_cond;
DROP RULE "my_table_on_duplicate_ignore" ON "my_table";
Note: The rule applies to all INSERT operations until the rule is dropped, so not quite ad hoc.
For those of you that have Postgres 9.5 or higher, the new ON CONFLICT DO NOTHING syntax should work:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT field_one, field_two, field_three
FROM source_table
ON CONFLICT (field_one) DO NOTHING;
For those of us who have an earlier version, this left join will work instead:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT source_table.field_one, source_table.field_two, source_table.field_three
FROM source_table
LEFT JOIN target_table ON source_table.field_one = target_table.field_one
WHERE target_table.field_one IS NULL;
Try to do an UPDATE. If it doesn't modify any row that means it didn't exist, so do an insert. Obviously, you do this inside a transaction.
You can of course wrap this in a function if you don't want to put the extra code on the client side. You also need a loop to handle the very rare race condition in that approach.
There's an example of this in the documentation: http://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html, example 40-2 right at the bottom.
That's usually the easiest way. You can do some magic with rules, but it's likely going to be a lot messier. I'd recommend the wrap-in-function approach over that any day.
This works for single-row, or few-row, values. If you're dealing with large amounts of rows, for example from a subquery, you're best off splitting it into two queries, one for INSERT and one for UPDATE (as an appropriate join/subselect, of course - no need to write your main filter twice).
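For reference, a minimal sketch of the function-plus-loop approach, closely following the merge_db example from the documentation linked above (table and column names are the documentation's, not yours):
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key;
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO db(a,b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;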
To get the insert ignore logic you can do something like below. I found simply inserting from a select statement of literal values worked best, then you can mask out the duplicate keys with a NOT EXISTS clause. To get the update on duplicate logic I suspect a pl/pgsql loop would be necessary.
INSERT INTO manager.vin_manufacturer
(SELECT * FROM( VALUES
('935',' Citroën Brazil','Citroën'),
('ABC', 'Toyota', 'Toyota'),
('ZOM',' OM','OM')
) as tmp (vin_manufacturer_id, manufacturer_desc, make_desc)
WHERE NOT EXISTS (
--ignore anything that has already been inserted
SELECT 1 FROM manager.vin_manufacturer m where m.vin_manufacturer_id = tmp.vin_manufacturer_id)
)
INSERT INTO mytable(col1,col2)
SELECT 'val1','val2'
WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE col1='val1')
As #hanmari mentioned in his comment, when inserting into a Postgres table, ON CONFLICT (..) DO NOTHING is the best code to use to avoid inserting duplicate data:
query = "INSERT INTO db_table_name(column_name)
VALUES(%s) ON CONFLICT (column_name) DO NOTHING;"
The ON CONFLICT line of code will allow the insert statement to still insert rows of data. The query-and-values code is an example of inserting data from an Excel sheet into a Postgres DB table.
I have constraints added to a Postgres table that I use to make sure the ID field is unique. Instead of running a delete on rows of data that are the same, I add a line of SQL code that renumbers the ID column, starting at 1.
Example (the sequence name is illustrative; ALTER SEQUENCE is the statement that restarts a serial column's sequence):
q = 'ALTER SEQUENCE my_table_id_seq RESTART WITH 1;'
If my data has an ID field, I do not use it as the primary/serial ID; instead, I create a separate ID column and set it to serial.
I hope this information is helpful to everyone.
*I have no college degree in software development/coding. Everything I know in coding, I study on my own.
Looks like PostgreSQL supports a schema object called a rule.
http://www.postgresql.org/docs/current/static/rules-update.html
You could create a rule ON INSERT for a given table, making it do NOTHING if a row exists with the given primary key value, or else making it do an UPDATE instead of the INSERT if a row exists with the given primary key value.
I haven't tried this myself, so I can't speak from experience or offer an example.
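Still, a rough sketch of what the do-nothing variant might look like (table and column names are hypothetical; see the rule-based answer above for a worked version):
CREATE RULE insert_ignore AS ON INSERT TO my_table
WHERE EXISTS (SELECT 1 FROM my_table WHERE id = NEW.id)
DO INSTEAD NOTHING;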
This solution avoids using rules (note that a BEGIN ... EXCEPTION ... END block only works inside a PL/pgSQL function or DO block):
BEGIN
INSERT INTO tableA (unique_column,c2,c3) VALUES (1,2,3);
EXCEPTION
WHEN unique_violation THEN
UPDATE tableA SET c2 = 2, c3 = 3 WHERE unique_column = 1;
END;
but it has a performance drawback (see PostgreSQL.org):
A block containing an EXCEPTION clause is significantly more expensive
to enter and exit than a block without one. Therefore, don't use
EXCEPTION without need.
For bulk loads, you can always delete the row before the insert. A deletion of a row that doesn't exist doesn't cause an error, so it is safely skipped.
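For example, a sketch with illustrative table names (my_table is the target, incoming holds the rows to load):
BEGIN;
DELETE FROM my_table WHERE id IN (SELECT id FROM incoming);
INSERT INTO my_table SELECT * FROM incoming;
COMMIT;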
For data import scripts, to replace "IF NOT EXISTS", in a way, there's a slightly awkward formulation that nevertheless works:
DO
$do$
BEGIN
PERFORM id
FROM whatever_table;
IF NOT FOUND THEN
-- INSERT stuff
END IF;
END
$do$;
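The same check can also be written with an EXISTS test (a sketch; the WHERE clause and the INSERT are whatever fits your data):
DO
$do$
BEGIN
   IF NOT EXISTS (SELECT 1 FROM whatever_table WHERE id = 1) THEN
      INSERT INTO whatever_table (id) VALUES (1);  -- your INSERT here
   END IF;
END
$do$;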