I have a model in Postgres, say Student, which has the primary key id serial, which means there is an accompanying sequence student_id_seq that keeps track of the sequence information for Student.
If I change the table name of student to, say, StudentYear, will there be any problem? Would I have to do anything to make sure renaming the table is not going to crash anything?
You can rename the table or the sequence however you want, and everything will keep working.
The relationship between a table column and its sequence is stored in two places:
In pg_attrdef, where the default values of attributes are stored.
In pg_depend, where dependencies between objects are tracked.
All objects and columns are internally referenced by their object IDs or attribute numbers, so renaming something is not a problem.
An example:
CREATE TABLE serialdemo (id serial);
SELECT oid FROM pg_class WHERE relname = 'serialdemo';
oid
-------
69427
(1 row)
SELECT attnum FROM pg_attribute WHERE attrelid = 69427 AND attname = 'id';
attnum
--------
1
(1 row)
The dependent objects:
SELECT classid::regclass, objid, deptype
FROM pg_depend
WHERE refobjid = 69427 AND refobjsubid = 1;
classid | objid | deptype
------------+-------+---------
pg_attrdef | 69430 | a
pg_class | 69425 | a
(2 rows)
One of these dependent objects is the column default definition:
SELECT adbin, pg_get_expr(adbin, adrelid) AS adsrc
FROM pg_attrdef WHERE oid = 69430;
-[ RECORD 1 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
adbin | {FUNCEXPR :funcid 480 :funcresulttype 23 :funcretset false :funcvariadic false :funcformat 2 :funccollid 0 :inputcollid 0 :args ({FUNCEXPR :funcid 1574 :funcresulttype 20 :funcretset false :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 0 :args ({CONST :consttype 2205 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location -1 :constvalue 4 [ 49 15 1 0 0 0 0 0 ]}) :location -1}) :location -1}
adsrc | nextval('serialdemo_id_seq'::regclass)
adsrc is just for the human eye. adbin will not change when the sequence is renamed, yet everything keeps working, because the sequence is referenced by its object ID rather than by its name.
The relevant reference to the sequence is :constvalue 4 [ 49 15 1 0 0 0 0 0 ]. This is a 4-byte unsigned integer, the object id (1 × 256² + 15 × 256 + 49 = 69425).
The other dependent object is the sequence:
SELECT relname, relkind FROM pg_class WHERE oid = 69425;
relname | relkind
-------------------+---------
serialdemo_id_seq | S
(1 row)
If I change the table name of student to, say, StudentYear, will there be any problem?
Nope. The column of the table is related to this sequence through its default, which, if I am not mistaken, looks like the following:
nextval('student_id_seq'::regclass)
So you don't need to worry about renaming tables or columns.
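As a quick sanity check, here is a minimal sketch using a hypothetical student table; renaming the table (and even the sequence) leaves the serial default working:
CREATE TABLE student (id serial PRIMARY KEY, name text);
INSERT INTO student (name) VALUES ('Ann');         -- uses student_id_seq
ALTER TABLE student RENAME TO student_year;        -- the sequence keeps its old name
INSERT INTO student_year (name) VALUES ('Bob');    -- default still calls nextval on the same sequence
ALTER SEQUENCE student_id_seq RENAME TO student_year_id_seq;  -- optional, purely cosmetic
INSERT INTO student_year (name) VALUES ('Cleo');   -- still works: the default stores the sequence's OID
SELECT id, name FROM student_year;                 -- rows 1, 2, 3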
How do I query PostgreSQL to check whether a constraint is valid or not?
CREATE TABLE emp (test_check int check ( test_check >1 and test_check < 0 ));
query the constraint:
select * from pg_constraint where conname = 'emp_test_check_check';
------------------------------------------------------------------------------------
oid | 24631
conname | emp_test_check_check
connamespace | 2200
contype | c
condeferrable | f
condeferred | f
convalidated | t
conrelid | 24628
contypid | 0
conindid | 0
conparentid | 0
confrelid | 0
confupdtype |
confdeltype |
confmatchtype |
conislocal | t
coninhcount | 0
connoinherit | f
conkey | {1}
confkey | [null]
conpfeqop | [null]
conppeqop | [null]
conffeqop | [null]
confdelsetcols | [null]
conexclop | [null]
conbin | {BOOLEXPR :boolop and :args ({OPEXPR :opno 521 :opfuncid 147 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 46} {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 58 :constvalue 4 [ 1 0 0 0 0 0 0 0 ]}) :location 57} {OPEXPR :opno 97 :opfuncid 66 :opresulttype 16 :opretset false :opcollid 0 :inputcollid 0 :args ({VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 64} {CONST :consttype 23 :consttypmod -1 :constcollid 0 :constlen 4 :constbyval true :constisnull false :location 77 :constvalue 4 [ 0 0 0 0 0 0 0 0 ]}) :location 75}) :location 60}
get the check definition:
select pgc.conname as constraint_name,
ccu.table_schema as table_schema,
ccu.table_name,
ccu.column_name,
pg_get_constraintdef(pgc.oid)
from pg_constraint pgc
join pg_namespace nsp on nsp.oid = pgc.connamespace
join pg_class cls on pgc.conrelid = cls.oid
left join information_schema.constraint_column_usage ccu
on pgc.conname = ccu.constraint_name
and nsp.nspname = ccu.constraint_schema
where contype ='c'
order by pgc.conname;
return:
-[ RECORD 1 ]--------+------------------------------------------------
constraint_name | emp_test_check_check
table_schema | public
table_name | emp
column_name | test_check
pg_get_constraintdef | CHECK (((test_check > 1) AND (test_check < 0)))
Similar question:
CREATE TABLE emp1 (test_check int check ( test_check >1 and test_check > 10 ));
Can PostgreSQL deduce that the above check constraint can be simplified to
CREATE TABLE emp1 (test_check int check ( test_check > 10 ));
and if it can, how do I query it?
Every Boolean expression is valid for a CHECK constraint; the manual calls it "a Boolean (truth-value) expression".
Postgres does not attempt to simplify your expression. It's your responsibility to provide sensible expressions. The expression is parsed and checked to be valid, then stored in an internal format (so every stored expression is valid), but it is not simplified. Not even the simplest cases like CHECK (true AND true).
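To see this for the emp1 example from the question, ask Postgres to deparse the stored constraint; it comes back unsimplified (a sketch, output abridged):
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'emp1'::regclass
AND contype = 'c';
-- emp1_test_check_check | CHECK (((test_check > 1) AND (test_check > 10)))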
You already have a working query to retrieve CHECK constraints, but I would use this faster query not involving the (bloated) information_schema:
SELECT c.conname AS constraint_name
, c.conrelid::regclass AS table_name -- schema-qualified where needed
, a.column_names
, pg_get_constraintdef(c.oid) AS constraint_definition
FROM pg_constraint c
LEFT JOIN LATERAL (
SELECT string_agg(attname, ', ') AS column_names
FROM pg_attribute a
WHERE a.attrelid = c.conrelid
AND a.attnum = ANY (c.conkey)
) a ON true
WHERE c.contype = 'c' -- only CHECK constraints
AND c.conrelid > 0 -- only table constraints (incl. "column constraints")
ORDER BY c.conname;
Aside:
It should be noted that a check constraint is satisfied if the check expression evaluates to true or the null value.
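A small illustration of that, using a hypothetical table name: even the contradictory check from the question accepts NULL, because the expression evaluates to the null value rather than false:
CREATE TABLE null_check_demo (v int CHECK (v > 1 AND v < 0));
INSERT INTO null_check_demo VALUES (NULL);  -- succeeds: the check evaluates to NULL
INSERT INTO null_check_demo VALUES (5);     -- fails: violates check constraint "null_check_demo_v_check"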
Consider two tables (table A and table B) with a many-to-many relationship, each containing a primary key and other attributes. To map this relation there's a third junction table (table C) containing the foreign keys for each side of the relation ( fk_tableA | fk_tableB ).
Table B contains duplicate rows (identical except for the pk), so I want to merge each group of duplicates into a single record, keeping one of the primary keys, just like so:
table B                  table B (after merging duplicates)
1 | Henry | 100.0        1 | Henry | 100.0
2 | Jessi | 97.0         2 | Jessi | 97.0
3 | Henry | 100.0        4 | Erica | 11.2
4 | Erica | 11.2
After merging these records, there may be foreign keys in table C (the junction table) pointing to primary keys in table B that no longer exist. My goal is to update them to point to the merged record:
Before merging:
table A           table B                 table C
id | att1         id | att1  | att2       fk_A | fk_b
-----------       -------------------     ------------
1  | ab123        1  | Henry | 100.0      1    | 1
2  | adawd        2  | Jessi | 97.0       2    | 3
3  | da3wf        3  | Henry | 100.0
                  4  | Erica | 11.2
On table C, 2 records from table B are referenced (1 and 3) which happen to be duplicated rows. My goal is to merge those into a single record (in table B) and update the foreign key in table C:
After merging:
table A           table B                 table C
id | att1         id | att1  | att2       fk_A | fk_b
-----------       -------------------     ------------
1  | ab123        1  | Henry | 100.0      1    | 1
2  | adawd        2  | Jessi | 97.0       2    | 1
3  | da3wf        4  | Erica | 11.2
- Note that id = 3 was merged/deleted from table B, and the corresponding foreign key in table C was updated to point to the merged record's id.
So my question is basically: how do I update a junction table when merging records of a table? I am currently using Postgres and working with millions of rows.
-- \i tmp.sql
CREATE TABLE persons
( id integer primary key
, name text
, weight decimal(4,1)
);
INSERT INTO persons(id, name, weight) VALUES
(1 ,'Henry', 100.0)
,(2 ,'Jessi', 97.0)
,(3 ,'Henry', 100.0)
,(4 ,'Erica', 11.2)
;
CREATE TABLE junctiontab
( fk_A integer NOT NULL
, p_id integer REFERENCES persons(id)
, PRIMARY KEY (fk_A,p_id)
);
INSERT INTO junctiontab(fk_A, p_id)VALUES (1 , 1 ),(2 , 3 );
-- find the ids of affected persons.
-- [for simplicity: put them in a temp table]
CREATE TEMP TABLE xlat AS
SELECT * FROM (
SELECT id AS wrong_id
,min(id) OVER (PARTITION BY name ORDER BY id) AS good_id
FROM persons p
) x
WHERE good_id <> wrong_id
;
--show it
SELECT * FROM xlat;
UPDATE junctiontab j
SET p_id = x.good_id
FROM xlat x
WHERE j.p_id = x.wrong_id
-- The good junction-entry *could* already exist...
AND NOT EXISTS (
SELECT * FROM junctiontab nx
WHERE nx.fk_A = j.fk_A
AND nx.p_id = x.good_id
)
;
DELETE FROM junctiontab d
-- if the good junction-entry already existed, we can delete the wrong one now.
WHERE EXISTS (
SELECT * FROM junctiontab g
JOIN xlat x ON g.p_id = x.good_id
AND d.p_id = x.wrong_id
WHERE g.fk_A = d.fk_A
)
;
--show it
SELECT * FROM junctiontab
;
-- Delete the wrong person-records
DELETE FROM persons p
WHERE EXISTS (
SELECT * FROM xlat x
WHERE p.id = x.wrong_id
);
--show it
SELECT * FROM persons p;
Result:
CREATE TABLE
INSERT 0 4
CREATE TABLE
INSERT 0 2
SELECT 1
wrong_id | good_id
----------+---------
3 | 1
(1 row)
UPDATE 1
DELETE 0
fk_a | p_id
------+------
1 | 1
2 | 1
(2 rows)
DELETE 1
id | name | weight
----+-------+--------
1 | Henry | 100.0
2 | Jessi | 97.0
4 | Erica | 11.2
(3 rows)
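One note for the scale mentioned in the question (millions of rows): the duplicates are defined by all attribute columns, not only name, so the PARTITION BY should list all of them, and an index on those columns speeds up building xlat. A sketch of both adjustments, replacing the xlat-building step above under the same hypothetical schema:
-- index the columns that define a duplicate
CREATE INDEX IF NOT EXISTS persons_dedup_idx ON persons (name, weight);
-- build the wrong_id -> good_id mapping over the full duplicate key
CREATE TEMP TABLE xlat AS
SELECT * FROM (
SELECT id AS wrong_id
,min(id) OVER (PARTITION BY name, weight) AS good_id
FROM persons
) x
WHERE good_id <> wrong_id
;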
I have a Users table.
It has id, username, A, and B as my columns.
I also have a Votes table.
A user can vote on A and B.
It has id, a user_id foreign key, an A_id foreign key, and a B_id foreign key.
I would like to query Users and have the tally of votes for A and votes for B included.
So let's say I find a User with the id of 1.
How would I get something like
{
"id": 1,
"display_name": "test",
"A" : 32,
"B" : 132
}
Assuming there are 32 rows for A and 132 rows for B in the Votes table.
simplified table:
t=# select * from users;
i | t
---+---
1 | A
1 | B
1 | B
2 | A
2 | A
2 | B
(6 rows)
query:
t=# select distinct
i
, sum(case when t='A' then count(1) filter (where t='A') else 0 end) over (partition by i) a
, sum(case when t='B' then count(1) filter (where t='B') else 0 end) over (partition by i) b
from users
group by i,t
order by i;
i | a | b
---+---+---
1 | 1 | 2
2 | 2 | 1
(2 rows)
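The same tallies can be written more simply with plain GROUP BY and FILTER, without the window functions (a sketch against the same simplified users table):
select i
, count(*) filter (where t = 'A') as a
, count(*) filter (where t = 'B') as b
from users
group by i
order by i;
This produces the same result as above, and it is easier to adapt to the original Users/Votes schema by joining Votes to Users and grouping by the user id.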
Sample Data:
customer  txn_date  tag
A         1-Jan-17  1
A         2-Jan-17  1
A         4-Jan-17  1
A         5-Jan-17  0
B         3-Jan-17  1
B         5-Jan-17  0
I need to fill in every missing txn_date in the date range 1-Jan-17 to 5-Jan-17 for each customer. The output should be:
customer  txn_date  tag
A         1-Jan-17  1
A         2-Jan-17  1
A         3-Jan-17  0  (inserted)
A         4-Jan-17  1
A         5-Jan-17  0
B         1-Jan-17  0  (inserted)
B         2-Jan-17  0  (inserted)
B         3-Jan-17  1
B         4-Jan-17  0  (inserted)
B         5-Jan-17  0
select      c.customer
           ,d.txn_date
           ,coalesce(t.tag,0) as tag
from       (select date_add(from_date, i) as txn_date
            from   (select date '2017-01-01' as from_date
                          ,date '2017-01-05' as to_date
                   ) p
                   lateral view
                   posexplode(split(space(datediff(p.to_date,p.from_date)),' ')) pe as i,x
           ) d
cross join (select distinct customer
            from   t
           ) c
left join  t
           on  t.customer = c.customer
           and t.txn_date = d.txn_date
;
c.customer  d.txn_date  tag
A           2017-01-01  1
A           2017-01-02  1
A           2017-01-03  0
A           2017-01-04  1
A           2017-01-05  0
B           2017-01-01  0
B           2017-01-02  0
B           2017-01-03  1
B           2017-01-04  0
B           2017-01-05  0
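The query above is Hive SQL (lateral view / posexplode). In PostgreSQL the same gap-filling can be done with generate_series; a sketch, assuming a table t(customer, txn_date date, tag int):
select c.customer
      ,d.txn_date::date as txn_date
      ,coalesce(t.tag, 0) as tag
from generate_series(date '2017-01-01', date '2017-01-05', interval '1 day') as d(txn_date)
cross join (select distinct customer from t) c
left join t
       on t.customer = c.customer
      and t.txn_date = d.txn_date::date
order by c.customer, d.txn_date;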
Just put the delta content, i.e. the missing data, in a file (input.txt) delimited with the same delimiter you specified when you created the table.
Then use the load data command to insert these records into the table.
load data local inpath '/tmp/input.txt' into table tablename;
Your data won't be in the order you mentioned; it will get appended at the end. You can restore the order by adding order by txn_date to the select query.
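For example, assuming the table was created with fields terminated by ',' and stores dates as yyyy-MM-dd (both assumptions), input.txt would hold just the missing rows:
A,2017-01-03,0
B,2017-01-01,0
B,2017-01-02,0
B,2017-01-04,0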
I'm looking to dynamically insert a set of columns from one table to another in PostgreSQL. What I think I'd like to do is read in a 'checklist' of column headings (those columns which exist in table 1, the storage table), and if they also exist in the export table (table 2), insert them all at once from table 2 into table 1. Table 2 will be variable in its columns, though: once imported I'll drop it and import new data with a potentially different column structure, so the import has to be driven by the column names.
e.g.
Table 1. - The storage table
ID  NAME  YEAR  LITH_AGE  PROV_AGE  SIO2  TIO2  CAO  MGO  COMMENTS
1   John  1998  2000      3000      65    10    5    5    comment1
2   Mark  2005  2444      3444      63    8     2    3    comment2
3   Luke  2001  1000      1500      77    10    2    2    comment3
Table 2. - The export table
ID  NAME  MG#  METHOD   SIO2  TIO2  CAO  MGO
1   Amy   4    Method1  65    10    5    5
2   Poe   3    Method2  63    8     2    3
3   Ben   2    Method3  77    10    2    2
As you can see the export table may include columns which do not exist in the storage table, so these would be ignored.
I want to insert all of these columns at once, because I've found that if I do it column by column, each insert extends the number of rows (maybe someone can solve this issue instead?). Currently I've written a function that checks whether a column name exists in table 2 and, if it does, inserts it, but as said this extends the rows of the table every time and leaves the rest of the columns NULL.
The INSERT line from my function:
EXECUTE format('INSERT INTO %s (%s) (SELECT %s::%s FROM %s);',_tbl_import, _col,_col,_type,_tbl_export);
As a type of 'code example' for my question:
EXECUTE FORMAT('INSERT INTO table1 (%s) (SELECT (%s) FROM table2)',columns)
where 'columns' would be some variable denoting the columns that exist in the export table that need to go into the storage table. This will be variable as table 2 will be different every time.
This would ideally update Table 1 as:
ID  NAME  YEAR  LITH_AGE  PROV_AGE  SIO2  TIO2  CAO  MGO  COMMENTS
1   John  1998  2000      3000      65    10    5    5    comment1
2   Mark  2005  2444      3444      63    8     2    3    comment2
3   Luke  2001  1000      1500      77    10    2    2    comment3
4   Amy   NULL  NULL      NULL      65    10    5    5    NULL
5   Poe   NULL  NULL      NULL      63    8     2    3    NULL
6   Ben   NULL  NULL      NULL      77    10    2    2    NULL
UPDATED answer
As my original answer did not meet a requirement that came out later, I was asked to post an alternative to the information_schema solution, so here it is.
I made two versions of the solution:
V1 is equivalent to the already-given example using information_schema. But that solution relies on table1's column DEFAULTs. Meaning, if a table1 column that does not exist in table2 does not have DEFAULT NULL, it will be filled with whatever its default is.
V2 is modified to force NULL whenever the two tables' columns do not match, and does not inherit table1's own DEFAULTs.
Version1:
CREATE OR REPLACE FUNCTION insert_into_table1_v1()
RETURNS void AS $main$
DECLARE
columns text;
BEGIN
SELECT string_agg(c1.attname, ',')
INTO columns
FROM pg_attribute c1
JOIN pg_attribute c2
ON c1.attrelid = 'public.table1'::regclass
AND c2.attrelid = 'public.table2'::regclass
AND c1.attnum > 0
AND c2.attnum > 0
AND NOT c1.attisdropped
AND NOT c2.attisdropped
AND c1.attname = c2.attname
AND c1.attname <> 'id';
-- Following is the actual result of query above, based on given data examples:
-- -[ RECORD 1 ]----------------------
-- string_agg | name,si02,ti02,cao,mgo
EXECUTE format(
' INSERT INTO table1 ( %1$s )
SELECT %1$s
FROM table2
',
columns
);
END;
$main$ LANGUAGE plpgsql;
Version2:
CREATE OR REPLACE FUNCTION insert_into_table1_v2()
RETURNS void AS $main$
DECLARE
t1_cols text;
t2_cols text;
BEGIN
SELECT string_agg( c1.attname, ',' ),
string_agg( COALESCE( c2.attname, 'NULL' ), ',' )
INTO t1_cols,
t2_cols
FROM pg_attribute c1
LEFT JOIN pg_attribute c2
ON c2.attrelid = 'public.table2'::regclass
AND c2.attnum > 0
AND NOT c2.attisdropped
AND c1.attname = c2.attname
WHERE c1.attrelid = 'public.table1'::regclass
AND c1.attnum > 0
AND NOT c1.attisdropped
AND c1.attname <> 'id';
-- Following is the actual result of query above, based on given data examples:
-- t1_cols | t2_cols
-- --------------------------------------------------------+--------------------------------------------
-- name,year,lith_age,prov_age,si02,ti02,cao,mgo,comments | name,NULL,NULL,NULL,si02,ti02,cao,mgo,NULL
-- (1 row)
EXECUTE format(
' INSERT INTO table1 ( %s )
SELECT %s
FROM table2
',
t1_cols,
t2_cols
);
END;
$main$ LANGUAGE plpgsql;
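Either function is then called directly, after which table1 holds the copied rows; for example:
SELECT insert_into_table1_v2();  -- or insert_into_table1_v1()
SELECT * FROM table1 ORDER BY id;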
Also link to documentation about pg_attribute table columns if something is unclear: https://www.postgresql.org/docs/current/static/catalog-pg-attribute.html
Hopefully this helps :)