I'm looking to dynamically insert a set of columns from one table into another in PostgreSQL. What I'd like to do is read in a 'checklist' of column headings (the columns which exist in table 1, the storage table), and if they also exist in the export table (table 2), insert them all at once from table 2 into table 1. Table 2 will be variable in its columns, though: once its data is imported I'll drop it and import new data with a potentially different column structure, so the insert has to be driven by the column names.
e.g.
Table 1. - The storage table
ID NAME YEAR LITH_AGE PROV_AGE SIO2 TIO2 CAO MGO COMMENTS
1 John 1998 2000 3000 65 10 5 5 comment1
2 Mark 2005 2444 3444 63 8 2 3 comment2
3 Luke 2001 1000 1500 77 10 2 2 comment3
Table 2. - The export table
ID NAME MG# METHOD SIO2 TIO2 CAO MGO
1 Amy 4 Method1 65 10 5 5
2 Poe 3 Method2 63 8 2 3
3 Ben 2 Method3 77 10 2 2
As you can see the export table may include columns which do not exist in the storage table, so these would be ignored.
I want to insert all of these columns at once, because I've found that if I do it column by column, each insert adds a new set of rows (maybe someone can solve this issue instead? Currently I've written a function to check whether a column name exists in table 2 and, if it does, insert it, but as said this extends the rows of the table every time and leaves the remaining columns NULL).
The INSERT line from my function:
EXECUTE format('INSERT INTO %s (%s) (SELECT %s::%s FROM %s);',_tbl_import, _col,_col,_type,_tbl_export);
As a rough 'code example' of what I'm after:
EXECUTE format('INSERT INTO table1 (%1$s) SELECT %1$s FROM table2', columns);
where 'columns' would be some variable holding the columns that exist in the export table and need to go into the storage table. This will change on every run, since table 2 is different each time.
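For illustration, I imagine the 'columns' string could be built with something like this (an untested sketch, assuming both tables sit in the public schema and that an id column should be skipped):
SELECT string_agg(c1.column_name, ',')
FROM information_schema.columns c1
JOIN information_schema.columns c2
  ON  c2.table_schema = 'public'
  AND c2.table_name   = 'table2'
  AND c2.column_name  = c1.column_name
WHERE c1.table_schema = 'public'
  AND c1.table_name   = 'table1'
  AND c1.column_name <> 'id';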
This would ideally update Table 1 as:
ID NAME YEAR LITH_AGE PROV_AGE SIO2 TIO2 CAO MGO COMMENTS
1 John 1998 2000 3000 65 10 5 5 comment1
2 Mark 2005 2444 3444 63 8 2 3 comment2
3 Luke 2001 1000 1500 77 10 2 2 comment3
4 Amy NULL NULL NULL 65 10 5 5 NULL
5 Poe NULL NULL NULL 63 8 2 3 NULL
6 Ben NULL NULL NULL 77 10 2 2 NULL
UPDATED answer
My original answer came out later and did not meet the requirement, but I was asked to post an alternative to the information_schema solution, so here it is.
I made two versions of the solution:
V1 - is equivalent to the already given example using information_schema. But that solution relies on table1's column DEFAULTs: if a table1 column that does not exist in table2 does not have DEFAULT NULL, it will be filled with whatever its default is (illustrated below).
V2 - is modified to force NULL whenever the two tables' columns mismatch, and does not inherit table1's own DEFAULTs.
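For example (just an illustration, not part of the setup above): if table1 carried a column default such as
ALTER TABLE table1 ALTER COLUMN comments SET DEFAULT 'n/a';
then V1 would fill comments with 'n/a' for every row copied from table2, because that column is simply left out of the INSERT column list, while V2 selects an explicit NULL for each table1 column that table2 does not have.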
Version1:
CREATE OR REPLACE FUNCTION insert_into_table1_v1()
RETURNS void AS $main$
DECLARE
    columns text;
BEGIN
    SELECT string_agg(c1.attname, ',')
      INTO columns
      FROM pg_attribute c1
      JOIN pg_attribute c2
        ON c1.attrelid = 'public.table1'::regclass
       AND c2.attrelid = 'public.table2'::regclass
       AND c1.attnum > 0
       AND c2.attnum > 0
       AND NOT c1.attisdropped
       AND NOT c2.attisdropped
       AND c1.attname = c2.attname
       AND c1.attname <> 'id';

    -- The following is the actual result of the query above, based on the given data examples:
    -- -[ RECORD 1 ]----------------------
    -- string_agg | name,sio2,tio2,cao,mgo

    EXECUTE format(
        'INSERT INTO table1 ( %1$s )
         SELECT %1$s
         FROM table2
        ',
        columns
    );
END;
$main$ LANGUAGE plpgsql;
Version2:
CREATE OR REPLACE FUNCTION insert_into_table1_v2()
RETURNS void AS $main$
DECLARE
    t1_cols text;
    t2_cols text;
BEGIN
    SELECT string_agg( c1.attname, ',' ),
           string_agg( COALESCE( c2.attname, 'NULL' ), ',' )
      INTO t1_cols,
           t2_cols
      FROM pg_attribute c1
      LEFT JOIN pg_attribute c2
        ON c2.attrelid = 'public.table2'::regclass
       AND c2.attnum > 0
       AND NOT c2.attisdropped
       AND c1.attname = c2.attname
     WHERE c1.attrelid = 'public.table1'::regclass
       AND c1.attnum > 0
       AND NOT c1.attisdropped
       AND c1.attname <> 'id';

    -- The following is the actual result of the query above, based on the given data examples:
    --                         t1_cols                         |                  t2_cols
    -- --------------------------------------------------------+--------------------------------------------
    --  name,year,lith_age,prov_age,sio2,tio2,cao,mgo,comments | name,NULL,NULL,NULL,sio2,tio2,cao,mgo,NULL
    -- (1 row)

    EXECUTE format(
        'INSERT INTO table1 ( %s )
         SELECT %s
         FROM table2
        ',
        t1_cols,
        t2_cols
    );
END;
$main$ LANGUAGE plpgsql;
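Either version can then be called after each fresh import of table2, for example:
SELECT insert_into_table1_v2();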
Also, here is a link to the documentation on the pg_attribute columns, in case something is unclear: https://www.postgresql.org/docs/current/static/catalog-pg-attribute.html
Hopefully this helps :)
Related
Before, I had to solve something similar; below is the pivot-and-flatten approach I used for that solution.
I want to do the same thing in the example further down, but it is slightly different because there are no ranks.
In my previous example, the table looked like this:
LocationID Code Rank
1 123 1
1 124 2
1 138 3
2 999 1
2 888 2
2 938 3
And I was able to use this function to properly get my rows in a single column.
-- Check if tables exist, delete if they do so that you can start fresh.
IF OBJECT_ID('tempdb.dbo.#tbl_Location_Taxonomy_Pivot_Table', 'U') IS NOT NULL
DROP TABLE #tbl_Location_Taxonomy_Pivot_Table;
IF OBJECT_ID('tbl_Location_Taxonomy_NPPES_Flattened', 'U') IS NOT NULL
DROP TABLE tbl_Location_Taxonomy_NPPES_Flattened;
-- Pivot the original table so that each Taxonomy_Rank value becomes its own column
SELECT *
INTO #tbl_Location_Taxonomy_Pivot_Table
FROM [MOAD].[dbo].[tbl_Location_Taxonomy_NPPES] tax
PIVOT (MAX(tax.tbl_lkp_Taxonomy_Seq)
FOR tax.Taxonomy_Rank in ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15])) AS pvt
-- ORDER BY Location_ID
-- Flatten the tables.
SELECT Location_ID
,max(piv.[1]) as Tax_Seq_1
,max(piv.[2]) as Tax_Seq_2
,max(piv.[3]) as Tax_Seq_3
,max(piv.[4]) as Tax_Seq_4
,max(piv.[5]) as Tax_Seq_5
,max(piv.[6]) as Tax_Seq_6
,max(piv.[7]) as Tax_Seq_7
,max(piv.[8]) as Tax_Seq_8
,max(piv.[9]) as Tax_Seq_9
,max(piv.[10]) as Tax_Seq_10
,max(piv.[11]) as Tax_Seq_11
,max(piv.[12]) as Tax_Seq_12
,max(piv.[13]) as Tax_Seq_13
,max(piv.[14]) as Tax_Seq_14
,max(piv.[15]) as Tax_Seq_15
-- JOIN HERE
INTO tbl_Location_Taxonomy_NPPES_Flattened
FROM #tbl_Location_Taxonomy_Pivot_Table piv
GROUP BY Location_ID
So, then here is the data I would like to work with in this example.
LocationID Foreign Key
2 2
2 670
2 2902
2 5389
3 3
3 722
3 2905
3 5561
I have used PIVOT on data like this before, but the difference was that it also had a rank. Is there a way to get my foreign keys to show up in the following format using a pivot?
locationID FK1 FK2 FK3 FK4
2 2 670 2902 5389
3 3 722 2905 5561
Another way I'm looking to solve this is like this:
Another way I could look at doing this is I have the values in:
this form as well:
LocationID Address_Seq
2 670, 5389, 2902, 2,
3 722, 5561, 2905, 3
etc
Is there any way I can get this into the same kind of format?
ID Col1 Col2 Col3 Col4
2 670 5389 2902 2
This, after adding a rank column (see the sketch following the query) and reversing the order, should give you what you require:
SELECT locationid, [4] col1, [3] col2, [2] col3, [1] col4
FROM
(
SELECT locationid, foreignkey,rank from #Pivot_Table ----- temp table with a rank column
) x
PIVOT (MAX(x.foreignkey)
FOR x.rank in ([4],[3],[2],[1]) ) pvt
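The rank column itself is not shown above; one way it could be generated is with ROW_NUMBER (a sketch only; the source table name #Source and its column names are assumptions, since the raw table was not named). Ordering the keys descending means that, after the [4]...[1] reversal in the pivot, the smallest key lands in col1:
SELECT LocationID,
       ForeignKey AS foreignkey,
       ROW_NUMBER() OVER (PARTITION BY LocationID ORDER BY ForeignKey DESC) AS rank
INTO #Pivot_Table
FROM #Source;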
I am seeking to alter a table's content based on information from another table, using a stored procedure. To make my point (and to work around my rusty English) I created the following simplified example.
I have a table with fragment amounts of the form
SELECT * FROM [dbo].[obtained_fragments] ->
fragment amount
22 42
76 7
101 31
128 4
177 22
212 6
and a table that lists all possible ways these fragments can be combined into other fragments.
SELECT * FROM [dbo].[possible_combinations] ->
fragment consists_of_f1 f1_amount_needed consists_of_f2 f2_amount_needed
1001 128 1 22 3
1004 151 1 101 12
1012 128 1 177 6
1047 212 1 76 4
My aim is to alter the first table so that all possible fragment combinations are performed, leading to
SELECT * FROM [dbo].[obtained_fragments] ->
fragment amount
22 30
76 3
101 31
177 22
212 5
1001 4
1047 1
In words, combined fragments are added to the table based on [dbo].[possible_combinations], and the amount of needed fragments is reduced. Depleted fragments are removed from the table.
How do I achieve this fragment transformation in an easy way? I started writing a WHILE loop that checks whether sufficient fragments are available, inside another loop iterating through the fragment numbers. However, I am unable to come up with a working amount check and am beginning to wonder whether this is even possible in T-SQL this way.
The code doesn't have to be super efficient since both tables will always be smaller than 200 rows.
It is important to note that it doesn't matter which combinations are created.
It might come in handy that [f1_amount_needed] always has a value of 1.
UPDATE
Using the solution from iamdave, which works perfectly fine as long as I don't touch it, I receive the following error message:
Column name or number of supplied values does not match table definition.
I barely changed anything, really. Is there a chance that using existing tables with more than the necessary columns, instead of declaring table variables (as iamdave did), makes this difference?
DECLARE @t TABLE(Binding_ID int, Exists_of_Binding_ID_2 int, Exists_of_Pieces_2 int, Binding1 int, Binding2 int);
WHILE 1=1
BEGIN
    DELETE @t
    INSERT INTO @t
    SELECT TOP 1
         k.Binding_ID
        ,k.Exists_of_Binding_ID_2
        ,k.Exists_of_Pieces_2
        ,g1.mat_Binding_ID AS Binding1
        ,g2.mat_Binding_ID AS Binding2
    FROM [dbo].[vwCombiBinding] AS k
    JOIN [leer].[sandbox5] AS g1
        ON k.Exists_of_Binding_ID_1 = g1.mat_Binding_ID AND g1.Amount >= 1
    JOIN [leer].[sandbox5] AS g2
        ON k.Exists_of_Binding_ID_2 = g2.mat_Binding_ID AND g2.Amount >= k.Exists_of_Pieces_2
    ORDER BY k.Binding_ID
    IF (SELECT COUNT(1) FROM @t) = 1
    BEGIN
        UPDATE g
        SET Amount = g.Amount + 1
        FROM [leer].[sandbox5] AS g
        JOIN @t AS t
            ON g.mat_Binding_ID = t.Binding_ID
        INSERT INTO [leer].[sandbox5]
        SELECT
             t.Binding_ID
            ,1
        FROM @t AS t
        WHERE NOT EXISTS (SELECT NULL FROM [leer].[sandbox5] AS g WHERE g.mat_Binding_ID = t.Binding_ID);
        UPDATE g
        SET Amount = g.Amount - 1
        FROM [leer].[sandbox5] AS g
        JOIN @t AS t
            ON g.mat_Binding_ID = t.Binding1
        UPDATE g
        SET Amount = g.Amount - t.Exists_of_Pieces_2
        FROM [leer].[sandbox5] AS g
        JOIN @t AS t
            ON g.mat_Binding_ID = t.Binding2
    END
    ELSE
        BREAK
END
SELECT * FROM [leer].[sandbox5]
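For reference, that error message is what SQL Server raises when an INSERT without an explicit column list supplies fewer values than the target table has columns, so the extra columns in [leer].[sandbox5] are indeed the likely cause. A sketch of the fix (the target column names mat_Binding_ID and Amount are assumed from the code above):
INSERT INTO [leer].[sandbox5] (mat_Binding_ID, Amount)
SELECT
     t.Binding_ID
    ,1
FROM @t AS t
WHERE NOT EXISTS (SELECT NULL FROM [leer].[sandbox5] AS g WHERE g.mat_Binding_ID = t.Binding_ID);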
You can do this with a WHILE loop that contains several statements to handle your iterative data updates. Because each iteration's changes depend on a re-assessment of the data, it has to be done in a loop of some kind:
declare @f table(fragment int,amount int);
insert into @f values (22 ,42),(76 ,7 ),(101,31),(128,4 ),(177,22),(212,6 );

declare @c table(fragment int,consists_of_f1 int,f1_amount_needed int,consists_of_f2 int,f2_amount_needed int);
insert into @c values (1001,128,1,22,3),(1004,151,1,101,12),(1012,128,1,177,6),(1047,212,1,76,4);

declare @t table(fragment int,consists_of_f2 int,f2_amount_needed int,fragment1 int,fragment2 int);

while 1 = 1
begin
    -- Clear out staging area
    delete @t;

    -- Populate with the latest possible combination
    insert into @t
    select top 1 c.fragment
                ,c.consists_of_f2
                ,c.f2_amount_needed
                ,f1.fragment as fragment1
                ,f2.fragment as fragment2
    from @c as c
        join @f as f1
            on c.consists_of_f1 = f1.fragment
                and f1.amount >= 1
        join @f as f2
            on c.consists_of_f2 = f2.fragment
                and f2.amount >= c.f2_amount_needed
    order by c.fragment;

    -- Update fragments table if a new combination can be made
    if (select count(1) from @t) = 1
    begin
        -- Update if additional fragment
        update f
        set amount = f.amount + 1
        from @f as f
            join @t as t
                on f.fragment = t.fragment;

        -- Insert if a new fragment
        insert into @f
        select t.fragment
              ,1
        from @t as t
        where not exists(select null
                         from @f as f
                         where f.fragment = t.fragment
                         );

        -- Update fragment1 amounts
        update f
        set amount = f.amount - 1
        from @f as f
            join @t as t
                on f.fragment = t.fragment1;

        -- Update fragment2 amounts
        update f
        set amount = f.amount - t.f2_amount_needed
        from @f as f
            join @t as t
                on f.fragment = t.fragment2;
    end
    else -- If no new combinations possible, break the loop
        break
end;

select *
from @f;
Output:
+----------+--------+
| fragment | amount |
+----------+--------+
| 22 | 30 |
| 76 | 3 |
| 101 | 31 |
| 128 | 0 |
| 177 | 22 |
| 212 | 5 |
| 1001 | 4 |
| 1047 | 1 |
+----------+--------+
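The question also asks for depleted fragments (such as 128 above, now at amount 0) to be removed; if that matters, one extra statement can be appended just before the final select, for example:
delete from @f where amount = 0;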
I have a table like
create table test(employee integer NOT NULL, code character varying(200), number integer)
and I want to auto-increment the column 'number' on every inserted record:
insert into test(employee, code) values(17,'bangalore')
insert into test(employee, code) values(17,'bangalore')
insert into test(employee, code) values(17,'mumbai')
I want a result like
employee code number
17 bangalore 1
17 bangalore 2
17 bangalore 3
17 mumbai 1
17 mumbai 2
17 bangalore 4
17 mumbai 3
18 bangalore 1
18 bangalore 2
18 mumbai 1
18 mumbai 2
For a batch upload of data, the approach below may be useful.
Create a temporary table test2:
create table test2(employee integer NOT NULL, code character varying(200))
insert into test2(employee, code) values(17,'bangalore')
insert into test2(employee, code) values(17,'bangalore')
insert into test2(employee, code) values(17,'mumbai')
Then insert into the actual table along with the incremental number:
insert into test(employee, code, number)
select employee, code, row_number() over (partition by employee, code) from test2
You could include an ORDER BY clause as well, e.g. on a primary key column or another column like created_date (full statement sketched below):
over (partition by employee, code order by created_date)
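Putting it together (created_date is only an assumed extra column on test2 here):
insert into test(employee, code, number)
select employee, code, row_number() over (partition by employee, code order by created_date)
from test2
Alternatively, the number can be computed inline for each single-row insert: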
create table test (employee integer NOT NULL, code character varying(200), number integer)
insert into test(employee, code, number ) values(17,'bangalore',(select coalesce(max(number) + 1,1) from test where employee = 17 and code = 'bangalore'));
insert into test(employee, code, number ) values(17,'bangalore',(select coalesce(max(number) + 1,1) from test where employee = 17 and code = 'bangalore'));
insert into test(employee, code, number ) values(17,'mumbai',(select coalesce(max(number) + 1,1) from test where employee = 17 and code = 'mumbai'));
I've been banging my head against this one for a while. I'm constructing a database on Oracle 11g, and am attempting to insert a record into a "registry" table whenever a record is created on a "data product" table. The registry table needs to auto-increment the product_id, and then that product_id is used as a foreign key on the data product table. Here is my trigger code:
CREATE OR REPLACE TRIGGER "TR_CAMERA_DP_DPR_CREATE"
BEFORE INSERT ON "DD1"."CAMERA_DP"
FOR EACH ROW
BEGIN
:new.product_id := ID_SEQ.NEXTVAL;
insert into dd1.dp_registry
( product_id,
fs_location,
parent_group_id,
product_name,
shortdes,
createdate,
revision )
values
( :new.product_id,
'placeholder',
0,
'_image',
'description placeholder',
sysdate,
0
);
END;
So, ideally, an insert into dd1.camera_dp without providing a product_id will first insert a record into dd1.dp_registry, and then use that incremented product_id as the key field for dd1.camera_dp.
The insert statement works when run with a hard-coded value for :new.product_id, and ID_SEQ.NEXTVAL is also working properly. I get the feeling I'm missing something obvious.
Thanks!
Your code works perfectly well for me. If you're getting an error, there is something different between the code that you are actually running and the code that you posted.
SQL> create table CAMERA_DP(
2 product_id number,
3 name varchar2(10)
4 );
Table created.
SQL> create sequence id_seq;
Sequence created.
SQL> ed
Wrote file afiedt.buf
1 create table dp_registry
2 ( product_id number,
3 fs_location varchar2(100),
4 parent_group_id number,
5 product_name varchar2(100),
6 shortdes varchar2(100),
7 createdate date,
8* revision number)
SQL> /
Table created.
SQL> ed
Wrote file afiedt.buf
1 CREATE OR REPLACE TRIGGER "TR_CAMERA_DP_DPR_CREATE"
2 BEFORE INSERT ON "CAMERA_DP"
3 FOR EACH ROW
4 BEGIN
5 :new.product_id := ID_SEQ.NEXTVAL;
6 insert into dp_registry
7 ( product_id,
8 fs_location,
9 parent_group_id,
10 product_name,
11 shortdes,
12 createdate,
13 revision )
14 values
15 ( :new.product_id,
16 'placeholder',
17 0,
18 '_image',
19 'description placeholder',
20 sysdate,
21 0
22 );
23* END;
24 /
Trigger created.
SQL> insert into camera_dp( name ) values( 'Foo' );
1 row created.
SQL> ed
Wrote file afiedt.buf
1* select product_id from dp_registry
SQL> /
PRODUCT_ID
----------
1
If you're getting an error that a table doesn't exist, the common culprits would be
You actually have a typo in the name of your table
You don't have permission to insert into the table. Note that if, in your actual code, not everything is in the same schema, my guess would be that the user that owns the trigger has privileges to INSERT into the DP_REGISTRY table via a role rather than via a direct grant. Since privileges granted through a role are not available in a definer's rights stored procedure block, that would explain why you can do something at the command line but not in PL/SQL.
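If that turns out to be the case, a direct grant resolves it (sketch only; trigger_owner is a placeholder for whichever schema actually owns the trigger):
GRANT INSERT ON dd1.dp_registry TO trigger_owner;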
How can I add a repeating series of length 4 to a table like this:
Source table:
id
1
2
3
4
5
6
7
8
Results table:
id series
1 1
2 2
3 3
4 4
5 1
6 2
7 3
8 4
I'm using PostgreSQL 9.1.
If your IDs are really consecutive and gapless, you can just use (id - 1) % 4 + 1. But I imagine that in reality your IDs aren't so orderly, and if they're generated from a SEQUENCE you shouldn't rely on them being gapless.
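For that orderly case the whole thing is just (using the same Table1 as in the fiddle below):
SELECT id, (id - 1) % 4 + 1 AS series FROM Table1 ORDER BY id;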
You can do it properly with row_number(), as shown here: http://sqlfiddle.com/#!12/22767/5
SELECT
id,
(row_number() OVER (ORDER BY id) - 1) % 4 + 1
FROM Table1
ORDER BY 1;
It's also possible to do this using generate_series as a set-returning function in the SELECT list, but that's a PostgreSQL extension, whereas the above is standard SQL that'll work in any modern database except MySQL, which doesn't support window functions.
If you want to actually add a column to the table it gets a bit more complicated. I'm not really sure why you'd want to do that, but it's possible using UPDATE ... FROM:
BEGIN;
ALTER TABLE table1 ADD COLUMN col2 INTEGER;
WITH gen_num(id,n) AS (
SELECT
id,
(row_number() OVER (ORDER BY id) - 1) % 4 + 1
FROM Table1
ORDER BY 1)
UPDATE table1 SET col2 = n
FROM gen_num
WHERE gen_num.id = table1.id;
COMMIT;