Hello, I am trying to use xmltable in a function to update some data in another table; the input I get is a string.
This is a test program; the final goal is that the insert is built from any XML after some data adjustments.
The XML looks like this:
<header>
<data>
<line>
<field1>some data in f1</field1>
<field2>other data f2</field2>
<field3>this data contains numers 012323</field3>
<field4>and the last data</field4>
</line>
</data>
</header>
I am taking the following steps:
create a table from the input string cast to xml:
create table if not exists test_xml_table as select input_string::xml as string_xml;
do some data adjustments
the final step is the insert:
insert into test_tab( field1, field2, field3, field4 )
select xt.field1, xt.field2, xt.field3, field4 from test_xml_table
cross join xmltable( '/data/line' passing string_xml
columns field1 text path 'field1', field2 text path 'field2',
field3 text path 'field3', field4 text path 'field4' ) as xt;
The problem is that if the table test_xml_table doesn't exist, the function doesn't create it (I still didn't see it after the create table command). I tried a workaround: create the table first and fill it with the XML data, but now I don't know what to put after the PASSING keyword. There is no error, just no data is inserted. I will be grateful for help.
The whole code is below:
create or replace function test_function(
input_string character varying )
RETURNS void
LANGUAGE plpgsql
as $BODY$
begin
create table test_xml_table as select input_string::xml as string_xml;
insert into test_tab( field1, field2, field3, field4 )
select xt.field1, xt.field2, xt.field3, field4 from test_xml_table
cross join xmltable( '/data/line' passing string_xml
columns field1 text path 'field1', field2 text path 'field2',
field3 text path 'field3', field4 text path 'field4' ) as xt;
end;
$BODY$;
The call:
select test_function( '<header>
<data>
<line>
<field1>some data in f1</field1>
<field2>other data f2</field2>
<field3>this data contains numers 012323</field3>
<field4>and the last data</field4>
</line>
</data>
</header>' )
I tried to use the Postgres xmltable function to insert data into a table. I am expecting that the data from the function input, which is of type character varying, gets inserted.
The cross join is not needed.
The row XPath expression should be '/header/data/line', not just '/data/line'.
create or replace function test_function( input_string character varying )
RETURNS void
LANGUAGE plpgsql
as $BODY$
begin
create table test_xml_table as select input_string::xml as string_xml;
insert into test_tab( field1, field2, field3, field4 )
select xt.field1, xt.field2, xt.field3, xt.field4
from test_xml_table, xmltable( '/header/data/line' passing string_xml
columns
field1 text path 'field1',
field2 text path 'field2',
field3 text path 'field3',
field4 text path 'field4'
) as xt;
end;
$BODY$;
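As a side note, the intermediate table is not strictly required: the passed-in string can be cast and handed to xmltable directly. A minimal sketch under that assumption (test_function_direct is a made-up name, test_tab is the target table from the question):

create or replace function test_function_direct( input_string character varying )
RETURNS void
LANGUAGE plpgsql
as $BODY$
begin
  -- cast the parameter to xml and pass it straight to xmltable,
  -- so no helper table has to be created inside the function
  insert into test_tab( field1, field2, field3, field4 )
  select xt.field1, xt.field2, xt.field3, xt.field4
  from xmltable( '/header/data/line' passing input_string::xml
                 columns
                   field1 text path 'field1',
                   field2 text path 'field2',
                   field3 text path 'field3',
                   field4 text path 'field4'
               ) as xt;
end;
$BODY$;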
I have a table:
TABLE_A
id_table_a
field1
field2
field3
field4
field5
I need to move the data to 3 tables, 1 parent with 2 children:
TABLE_B
id_table_b
id_table_a
field1
field2
TABLE_C
id_table_c
id_table_b
field3
field4
TABLE_D
id_table_d
id_table_b
field5
We're talking about millions of records. What would be the correct and most effective way to do this?
I'm completely new to PostgreSQL and I've come up with this after reading the documentation:
INSERT INTO table_b (id_table_a, field1, field2) SELECT id_table_a FROM table_a, SELECT field1 FROM table_a, SELECT field2 FROM table_a;
INSERT INTO table_c (id_table_b, field3, field4) SELECT id_table_b FROM table_b, SELECT field3 FROM table_a WHERE table_b.id_table_a = table_a.id_table_a, SELECT field4 FROM table_a WHERE table_b.id_table_a = table_a.id_table_a;
INSERT INTO table_d (id_table_d, field5) SELECT id_table_c FROM table_c, SELECT field5 FROM table_a WHERE table_b.id_table_a = table_a.id_table_a;
Would this do what I need or am I missing something? Thank you.
This will not work:
INSERT INTO table_b (id_table_a, field1, field2) SELECT id_table_a FROM table_a, SELECT field1 FROM table_a, SELECT field2 FROM table_a;
because the query SELECT id_table_a FROM table_a will (or can) return more than 1 value.
You need to write it like:
INSERT INTO table_b (id_table_a, field1, field2)
SELECT id_table_a, field1, field2 FROM table_a;
Maybe sub-optimal, but a straightforward PL/pgSQL DO block will help. Please note the RETURNING ... INTO clause. Assuming that id_table_b, id_table_c and id_table_d are auto-generated integers:
DO language plpgsql
$$
declare
    r record;
    var_id_table_b integer;
begin
    for r in select * from table_a loop
        insert into table_b (id_table_a, field1, field2)
        values (r.id_table_a, r.field1, r.field2)
        RETURNING id_table_b INTO var_id_table_b;

        insert into table_c (id_table_b, field3, field4)
        values (var_id_table_b, r.field3, r.field4);

        insert into table_d (id_table_b, field5)
        values (var_id_table_b, r.field5);
    end loop;
end;
$$;
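If id_table_a is unique in table_a, the same migration can also be done set-based with data-modifying CTEs, avoiding the row-by-row loop. A sketch under that assumption (id_table_b is still auto-generated):

with inserted_b as (
    -- insert the parent rows and keep the generated keys together
    -- with the original id_table_a, so the children can be matched back
    insert into table_b (id_table_a, field1, field2)
    select id_table_a, field1, field2 from table_a
    returning id_table_b, id_table_a
), inserted_c as (
    insert into table_c (id_table_b, field3, field4)
    select b.id_table_b, a.field3, a.field4
    from inserted_b b
    join table_a a on a.id_table_a = b.id_table_a
)
insert into table_d (id_table_b, field5)
select b.id_table_b, a.field5
from inserted_b b
join table_a a on a.id_table_a = b.id_table_a;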
In my code:
SELECT X.DEP_ID
FROM (SELECT XMLPARSE (DOCUMENT '<root><DEP_ID>1000000004</DEP_ID><DEP_ID>1000000005</DEP_ID></root>') AS ELEMENT_VALUE
FROM SYSIBM.SYSDUMMY1) AS A,
XMLTABLE (
'$d/root'
PASSING Element_value AS "d"
COLUMNS
DEP_ID VARCHAR (10) PATH 'DEP_ID'
) AS X;
I need the result to be:
DEP_ID
1000000004
1000000005
If there is only a single DEP_ID in the XML, it works. But when multiple values are returned, it shows an error.
How can I get the output above in DB2?
The row-xquery-expression-constant is wrong. Try this: make the row expression iterate over each DEP_ID element so that XMLTABLE produces one row per element, and use PATH '.' to take each element's own text:
SELECT X.DEP_ID
FROM
(
SELECT XMLPARSE (DOCUMENT '<root><DEP_ID>1000000004</DEP_ID><DEP_ID>1000000005</DEP_ID></root>') AS ELEMENT_VALUE
FROM SYSIBM.SYSDUMMY1
) AS A
, XMLTABLE
(
'$d/root/DEP_ID' PASSING Element_value AS "d"
COLUMNS
DEP_ID VARCHAR (10) PATH '.'
) AS X;
I have a list of values:
(56957,85697,56325,45698,21367,56397,14758,39656)
and a 'template' row in a table.
I want to do this:
for value in valuelist:
{
insert into table1 (field1, field2, field3, field4)
select value1, value2, value3, (value)
from table1
where ID = (ID of template row)
}
I know how I would do this in code, in C# for instance, but I'm not sure how to 'loop' this while passing a new value into the insert statement. (I know that code makes no sense; I'm just trying to convey what I'm trying to accomplish.)
There is no need to loop here; SQL is a set-based language and you apply your operations to entire sets of data all at once, as opposed to looping through row by row.
Insert statements can come from either an explicit list of values or from the result of a regular select statement, for example:
insert into table1(col1, col2)
select col3
,col4
from table2;
There is nothing stopping you from selecting your data from the same place you are inserting into, which will duplicate all your data:
insert into table1(col1, col2)
select col1
,col2
from table1;
If you want to edit one of these column values, say by incrementing the value currently held, you simply apply this logic to your select statement and make sure the resulting dataset matches your target table in number of columns and data types:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1;
Optionally, if you only want to do this for a subset of those values, just add a standard where clause:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1
where col1 = <your value>;
Now if this isn't enough for you to work it out by yourself, you can join your dataset to your values list to get a version of the data to be inserted for each value in that list. Because you want each row to join to each value, you can use a cross join:
declare @v table(value int);
insert into @v values(56957),(85697),(56325),(45698),(21367),(56397),(14758),(39656);
insert into table1(col1, col2, value)
select t.col1
,t.col2
,v.value
from table1 as t
cross join @v as v;
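Tying that back to the original template-row idea, and reusing the @v list declared above, a sketch that inserts one new row per value, copying field1 to field3 from the template row (the ID column name and the @templateId value are assumptions):

declare @templateId int = 1;   -- assumed: the ID of your template row

-- one new row per value in @v, with field4 taken from the list
insert into table1(field1, field2, field3, field4)
select t.field1
      ,t.field2
      ,t.field3
      ,v.value
from table1 as t
cross join @v as v
where t.ID = @templateId;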
I have a table with an XML type column. This column contains a dynamic list of attributes that may be different between records.
I am trying to GROUP BY COUNT over these attributes without having to go through the table separately for each attribute.
For example, one record could have attributes A, B and C and the other would have B, C, D then, when I do the GROUP BY COUNT I would get A = 1, B = 2, C = 2 and D = 1.
Is there any straightforward way to do this?
EDIT in reply to Andrew's answer:
Because my knowledge of this construct is superficial at best, I had to fiddle with it to get it to do what I want. In my actual code I needed to group by the TimeRange, as well as select only some attributes depending on their name. I am pasting the actual query below:
WITH attributes AS (
SELECT
Timestamp,
N.a.value('@name[1]', 'nvarchar(max)') AS AttributeName,
N.a.value('(.)[1]', 'nvarchar(max)') AS AttributeValue
FROM MyTable
CROSS APPLY AttributesXml.nodes('/Attributes/Attribute') AS N(a)
)
SELECT Datepart(dy, Timestamp), AttributeValue, COUNT(AttributeValue)
FROM attributes
WHERE AttributeName IN ('AttributeA', 'AttributeB')
GROUP BY Datepart(dy, Timestamp), AttributeValue
As a side-note: Is there any way to reduce this further?
WITH attributes AS (
SELECT a.value('(.)[1]', 'nvarchar(max)') AS attribute
FROM YourTable
CROSS APPLY YourXMLColumn.nodes('//path/to/attributes') AS N(a)
)
SELECT attribute, COUNT(attribute)
FROM attributes
GROUP BY attribute
CROSS APPLY is like being able to JOIN the XML as a table. The WITH is needed because you can't use XML methods in a GROUP BY clause.
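For reference, a self-contained version of the same pattern against made-up sample data (the XML shape follows the query in the edit above), which reproduces the A = 1, B = 2, C = 2, D = 1 counts:

-- sample rows with different attribute sets per record
declare @t table (AttributesXml xml);
insert into @t values
 ('<Attributes><Attribute name="A">x</Attribute><Attribute name="B">x</Attribute><Attribute name="C">x</Attribute></Attributes>'),
 ('<Attributes><Attribute name="B">x</Attribute><Attribute name="C">x</Attribute><Attribute name="D">x</Attribute></Attributes>');

with attributes as (
    select N.a.value('@name', 'nvarchar(max)') as AttributeName
    from @t as t
    cross apply t.AttributesXml.nodes('/Attributes/Attribute') as N(a)
)
select AttributeName, count(*) as AttributeCount
from attributes
group by AttributeName;
-- returns A = 1, B = 2, C = 2, D = 1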
Here is a way to get the attribute data into a form you can easily work with while reducing the number of passes over the main table.
--create test data
declare @tmp table (
field1 varchar(20),
field2 varchar(20),
field3 varchar(20))
insert into @tmp (field1, field2, field3)
values ('A', 'B', 'C'),
('B', 'C', 'D')
--convert the individual fields from separate columns to one column
declare @table table(
field varchar(20))
insert into @table (field)
select field1 from @tmp
union all
select field2 from @tmp
union all
select field3 from @tmp
--run the group by and get the count
select field, count(*)
from @table
group by field
I have a table like this:
create table table1 (field1 int,
field2 int default 5557,
field3 int default 1337,
field4 int default 1337)
I want to insert a row which has the default values for field2 and field4.
I've tried insert into table1 values (5,null,10,null) but it doesn't work and ISNULL(field2,default) doesn't work either.
How can I tell the database to use the default value for the column when I insert a row?
Best practice is to list your columns so you're independent of table changes (new columns, column order, etc.):
insert into table1 (field1, field3) values (5,10)
However, if you don't want to do this, use the DEFAULT keyword
insert into table1 values (5, DEFAULT, 10, DEFAULT)
Just don't include the columns that you want to use the default value for in your insert statement. For instance:
INSERT INTO table1 (field1, field3) VALUES (5, 10);
...will take the default values for field2 and field4, and assign 5 to field1 and 10 to field3.
This works if all the columns have associated defaults and one does not want to specify the column names:
insert into your_table
default values
Try it like this
INSERT INTO table1 (field1, field3) VALUES (5,10)
Then field2 and field4 should have default values.
I had a case where I had a very simple table, and I basically just wanted an extra row with just the default values. Not sure if there is a prettier way of doing it, but here's one way:
This sets every column in the new row to its default value:
INSERT INTO your_table VALUES ()
Note: This is extra useful for MySQL where INSERT INTO your_table DEFAULT VALUES does not work.
If your columns should not contain NULL values, you need to define the columns as NOT NULL as well; otherwise an explicitly passed-in NULL will be stored instead of the default without producing an error.
If you don't pass in any value to these fields (which requires you to specify the fields that you do want to use), the defaults will be used:
INSERT INTO
table1 (field1, field3)
VALUES (5,10)
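A small sketch of that NULL-versus-default distinction (the table is made up for illustration): omitting the column applies the default, while passing NULL explicitly does not fall back to it.

create table t_defaults (
    field1 int,
    field2 int not null default 5557
);

insert into t_defaults (field1) values (5);                 -- field2 gets the default 5557
insert into t_defaults (field1, field2) values (6, null);   -- fails: NULL violates NOT NULL, the default is not used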
You can write it this way:
BEGIN TRANSACTION
GO
ALTER TABLE Table_name ADD
column_name decimal(18, 2) NOT NULL CONSTRAINT Constant_name DEFAULT 0
GO
ALTER TABLE Table_name SET (LOCK_ESCALATION = TABLE)
GO
COMMIT
To insert the default values you should omit the columns, something like this:
Insert into Table (Field2) values(5)
All other fields will have NULL or their default values, if defined.
CREATE TABLE #dum (id int identity(1,1) primary key, def int NOT NULL default(5), name varchar(25))
-- this works
INSERT #dum (def, name) VALUES (DEFAULT, 'jeff')
SELECT * FROM #dum;
DECLARE @some int
-- this *doesn't* work and I think it should
INSERT #dum (def, name)
VALUES (ISNULL(@some, DEFAULT), 'george')
SELECT * FROM #dum;
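Since DEFAULT is only allowed as a bare value in the VALUES list, it cannot be wrapped in ISNULL or COALESCE. One workaround sketch is to repeat the column's known default value explicitly (5 here):

-- workaround: fall back to the known default value by hand
INSERT #dum (def, name)
VALUES (COALESCE(@some, 5), 'george');
SELECT * FROM #dum;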
CREATE PROC SP_EMPLOYEE -- by using a @TYPE parameter and CASE in a stored procedure
(@TYPE INT)
AS
BEGIN
IF @TYPE=1
BEGIN
SELECT DESIGID,DESIGNAME FROM GP_DESIGNATION
END
IF @TYPE=2
BEGIN
SELECT ID,NAME,DESIGNAME,
case D.ISACTIVE when 'Y' then 'ISACTIVE' when 'N' then 'INACTIVE' else 'not' end as ACTIVE
FROM GP_EMPLOYEEDETAILS ED
JOIN GP_DESIGNATION D ON ED.DESIGNATION=D.DESIGID
END
END