How to append sed hold space to later match? - sed

I have a file with contents:
Create view xyz1 as Select
field1,
field2,
field3,
field4

Create view xyz2 as Select
field1,
field2,
field3,
field4
I am thinking I can use the sed hold space to capture the 'Create view' line and then print it at the next empty line (i.e. after field4)? I could not quite find this scenario on the web. Is it possible?
So it would look like this:
Create view xyz1 as Select
field1,
field2,
field3,
field4
Create view xyz1 as Select
Create view xyz2 as Select
field1,
field2,
field3,
field4
Create view xyz2 as Select
thanks

sed may not be the best solution here; awk makes it straightforward:
awk '/Create/ {s=$0} /^$/ {$0=s} 1; END {print s}' file
Each Create line is saved in s, each empty line is replaced with s, the bare 1 prints every (possibly modified) line, and the END block prints s once more after the final record.
Create view xyz1 as Select
field1,
field2,
field3,
field4
Create view xyz1 as Select
Create view xyz2 as Select
field1,
field2,
field3,
field4
Create view xyz2 as Select
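If you do want to stay in sed, the hold space can mirror the awk logic above: `h` saves each Create line, `g` pastes it over the next empty line, and `$G` appends it after the last line. A sketch (tested with GNU sed):

```shell
# h  : copy each 'Create' line into the hold space
# g  : on an empty line, replace it with the held 'Create' line
# $G : on the last line, append the held 'Create' line after it
sed '/Create/h; /^$/g; $G' file
```

This produces the same output as the awk one-liner.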

Related

PostgreSQL xmltable as string for input to procedure problem

Hello, I am trying to use xmltable in a function to update some data in another table; the input I get is a string.
This is a test program; the final result should be an insert built from any XML after some data adjustment.
The XML looks like this:
<header>
<data>
<line>
<field1>some data in f1</field1>
<field2>other data f2</field2>
<field3>this data contains numers 012323</field3>
<field4>and the last data</field4>
</line>
</data>
</header>
I am taking the following steps:
create a table with the input string cast to XML:
create table if not exists test_xml_table as select input_string::xml as string_xml;
do some data adjustments
the final step is the insert:
insert into test_tab( field1, field2, field3, field4 )
select xt.field1, xt.field2, xt.field3, field4 from test_xml_table
cross join xmltable( '/data/line' passing string_xml
columns field1 text path 'field1', field2 text path 'field2',
field3 text path 'field3', field4 text path 'field4' ) as xt;
The problem is that if the table test_xml_table doesn't exist, the program doesn't create it (I still didn't see it after the create table command). I tried a workaround: create the table first and fill it with the XML data, but now I don't know what to put after the PASSING keyword. There is no error, just no data is inserted. I would be grateful for help.
The whole code is below:
create or replace function test_function(
input_string character varying )
RETURNS void
LANGUAGE plpgsql
as $BODY$
begin
create table test_xml_table as select input_string::xml as string_xml;
insert into test_tab( field1, field2, field3, field4 )
select xt.field1, xt.field2, xt.field3, field4 from test_xml_table
cross join xmltable( '/data/line' passing string_xml
columns field1 text path 'field1', field2 text path 'field2',
field3 text path 'field3', field4 text path 'field4' ) as xt;
end;
$BODY$;
Called like this:
select test_function( '<header>
<data>
<line>
<field1>some data in f1</field1>
<field2>other data f2</field2>
<field3>this data contains numers 012323</field3>
<field4>and the last data</field4>
</line>
</data>
</header>' )
I tried to use the Postgres xmltable function to insert data into a table. I am expecting that the data from the function input (of type character varying) will be inserted.
The cross join is not needed.
The row XPath passed to xmltable should be '/header/data/line', not just '/data/line':
create or replace function test_function( input_string character varying )
RETURNS void
LANGUAGE plpgsql
as $BODY$
begin
create table test_xml_table as select input_string::xml as string_xml;
insert into test_tab( field1, field2, field3, field4 )
select xt.field1, xt.field2, xt.field3, field4
from test_xml_table, xmltable( '/header/data/line' passing string_xml
columns
field1 text path 'field1',
field2 text path 'field2',
field3 text path 'field3',
field4 text path 'field4'
) as xt;
end;
$BODY$;
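As a side note, the intermediate table is not strictly needed: xmltable can read the cast parameter directly, which also sidesteps the "table already exists" failure on a second call of the function. A sketch, assuming the same test_tab definition:

```sql
create or replace function test_function( input_string character varying )
RETURNS void
LANGUAGE plpgsql
as $BODY$
begin
    -- read the parameter directly; no intermediate table required
    insert into test_tab( field1, field2, field3, field4 )
    select xt.field1, xt.field2, xt.field3, xt.field4
    from xmltable( '/header/data/line' passing input_string::xml
                   columns
                       field1 text path 'field1',
                       field2 text path 'field2',
                       field3 text path 'field3',
                       field4 text path 'field4'
                 ) as xt;
end;
$BODY$;
```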

Postgresql moving data from table to 3 other tables

I have a table:
TABLE_A
id_table_a
field1
field2
field3
field4
field5
I need to move the data to 3 tables, 1 parent with 2 children:
TABLE_B
id_table_b
id_table_a
field1
field2
TABLE_C
id_table_c
id_table_b
field3
field4
TABLE_D
id_table_d
id_table_b
field5
We're talking about millions of records. What would be the correct and most efficient way to do this?
I'm completely new to PostgreSQL and I've come up with this after reading the documentation:
INSERT INTO table_b (id_table_a, field1, field2) SELECT id_table_a FROM table_a, SELECT field1 FROM table_a, SELECT field2 FROM table_a;
INSERT INTO table_c (id_table_b, field3, field4) SELECT id_table_b FROM table_b, SELECT field3 FROM table_a WHERE table_b.id_table_a = table_a.id_table_a, SELECT field4 FROM table_a WHERE table_b.id_table_a = table_a.id_table_a;
INSERT INTO table_d (id_table_d, field5) SELECT id_table_c FROM table_c, SELECT field5 FROM table_a WHERE table_b.id_table_a = table_a.id_table_a;
Would this do what I need or am I missing something? Thank you.
This will not work:
INSERT INTO table_b (id_table_a, field1, field2) SELECT id_table_a FROM table_a, SELECT field1 FROM table_a, SELECT field2 FROM table_a;
because the query SELECT id_table_a FROM table_a will (or can) return more than 1 value.
You need to write it like:
INSERT INTO table_b (id_table_a, field1, field2)
SELECT id_table_a, field1, field2 FROM table_a;
Maybe sub-optimal, but a straightforward PL/pgSQL DO block will help. Please note the RETURNING ... INTO clause. Assuming that id_table_b, id_table_c and id_table_d are auto-generated integers:
DO language plpgsql
$$
declare
r record;
var_id_table_b integer;
begin
for r in select * from table_a loop
insert into table_b (id_table_a, field1, field2)
values (r.id_table_a, r.field1, r.field2)
RETURNING id_table_b INTO var_id_table_b;
insert into table_c (id_table_b, field3, field4)
values (var_id_table_b, r.field3, r.field4);
insert into table_d (id_table_b, field5)
values (var_id_table_b, r.field5);
end loop;
end;
$$;
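For millions of rows the row-by-row loop above will be slow. If id_table_b is generated (identity/serial), the same move can be done set-based in a single statement using data-modifying CTEs; a sketch, assuming the column layout from the question:

```sql
with b as (
    insert into table_b (id_table_a, field1, field2)
    select id_table_a, field1, field2
    from table_a
    returning id_table_b, id_table_a   -- map each new B id back to its A row
)
, c as (
    insert into table_c (id_table_b, field3, field4)
    select b.id_table_b, a.field3, a.field4
    from b
    join table_a a using (id_table_a)
)
insert into table_d (id_table_b, field5)
select b.id_table_b, a.field5
from b
join table_a a using (id_table_a);
```

Each INSERT runs once over the whole set instead of once per row, which is usually orders of magnitude faster than the loop.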

How to set up independent queries within a single query

I need to query a table for null values but on different fields. I have been setting up my queries independently and running them one at a time to view my results.
EX:
Query 1:
select * from TABLE1 where FIELD1 is null and FIELD2 is NOT null
Query 2:
Select * from TABLE1 where FIELD6 = 'YES' and FIELD2 is null
Query 3:
select * from TABLE1 where FIELD4 = 'OUTSIDE' and FIELD7 is not null
Is there a way to set up a single query which will allow me retrieve data from a single table but run queries where the conditions are different?
It looks like you could use the or operator:
select * from TABLE1
where (FIELD1 is null and FIELD2 is NOT null)
or (FIELD6 = 'YES' and FIELD2 is null)
or (FIELD4 = 'OUTSIDE' and FIELD7 is not null)
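If you also need to see which condition matched each returned row, the same predicates can be repeated as boolean output columns (a sketch; the rule1..rule3 column names are made up for illustration):

```sql
select t.*,
       (FIELD1 is null and FIELD2 is not null)      as rule1,
       (FIELD6 = 'YES' and FIELD2 is null)          as rule2,
       (FIELD4 = 'OUTSIDE' and FIELD7 is not null)  as rule3
from TABLE1 t
where (FIELD1 is null and FIELD2 is not null)
   or (FIELD6 = 'YES' and FIELD2 is null)
   or (FIELD4 = 'OUTSIDE' and FIELD7 is not null);
```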

Copy rows into same table, but change value of one field

I have a list of values:
(56957,85697,56325,45698,21367,56397,14758,39656)
and a 'template' row in a table.
I want to do this:
for value in valuelist:
{
insert into table1 (field1, field2, field3, field4)
select value1, value2, value3, (value)
from table1
where ID = (ID of template row)
}
I know how I would do this in code, C# for instance, but I'm not sure how to 'loop' this while passing a new value into the insert statement. (I know that code makes no sense; I'm just trying to convey what I'm trying to accomplish.)
There is no need to loop here, SQL is a set based language and you apply your operations to entire sets of data all at once as opposed to looping through row by row.
insert statements can come from either an explicit list of values or from the result of a regular select statement, for example:
insert into table1(col1, col2)
select col3
,col4
from table2;
There is nothing stopping you selecting your data from the same place you are inserting to, which will duplicate all your data:
insert into table1(col1, col2)
select col1
,col2
from table1;
If you want to edit one of these column values - say by incrementing the value currently held, you simply apply this logic to your select statement and make sure the resultant dataset matches your target table in number of columns and data types:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1;
Optionally, if you only want to do this for a subset of those values, just add a standard where clause:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1
where col1 = <your value>;
Now if this isn't enough for you to work it out by yourself, you can join your dataset to your values list to get a version of the data to be inserted for each value in that list. Because you want every row paired with every value, use a cross join:
declare @v table(value int);
insert into @v values(56957),(85697),(56325),(45698),(21367),(56397),(14758),(39656);
insert into table1(col1, col2, value)
select t.col1
,t.col2
,v.value
from table1 as t
cross join @v as v;
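Applied to the template-row scenario from the question, filtering the source down to the one template row first (the @template_id variable is a hypothetical stand-in for the template row's ID):

```sql
declare @v table(value int);
insert into @v values (56957),(85697),(56325),(45698),(21367),(56397),(14758),(39656);

declare @template_id int = 42;  -- hypothetical: the ID of the template row

-- one inserted row per value, all other fields copied from the template
insert into table1 (field1, field2, field3, field4)
select t.field1, t.field2, t.field3, v.value
from table1 as t
cross join @v as v
where t.ID = @template_id;
```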

T-SQL GROUP BY over a dynamic list

I have a table with an XML type column. This column contains a dynamic list of attributes that may be different between records.
I am trying to GROUP BY COUNT over these attributes without having to go through the table separately for each attribute.
For example, one record could have attributes A, B and C and the other would have B, C, D then, when I do the GROUP BY COUNT I would get A = 1, B = 2, C = 2 and D = 1.
Is there any straightforward way to do this?
EDIT in reply to Andrew's answer:
Because my knowledge of this construct is superficial at best, I had to fiddle with it to get it to do what I want. In my actual code I needed to group by the time range as well as select only some attributes by name. The actual query is pasted below:
WITH attributes AS (
SELECT
Timestamp,
N.a.value('@name[1]', 'nvarchar(max)') AS AttributeName,
N.a.value('(.)[1]', 'nvarchar(max)') AS AttributeValue
FROM MyTable
CROSS APPLY AttributesXml.nodes('/Attributes/Attribute') AS N(a)
)
SELECT Datepart(dy, Timestamp), AttributeValue, COUNT(AttributeValue)
FROM attributes
WHERE AttributeName IN ('AttributeA', 'AttributeB')
GROUP BY Datepart(dy, Timestamp), AttributeValue
As a side-note: Is there any way to reduce this further?
WITH attributes AS (
SELECT a.value('(.)[1]', 'nvarchar(max)') AS attribute
FROM YourTable
CROSS APPLY YourXMLColumn.nodes('//path/to/attributes') AS N(a)
)
SELECT attribute, COUNT(attribute)
FROM attributes
GROUP BY attribute
CROSS APPLY lets you JOIN each row to the rowset produced from its XML column, as if the XML were a table. The WITH (CTE) is needed because you can't use XML methods in a GROUP BY clause.
Here is a way to get the attribute data into a way that you can easily work with it and reduce the number of times you need to go through the main table.
--create test data
declare @tmp table (
field1 varchar(20),
field2 varchar(20),
field3 varchar(20))
insert into @tmp (field1, field2, field3)
values ('A', 'B', 'C'),
('B', 'C', 'D')
--convert the individual fields from separate columns to one column
declare @fields table(
field varchar(20))
insert into @fields (field)
select field1 from @tmp
union all
select field2 from @tmp
union all
select field3 from @tmp
--run the group by and get the count
select field, count(*)
from @fields
group by field
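The three UNION ALL selects can be collapsed with UNPIVOT, which rotates the columns into rows in a single pass over the table (a sketch against the same test data, with made-up src/u aliases):

```sql
select field, count(*) as cnt
from (select field1, field2, field3 from @tmp) as src
unpivot (field for fieldname in (field1, field2, field3)) as u
group by field;
```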