Is it possible to count the lines that were read from an import file?
INSERT INTO table1(col1, col2)
SELECT col1, col2
FROM OPENROWSET(BULK 'F:\test.csv', FORMATFILE='f:\test.XML', ERRORFILE='F:\test.csv.log') AS a
WHERE col1%3=0;
Try checking @@ROWCOUNT after your statement.
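For example, using the statement from the question. Note that @@ROWCOUNT reports the number of rows the previous statement affected, i.e. the rows actually inserted, so lines filtered out by the WHERE clause or diverted to the error file are not counted:

INSERT INTO table1(col1, col2)
SELECT col1, col2
FROM OPENROWSET(BULK 'F:\test.csv', FORMATFILE='f:\test.XML', ERRORFILE='F:\test.csv.log') AS a
WHERE col1%3=0;

-- Capture the count immediately; the next statement resets @@ROWCOUNT
SELECT @@ROWCOUNT AS rows_inserted;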
I am trying to store the output of a WITH query in an INSERT statement in Postgres, along with an autoincrement id.
Below is the query:
INSERT INTO table (row_id, (
    SELECT * FROM final_dataset
));
However, I am getting a syntax error near "SELECT". I am unable to figure out a solution to this.
Are you looking for this?
insert into the_table (col1, col2, col3)
select nextval('the_table_id_seq'), x1, x2
from final_dataset;
If col1 is a serial or identity column, I would remove it completely:
insert into the_table (col2, col3)
select x1, x2
from final_dataset;
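For context, a minimal sketch of what the_table could look like with an identity column (this definition is illustrative, not from the question):

create table the_table (
    col1 bigint generated always as identity,  -- or: col1 serial
    col2 text,
    col3 text
);

With col1 left out of the insert's column list, Postgres fills it automatically from the underlying sequence.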
I am inserting data into a table and want to return the inserted data. The inserted data contains foreign keys, and I would like to get the whole row back with the foreign key tables joined in.
I have tried putting a SELECT in RETURNING without luck. Is this even possible, or do I just have to run another query after inserting the data?
Insert statement:
INSERT INTO someTable (col1, col2, col3, foreign_id)
VALUES ('value1', 'value2', 'value3', 1);
So in this case, could I have a RETURNING that basically would give me:
SELECT someTable.*, foreignTable.*
FROM someTable
JOIN foreignTable ON someTable.foreign_id = foreignTable.id;
You can use a CTE for this:
WITH inserting AS (
INSERT INTO...
RETURNING <new data>
)
SELECT i.*, ft.*
FROM inserting i JOIN foreign_table ft ...
In this case the INSERT statement is executed first; the SELECT statement runs afterwards and can reference the inserted data.
You can use a CTE for that:
with new_row as (
INSERT INTO some_table (col1, col2, col3, foreign_id)
VALUES ('value1', 'value2', 'value3', 1)
returning *
)
SELECT new_row.*, ft.*
FROM new_row
JOIN foreign_table ft ON new_row.foreign_id = ft.id;
Is it possible to EXECUTE a prepared statement using parameters you'd get from a CTE?
The samples below are simplified versions of my code, but they replicate exactly the problem I have.
Here's how far I've been able to go without a CTE:
BEGIN;
CREATE TEMPORARY TABLE testTable
(
col1 NUMERIC,
col2 TEXT
) ON COMMIT DROP;
INSERT INTO testTable
VALUES (1, 'foo'), (2, 'bar');
PREPARE myStatement AS
WITH cteTable AS
(
SELECT col1, col2
FROM testTable
WHERE col1 = $1
)
SELECT col2 FROM cteTable;
EXECUTE myStatement(2);
DEALLOCATE myStatement;
COMMIT;
Here's the result:
col2
bar
And now, here's what I am trying to achieve:
BEGIN;
CREATE TEMPORARY TABLE testTable
(
col1 NUMERIC,
col2 TEXT
) ON COMMIT DROP;
INSERT INTO testTable
VALUES (1, 'foo'), (2, 'bar');
PREPARE myStatement AS
WITH cteTable AS
(
SELECT col1, col2
FROM testTable
WHERE col1 = $1
)
SELECT col2 FROM cteTable;
-- Using a CTE here to get the parameters for the prepared statement
WITH parameters AS
(
SELECT 2 val
)
EXECUTE myStatement(SELECT val FROM parameters);
DEALLOCATE myStatement;
COMMIT;
The error message I'm getting is
Syntax error at or near EXECUTE
Even if I try to run the EXECUTE part without attempting to use the CTE values, I still get the same error message.
As I haven't been able to find anyone else with the same issue despite my research, I guess I could be doing it wrong. If so, could someone please point me in the right direction?
Thanks
As Vao Tsun commented, this is not possible.
I converted my prepared statement into a function that returns a table, then used a CTE to SELECT and UNION the function's results for multiple parameters.
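For illustration, a minimal sketch of that workaround against the testTable from the question (the function name and the exact way it is called here are my own choices, not the original poster's code):

CREATE FUNCTION myStatement(p_col1 NUMERIC)
RETURNS TABLE (col2 TEXT) AS $$
    SELECT t.col2
    FROM testTable t
    WHERE t.col1 = p_col1;
$$ LANGUAGE sql;

-- Unlike EXECUTE, a function call is an ordinary part of a query,
-- so it can be invoked per parameter and wrapped in a CTE:
WITH results AS (
    SELECT col2 FROM myStatement(1)
    UNION ALL
    SELECT col2 FROM myStatement(2)
)
SELECT * FROM results;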
I have a list of values:
(56957,85697,56325,45698,21367,56397,14758,39656)
and a 'template' row in a table.
I want to do this:
for value in valuelist:
{
insert into table1 (field1, field2, field3, field4)
select value1, value2, value3, (value)
from table1
where ID = (ID of template row)
}
I know how I would do this in code, like C# for instance, but I'm not sure how to 'loop' this while passing a new value into the insert statement. (I know that code makes no sense; I'm just trying to convey what I'm trying to accomplish.)
There is no need to loop here. SQL is a set-based language: you apply your operations to entire sets of data at once, as opposed to looping through row by row.
INSERT statements can take their rows either from an explicit list of values or from the result of a regular SELECT statement, for example:
insert into table1(col1, col2)
select col3
,col4
from table2;
There is nothing stopping you from selecting your data from the same place you are inserting into, which will duplicate all your data:
insert into table1(col1, col2)
select col1
,col2
from table1;
If you want to edit one of these column values - say, by incrementing the value currently held - you simply apply that logic in your SELECT statement and make sure the resulting dataset matches your target table in number of columns and data types:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1;
Optionally, if you only want to do this for a subset of those values, just add a standard where clause:
insert into table1(col1, col2)
select col1
,col2+1 as col2
from table1
where col1 = <your value>;
Now, if this isn't enough for you to work it out yourself: you can join your dataset to your values list to get a version of the data to be inserted for each value in that list. Because you want each row to join to each value, you can use a cross join:
declare @v table(value int);
insert into @v values(56957),(85697),(56325),(45698),(21367),(56397),(14758),(39656);
insert into table1(col1, col2, value)
select t.col1
,t.col2
,v.value
from table1 as t
cross join @v as v;
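Putting that together for your template-row case (reusing the @v table variable from above; the ID value in the WHERE clause is a placeholder):

insert into table1(field1, field2, field3, field4)
select t.field1
      ,t.field2
      ,t.field3
      ,v.value
from table1 as t
cross join @v as v
where t.ID = 1;  -- replace 1 with the ID of your template row

This inserts one copy of the template row per value in @v, with field4 taken from the list.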
I'm trying to get the output of the queries within the WITH clause of my final query as CSV or some sort of text files. I only have query access; I'm not allowed to create tables in this database. I have a set of queries that do some calculations on a data set, another set that computes on the previous set, and yet another that calculates on the final set. I don't want to run all of it as three separate queries, because the results of the first two are already computed as part of the last one.
WITH
Q1 AS(
SELECT col1, col2, col3, col4, col5, col6, col7
FROM table1
),
Q2 AS(
SELECT AVG(col1) as col1Avg, MAX(col1) as col1Max, col2, col3,col4
FROM Q1
GROUP BY col2, col3, col4
)
SELECT
AVG(col1Avg), col3
FROM
Q2
GROUP BY col3
I would like the results from Q1, Q2 and the final select statement as preferably 3 csv files but I could live with all of it in one csv file. Is this possible?
Thanks!
Edit: Just to clarify, the columns from the queries are very different. I'm definitely pulling more columns from my first query than my second. I've edited the above code a bit to make this more clear.
To combine all the results together you'd use UNION ALL, but the number and data types of the columns must match.
select col1, col2, col3
from blah
union all
select col1, col2, col3
from blah2
union all
... etc
You can reference CTEs in there too, of course:
with
cte_1 as (
select ... from ...),
cte_2 as (
select ... from ... cte_1),
cte_3 as (
select ... from ... cte_2)
select col1, col2, col3
from cte_1
union all
select col1, col2, col3
from cte_2
union all
select col1, col2, col3
from cte_3
If your final output is a CSV then it sounds like you will have multiple row formats in there. If so, in the queries that you UNION ALL together you might like to combine all the columns from each query into one string:
with
cte_1 as (
select ... from ...),
cte_2 as (
select ... from ... cte_1),
cte_3 as (
select ... from ... cte_2)
select col1||','||col2||','||col3
from cte_1
union all
select col1||','||col2
from cte_2
union all
select col1
from cte_3
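Applied to the queries from the question, and with a literal tag added so the three result sets can be told apart (and split back out) in the single CSV file, that could look like this sketch:

WITH
Q1 AS(
SELECT col1, col2, col3, col4, col5, col6, col7
FROM table1
),
Q2 AS(
SELECT AVG(col1) as col1Avg, MAX(col1) as col1Max, col2, col3, col4
FROM Q1
GROUP BY col2, col3, col4
)
SELECT 'Q1,'||col1||','||col2||','||col3||','||col4||','||col5||','||col6||','||col7 FROM Q1
UNION ALL
SELECT 'Q2,'||col1Avg||','||col1Max||','||col2||','||col3||','||col4 FROM Q2
UNION ALL
SELECT 'Q3,'||AVG(col1Avg)||','||col3 FROM Q2 GROUP BY col3;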