I have a table named TEST with columns col1, col2, col3, col4, ...
So from information_schema.columns I can get details about this table object.
Now I want to build a select query on the TEST table by supplying the column names from information_schema.columns,
like this: select column_name from information_schema.columns where table_name = 'TEST'. This will return
column_name
col1
col2
col3
I want to use this output in a select query from TEST, like this:
select col1, col2, col3, col4 from TEST;
Is this possible with a single query?
You will have to compose an SQL string and execute that.
You can either do that with your client application or in a PostgreSQL function.
You must take special care to escape all string values using the format function or quote_ident and quote_literal to avoid problems from SQL injection.
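For example, a minimal sketch of that in PL/pgSQL, assuming PostgreSQL 9.1+ for string_agg and format (the DO block just prints the composed statement):

DO $$
DECLARE
    cols text;
    qry  text;
BEGIN
    SELECT string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position)
    INTO cols
    FROM information_schema.columns
    WHERE table_name = 'test';  -- unquoted identifiers are stored lower-case, hence 'test' not 'TEST'

    qry := format('SELECT %s FROM test', cols);
    RAISE NOTICE '%', qry;  -- e.g. SELECT col1, col2, col3, col4 FROM test
END $$;

The composed string can then be run with EXECUTE inside a PL/pgSQL function, or returned to the client application to execute.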
We have a use case where we need to pivot the result set of a query into columns for an insert statement. For that, we are using crosstab, which takes the SQL as a text parameter.
So the query might conceptually look like this:
insert into table(col1, col2, ...)
select col1, col2, ...
from crosstab($$
select ....
where something = {something}
$$)
...
In the PostgreSQL client, it all works perfectly fine.
Now, when we try to implement this in code with Anorm, it looks as follows:
val sql = s"""
|insert into table(col1, col2, ...)
|select col1, col2, ...
|from crosstab($$$$
| select ....
| where something = {something}
|$$$$)
|...
"""
SQL(sql).on("something" -> param).execute()
This should work, but it does not. The reason is that, in this particular case, the parameters defined in the inner SQL string are not getting replaced with actual values.
Of course, it is possible to build the SQL string by hand using string interpolation, but I would prefer to use the tools built for this.
Is there a way to make the parameter replacement work in this case?
DECLARE
MAX_upd_date_Entity_Incident timestamp without time zone;
MAX_upd_date_Entity_Incident := (SELECT LAST_UPDATE_DATE::timestamp FROM
test.table_1 where TABLE_NAME='mac_incidents_d');
execute 'insert into test.table2 (column_name,schema_name,tablename)
values(''col1'',''col2'',''col3'') from test.table3 X where X.dl_upd_ts::timestamp > '||
MAX_upd_date_Entity_Incident;
The above query is not getting executed; I get an error on the timestamp value, because it's unable to read the variable MAX_upd_date_Entity_Incident.
Please suggest whether I need to cast the timestamp to another format.
I am able to execute the query in the SQL editor, but not using execute.
Don't pass values as string literals. Use placeholders and pass them as "native" values.
Additionally, you can't use the values clause if you want to select the source values from a table. An insert statement is either insert into tablename (column_one, column_two) values (value_one, value_two) or insert into tablename (column_one, column_two) select c1, c2 from some_table.
I also don't see the need for dynamic SQL to begin with:
insert into test.table2 (column_name,schema_name,tablename)
select col1,col2,col3
from test.table3 X
where X.dl_upd_ts::timestamp > MAX_upd_date_Entity_Incident;
If you oversimplified your example and you indeed need dynamic SQL, you should use something like this:
execute 'insert into test.table2 (column_name,schema_name,tablename)
select col1, col2, col3 from test.table3 X where X.dl_upd_ts::timestamp > $1'
using MAX_upd_date_Entity_Incident;
Can I perform a
select dblink_exec ('merg',E'insert into table1(col1,col2) select * from dblink(\'mc\',\'select distinct col1, col2 from table2\') as t(col1 bigint, col2 text)');
to insert the result of a select from a different database on the same server?
I also tried putting the second part into a view and then selecting from the view, but that did not work.
All you need is to connect to one of the databases and then execute
CREATE EXTENSION dblink
then just use:
select dblink_exec('dbname=table1', ....)
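For the insert-from-select itself, a minimal sketch with hypothetical connection parameters, pulling the rows over dblink and inserting them locally:

-- hypothetical connection string; adjust dbname/host/user/password
INSERT INTO table1 (col1, col2)
SELECT t.col1, t.col2
FROM dblink('dbname=otherdb host=localhost user=me password=secret',
            'select distinct col1, col2 from table2')
     AS t(col1 bigint, col2 text);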
In Postgres, you can link to your other databases using dblink like so:
SELECT *
FROM dblink (
'dbname=name port=1234 host=host user=user password=password',
'select * from table'
) AS users([insert each column name and its type here]);
But this is quite verbose.
I've shortened it up by using dblink_connect and dblink_disconnect to abstract the connection string from my dblink queries. However, that still leaves me with the manual table definition (i.e., [insert each column name and its type here]).
Instead of defining the table manually, is there a way I can define it with a TYPE or anything else that'd be re-usable?
In my case, the number of remote tables I have to join and the number of columns involved makes my query massive.
I tried something along the lines of:
SELECT *
FROM dblink (
'myconn',
'select * from table'
) AS users(postgres_pre_defined_type_here);
But I received the following error:
ERROR: a column definition list is required for functions returning "record"
Since you considered creating several types for dblink, you can accept creating several functions as well. The functions will be well defined and very easy to use.
Example:
create or replace function dblink_tables()
returns table (table_schema text, table_name text)
language plpgsql
as $$
begin
return query select * from dblink (
'dbname=test password=mypassword',
'select table_schema, table_name from information_schema.tables')
as tables (table_schema text, table_name text);
end $$;
select table_name
from dblink_tables()
where table_schema = 'public'
order by 1
What I'm trying to accomplish is to get aggregated data for all unique combinations of sendercompid, targetcompid and msgtype across all the tables found by the inner SQL.
I expect between 20 and 40 million unique rows in the resulting output.
I cannot get the following query to run on PostgreSQL 8.3.13:
SELECT
sendercompid, targetcompid, count(msgtype), msgtype
FROM
(SELECT table_name
FROM information_schema.tables
WHERE table_catalog = 'test'
AND table_schema = 'msg'
AND (table_name like 'fix_aee_20121214%') OR
(table_name like 'fix_aee2_20121214%')
)
WHERE
(sendercompid LIKE '%201%') OR
(targetcompid LIKE '%201%')
GROUP BY
sendercompid, targetcompid, msgtype ;
If this select is split in two, outer and inner, then the inner will provide the list of tables and the outer will do the select and grouping from each table.
If I run those two SQLs as one, I get an alias error from the database:
ERROR: subquery in FROM must have an alias
I tried using an alias, but the error did not disappear.
Any thoughts what I am missing there?
Thank you.
FROM doesn't work the way you think it does. A sub-select works like any other query: it produces a set of rows. The outer SELECT works with those rows as though they were a table. There's no special magic beyond that; it has no idea that the values you're returning are table names, and won't treat them as such.
You can probably accomplish what you want using the catalog tables, but that would be complicated and hack-y.
Since your sub-tables appear to be date-based partitions, I think what you really want to use is the partitioning support built into Postgres, described in these docs. Essentially your partitions inherit from a parent table, and you set up range constraints on each child. When you query from the parent table with constraint_exclusion enabled, Postgres automatically selects the appropriate partition.
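A minimal sketch of that setup on 8.x, where partitioning is done with inheritance plus CHECK constraints (the table and column names are assumed from the question; the date column is hypothetical):

CREATE TABLE fix_aee_parent (
    sendercompid text,
    targetcompid text,
    msgtype      text,
    msg_date     date
);

-- one child per day; the CHECK constraint is what constraint_exclusion uses
CREATE TABLE fix_aee_20121214 (
    CHECK (msg_date = DATE '2012-12-14')
) INHERITS (fix_aee_parent);

SET constraint_exclusion = on;

-- querying the parent scans only the children whose constraints match
SELECT sendercompid, targetcompid, count(msgtype), msgtype
FROM fix_aee_parent
WHERE msg_date = DATE '2012-12-14'
GROUP BY sendercompid, targetcompid, msgtype;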
You can "run" such query using dynamic SQL. The idea - to form the proper query with table name as string inside PL\pgSQL procedure and EXECUTE it. Something like:
CREATE OR REPLACE FUNCTION public.function1()
RETURNS TABLE (
    "field1" NUMERIC,
    "field2" NUMERIC,
    ...
) AS
$body$
BEGIN
    -- the scalar subquery must yield exactly one table name;
    -- quote_ident protects against odd identifiers
    RETURN QUERY EXECUTE 'SELECT * FROM ' ||
        (SELECT quote_ident(table_name)
         FROM information_schema.tables
         WHERE table_catalog = 'test'
           AND table_schema = 'msg'
           AND (table_name LIKE 'fix_aee_20121214%' OR
                table_name LIKE 'fix_aee2_20121214%'));
END;
$body$
LANGUAGE plpgsql;
Then use something like:
SELECT sendercompid, targetcompid, count(msgtype), msgtype
FROM function1()
WHERE (sendercompid LIKE '%201%') OR
(targetcompid LIKE '%201%')
GROUP BY sendercompid, targetcompid, msgtype ;
Or you can create a function with the full query and some parameters to build the WHERE clause.
Details: EXECUTE.
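For the parameterized variant, a minimal sketch (the function and parameter names are made up; EXECUTE ... USING needs PostgreSQL 8.4+):

CREATE OR REPLACE FUNCTION msg_stats(p_table text, p_compid text)
RETURNS TABLE (sendercompid text, targetcompid text, msg_count bigint, msgtype text)
AS $body$
BEGIN
    -- quote_ident guards the table name; the value is passed via USING
    RETURN QUERY EXECUTE
        'SELECT sendercompid, targetcompid, count(msgtype), msgtype
         FROM ' || quote_ident(p_table) || '
         WHERE sendercompid LIKE $1 OR targetcompid LIKE $1
         GROUP BY sendercompid, targetcompid, msgtype'
    USING '%' || p_compid || '%';
END;
$body$ LANGUAGE plpgsql;

-- usage:
SELECT * FROM msg_stats('fix_aee_20121214', '201');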
If this is just a concept, then as for
ERROR: subquery in FROM must have an alias
note that there is no information about where the columns sendercompid, targetcompid, count(msgtype), msgtype come from.
SELECT sendercompid, targetcompid, count(msgtype), msgtype from (
SELECT table_name from information_schema.tables
where table_catalog = 'test' AND
table_schema='msg' AND
(table_name like 'fix_aee_20121214%') OR
(table_name like 'fix_aee2_20121214%')
) a
WHERE (sendercompid LIKE '%201%') OR
(targetcompid LIKE '%201%')
GROUP BY sendercompid, targetcompid, msgtype ;
Use an alias for the subquery. Better yet, use aliases for all tables to avoid confusion.