DECLARE
MAX_upd_date_Entity_Incident timestamp without time zone;
MAX_upd_date_Entity_Incident := (SELECT LAST_UPDATE_DATE::timestamp FROM
test.table_1 where TABLE_NAME='mac_incidents_d');
execute 'insert into test.table2 (column_name,schema_name,tablename)
values(''col1'',''col2'',''col3'') from test.table3 X where X.dl_upd_ts::timestamp > '||
MAX_upd_date_Entity_Incident;
The above query does not execute; I get an error on the timestamp value, and it seems unable to read the variable MAX_upd_date_Entity_Incident.
Please suggest whether I need to cast the timestamp to another format.
I can run the query in the SQL editor, but not via execute.
Don't pass values as string literals. Use placeholders and pass them as "native" values.
Additionally, you can't use the VALUES clause if you want to select the source values from a table. An INSERT statement is either insert into tablename (column_one, column_two) values (value_one, value_two) or insert into tablename (column_one, column_two) select c1, c2 from some_table.
I also don't see the need for dynamic SQL to begin with:
insert into test.table2 (column_name,schema_name,tablename)
select col1,col2,col3
from test.table3 X
where X.dl_upd_ts::timestamp > MAX_upd_date_Entity_Incident;
If you oversimplified your example and you indeed need dynamic SQL you should use something like this:
execute 'insert into test.table2 (column_name,schema_name,tablename)
select col1,col2,col3 from test.table3 X where X.dl_upd_ts::timestamp > $1'
using MAX_upd_date_Entity_Incident;
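Putting it together, a minimal sketch of the whole block (table and column names are taken from the question and assumed to exist):
DO
$$
DECLARE
    max_upd_date_entity_incident timestamp without time zone;
BEGIN
    -- fetch the watermark into the variable
    SELECT last_update_date::timestamp
    INTO   max_upd_date_entity_incident
    FROM   test.table_1
    WHERE  table_name = 'mac_incidents_d';

    -- pass the variable as a native parameter instead of concatenating it into the SQL string
    EXECUTE 'INSERT INTO test.table2 (column_name, schema_name, tablename)
             SELECT col1, col2, col3
             FROM   test.table3 x
             WHERE  x.dl_upd_ts::timestamp > $1'
    USING max_upd_date_entity_incident;
END
$$;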
I have a dynamic SQL statement I'm building up that is something like
my_interval_var := interval '60 days';
sql_query := format($f$ AND NOT EXISTS (SELECT 1 FROM some_table st WHERE st.%s = %s.id AND st.expired_date >= %L )$f$, var1, var2, now() - my_interval_var)
Regarding my first question, it seemed to insert the timestamp correctly after the now() - my_interval_var computation. However, I just want to make sure I don't need to cast anything, because the only way I could get it to work was with %L, which is the string-literal specifier. Or does Postgres allow direct comparisons with strings that represent time without a cast, like
some_column <= '2021-12-31 00:00:00'; -- is a ::timestamp cast needed?
Second of all, regarding the sql_query variable that I concatenated an SQL string into above, I actually wanted to skip the format() call I did and directly inject this sql_query variable into an EXECUTE ... FORMAT ... USING statement.
I couldn't get it to work, but something like this:
EXECUTE format($f$ SELECT *
FROM %I tbl_alias
WHERE tbl_alias.%s = %L
%s $f$) USING var1, var2, var3, sql_query;
Is it possible to leave the dynamic SQL format specifiers %I, %L and %s inside the variable and format it at the EXECUTE level? Something tells me this isn't possible, but it would be really cool.
Also, one last question I didn't want to add, but I feel someone might have a quick answer.
I was using the following loop:
FOR temprecord IN
EXECUTE format('SELECT myCol1, myCol2, myCol3
FROM %I tbl', var1)
LOOP
EXECUTE temprecord.someColumnOnMyTbl;
END LOOP;
...but I could not for the life of me get the EXECUTE temprecord.someColumnOnMyTbl statement to work when I made the query dynamic. I tried everything: identifiers, using FORMAT, USING...
I thought column names were strings like %s, because I do that for columns all the time when they are aliased, like alias.%s = 'some string literal'.
Anyway, I couldn't get it to work. I wanted to make the column name dynamic and tried all of these:
EXECUTE format($f$ %I.%s $f$, var1, var2);
EXECUTE format($f$ %$1.%$2 $f$) USING var1, var2;
EXECUTE format($f$ %I.someColumnOn%s $f$, var1, var2);
EXECUTE format($f$ $1.someColumnOn$2 $f$) USING var1, var2;
Anyway, I tried more stuff than that, but I actually got some data from the DB when I made the temprecord variable an %I. However, I am selecting 3 columns, and it looked like something got messed up with the second identifier, because I got a syntax error and it looked like it was trying to concatenate all 3 columns of the query results...
I did try hardcoding it and that worked fine... any help appreciated!
A string literal is a value of unknown type. Postgres always casts it to some target binary format; the type is deduced from context. When you use the format function with the %L placeholder, any binary value is converted to a string and escaped as a Postgres string literal (protection against syntax errors and SQL injection). When you use the USING clause, the binary value is passed directly to the executor. That is a little bit faster, and there is no possibility of losing information in a cast to string. Apart from these points, the real effect of %L and the USING clause is almost the same.
The type of your variable is timestamp. Probably the type of the expired_date column is date, so some timestamp -> date conversion is necessary.
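For example, a date and a timestamp can be compared directly; Postgres inserts the cast for you, or you can make it explicit (illustrative values only):
SELECT current_date >= now() - interval '60 days';           -- date is implicitly cast for the comparison
SELECT (now() - interval '60 days')::date <= current_date;   -- or cast the timestamp side down to date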
The format function is just a string function: it simply builds a string. For better readability it supports placeholders that ensure correct escaping and a correct resulting SQL string. %L is the same as calling the function quote_literal, and %I is the same as quote_ident (for column or table names). %s inserts the string without escaping or quoting. The result of the format function (when you use it in an EXECUTE command) should be a valid SQL statement. You can print it with a RAISE NOTICE command to the debug output; usually that is a good idea:
DECLARE
query text;
x date DEFAULT current_date;
y int;
BEGIN
query := format('.... WHERE inserted = $1', ...);
RAISE NOTICE 'dynamic query will be: %', query;
EXECUTE query USING x INTO y;
...
The USING clause allows passing parameters into dynamic SQL (the EXECUTE command). Usually, format's placeholders should be used for table or column names, and USING for everything else.
For the types date and timestamp (basic scalar types) the following two executions are effectively the same:
EXECUTE format('select count(*) from foo where inserted = %L', current_date) INTO ..
EXECUTE 'select count(*) from foo where inserted = $1' USING current_date INTO ..
You cannot use query parameters in column-name or table-name positions; that is a limitation of the USING clause. But in all other cases this clause should be preferred.
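So a typical pattern is to let format() build the identifiers and USING supply the values, e.g. (variable names are placeholders):
EXECUTE format('SELECT count(*) FROM %I.%I WHERE %I >= $1',
               schema_name, table_name, column_name)
INTO result_count
USING current_date - 60;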
This code:
ALTER TABLE myschema.mytable add column geom geometry (point,4326);
CREATE INDEX mytable_idx on myschema.mytable using GIST(geom);
UPDATE myschema.mytable set geom = st_setsrid(st_point(mytable.long, mytable.lat), 4326);
This works fine when updating a single table. How would you convert it into a dynamic SQL function, with schema and table as parameters?
Since the function input must be an existing table, the simplest safe way would be to use a regclass input parameter like demonstrated here:
Table name as a PostgreSQL function parameter
However, you also need the bare table name for the concatenated index name, so I'll stick with taking text for schema and table separately:
CREATE OR REPLACE FUNCTION create_geom(_sch text, _tab text)
RETURNS void
LANGUAGE plpgsql AS
$func$
BEGIN
EXECUTE format(
'ALTER TABLE %1$I.%2$I ADD COLUMN geom geometry(POINT,4326);
UPDATE %1$I.%2$I SET geom = st_setsrid(st_point(long, lat), 4326);
CREATE INDEX %3$I ON %1$I.%2$I USING gist(geom);'
, _sch, _tab
, _tab || '_geom_gist_idx');
END
$func$;
Call:
SELECT create_geom('myschema', 'mytable');
Use a single EXECUTE, no need for multiple calls.
Just omit table-qualification for columns in the UPDATE. As long as you are not joining additional tables, column names are unambiguous. Else, use a table alias, which can be constant. Like:
UPDATE %1$s AS x SET geom = st_setsrid(st_point(x.long, x.lat), 4326);
But it's smarter to populate the column before you build the index. That's a lot faster and produces a balanced index without bloat. So I switched the commands.
Note how I concatenate the index name first (_tab || '_geom_gist_idx'), and then double-quote as required with %3$I. That's the safe way. Something like %I_idx fails with non-standard names.
That said, it's typically a mistake to add columns with redundant information to a table. (What keeps you from changing one or the other? Why bloat the table?) Either just use an expression index instead of all of the above:
CREATE INDEX ON myschema.mytable USING gist (st_setsrid(st_point(long, lat), 4326));
Or drop the now redundant long & lat from the table. Those can be extracted from the new geom cheaply on the fly.
Or, if you need all columns (for special performance reasons?), consider a generated column instead. See:
Computed / calculated / virtual / derived columns in PostgreSQL
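For illustration, a sketch of those two alternatives (assumes Postgres 12+ for the generated column, and that long / lat exist as plain columns):
-- generated column: stored, but always in sync with long / lat
ALTER TABLE myschema.mytable
  ADD COLUMN geom geometry(POINT,4326)
  GENERATED ALWAYS AS (st_setsrid(st_point(long, lat), 4326)) STORED;

-- and long / lat can always be recovered from geom on the fly
SELECT st_x(geom) AS long, st_y(geom) AS lat
FROM   myschema.mytable;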
Keeping your queries as SQL templates and using the format function for identifiers:
CREATE OR REPLACE FUNCTION public.create_geom(sch text, tab text)
RETURNS void language plpgsql AS $body$
DECLARE
DYNSQLA constant text := 'ALTER TABLE %I.%I add column geom geometry (point,4326)';
DYNSQLB constant text := 'CREATE INDEX %I_idx on %I.%I using GIST(geom);';
DYNSQLC constant text := 'UPDATE %I.%I set geom = st_setsrid(st_point(%I.long, %I.lat), 4326)';
BEGIN
execute format(DYNSQLA, sch, tab);
execute format(DYNSQLB, tab, sch, tab);
execute format(DYNSQLC, sch, tab, tab, tab);
END;
$body$;
SELECT create_geom('myschema','mytable');
I'm trying to get a date formatted between quotes on a select format() query:
select format('CREATE TABLE temporary_table AS
SELECT id FROM table WHERE created >=%I ORDER BY 1 ASC LIMIT 1','2021-04-01');
I'm getting this:
CREATE TABLE temporary_table AS
SELECT id FROM table WHERE created >="2021-04-01" ORDER BY 1 ASC LIMIT 1;
And I actually wanted to get the date between single quotes (as this won't work in an EXECUTE).
How can I achieve this?
For single-quoted values you need the specifier %L for format(). Like:
SELECT format('CREATE TABLE temporary_table AS
SELECT id FROM table WHERE created >= %L ORDER BY 1 LIMIT 1','2021-04-01');
See:
Insert text with single quotes in PostgreSQL
Of course that only makes sense if you parameterize the input. You wouldn't bother to use format() for a constant date to begin with.
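If the date does come in as a variable, a minimal sketch looks like this (src_table is a placeholder; the question's table is a reserved word):
DO
$$
DECLARE
   _cutoff date := '2021-04-01';   -- would normally arrive as a parameter
BEGIN
   EXECUTE format(
      'CREATE TABLE temporary_table AS
       SELECT id FROM src_table WHERE created >= %L ORDER BY 1 LIMIT 1'
    , _cutoff);
END
$$;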
I've narrowed it down to two possibilities: dynamic SQL and using a CASE statement.
However, I've failed with both of these.
I simply don't understand dynamic SQL, or how I would use it in my case.
This is my attempt using CASE statements; one of many failed variations.
SELECT column_name,
CASE WHEN column_name = 'address' THEN (**update statement gives syntax error within here**)
END
FROM information_schema.columns
WHERE table_name = 'employees';
As an overview, I'm using Axios to talk to my Node server, which is making calls to my Heroku database using Massivejs.
Maybe this isn't the way to go - so here's my main problem:
I've run into trouble because the values I'm planning on using as column names are sent to my server as strings. The exact call that I've been trying to use is
update employees
set $1 = $2
where employee_id = $3;
Once again, I'm passing into those using massive.
I get the error back { error: syntax error at or near "'address'"} because my incoming values are strings. My thought process was that the above statement would allow me to use variables because 'address' is encapsulated by quotes.
But alas, my thought process has failed me.
This seems to be close to answering my question, but I can't seem to figure out what to do in my case if using dynamic SQL.
How to use dynamic column names in an UPDATE or SELECT statement in a function?
Thanks in advance.
I will show you a way to do this by using a function.
First we create the employees table :
CREATE TABLE employees(
id BIGSERIAL PRIMARY KEY,
column1 TEXT,
column2 TEXT
);
Next, we create a function that requires three parameters:
columnName - the name of the column that needs to be updated
columnValue - the new value to which the column needs to be updated
employeeId - the id of the employee that will be updated
By using the format function we generate the update query as a string and use the EXECUTE command to execute the query.
Here is the code of the function.
CREATE OR REPLACE FUNCTION update_columns_on_employee(columnName TEXT, columnValue TEXT, employeeId BIGINT)
RETURNS VOID AS
$$
DECLARE update_statement TEXT := format('UPDATE employees SET %I = %L WHERE id = %L', columnName, columnValue, employeeId);
BEGIN
EXECUTE update_statement;
end;
$$ LANGUAGE plpgsql;
Now, let's insert some data into the employees table:
INSERT INTO employees(column1, column2) VALUES ('column1_start_value','column2_start_value');
So now we have an employee with an id of 1 whose column1 value is 'column1_start_value' and whose column2 value is 'column2_start_value'.
If we want to update the value of column2 from 'column2_start_value' to 'column2_new_value', all we have to do is execute the following call:
SELECT * FROM update_columns_on_employee('column2','column2_new_value',1);
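A quick check afterwards (the expected value follows from the statements above):
SELECT id, column1, column2 FROM employees WHERE id = 1;
-- column2 should now contain 'column2_new_value'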
Due to a legacy report generation system, I need to use a cursor to traverse the result set from a stored procedure. The system generates report output by PRINTing data from each row in the result set. Refactoring the report system is way beyond scope for this problem.
As far as I can tell, the DECLARE CURSOR syntax requires that its source be a SELECT clause. However, the query I need to use lives in a 1000+ line stored procedure that generates and executes dynamic sql.
Does anyone know of a way to get the result set from a stored procedure into a cursor?
I tried the obvious:
Declare c_Data Cursor For my_stored_proc @p1='foo', @p2='bar'
As a last resort, I can modify the stored procedure to return the dynamic sql it generates instead of executing it and I can then embed this returned sql into another string and, finally, execute that. Something like:
Exec my_stored_proc @p1='foo', @p2='bar', @query='' OUTPUT
Set @sql = '
Declare c_Data Cursor For ' + @query + '
Open c_Data
-- etc. - cursor processing loop etc. goes here '
Exec (@sql)
Any thoughts? Does anyone know of any other way to traverse the result set from a stored proc via a cursor?
Thanks.
You could drop the results from the stored proc into a temp table and select from that for your cursor.
CREATE TABLE #myResults
(
Col1 INT,
Col2 INT
)
INSERT INTO #myResults(Col1,Col2)
EXEC my_Sp
DECLARE sample_cursor CURSOR
FOR
SELECT
Col1,
Col2
FROM
#myResults
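A sketch of the fetch loop that would follow (the PRINT formatting is only illustrative):
OPEN sample_cursor

DECLARE @c1 INT, @c2 INT

FETCH NEXT FROM sample_cursor INTO @c1, @c2
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CAST(@c1 AS VARCHAR(20)) + ' | ' + CAST(@c2 AS VARCHAR(20))  -- report output per row
    FETCH NEXT FROM sample_cursor INTO @c1, @c2
END

CLOSE sample_cursor
DEALLOCATE sample_cursor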
Another option may be to convert your stored procedure into a table valued function.
DECLARE sample_cursor CURSOR
FOR
SELECT
Col1,
Col2
FROM
dbo.NewFunction('foo', 'bar')
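That only works if the procedure's logic fits into a single SELECT; a minimal sketch of such an inline table-valued function (table, column, and parameter names are placeholders):
CREATE FUNCTION dbo.NewFunction (@p1 VARCHAR(50), @p2 VARCHAR(50))
RETURNS TABLE
AS
RETURN
    SELECT Col1, Col2
    FROM   dbo.SomeTable
    WHERE  SomeCol1 = @p1
      AND  SomeCol2 = @p2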
You use INSERT ... EXEC to push the result of the procedure into a table (can be a temp #table or a @table variable), then you open the cursor over this table. The article in the link discusses the problems that may occur with this technique: it cannot be nested and it forces a transaction around the procedure.
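For example, the table-variable flavour of the same idea (my_Sp is assumed to return two INT columns, as above):
DECLARE @myResults TABLE (Col1 INT, Col2 INT)

INSERT INTO @myResults (Col1, Col2)
EXEC my_Sp

DECLARE results_cursor CURSOR FOR
    SELECT Col1, Col2 FROM @myResults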
You could execute your SP into a temporary table and then iterate over the temporary table with the cursor
create table #temp (columns)
insert into #temp exec my_stored_proc ....
-- perform cursor work
drop table #temp