Using pgAdmin 4, Postgres 9.6 on Windows 10.
I'm trying to use a parameter to specify the table name in a prepared statement, as in the code below. However, I get a syntax error, shown below. Note that I am able to use parameters in a WHERE condition and the like.
Query
prepare mySelect(text) as
select *
from $1
limit 100;
execute mySelect('some_table');
pgAdmin message
ERROR: syntax error at or near "$1"
LINE 3: from $1
^
SQL state: 42601
Character: 50
It is not possible. A prepared statement is a persistent execution plan, and an execution plan contains a pinned data source, so table and column names cannot be variable there.
When you change tables or columns, you change the semantics of the query: you get a different execution plan, and that behaviour is not possible in prepared statements. The main use case of prepared statements is reusing execution plans: plan once, execute many times. But there are some fundamental limits: only value parameters can be changed.
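If you need the table name to vary, the usual workaround is dynamic SQL in a PL/pgSQL function, which is planned at execution time. A minimal sketch; the function name and the column list in the call are illustrative, not from the question:
CREATE OR REPLACE FUNCTION select_from(_tbl regclass)
  RETURNS SETOF record
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- %s with a regclass argument inserts a safely quoted table name
   RETURN QUERY EXECUTE format('SELECT * FROM %s LIMIT 100', _tbl);
END
$func$;
-- the caller must supply a column definition list, since the row type varies:
SELECT * FROM select_from('some_table') AS t(id int, name text);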
Related
I have created a PL/pgSQL table-returning function that executes a SELECT statement and uses the input parameter in the WHERE clause of the query.
I frame the statement dynamically and execute it like this: EXECUTE sqlStmt USING empID;
sqlStmt is a variable of data type text that has the SELECT query which joins 3 tables.
When I execute that query directly in pgAdmin with EXPLAIN ANALYZE, I can see that index scans on the tables are used as expected. However, when I do EXPLAIN ANALYZE SELECT * FROM fn_getDetails(12), the output just says "Function Scan".
How do I know whether the table indexes are used? Other SO answers suggesting the auto_explain module did not show details of the statements in the function body, and I am unable to use PREPARE inside my function body.
The direct SELECT statement takes almost the same time as the function call, within a couple of milliseconds, but how can I know whether the index was used?
auto_explain will certainly provide the requested information.
Set the following parameters:
shared_preload_libraries = 'auto_explain' # requires a restart
auto_explain.log_min_duration = 0 # log all statements
auto_explain.log_nested_statements = on # log statements in functions too
The last parameter is required for tracking SQL statements inside functions.
To activate the module, you need to restart the database.
Of course, testing whether the index is used in a query on a small table won't give you a reliable result. You need about as much test data as you expect to have in reality.
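If you cannot restart the server right away, the auto_explain documentation also lets a superuser load the module for a single session instead; a sketch:
LOAD 'auto_explain';                          -- superuser only
SET auto_explain.log_min_duration = 0;        -- log all statements
SET auto_explain.log_nested_statements = on;  -- include statements inside functions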
I'm using Enterprise Postgres 9.5 with Oracle Compatibility. I have a problem with the EXECUTE IMMEDIATE command.
Say I have a table with a few columns, one of which can accept NULLs. If I do
EXECUTE IMMEDIATE 'select null_col from '||table_name||' where col1=10' into x;
It assigns the value to x, if null_col returns one.
When I use the condition col1=19, where 19 is not present in the table, I get an error like this:
query returned no rows
and my execution stops. So how can I handle that? Oracle doesn't raise an error for such statements, whereas EDB does. Please help.
I didn't find any EDB tags, so please retag if you think this question is inappropriate here. Thanks for understanding.
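One way to handle this is to catch the no-rows case in an exception block. A sketch, assuming EDB's SPL accepts the standard NO_DATA_FOUND handler (as both PL/SQL and PL/pgSQL do), reusing the table_name and x from the question:
BEGIN
    EXECUTE IMMEDIATE 'select null_col from ' || table_name || ' where col1 = 19'
        INTO x;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        x := NULL;   -- no matching row: fall back to NULL instead of aborting
END;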
When a statement in my PL/pgSQL function (Postgres 9.6) is being run, I can see the query on one line and then all the parameters on another line: two-line logging. Something like:
LOG: execute <unnamed>: SELECT * FROM table WHERE field1=$1 AND field2=$2 ...
DETAIL: parameters: $1 = '-767197682', $2 = '234324' ....
Is it possible to log the entire query in pg_log WITH the parameters already replaced in the query, as a SINGLE line?
That would make it much easier to copy/paste the query to reproduce it in another terminal, especially when queries have dozens of parameters.
The reason behind this: PL/pgSQL treats SQL statements as prepared statements internally.
First: With default settings, there is no logging of SQL statements inside PL/pgSQL functions at all. Are you using auto_explain?
Related: Postgres query plan of a UDF invocation written in pgpsql
For the first couple of invocations in the same session, the SPI manager (Server Programming Interface) generates a fresh execution plan based on actual parameter values. Any kind of logging should report parameter values inline.
Postgres keeps track, and after a couple of invocations in the current session, if execution plans don't seem sensitive to actual parameter values, it will start reusing a generic, cached plan. Then you should see the generic plan of a prepared statement with $n parameters (like in the question).
Details in the chapter "Plan Caching" in the manual.
You can observe the effect with a simple demo. In the same session (not necessarily same transaction):
CREATE TEMP TABLE tbl AS
SELECT id FROM generate_series(1, 100) id;
PREPARE prep1(int) AS
SELECT min(id) FROM tbl WHERE id > $1;
EXPLAIN EXECUTE prep1(3); -- 1st execution
You'll see the actual value:
Filter: (id > 3)
EXECUTE prep1(1); -- several more executions
EXECUTE prep1(2);
EXECUTE prep1(3);
EXECUTE prep1(4);
EXECUTE prep1(5);
EXPLAIN EXECUTE prep1(3);
Now you'll see a $n parameter:
Filter: (id > $1)
So you can get the query with parameter values inlined on the first couple of invocations in the current session.
Or you can use dynamic SQL with EXECUTE, because, per documentation:
Also, there is no plan caching for commands executed via EXECUTE.
Instead, the command is always planned each time the statement is run.
Thus the command string can be dynamically created within the function
to perform actions on different tables and columns.
That can actually affect performance, of course.
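If you need values inlined in the log every time, you can build the statement with format() and %L, so the values become quoted literals in the statement text itself. A sketch; the function, table, and column names are hypothetical:
CREATE OR REPLACE FUNCTION fn_demo(_field1 text, _field2 text)
  RETURNS SETOF some_table
  LANGUAGE plpgsql AS
$func$
BEGIN
   -- %L embeds each value as a properly quoted literal, so the logged
   -- statement text carries the values inline on a single line
   RETURN QUERY EXECUTE format(
      'SELECT * FROM some_table WHERE field1 = %L AND field2 = %L',
      _field1, _field2);
END
$func$;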
Related:
PostgreSQL Stored Procedure Performance
When I'm sketching out SQL statements, I keep a file of all the queries I have used to analyse my live data. Each time I write a new statement or group of statements at the end of the file, I select them and click 'Execute' to see the results. I'm paranoid that I may forget the selection step and accidentally run all the queries in the file sequentially, so I head the file with the line
USE FakeDatabase
so that the queries will fail, as they would run against a non-existent database. But no, instead I get the error
USE statement is not supported to switch between databases
(N.B. I am using SQL Server Management Studio v17.0 RC1 against a v12 Azure SQL Server database.)
What T-SQL statement can I use to prevent further execution of the T-SQL statements in a file?
USE is not supported in Azure. You can try the statement below, but there can be many options depending on your use case.
Replace USE FakeDatabase with this statement:
IF db_name() <> 'FakeDatabase'
    RETURN;
You could, instead, put something like this in each script:
IF @@SERVERNAME <> 'Not-Really-My-Server'
BEGIN
    RAISERROR('Database Name Not Set', 20, -1) WITH LOG;
END
-- Rest of my query...
I have written a DB2 query to do the following:
Create a temp table
Select from a monster query / insert into the temp table
Select from the temp table / delete from old table
Select from the temp table / insert into a different table
In MSSQL, I am allowed to run the commands one after another as one long query. Failing that, I can delimit them with 'GO' commands. When I attempt this in DB2, I get the error:
DB2CLI.DLL: ERROR [42601] [IBM][CLI Driver][DB2] SQL0199N The use of the reserved
word "GO" following "" is not valid. Expected tokens may include: "".
SQLSTATE=42601
What can I use to delimit these instructions without the temp table going out of scope?
GO is something that is used by MSSQL tools such as Management Studio; I have my own app for running updates into live and use "GO" to break the statements apart.
Does DB2 support the semi-colon (;)? This is a standard delimiter in many SQL implementations.
Have you tried using just a semi-colon instead of "GO"?
This link suggests that the semi-colon should work for DB2 - http://www.scribd.com/doc/16640/IBM-DB2
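If your client accepts semi-colons, the four steps from the question might look like this. A sketch: the table and column names are illustrative, and DB2 requires a user temporary table space for the declared temp table:
DECLARE GLOBAL TEMPORARY TABLE session.tmp_work
    (id INTEGER, val VARCHAR(100))
    ON COMMIT PRESERVE ROWS NOT LOGGED;
INSERT INTO session.tmp_work
    SELECT id, val FROM monster_query_source;       -- the monster query
DELETE FROM old_table
    WHERE id IN (SELECT id FROM session.tmp_work);  -- delete from old table
INSERT INTO different_table
    SELECT id, val FROM session.tmp_work;           -- insert into a different table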
I would try wrapping what you are looking to do in BEGIN and END to set the scope.
GO is not a SQL command; it's not even a T-SQL command. It is an instruction for the parser. I don't know DB2, but I would imagine that GO is not necessary.
From Devx.com Tips
Although GO is not a T-SQL statement, it is often used in T-SQL code and, unless you know what it is, it can be a mystery. So what is its purpose? Well, it causes all statements from the beginning of the script or the last GO statement (whichever is closer) to be compiled into one execution plan and sent to the server independent of any other batches.
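A minimal illustration of the batch boundary (the variable is hypothetical, not from the discussion):
DECLARE @n int = 1;
SELECT @n;   -- same batch: works
GO
SELECT @n;   -- new batch: fails, @n is no longer in scope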