Using PREPARE with INTO in PostgreSQL

I'm studying the PREPARE and EXECUTE commands to optimize my functions in PostgreSQL, but I have run into a problem.
I have the following PREPARE command:
PREPARE my_query(int) AS SELECT id FROM table1 WHERE id = $1;
Now I need to store this result (the id) in a variable, like this:
EXECUTE my_query(10) INTO var_1;
But I got a syntax error. What is the correct way to do this?

Your question doesn't really make any sense to me.
If you are trying to execute the prepared statement from within PL/pgSQL then you're out of luck. The EXECUTE command is reserved for a different purpose within PL/pgSQL. Not that it would be of much use anyway: PL/pgSQL plans are prepared and cached by default :)
If you are trying to do this outside of PL/pgSQL, then INTO would mean you are trying to write the result into a new table, something you are unlikely to need that often.
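If the goal is simply to capture the id inside a PL/pgSQL function, a plain SELECT ... INTO already does that, and PL/pgSQL's own EXECUTE ... INTO covers the dynamic case. A rough sketch (the function and variable names are made up to mirror the question):
CREATE OR REPLACE FUNCTION get_one_id(p_id int) RETURNS int AS $$
DECLARE
    var_1 int;
BEGIN
    -- static SQL: the plan is prepared and cached automatically
    SELECT id INTO var_1 FROM table1 WHERE id = p_id;
    -- a dynamically built statement would use PL/pgSQL's EXECUTE ... INTO instead:
    -- EXECUTE 'SELECT id FROM table1 WHERE id = $1' INTO var_1 USING p_id;
    RETURN var_1;
END;
$$ LANGUAGE plpgsql;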

Related

How is this code prone to SQL injection?

I was reading the PostgreSQL documentation here, and came across the following code snippet:
EXECUTE 'SELECT count(*) FROM mytable WHERE inserted_by = $1 AND inserted <= $2'
INTO c
USING checked_user, checked_date;
The documentation states that "This method is often preferable to inserting data values into the command string as text: it avoids run-time overhead of converting the values to text and back, and it is much less prone to SQL-injection attacks since there is no need for quoting or escaping".
Can you show me how this code is prone to SQL injection at all?
Edit: in all other RDBMSs I have worked with, this would completely prevent SQL injection. What is implemented differently in PostgreSQL?
The quick answer is that it isn't, by itself, prone to SQL injection; as I understand your question, you are asking why we don't just say so. Since you are looking for scenarios where this might lead to SQL injection, consider that mytable might be a view, and so could have additional functions behind it. Those functions might be vulnerable to SQL injection.
So you can't look at a query and conclude that it is definitely not susceptible to SQL injection. The best you can do is indicate that, at the level provided, this specific layer of your application does not raise SQL injection concerns.
Here is an example of a case where SQL injection might very well happen.
CREATE OR REPLACE FUNCTION ban_user() returns trigger
language plpgsql security definer as
$$
begin
insert into banned_users (username) values (new.username);
execute 'alter user ' || new.username || ' WITH VALID UNTIL ''YESTERDAY''';
return new;
end;
$$;
Note that utility commands such as ALTER USER cannot be parameterized the way you indicate, and we forgot quote_ident() around new.username, thus making the field vulnerable.
CREATE OR REPLACE VIEW banned_users_today AS
SELECT username FROM banned_users where banned_date = 'today';
CREATE TRIGGER i_banned_users_today INSTEAD OF INSERT ON banned_users_today
FOR EACH ROW EXECUTE PROCEDURE ban_user();
EXECUTE 'insert into banned_users_today (username) values ($1)'
USING 'postgres with password ''boo''; drop function ban_user() cascade; --';
So no, it doesn't completely solve the problem even if it is used everywhere it can be used. And proper use of quote_literal() and quote_ident() doesn't always solve the problem either.
The thing is that the problem can always be at a lower level than the query you are executing.
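For completeness, the hole in ban_user() could be closed by quoting the identifier, since utility commands like ALTER USER cannot take $1 parameters. This is only a sketch, and it hardens this one spot, not the layers below it:
execute 'alter user ' || quote_ident(new.username)
        || ' with valid until ''yesterday''';
-- or equivalently with format(), %I quoting the identifier and %L the literal:
execute format('alter user %I with valid until %L', new.username, 'yesterday');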
The bound parameters prevent malicious input from manipulating the statement into doing anything other than what it's intended to do.
This guarantees no possibility for SQL-injection attacks short of a Postgres bug. (See H2C03's link for examples of what could go wrong.)
I imagine the "much less prone to SQL-injection attacks" wording amounts to CYA verbiage in case such a bug were ever to arise.
SQL injection is usually associated with large data dumps on pastebin.com and the like, and that scenario won't work here even if the example used concatenation instead of variables, because the COUNT(*) would aggregate all the data you'd be trying to steal.
But I can imagine scenarios where the count of arbitrary records would be sufficiently valuable information - e.g. the number of a competitor's clients, the number of products sold, etc. And recalling some of the really tricky blind SQL injection methods, it might be possible to build a query that, using COUNT alone, would allow an attacker to iteratively recover actual text from the database.
It would also be much easier to exploit on a database sufficiently old and misconfigured to allow the ; separator, in which case the attacker might just append a completely separate query.

What is the purpose of pgScript in PostgreSQL?

I have failed to understand the need for pgScript, which can be executed using the pgAdmin tool. When should it be used? What can it do that PL/pgSQL cannot? What is its equivalent in Microsoft SQL Server?
pgScript is a client-side scripting language, while PL/pgSQL runs on the server. This means they have entirely different use cases. For example, pgScript can manage transaction status while PL/pgSQL cannot, but PL/pgSQL can be used to extend the SQL language itself while pgScript cannot.
Additionally, it means the two handle many other things quite differently, ranging from query plans to dynamic SQL, and while pgScript requires round trips between queries, PL/pgSQL does not.
One use for pgScript is to define variables and use them later in your SQL statements.
For example, you could do something like this:
declare @mytbl, @maxid;
set @mytbl = 'sometable';
set @maxid = 2500;
set @res = select count(*) from @mytbl where id <= @maxid;
print @res;
The idea is to keep any variables you want to change at the top of your script, rather than having them buried deep inside complex SQL queries.
Of course, pgScript is a feature available only inside the pgAdmin III client, as Craig Ringer mentioned in his comment.
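For comparison, the closest purely server-side equivalent is a PL/pgSQL DO block, where the dynamic table name has to go through EXECUTE. A rough sketch using the same made-up names:
DO $$
DECLARE
    mytbl text := 'sometable';
    maxid int := 2500;
    res bigint;
BEGIN
    -- the table name is an identifier, so it is spliced in with format(%I);
    -- the value is passed as a real parameter via USING
    EXECUTE format('SELECT count(*) FROM %I WHERE id <= $1', mytbl)
        INTO res
        USING maxid;
    RAISE NOTICE '%', res;
END;
$$;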
I used the DDL script generator from Toad Data Modeler.
I ran the script using normal query execution in pgAdmin III, but it kept giving me the error:
"ERROR: relation "user" already exists SQL state: 42P07".
I ran it with Execute pgScript in pgAdmin III instead, and it worked fine.

Execute Stored Process with pass in SQL query from another table?

Currently my development environment uses SQL Server Express 2008 R2 and VS2010 for my project development.
Let me explain my question with a scenario:
Development goal:
I am developing a Windows service, something like data mining or data warehousing, using .NET C#.
That means I have two or more databases involved.
My scenario is like this:
I have a database with a table called SQL_Stored, which has a column named QueryToExec.
The first idea that came to my mind was to write a stored procedure, so I tried to come up with one named Extract_Sources with two parameters passed in: ID and TableName.
My first step is to select the SQL that needs to be executed from the SQL_Stored table. I tried to get the SQL using a simple select statement such as:
Select Download_Sql As Query From SQL_Stored
Where ID = @ID AND TableName = @TableName
Is it possible to get the result this way, or is there another way to do so?
My second step is to execute the SQL that I get from the SQL_Stored table. Is it possible to execute the query I selected in a later step of this particular stored procedure?
Do I need to create a variable to store the SQL?
Thank you, I appreciate all your help. Please don't hesitate to point out my errors or mistakes, because I can learn from them. Thank you.
PS_1: I am sorry for my poor English.
PS_2: I am new to stored procedures.
LiangCk
Try this:
DECLARE @download_sql VARCHAR(MAX)
Select
    @download_sql = Download_Sql
From
    SQL_Stored
Where
    AreaID = @AreaID
    AND TableName = @TableName
EXEC (@download_sql)
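Wrapped up as the stored procedure the question describes, it might look roughly like this (a sketch; the parameter and column names are taken from the question and answer, so adjust them to your schema):
CREATE PROCEDURE Extract_Sources
    @AreaID INT,
    @TableName VARCHAR(128)
AS
BEGIN
    DECLARE @download_sql VARCHAR(MAX)

    SELECT @download_sql = Download_Sql
    FROM SQL_Stored
    WHERE AreaID = @AreaID
      AND TableName = @TableName

    -- this executes whatever text is stored in the table, so make sure
    -- only trusted rows can ever be written to SQL_Stored
    EXEC (@download_sql)
END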

Multiple prepared statements disrupt a transaction using DBD::Sybase

In my Perl script, I use DBD::Sybase (via the DBI module) to connect to a SQL Server 2008 instance. The base program below runs without problem:
use DBI;
# assign values to $host, $usr, $pwd
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
$update->execute(100, 'apple');
$dbh->do("END TRAN tr1");
However, if I insert one more prepare statement right before the existing one, so that the program looks like:
...
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
...
and the rest is all the same, then when I run it I get:
DBD::Sybase::db do failed: Server message number=3902 severity=16 state=1 line=1 server=xxx text=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
So it looks like the additional prepare statement somehow disrupted the entire transaction flow. I had been running the same code via the DBD::ODBC driver with no problem against SQL Server 2005. (But my firm upgraded to 2008 and I had to use DBD::Sybase to get around some other problems.)
Any help / suggestion on how to resolve this issue would be much appreciated. In particular, using a different db handle for the other prepare is not a desired solution since that will beat the purpose of having them in a single transaction.
UPDATE: It turns out that if I execute the additional insert at least once, the program runs fine again. So it looks like every prepared statement needs to be executed under Sybase. But that isn't a requirement with ODBC and isn't a reasonable requirement in general. Any way to get around it?
You are learning Perl AND Sybase basics and drawing several incorrect conclusions.
Forget about what it does under ODBC for a moment. ODBC most probably has AUTOCOMMIT turned on, and thus you have no transaction control whatsoever. (Why anyone would use ODBC when DBD:: supports DB-Lib and CT-Lib is beyond me, but that's a separate story.)
Re: "So looks like every prepared statement needs to be run under Sybase."
Rawheiser is correct. What exactly do you expect to achieve by preparing a batch but performing a do instead? Where else would you execute the batch prepared under Sybase, other than under Sybase?
do and prepare/execute are quite different. prepare/execute for Sybase works just fine in millions of programs; you just have to learn what it does, not what you think it should do. prepare lets you load a batch, a block of commands terminated by GO in the normal Sybase sense. execute runs the prepared batch (supplies the GO and sends the batch to the server), and captures whatever is returned (according to whatever arrays/variables you have set).
do is immediate, a single command, with no prepare: a prepare+execute combined.
Performing only single-statement do's, and only dynamic SQL, simply because that's all that you could get to work, is very limiting and quite unnecessary.
You currently have:
Prepare:
UPDATE
Execute (100)
ExecuteImmediate(Do):
COMMIT TRAN
So of course, there is no BEGIN TRAN. (The first "do" executed, the BEGIN TRAN is gone)
I think what you want (intended originally) is this. Forget the 'do':
Prepare:
BEGIN TRAN
UPDATE
COMMIT TRAN
Execute (100)
Then change it to:
BEGIN TRAN
INSERT
UPDATE
COMMIT TRAN
Execute (100)
Your $update and $insert handles will confuse you (you're executing a multi-statement batch, right? Not an isolated single command in the middle of a prepared batch). If you get rid of them and think in terms of $execute [whatever you have prepared in the batch], it might help you to understand the problem better.
Do not form conclusions until you have all the above working as intended.
And read up on BEGIN/COMMIT TRAN.
Last, what exactly is an "END TRAN"? I do not think the code block you have posted is real.
Don't dynamically create SQL; it is dangerous (SQL injection).
You should be able to prepare multiple inserts/updates. Your link to the DBI documentation does not say you cannot; it says some drivers may not be able to tell you much about a statement which is only prepared.
I'd post a failing example with the error to the dbi-users list for comment, as the DBD::Sybase maintainer hangs out there (see dbi.perl.org).
It turns out that DBI's prepare method is not quite portable across database drivers, as noted here. With the Sybase driver, it is most likely that prepare is not working as intended. One way to tell is that after running prepare, the variable $insert->{NUM_OF_FIELDS} is undefined.
To get around the problem, do one of the following:
1) do not prepare anything; just construct the statement dynamically as a text string and run $dbh->do($stmt), or
2) run finish on all outstanding statement handles (under that database handle) before running COMMIT TRAN. I personally much prefer the second way.

DB2 Equivalent to SQL's GO?

I have written a DB2 query to do the following:
Create a temp table
Select from a monster query / insert into the temp table
Select from the temp table / delete from old table
Select from the temp table / insert into a different table
In MSSQL, I am allowed to run the commands one after another as one long query. Failing that, I can delimit them with 'GO' commands. When I attempt this in DB2, I get the error:
DB2CLI.DLL: ERROR [42601] [IBM][CLI Driver][DB2] SQL0199N The use of the reserved
word "GO" following "" is not valid. Expected tokens may include: "".
SQLSTATE=42601
What can I use to delimit these instructions without the temp table going out of scope?
GO is something that is used in MSSQL Management Studio; I have my own app for running updates into live and I use "GO" to break the statements apart.
Does DB2 support the semicolon (;)? This is a standard delimiter in many SQL implementations.
Have you tried using just a semicolon instead of "GO"?
This link suggests that the semicolon should work for DB2: http://www.scribd.com/doc/16640/IBM-DB2
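For example, the four steps from the question could go into one script with semicolon delimiters, roughly like this (a sketch with made-up table and column names; DECLARE GLOBAL TEMPORARY TABLE keeps the temp table in session scope):
DECLARE GLOBAL TEMPORARY TABLE SESSION.tmp_results
    (id INT, val VARCHAR(100))
    ON COMMIT PRESERVE ROWS NOT LOGGED;

INSERT INTO SESSION.tmp_results (id, val)
    SELECT id, val FROM some_source_table;  -- stands in for the monster query

DELETE FROM old_table
    WHERE id IN (SELECT id FROM SESSION.tmp_results);

INSERT INTO new_table (id, val)
    SELECT id, val FROM SESSION.tmp_results;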
I would try wrapping what you are looking to do in BEGIN and END to set the scope.
GO is not a SQL command; it's not even a T-SQL command. It is an instruction for the parser. I don't know DB2, but I would imagine that GO is not necessary.
From Devx.com Tips
Although GO is not a T-SQL statement, it is often used in T-SQL code and unless you know what it is it can be a mystery. So what is its purpose? Well, it causes all statements from the beginning of the script or the last GO statement (whichever is closer) to be compiled into one execution plan and sent to the server independent of any other batches.
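To illustrate, in a SQL Server script run through SSMS or sqlcmd, GO is never sent to the server at all; the client tool simply splits the script into separate batches at each GO, for example:
CREATE TABLE #tmp (id INT)
GO
INSERT INTO #tmp (id) VALUES (1)
GO
SELECT id FROM #tmp
GO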