View parameter values on currently running procedure? - tsql

Using Transact SQL
Just curious, is there a way to view the values of the parameters (i.e., the EXEC line used to run the proc) of a proc that is currently in the process of running?
Example, I run:
EXEC HelloWorld @SQL = 1
Is there a table or log or anything I can look at WHILE the proc is still running, and see @SQL = 1?

While a procedure is running, you can execute a DBCC INPUTBUFFER command in another window. You will need to know the SPID that is executing your HelloWorld procedure.
If you are running HelloWorld within SQL Server Management Studio, you can see the SPID shown on the status bar at the very bottom of the window. My IDE shows 6 panels on the status bar. The 3rd panel shows the login name with the SPID in parentheses. Example: "YourDomain\YourLogin (59)". The 59 is the SPID you are looking for.
If you are not running the query in SQL Server Management Studio and do not have the SPID readily available, you can execute the following command:
sp_who2
This will show a result set with a row for every connection to the SQL Server instance. SPIDs 50 and below are reserved for internal processes; anything above 50 is a user connection. Based on the information you see in this result set, hopefully you will be able to determine the SPID that is executing HelloWorld.
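If the sp_who2 output is too large to scan by eye, a sketch against the dynamic management views (requires VIEW SERVER STATE permission) can narrow it down:
-- Find sessions whose currently executing batch mentions HelloWorld
SELECT r.session_id, r.status, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%HelloWorld%'
  AND r.session_id <> @@SPID; -- exclude this query itself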
Once you know the SPID, you can see the command it is currently executing by issuing the following command in a new query window.
DBCC INPUTBUFFER(59)
You will want to replace the 59 above with the actual SPID that you previously determined. Executing the command above will show you the command that is currently executing, including the parameter values.
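On SQL Server 2014 SP2 and later there is also a DMF equivalent that returns the same information as a result set you can filter or join:
-- DMF equivalent of DBCC INPUTBUFFER; 59 is the SPID, NULL means the current request
SELECT event_info
FROM sys.dm_exec_input_buffer(59, NULL);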

The best way to get the parameter values passed into procs, especially when executed remotely, is via SQL Server Profiler or Extended Events. If using SQL Server Profiler, you need to capture the following events:
(in Stored Procedures)
RPC:Completed
SP:Completed
(in TSQL)
SQL:BatchCompleted
And you will see the values in the "TextData" field (so you need to select that column for all 3 of those events).
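If you prefer Extended Events over the (deprecated) Profiler, a minimal session capturing the equivalent events might look like this; the session name CaptureProcCalls is just an example:
-- Capture completed RPC calls and ad hoc batches, including parameter values
CREATE EVENT SESSION CaptureProcCalls ON SERVER
ADD EVENT sqlserver.rpc_completed(
    ACTION (sqlserver.session_id, sqlserver.client_app_name)),
ADD EVENT sqlserver.sql_batch_completed(
    ACTION (sqlserver.session_id, sqlserver.client_app_name))
ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION CaptureProcCalls ON SERVER STATE = START;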

There is no way to get parameter values from the dynamic management views or functions. But in your case I can recommend using the following code inside your procedure:
declare @SQL int = 1
raiserror('@SQL = %i', 0, 0, @SQL) WITH NOWAIT
WITH NOWAIT is the important part: it flushes the message to the client immediately, whereas without it (as with print) the output is buffered and you won't see it until later.
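Put together, a sketch of a HelloWorld procedure instrumented this way (the signature is assumed from the question):
CREATE PROCEDURE HelloWorld @SQL int
AS
BEGIN
    -- Flushes immediately, so the value is visible while the proc is still running
    RAISERROR('@SQL = %i', 0, 0, @SQL) WITH NOWAIT;
    WAITFOR DELAY '00:00:30'; -- stand-in for the long-running work
END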

Related

How to explain analyze PostgreSQL plpgsql function in pgAdmin4?

We are porting MSSQL procs to PostgreSQL plpgsql functions in PG version 12. Each function RETURNS TABLE.
How can we explain analyze the inside of the functions to figure out where the bottlenecks are?
Inside pgAdmin4 query window we enable the verbose explain and execute the function call like this:
select * from rts.do_something(301, 7, '[{"id":3488269, "seq":2, "ts":"2020-07-27"}]'::json);
However, the explain tab at the bottom of the window comes back with an icon that just says "rts.Function Scan" and nothing else.
There HAS to be a simple way of doing this?
The PostgreSQL query planner treats functions as "black boxes". They are planned and optimized, but in a separate process.
You can peek inside by using the auto_explain module: https://www.postgresql.org/docs/current/auto-explain.html
After enabling the module, set the following parameters:
SET auto_explain.log_nested_statements = ON; -- this will log function internal statements
SET auto_explain.log_min_duration = 0; -- this will give you logs of all statements in the session
Check the documentation on the link above for more details or ask in the comments.
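For a quick one-off in the current session (superuser required for LOAD), the whole setup might look like this; log_analyze is optional but adds actual run times:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement
SET auto_explain.log_nested_statements = ON;  -- include statements inside functions
SET auto_explain.log_analyze = ON;            -- optional: actual times, not just plans
-- The per-statement plans now appear in the server log
SELECT * FROM rts.do_something(301, 7, '[{"id":3488269, "seq":2, "ts":"2020-07-27"}]'::json);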

How to not execute INSERT in read-only transaction

The Postgres server is in hot standby mode.
Asynchronous streaming binary replication is used.
A command like
INSERT INTO logfile (logdate) values (current_date)
causes the error
cannot execute INSERT in a read-only transaction.
Maybe it should be changed to
INSERT INTO logfile (logdate)
SELECT current_date
WHERE ???
What WHERE condition should be used?
It should work starting with Postgres 9.0.
If a direct WHERE clause is not possible, maybe some plpgsql function can be used in the WHERE.
Maybe the result of
show transaction_read_only
should be captured, or some function can be used.
Alternatively, the application can determine at startup whether the database is read-only. Should the show transaction_read_only result be used for this?
Running INSERT on a standby server is not possible in pure (non-procedural) SQL, because when the server is in standby mode all data-modification queries are rejected in the planning phase, before execution.
It's possible with conditionals in PL/PgSQL.
DO $code$
BEGIN
IF NOT pg_is_in_recovery() THEN
INSERT INTO logfile (logdate) VALUES (current_date);
END IF;
END;
$code$;
However, it's probably not recommended - it's usually better to test pg_is_in_recovery() once (in application code) and then act accordingly.
I'm using the pg_is_in_recovery() system function instead of the transaction_read_only GUC because they are not exactly the same thing. But if you prefer the latter, use:
SELECT current_setting('transaction_read_only')::bool
More info: DO command, conditionals in PL/PgSQL, system information functions.
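If the check has to live on the database side rather than in the application, the DO block above can also be wrapped in a reusable plpgsql function (a sketch; the function name log_if_primary is made up):
CREATE OR REPLACE FUNCTION log_if_primary() RETURNS void AS $$
BEGIN
    -- Silently skip the insert on a standby instead of raising an error
    IF NOT pg_is_in_recovery() THEN
        INSERT INTO logfile (logdate) VALUES (current_date);
    END IF;
END;
$$ LANGUAGE plpgsql;
SELECT log_if_primary();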

INSERT OPENQUERY timeout

I'm trying to execute an insert query against a linked server in SQL Server.
For that I'm using the INSERT INTO OPENQUERY statement.
The linked server is an Apache HIVE using Cloudera ODBC Provider.
The insert operation takes around 1 minute in my setup when performed from HIVE client.
However, SQL INSERT always times out after 30 seconds.
I set the Query Timeout parameter to 0, but it does not seem to affect the INSERT statement; it works fine for SELECT statements that take longer.
Is this a known limitation?
Is there a way to change the timeout for the insert statement when using OPENQUERY?
EDIT
I would like to clarify the setup I'm working with.
----------                     ----------------------    ---------------
| MS SQL | => Linked Server => | Hive ODBC Provider | => | Hive Server |
----------                     ----------------------    ---------------
In Hive, I have a table called calc_result where I would like to periodically store calculation results from the SQL server. For example, I try to insert using a query like this.
insert openquery(HIVE, 'select timestamp timestamp , tag tag, value value from calc_result')
values('2019-04-22 11:50:41', 'test',2.0)
The insert operation is captured correctly by HIVE server and a MapReduce job starts. However, the job will be killed after 30 seconds due to timeout.
The SQL server will show the below error message.
OLE DB provider "MSDASQL" for linked server "HIVE" returned message "[Cloudera][Hardy] (72) Query execution timeout expired.".
However, SELECT OPENQUERY works fine and would follow Query Timeout settings of the linked server (Which is set to 0 in this case).
Edit: that is a completely different use case from what I had imagined. In that case there should not be any difference between select and insert.
Since you have already configured the linked server's query timeout, there is a second place to check: the linked server properties also accept a Command Timeout setting in the provider string.
Another option that comes to mind is the instance-wide timeout. It defaults to 600 seconds (10 minutes), which is way above your 30 seconds, but you can still try it to see if there is any impact.
For infinite wait:
sp_configure 'show advanced options',1
go
reconfigure
go
sp_configure 'remote query timeout (s)',0
go
reconfigure
go
I would try using SELECT INTO a temporary table and then materializing it with a regular INSERT INTO:
SELECT c1, c2
INTO #temp_tab
FROM OPENQUERY(mylinkedserver, 'SELECT c1, c2 FROM remote_table');
INSERT INTO normal_table(col1, col2)
SELECT c1, c2
FROM #temp_tab;
EDIT:
You could try wrapping it in a transaction and removing the aliases:
BEGIN TRAN;
insert openquery(HIVE, 'select timestamp, tag, value from calc_result')
values('2019-04-22 11:50:41', 'test',2.0);
COMMIT;
If necessary set up DTC: How can I enable distributed transactions for a linked server?
While I didn't find a way to change the OPENQUERY timeout from 30 seconds, I found that using EXEC ... AT LinkedServer works fine for INSERT queries while adhering to the timeout settings.
I accidentally stumbled upon the solution in this 2009 blog post. Databases might not be my strength, but I feel SQL Server documentation can be improved. A simple page that lists possible ways to interact with a Linked Server could've saved me lots of retries.
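For reference, a sketch of the EXEC ... AT form for the insert from the question (the "rpc out" option must be enabled on the linked server, and the exact statement accepted depends on the Hive version):
-- The command string is sent to Hive as-is; ? are parameter placeholders
EXEC ('insert into calc_result values (?, ?, ?)',
      '2019-04-22 11:50:41', 'test', 2.0) AT HIVE;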

Multiple prepared statements disrupt a transaction using DBD::Sybase

In my Perl script, I use DBD::Sybase (via DBI module) to connect to a SQL Server 2008. The base program as below runs without problem:
use DBI;
# assign values to $host, $usr, $pwd
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
$update->execute(100, 'apple');
$dbh->do("END TRAN tr1");
however, if I insert one more prepare statement right before the existing prepare statement, to have the program look like:
...
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
...
and the rest is all the same, then when I run it, I got:
DBD::Sybase::db do failed: Server message number=3902 severity=16 state=1 line=1 server=xxx text=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
So looks like the additional prepare statement somehow disrupted the entire transaction flow. I had been running the same code via the DBD::ODBC driver with no problem against a SQL SERVER 2005. (But my firm upgraded to 2008 and I had to use the DBD::Sybase to get around some other problems.)
Any help / suggestion on how to resolve this issue would be much appreciated. In particular, using a different db handle for the other prepare is not a desired solution since that will beat the purpose of having them in a single transaction.
UPDATE: Turns out if I execute the additional insert at least once, the program runs fine again. So it looks like every prepared statement needs to be run under Sybase. But that isn't a requirement with ODBC and isn't a reasonable requirement in general. Any way to get around it?
You are learning perl AND Sybase basics and making several incorrect conclusions.
Forget about what it does under ODBC for a moment. ODBC most probably has AUTOCOMMIT turned on, and thus you have no transaction control whatsoever. (Why anyone would use ODBC when the DBD:: supports DB-Lib and CT-Lib is beyond me, but that's a separate story.)
Re: "So looks like every prepared statement needs to be run under Sybase."
Rawheiser is correct. What exactly do you expect to achieve by preparing a batch but performing a do instead? Where else do you expect to execute the batch prepared under Sybase, other than under Sybase?
Do vs. prepare/execute are quite different. prepare/execute for Sybase works just fine in millions of programs; you just have to learn what it does, not what you think it should do. prepare lets you load a batch, a block of commands terminated by GO in the normal Sybase sense. execute executes the prepared batch (supplies the GO and sends the batch to the server) and captures whatever is returned (according to whatever arrays/variables you have set).
do is immediate: a single command with no prepare, a prepare+execute combined.
Performing only single-statement do's, and only dynamic SQL, simply because that's all that you could get to work, is very limiting and quite unnecessary.
You currently have:
Prepare:
UPDATE
Execute (100)
ExecuteImmediate(Do):
COMMIT TRAN
So of course, there is no BEGIN TRAN. (The first do has already executed and finished; its BEGIN TRAN is gone.)
I think what you want (intended originally) is this. Forget the 'do':
Prepare:
BEGIN TRAN
UPDATE
COMMIT TRAN
Execute (100)
Then change it to:
BEGIN TRAN
INSERT
UPDATE
COMMIT TRAN
Execute (100)
Your $update and $insert will confuse you (you're executing a multi-statement batch, right? Not an isolated single command in the middle of a prepared batch). If you get rid of them and think in terms of $execute [whatever you have prepared in the batch], it might help you to understand the problem better.
Do not form conclusions until you have all the above working as intended.
And read up on BEGIN/COMMIT TRAN.
Last, what exactly is an "END TRAN"? I do not think the code block you have posted is real.
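In Perl terms, the single prepared batch described above might look like the sketch below (no placeholders, since placeholder support inside multi-statement batches varies under DBD::Sybase):
# Transaction control and DML prepared and executed as one Sybase batch
my $batch = $dbh->prepare(q{
    BEGIN TRAN tr1
    INSERT INTO mytable (name, qty) VALUES ('apple', 0)
    UPDATE mytable SET qty = 100 WHERE name = 'apple'
    COMMIT TRAN tr1
});
$batch->execute;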
Don't dynamically create SQL; it is dangerous (SQL injection).
You should be able to prepare multiple inserts/updates and your link to the DBI documentation does not say you cannot, it says some drivers may not be able to tell you much about a statement which is ONLY prepared.
I'd post a failing example with error to the dbi-users list for comment as the DBD::Sybase maintainer hangs out there (see dbi.perl.org).
Turns out that DBI's prepare method is not quite portable across various database drivers as noted here. For the Sybase driver, it is most likely that prepare is not working as intended. One way to tell is that after running prepare, the variable $insert->{NUM_OF_FIELDS} is undefined.
To get around the problem, do one of the following:
1) do not prepare anything. Just dynamically construct the statement as a text string and run $dbh->do($stmt), or
2) run finish on all outstanding statement handles (under that database handle) before running COMMIT TRAN. I much prefer this way.
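A sketch of option 2 applied to the program from the question:
use DBI;

# assign values to $host, $usr, $pwd as in the original program
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? WHERE name = ?");
$update->execute(100, 'apple');
# Release every outstanding handle before committing; the unexecuted
# $insert otherwise disrupts the transaction under DBD::Sybase
$insert->finish;
$update->finish;
$dbh->do("COMMIT TRAN tr1");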

ADO: Execute multiple TSQL using connection and command object

For a particular installation of my application, I need to create the database and the schema on the SQL server from the installer itself. I have a custom installer through which I have been able to detect and install the pre-requisites and the software. The user is prompted to give the IP of the database server and the username and password. Behind the scenes, I create a connection and a command object. I keep the queries in different files. I use a reader, read the content of the file, and set it as the CommandText of the command object. The typical content of the file is like the following:
create database mydatabase
Go
Use mydatabase
Go
EXEC sp_MSforeachtable @command1 = "DROP TABLE ?"
Now the issue is that the first statement gets executed, but it gives an error after that. The error that is shown is: "syntax error near 'GO'". I tried removing the GO statements and also tried ending the SQL statements with semicolons. The error in this case is "Database 'mydatabase' does not exist. Make sure that the name is entered correctly.".
However if I keep a single statement in the file, it works fine.
Can somebody help me?
As you can see at http://technet.microsoft.com/en-us/library/aa258908%28SQL.80%29.aspx
Remarks
GO is not a Transact-SQL statement; it is a command recognized by the osql and isql utilities and SQL Query Analyzer.
So this is the cause of your problems when you run it using the SqlCommand from .Net.
In my opinion you have two options:
1) Execute the instructions one by one. Maybe use a separator in your files, then split the SQL statements and execute them sequentially using a for/foreach (see the sketch after this list).
2) Use the Server class from SQL Server Management Objects (SMO), which can execute scripts containing "GO" separators.
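A sketch of option 1 in C#, assuming script holds the file contents and connection is an open SqlConnection; GO is only valid on its own line, so a multiline regex split is usually enough:
using System.Data.SqlClient;
using System.Text.RegularExpressions;

// Split the script on separator lines that contain only "GO",
// then run each batch through the same connection
var batches = Regex.Split(script, @"^\s*GO\s*$",
                          RegexOptions.Multiline | RegexOptions.IgnoreCase);
foreach (var batch in batches)
{
    if (string.IsNullOrWhiteSpace(batch)) continue;
    using (var cmd = new SqlCommand(batch, connection))
    {
        cmd.ExecuteNonQuery();
    }
}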
You can execute more than one sql command statement by simply adding a ";" at the end of each command instead of a "GO" statement.
Example:
cmd.CommandText = @" Update TableA Set ColumnA = 'Test' Where ID = 1;
Update TableB Set ColumnA = 'Second line' Where ID = 2;
";