Meaning of # in a SQL Transaction - tsql

Hi, I was looking through some stored procedures of a product installed at my company, you know, to see how other people solve problems and to learn.
Among them I found the one below, but I do not know what the # means in the line select distinct objecttype from #CascadeCollect. Any comments, please?
This is the whole stored procedure:
begin
    -- get all the unique otcs collected in the temp table.
    declare @EntityCode int
    -- check if the entity requires special casing.
    declare @DbCascadeMask int
    -- special casing for calendar delete
    exec p_DeleteCalendar
    declare otccursor cursor for
    select distinct objecttype from #CascadeCollect <------ here is the # ....
    open otccursor
    fetch otccursor into @EntityCode
    while @@fetch_status = 0
    begin
        select @DbCascadeMask = DbCascadeMask
        from EntityView as Entity
        where Entity.ObjectTypeCode = @EntityCode
        if @DbCascadeMask <> 0
        begin
            exec p_BulkDeleteGeneric @EntityCode
        end
        fetch otccursor into @EntityCode
    end
    CLOSE otccursor
    DEALLOCATE otccursor
    -- Return the count of entity instances that are still not deleted (because they
    -- require platform bizlogic/special casing).
    select count(*) as NotDeletedCount from #CascadeCollect where processed = 2
end
Thanks for any comments !!!

A single # as a prefix indicates a locally scoped temporary object. In this case it is clearly a table, but you can also have #temp procedures.
It is only visible to the batch in which it is created (and any child batches) and is dropped automatically when the batch exits.
So if that is the whole stored procedure, then it is evidently expected to be run from another procedure that actually creates the temp table.
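A minimal sketch of that pattern (the procedure names here are hypothetical; only #CascadeCollect comes from the question):
create procedure dbo.ChildProc
as
    select distinct objecttype from #CascadeCollect   -- resolved at run time
go
create procedure dbo.ParentProc
as
begin
    -- the parent creates and fills the temp table...
    create table #CascadeCollect (objecttype int, processed int)
    insert #CascadeCollect values (1, 0)
    -- ...and the child procedure can see it, because it runs in a child scope
    exec dbo.ChildProc
end
go
exec dbo.ParentProc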
You can also have global temporary objects prefixed with ##.
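A quick contrast with the single-# case (table name made up):
create table ##globaltest (id int)   -- two #'s: a global temporary table
insert ##globaltest values (1)
-- any other connection can read it while it exists:
select * from ##globaltest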

A table prefixed with # is a local temporary table; it will be dropped once it is out of scope:
create table #test(id int)
insert #test values (1)
select * from #test
If you run select * from #test from another connection, the table is not available, since it is local to the connection that created it.
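On that second connection you would get the usual missing-object error (illustrative output):
select * from #test
-- Msg 208, Level 16, State 0, Line 1
-- Invalid object name '#test'.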

Related

Is it safe to use temporary tables when an application may try to create them for independent, but simultaneous processes?

I am hoping that I can articulate this effectively, so here it goes:
I am creating a model which will be run on a platform by users, possibly simultaneously, but each model run is marked by a unique integer identifier. This model will execute a series of PostgreSQL queries and eventually write a result elsewhere.
Now, because of the required parallelization of model runs, I have to make sure that the processes do not collide, despite running in the same database. I am at a point where I have to store a list of records, sorted by a score variable, and then operate on them. This is the beginning of the query:
DO
$$
DECLARE
    row RECORD;
BEGIN
    DROP TABLE IF EXISTS ranked_clusters;
    CREATE TEMP TABLE ranked_clusters AS (
        SELECT
            pl.cluster_id AS c_id,
            SUM(pl.total_area) AS cluster_score
        FROM
            emob.parking_lots AS pl
        WHERE
            pl.cluster_id IS NOT NULL
            AND run_id = 2005149
        GROUP BY
            pl.cluster_id
        ORDER BY
            cluster_score DESC
    );
    FOR row IN SELECT c_id FROM ranked_clusters LOOP
        RAISE NOTICE 'Cluster %', row.c_id;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
So I create a temporary table called ranked_clusters and then iterate through it, at the moment just logging the identifiers of each record.
I have been careful to only build this list from records which have a run_id value equal to a certain number, so data from the same source, but with a different number will be ignored.
What I am worried about however is that a simultaneous process will also create its own ranked_clusters temporary table, which will collide with the first one, invalidating the results.
So my question is essentially this: Are temporary tables only visible to the session which creates them (or to the cursor object from say, Python)? And is it therefore safe to use a temporary table in this way?
The main reason I ask is because I see that these so-called "temporary" tables seem to persist after I execute the query in PgAdmin III, and the query fails on the next execution because the table already exists. This troubles me because it seems as though the tables are actually globally accessible during their lifetime and would therefore introduce the possibility of a collision when a simultaneous run occurs.
Thanks @a_horse_with_no_name for the explanation, but I am not yet convinced that it is safe, because I have been able to execute the following code:
import psycopg2 as pg2

conn = pg2.connect(dbname=CONFIG["GEODB_NAME"],
                   user=CONFIG["GEODB_USER"],
                   password=CONFIG["GEODB_PASS"],
                   host=CONFIG["GEODB_HOST"],
                   port=CONFIG["GEODB_PORT"])
conn.autocommit = True
cur = conn.cursor()

conn2 = pg2.connect(dbname=CONFIG["GEODB_NAME"],
                    user=CONFIG["GEODB_USER"],
                    password=CONFIG["GEODB_PASS"],
                    host=CONFIG["GEODB_HOST"],
                    port=CONFIG["GEODB_PORT"])
conn2.autocommit = True
cur2 = conn.cursor()

cur.execute("CREATE TEMPORARY TABLE temptable (tempcol INTEGER); INSERT INTO temptable VALUES (0);")
cur2.execute("SELECT tempcol FROM temptable;")
print(cur2.fetchall())
And I receive the value in temptable despite it being created as a temporary table on a completely different connection than the one which queries it afterwards. Am I missing something here? It seems like the temporary table is indeed accessible between connections.
The above had a typo: both cursors were actually being spawned from conn, rather than one from conn and the other from conn2. Individual connections in psycopg2 are not able to access each other's temporary tables, but cursors spawned from the same connection are.
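With the typo fixed, the cross-connection SELECT fails as expected; the same behavior is easy to reproduce in two separate sessions with plain SQL (illustrative output):
-- session 1:
CREATE TEMPORARY TABLE temptable (tempcol INTEGER);
INSERT INTO temptable VALUES (0);

-- session 2 (a different connection):
SELECT tempcol FROM temptable;
-- ERROR:  relation "temptable" does not exist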
Temporary tables are only visible to the session (= connection) that created them. Even if two sessions create a table with the same name, they won't interfere with each other.
Temporary tables are removed automatically when the session is disconnected.
If you want to automatically remove them when your transaction ends, use the ON COMMIT DROP option when creating the table.
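Applied to the table from the question, that looks roughly like this (a sketch):
CREATE TEMP TABLE ranked_clusters
ON COMMIT DROP
AS
SELECT pl.cluster_id AS c_id,
       SUM(pl.total_area) AS cluster_score
FROM emob.parking_lots AS pl
WHERE pl.cluster_id IS NOT NULL
  AND run_id = 2005149
GROUP BY pl.cluster_id;
-- the table disappears as soon as the enclosing transaction ends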
So the answer is: yes, this is safe.
Unrelated, but: you can't store rows "in a sorted way". Rows in a table have no implicit sort order. The only way you can get a guaranteed sort order is to use an ORDER BY when selecting the rows. The ORDER BY that is part of your CREATE TABLE AS statement is pretty much useless.
If you have to rely on the sort order of the rows, the only safe way to do that is in the SELECT statement:
FOR row IN SELECT c_id FROM ranked_clusters ORDER BY cluster_score
LOOP
    RAISE NOTICE 'Cluster %', row.c_id;
END LOOP;

DB2 Copy data from staging table to main table with no table locking

I have 2 tables (1 staging table and 1 main operational table).
Both tables have the same structure.
For my solution:
I am using DB2Copy in my program to insert 10,000 records into the staging table (4 seconds).
From the staging table, the data is moved into the main table using a stored procedure (10 seconds).
However, the main table gets locked while the stored procedure is running.
I suspect this is because of the BEGIN and END, which cause the stored procedure to act like a single transaction.
I do not want the table to be locked while the stored procedure runs (any suggestions?). Preferably, the stored procedure would insert record by record into the main table without transactional behavior.
Below is my code:
CREATE PROCEDURE SP_NAME ( )
    LANGUAGE SQL
    NOT DETERMINISTIC
    CALLED ON NULL INPUT
    EXTERNAL ACTION
    OLD SAVEPOINT LEVEL
    MODIFIES SQL DATA
    INHERIT SPECIAL REGISTERS
BEGIN
    --DECLARE TEMP VARIABLES
    BEGIN
        DECLARE MYCURSOR CURSOR WITH RETURN TO CALLER FOR
        --SELECT STAGING TABLE
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET AT_END = 1;
        OPEN MYCURSOR;
        -- FETCH MYCURSOR INTO TEMP VARIABLES
        WHILE AT_END = 0 DO
            -- INSERT MAIN TABLE
            -- FETCH MYCURSOR INTO TEMP VARIABLES
        END WHILE;
        CLOSE MYCURSOR;
    END;
END;
My Environment
I have a program "A" which is trying to insert 10k records into the main table (a lot of indexes and a high volume of data), which takes 10+ minutes.
About the main operational table:
High number of reads but minimal updates and inserts at the front end.
At the back end, another program frequently inserts records into this table.
Only 1 instance of the back-end program is allowed to run at a time.
When you create the procedure, make sure your commitment-control setting is *NONE (a.k.a. autocommit). This should not lock your whole table.
Adding an example:
CREATE PROCEDURE userS.SP_TEST (
    IN col_DATA VARCHAR(10) )
    LANGUAGE SQL
    SPECIFIC userS.SP_TEST
    NOT DETERMINISTIC
    MODIFIES SQL DATA
    SET OPTION COMMIT = *NONE
BEGIN
    INSERT INTO userS.TABLE1 VALUES(col_DATA);
END
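Calling it is then a plain CALL statement (sketch; the value is made up):
CALL userS.SP_TEST('ABC');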

Temp table behavior in SQL Server 2008 R2

I am executing the script below.
declare @id int = 0
while (@id < 10)
begin
    declare @tbl as table (id int)
    insert into @tbl values (@id)
    set @id = @id + 1
    select * from @tbl
end
I am getting a growing result: each pass through the loop returns one more row than the previous one.
But this script should give only one row each time, because the table variable is declared every time in the while loop and I am inserting only one value into it.
I don't understand this behavior of temp table, please suggest.
According to Transact-SQL Variables:
The scope of a variable is the range of Transact-SQL statements that can reference the variable. The scope of a variable lasts from the point it is declared until the end of the batch or stored procedure in which it is declared.
Each DECLARE is only "read" once:
SQL Server parses the code and reads all the DECLARE statements at compile time.
It then reserves memory for these variables.
Code can then use them from the location of their respective DECLARE until the end of the script.
If we look at your script, @tbl has already been reserved in memory when line 1 is executed, but the variable can only be used once the script reaches line 5. It is not reserved again on each iteration of the loop, so the rows inserted in earlier iterations are still there.
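One minimal way to get a single row per iteration, then, is to clear the table variable at the top of the loop (a sketch, not the only option):
declare @id int = 0
while (@id < 10)
begin
    declare @tbl as table (id int)
    delete from @tbl                 -- remove rows left over from the previous iteration
    insert into @tbl values (@id)
    set @id = @id + 1
    select * from @tbl               -- now always exactly one row
end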

sybase cursors in a trigger

I am trying to use a cursor in a trigger on a Sybase ASE 15.0.3 system running on Solaris. The purpose of this is that I want to know which column of a table is being updated; this information I then save in an admin table for further lookups.
create trigger test_trigger on my_table for update as
set nocount on
/* declare cursor */
declare @colname varchar(64)
declare column_name cursor for
    select syscolumns.name
    from syscolumns
    join sysobjects on (sysobjects.id = syscolumns.id)
    where sysobjects.name = 'my_table'
/* open the cursor */
open column_name
/* fetch the first row */
fetch column_name into @colname
/* now loop, processing all the rows
** @@sqlstatus = 0 means successful fetch
** @@sqlstatus = 1 means error on previous fetch
** @@sqlstatus = 2 means end of result set reached
*/
while (@@sqlstatus != 2)
begin
    /* check for errors */
    if (@@sqlstatus = 1)
    begin
        print "Error in column_names cursor"
        return
    end
    /* now do the insert if the column was updated */
    if update(@colname)
    begin
        insert into my_save_table (login,tablename,field,action,pstamp)
        select suser_name(),'my_table',@colname,'U',getdate() from inserted
    end
    /* fetch the next row */
    fetch column_name into @colname
end
/* close the cursor and return */
close column_name
go
Unfortunately when trying to run this in isql I get the following error:
Msg 102, Level 15, State 1:
Server 'my_sybase_server', Procedure 'test_trigger', Line 34:
Incorrect syntax near '@colname'.
I did some investigation and found out that line 34 is the following statement:
if update(@colname)
I then tried to check on just one column and replaced it with
if update(some_column_name)
That actually worked fine, and I don't have any other idea how to fix it. It looks like the update() function does not allow a variable as its argument. I did not find any additional information in the Sybase books or anywhere else via Google etc. Does anybody have a solution for this? Is it maybe a bug? Are there workarounds for the cursor?
Thanks for any advice
The problem is that update(@colname) is treated as something like update('colname') when it needs to be update(colname). In order to achieve that, you need to use dynamic SQL.
I've already seen the documentation, and it's possible to use:
Dynamically executing Transact-SQL
When used with the string or char_variable options, execute concatenates the supplied strings and variables to execute the resulting Transact-SQL command. This form of the execute command may be used in SQL batches, procedures, and triggers.
Check this article for an example on how to use dynamic sql!
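As a generic illustration of that string form (a sketch of the execute mechanics only; it does not by itself solve the update() problem, since the dynamic batch would also need access to the trigger's context):
declare @cmd varchar(256), @tab varchar(30)
select @tab = 'my_save_table'
select @cmd = 'select count(*) from ' + @tab
execute (@cmd)   -- concatenated string is run as its own Transact-SQL batch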
If it is not a problem to recreate the trigger every time the table is altered (columns added/dropped), you may just generate the body for your trigger with a query such as:
select
    'if update(' + c.name + ')
        set @colname = ''' + c.name + '''
'
from syscolumns c where id = object_id('my_table')
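For a my_table with, say, columns name and age (illustrative), that query would emit one snippet per column, along these lines:
if update(name)
    set @colname = 'name'
if update(age)
    set @colname = 'age'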

After Insert Trigger....Looping

What would the processing-load concern be if I had an "After Insert" trigger created on a table, and in that trigger I performed a while loop to iterate through "potentially" multiple rows?
The end result is that 99.999% of the time I will have only 1 row, but as the future is unpredictable I also want to be able to handle multiple rows being inserted.
Trigger Model:
1) Insert information into the table
2) Create views specific to the client, via stored procedures (if possible)
What Say You? :)
I haven't fully developed it, but this is the design I am looking for; it may not be structurally sound but should get the point across.
CREATE TRIGGER dbo.New_Client_Setup
ON dbo.client
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    declare @id int, @clnt nvarchar(10)
    --Fill Temp Table
    select * into #clients
    from inserted
    --Iterate through Temp Table
    While (select count(*) from #clients) <> 0
    BEGIN
        select top(1)
            @id = id
            , @clnt = short
        from #clients
        order by id desc
        Execute dbo.sp_Create_View_Client @id, @clnt
        -- Drop used ID
        delete from #clients
        where id = @id
    END
    Drop table #clients
END
GO
Again, observe the design of the trigger, not necessarily the syntactic sugar.
Design-wise, reading the comments, I think you do not necessarily need to do this in a trigger. I would say you should do it as part of your insert logic, in a transaction: do the insert, and then do the loop that you want to do (whatever that does; execute dbo.sp_Create_View_Client)...
The second thing I would mention is: what exactly is dbo.sp_Create_View_Client doing, and does it have to depend on the insert? Meaning, what happens if the insert works fine and the trigger fails? I would maybe do the whole insert and execute of the SP in one transaction, so as to preserve data integrity.
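A minimal sketch of that shape (column values are made up; it assumes client.id is an identity column):
begin transaction;

    insert into dbo.client (short)   -- plus whatever other columns the table needs
    values (N'ACME');

    declare @id int = scope_identity();   -- id of the row just inserted
    exec dbo.sp_Create_View_Client @id, N'ACME';

commit transaction;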