At what point are DB2 declared global temporary tables automatically deleted...? - db2

When are DB2 declared global temporary tables 'cleaned up' and automatically deleted by the system...? This is for DB2 on AS400 v7r3m0, with DBeaver 5.2.5 as the dev client, and MS-Access 2007 for packaged apps for the end-users.
Today I started experimenting with a DGTT, thanks to this answer. So far I'm pleased with the functionality, although I did find that our more recent system version has the WITH DATA option, which is an obvious advantage.
Everything is working, but at times I receive this error:
SQL Error [42710]: [SQL0601] NEW_PKG_SHEETS_DATA in QTEMP type *FILE already exists.
The meaning of the error is obvious, but the timing is not. When I started today, I could run the query multiple times, and the error didn't occur. It seemed as if the system was cleaning up and deleting it, which is just what I was looking for. But then the error started and now it's happening with more frequency.
If I make strategic use of DROP TABLE, this resolves the error, unless the table doesn't exist, in which case I get another error. I can also disconnect/reconnect to the server from my SQL dev client, as I would expect, since that would definitely drop the session.
This IBM article about DGTTs speaks much of sessions but offers few specifics. And this article contains possibly the longest command syntax I've yet encountered in the IBM documentation. I got through it, but it didn't answer the question of what decides when a DGTT is deleted.
So I would like to ask:
What are the boundaries of a session..?
I'm thinking this is probably defined by the environment in my SQL client..?
I guess the best/safest thing to do is use DROP TABLE as needed..?
Does anyone have any tips, tricks, or pointers they could share..?
Below is the SQL that I'm developing. For brevity, I've excluded chunks of the WITH-AS and SELECT statements:
DROP TABLE SESSION.NEW_PKG_SHEETS ;
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
AS ( WITH FIRSTDAY AS (SELECT (YEAR(CURDATE() - 4 MONTHS) * 10000) +
(MONTH(CURDATE() - 4 MONTHS) * 100) AS DATEISO
FROM SYSIBM.SYSDUMMY1
-- <VARIETY OF ADDITIONAL CTE CLAUSES>
-- <SELECT STATEMENT BELOW IS A BIT LONGER>
SELECT DAACCT AS DAACCT,
DAIDAT AS DAIDAT,
DAINV# AS DAINV,
CAST(DAITEM AS NUMERIC(6)) AS DAPACK,
CAST(0 AS NUMERIC(14)) AS UPCNUM,
DAQTY AS DAQTY
FROM DAILYTRANS
AND DAIDAT >= (SELECT DATEISO+000 FROM FIRSTDAY) -- 1ST DAY FOUR MONTHS AGO
AND DAIDAT <= (SELECT DATEISO+399 FROM FIRSTDAY) -- LAST DAY OF LAST MONTH
) WITH DATA ;
DROP TABLE SESSION.NEW_PKG_SHEETS ;

The DGTT will only get cleaned up automatically by Db2 when the connection ends successfully (CONNECT RESET or the equivalent, according to whatever interface to Db2 is being used).
For both Db2 for i and Db2-LUW, consider using the WITH REPLACE clause of the DECLARE GLOBAL TEMPORARY TABLE statement (sketched below). That ensures you don't need to explicitly drop the DGTT when the session remains open but the code needs the table to be replaced at the next execution, whether or not the DGTT already exists.
Using the WITH REPLACE clause means you do not need to worry about issuing a DROP statement for the DGTT, unless you really want to issue one.
Sometimes sessions get re-used, a close/disconnect might not happen or might not complete, or (more likely) a workstation performs a retry; in those cases WITH REPLACE can be essential for easily avoiding runtime errors.
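As a minimal sketch (reusing the table name and a few columns from the question; the real query body is elided), the declaration might look like this:
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
    AS ( SELECT DAACCT, DAIDAT, DAQTY
         FROM DAILYTRANS )            -- simplified stand-in for the real query
    WITH DATA
    WITH REPLACE ;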
Note that Db2 for z/OS (at v12) does not offer the WITH REPLACE clause for a DGTT, but instead has an optional ON COMMIT DROP TABLE syntax (which is not documented for Db2 for i and Db2-LUW).
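On z/OS, that alternative would look roughly like this (the column definitions here are hypothetical):
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
    ( DAACCT CHAR(10),                -- hypothetical column definitions
      DAQTY  DECIMAL(14, 0) )
    ON COMMIT DROP TABLE ;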

Related

Postgresql insert stopped working, duplicate key value violations

About 8 months ago I used a suggestion to set up a holding table, then push to the formal table and prevent duplicate entries, per this post: Best way to prevent duplicate data on copy csv postgresql
It's been working very nicely, but today I noticed some errors and gaps in the data.
Here's my insert statement:
And here's how the index is set up:
And here's an example of the error I'm getting, although on the next cron insert, it went through.
Here's it going through fine:
I haven't noted any large changes in the data incoming. Here's what the data looks like that's coming in now:
In summary, I've noticed recent oddities with the insert statements, and success is erratic, resulting in large data gaps in the database. Thanks for any help, and I'm happy to provide more details, but I wanted to see if my information sounds like something someone else has already dealt with.
Thanks very much for any help,
S
As Gordon pointed out in his answer to your previous question, this approach only works if you have exclusive access to the table. There is a delay between the existence check and the insert itself, and if another process modifies the table during this window, you may end up with duplicates.
If you're on Postgres 9.5+, the best approach is to skip the existence check and simply use an INSERT ... ON CONFLICT DO NOTHING statement.
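A minimal sketch, using the column lists from the loop below and assuming your unique index covers the conflicting columns:
INSERT INTO ltg_data (pulsecount, intensity, time, lon, lat, ltg_geom)
SELECT pulsecount, intensity, time, lon, lat, ltg_geom
FROM holding
ON CONFLICT DO NOTHING;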
On earlier versions, the simplest solution (if you can afford to do so) would be to lock the table for the duration of the import. Otherwise, you can emulate ON CONFLICT DO NOTHING (albeit less efficiently) using a loop and an exception handler:
DO $$
DECLARE r RECORD;
BEGIN
  FOR r IN (SELECT * FROM holding) LOOP
    BEGIN
      INSERT INTO ltg_data (pulsecount, intensity, time, lon, lat, ltg_geom)
      VALUES (r.pulsecount, r.intensity, r.time, r.lon, r.lat, r.ltg_geom);
    EXCEPTION WHEN unique_violation THEN
      -- ignore the duplicate row and carry on
    END;
  END LOOP;
END
$$;
As an aside, DELETE FROM holding is time-consuming and probably unnecessary. Create your staging table as TEMP, and it will be cleaned up automatically at the end of your session. You can easily build a table which matches the structure of your import target:
CREATE TEMP TABLE holding (LIKE ltg_data);

SQL Server temp tables via MS Access

Well I've been using #temp tables in standard T-SQL coding for years and thought I understood them.
However, I've been dragged into a project based in MS Access, utilizing pass-through queries, and found something that has really got me puzzled.
Though maybe it's the inner workings of Access that have me fooled!?
Here we go: under normal usage, I understand that if I create a temp table in a sproc, its scope ends with the end of the sproc, and the table is dropped by default.
In the Access example, I found it was possible to do this in one Query:
select top(10) * into #myTemp from dbo.myTable
And then this in second separate query:
select * from #myTemp
How is this possible ?
If a temp table dies with the current session, does this mean that Access keeps a single session open, and uses that session for all Queries executed ?
Or has my fundamental understanding of scope been wrong all this time ?
Hope someone out there can help clarify what is occurring under the hood!?
Many Thanks
I found this answer to a somewhat similar question:
The temp table is stored in tempdb until the connection is dropped (or, in the case of global temp tables, when the last connection using it is dropped). You can also (and it is good practice to do so) manually drop the table when you are finished using it, with a DROP TABLE statement.
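For example, a guarded drop of the #myTemp table from the question avoids the error you'd otherwise get when the table doesn't exist:
IF OBJECT_ID('tempdb..#myTemp') IS NOT NULL
    DROP TABLE #myTemp;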
I hope this helps out.

PostgreSQL, ODBC and temp table

Could you tell me why this query works in pgAdmin, but doesn't with software using ODBC:
CREATE TEMP TABLE temp296 WITH (OIDS) ON COMMIT DROP AS
SELECT age_group AS a, male AS m, mode AS t, AVG(speed) AS avg_speed
FROM person JOIN info ON person.ppid = info.ppid
WHERE info.mode = 2
GROUP BY age_group, male, mode;
SELECT age_group, male, mode,
    CASE
        WHEN age_group = 1 AND male = 0
            THEN (info_dist_km / (SELECT avg_speed FROM temp296 WHERE a = 1 AND m = 0)) * 60
        ELSE 0
    END AS info_durn_min
FROM person JOIN info ON person.ppid = info.ppid
WHERE info.mode IN (7) AND info.info_dist_km > 2;
I got "42P01: ERROR: relation "temp296" does not exist".
I have also tried wrapping it in "BEGIN; [...] COMMIT;" but got "HY010: The cursor is open".
PostgreSQL 9.0.10, compiled by Visual C++ build 1500, 64-bit
psqlODBC 09.01.0200
Windows 7 x64
I think the reason it did not work for you is that, by default, ODBC works in autocommit mode. If you executed your statements serially, the very first statement
CREATE TEMP TABLE temp296 ON COMMIT DROP ... ;
must have autocommitted after finishing, and thus dropped your temp table.
Unfortunately, ODBC does not directly support transaction-control statements like BEGIN TRANSACTION; ... COMMIT;.
Instead, you can disable auto-commit using the SQLSetConnectAttr function like this:
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, 0);
But, after you do that, you must remember to commit any change by using SQLEndTran like this:
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
While the WITH approach has worked for you as a workaround, it is worth noting that using transactions appropriately is faster than running in auto-commit mode.
For example, if you need to insert many rows into a table (thousands or millions), using transactions can be hundreds or even thousands of times faster than autocommit.
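As an illustration (the rows are hypothetical, issued from psql; through ODBC the BEGIN/COMMIT pair corresponds to the SQLSetConnectAttr and SQLEndTran calls above):
BEGIN;
INSERT INTO info (ppid, mode, speed) VALUES (1, 2, 45.0);
INSERT INTO info (ppid, mode, speed) VALUES (2, 2, 38.5);
-- ...thousands more rows, all committed in one shot...
COMMIT;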
It is not uncommon for temporary tables to be unavailable via SQLPrepare/SQLExecute in ODBC, i.e., on prepared statements; MS SQL Server, for example, behaves this way. The solution is usually to use SQLExecDirect.

DB2 deadlock timeout Sqlstate: 40001, reason code 68 due to update statements called from servlet using SQL

I am calling update statements one after the other from a servlet to DB2. I am getting SQLSTATE 40001 with reason code 68, which I found is due to a lock timeout.
How can I resolve this issue?
Can it be resolved by setting query timeout?
If yes then how to use it with update statements in servlet or where to use it?
Reason code 68 already tells you this is due to a lock timeout (deadlock is reason code 2). It could be due to other users running queries at the same time that use the same data you are accessing, or your own multiple updates.
Begin by running db2pd -db locktest -locks show detail from a db2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the ID number you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order – meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
Addendum: if your servlet or another guilty application is using SELECT statements found to be involved in the deadlock, you can try appending WITH UR to those SELECT statements, if accuracy of the newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my select statements were returning so much data, it locked the tables long enough to interfere with other SQL statements)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
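As a sketch (the table and column names here are hypothetical), the clause goes at the end of the statement:
SELECT ORDER_ID, QTY
FROM MYSCHEMA.ORDERS
FOR READ ONLY WITH UR;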

Stored procedure hangs seemingly without explanation

We have a stored procedure that ran fine until 10 minutes ago, and now it just hangs after you call it.
Observations:
Copying the code into a query window yields the query result in 1 second
SP takes > 2.5 minutes until I cancel it
Activity Monitor shows it's not being blocked by anything, it's just doing a SELECT.
Running sp_recompile on the SP doesn't help
Dropping and recreating the SP doesn't help
Setting LOCK_TIMEOUT to 1 second does not help
What else can be going on?
UPDATE: I'm guessing it had to do with parameter sniffing. I used Adam Machanic's routine to find out which subquery was hanging. I found things wrong with the query plan thanks to the hint by Martin Smith. I learned about EXEC ... WITH RECOMPILE, OPTION(RECOMPILE) for subqueries within the SP, and OPTION (OPTIMIZE FOR (@parameter = 1)) in order to attack parameter sniffing. I still don't know what was wrong in this particular case, but I came out of this battle seasoned and much better armed. I know what to do next time. So here's the points!
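For reference, here are sketches of the two statement-level hints mentioned above (the table and column names are placeholders):
-- recompile this statement on every execution, avoiding a stale sniffed plan
SELECT SomeColumn
FROM dbo.SomeTable
WHERE SomeColumn = @parameter
OPTION (RECOMPILE);

-- or: build the plan for a representative value instead of the sniffed one
SELECT SomeColumn
FROM dbo.SomeTable
WHERE SomeColumn = @parameter
OPTION (OPTIMIZE FOR (@parameter = 1));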
I think this is related to parameter sniffing and the need to copy your input parameters to local parameters within the SP. Adding WITH RECOMPILE causes the execution plan to be recreated and eliminates much of the benefit of having an SP. We were using WITH RECOMPILE on many reports in an attempt to eliminate this hanging issue, and it occasionally resulted in hanging SPs that may have been related to other locks and/or transactions accessing the same tables simultaneously. See this link for more details:
Parameter Sniffing (or Spoofing) in SQL Server, and change your SPs to the following to fix this:
CREATE PROCEDURE [dbo].[SPNAME] @p1 int, @p2 int
AS
DECLARE @localp1 int, @localp2 int
SET @localp1 = @p1
SET @localp2 = @p2
Run Adam Machanic's excellent sp_WhoIsActive stored proc while your query is running. It'll give you the wait information - meaning, what the stored proc is waiting on - plus things like the execution plan:
http://www.brentozar.com/archive/2010/09/sql-server-dba-scripts-how-to-find-slow-sql-server-queries/
If you want the outer command (like a calling stored procedure's full text), use the @get_outer_command = 1 parameter as well.
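A typical call (sp_WhoIsActive must be installed first; both parameters belong to the procedure):
EXEC sp_WhoIsActive @get_outer_command = 1, @get_plans = 1;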
First things first: please check whether there are any uncommitted transactions (a BEGIN TRANSACTION without a matching COMMIT TRANSACTION).
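A quick way to check (standard T-SQL; it reports the oldest active transaction in the current database):
DBCC OPENTRAN;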
Thanks for all comments.
I still haven't found the answer, but I will post the progress here.
I failed to reproduce the problem before, but today I chanced upon another stored procedure with the same problem. Again the same symptoms appeared:
Hanging piece of query runs fine and quick (3 secs) in normal query window (hanging piece identified with sp_whoisactive)
No locks, according to Activity Monitor SPID is doing SELECT
Stored procedure runs for over 6 hours without response
Parameters passed to SP and variables declared in window are the same
Using the above hints, I found the SP execution plan, and it showed nothing out of the ordinary (to me, at least). Creating a new stored procedure with the same contents did not solve the problem either. So I kept stripping the SP down until I encountered a UDF call to another database. When I removed that (replaced the call with the inline contents of the function, a CASE statement), it ran fine again.
So this COULD have been the problem, but I am not very certain, as last time the problem disappeared by itself and I also changed a lot of other things while stripping this SP.
When we add new data, sometimes the execution plan becomes invalid or out of date, and then the stored procedure starts going into this limbo phase. Run the following commands on your database:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
These flush the cache memory and force the execution plan to be rebuilt the next time you run the stored proc.
I think I had the same problem. I removed my parameters from the subqueries. It ran fine after that. Not sure if this is possible in your script but that is what solved it for me.
Brent Ozar's answer might work, but it returns only the active command text by default. For example, it returns WAITFOR DELAY '00:00:05' for a procedure like:
CREATE PROCEDURE spGetChangeNotifications
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @actionType TINYINT;

    WHILE @actionType IS NULL
    BEGIN
        WAITFOR DELAY '00:00:05';

        SELECT TOP 1
            @actionType = [ActionType]
        FROM
            TableChangeNotifications;
    END;

    SELECT
        TOP 1000 [RecordID], [Component], [TableName], [ActionType], [Key1], [Key2], [Key3]
    FROM
        TableChangeNotifications;
END;
Thus, check the @get_outer_command parameter as described here.
Also, try this one instead (a slightly modified query from the MS docs):
DECLARE @sessions TABLE
(
    [SPID] INT,
    [Status] VARCHAR(MAX),
    [Login] VARCHAR(MAX),
    [HostName] VARCHAR(MAX),
    [BlkBy] VARCHAR(MAX),
    [DBName] VARCHAR(MAX),
    [Command] VARCHAR(MAX),
    [CPUTime] INT,
    [DiskIO] INT,
    [LastBatch] VARCHAR(MAX),
    [ProgramName] VARCHAR(MAX),
    [SPID_1] INT,
    [REQUESTID] INT
);

INSERT INTO @sessions
EXEC sp_who2;

SELECT
    [req].[session_id],
    [A].[Login] AS 'login',
    [A].[HostName] AS 'hostname',
    [req].[start_time],
    [cpu_time] AS 'cpu_time_ms',
    OBJECT_NAME([ST].[objectid], [ST].[dbid]) AS 'object_name',
    SUBSTRING(REPLACE(REPLACE(SUBSTRING([ST].text, ([req].[statement_start_offset] / 2) + 1,
        ((CASE [req].[statement_end_offset]
              WHEN -1 THEN DATALENGTH([ST].text)
              ELSE [req].[statement_end_offset]
          END - [req].[statement_start_offset]) / 2) + 1), CHAR(10), ' '), CHAR(13), ' '), 1, 512) AS [statement_text],
    [ST].text AS 'full_query_text'
FROM
    sys.dm_exec_requests AS req
CROSS APPLY
    sys.dm_exec_sql_text(req.sql_handle) AS ST
LEFT JOIN @sessions AS A
    ON A.SPID = req.session_id
ORDER BY
    [cpu_time] DESC;
Of course, it's also possible to modify the code from Brent Ozar's answer so that it selects the full query text, too. Nearly the same technique is used there (the link points to the code as of 18.07.2020, so it might change over time):
I had the same problem today, and I don't know what caused it, but I found a solution. I took the input parameter and saved it into a new parameter, i.e.:
DECLARE @parameter2 AS x = @parameter  -- where x is the parameter's data type
Then I changed the references to the parameter in the queries from @parameter to @parameter2.