Well I've been using #temp tables in standard T-SQL coding for years and thought I understood them.
However, I've been dragged into a project based in MS Access, utilizing pass-through queries, and found something that has really got me puzzled.
Though maybe it's the inner workings of Access that have me fooled!?
Here we go: under normal usage, I understand that if I create a temp table in a stored procedure, its scope ends when the procedure ends, and the table is dropped by default.
In the Access example, I found it was possible to do this in one Query:
select top(10) * into #myTemp from dbo.myTable
And then this in second separate query:
select * from #myTemp
How is this possible?
If a temp table dies with the current session, does this mean that Access keeps a single session open, and uses that session for all Queries executed ?
Or has my fundamental understanding of scope been wrong all this time ?
Hope someone out there can help clarify what is occurring under the hood!?
Many Thanks
I found this answer to a somewhat similar question:
A temp table is stored in tempdb until the connection is dropped (or, in the case of a global temp table, when the last connection using it is dropped). You can also (and it is good practice to do so) manually drop the table when you are finished using it, with a DROP TABLE statement.
I hope this helps out.
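For what it's worth, a quick way to test this theory from Access is to run two pass-through queries and compare the session IDs; if they match, Access is reusing a single connection, which is why #myTemp survives between queries. The snippet below is only a sketch using the #myTemp name from the question, plus a defensive drop that only fires if the table still exists in the current session:
SELECT @@SPID AS session_id;
IF OBJECT_ID('tempdb..#myTemp') IS NOT NULL
    DROP TABLE #myTemp;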
Related
When are DB2 declared global temporary tables 'cleaned up' and automatically deleted by the system...? This is for DB2 on AS400 v7r3m0, with DBeaver 5.2.5 as the dev client, and MS-Access 2007 for packaged apps for the end-users.
Today I started experimenting with a DGTT, thanks to this answer. So far I'm pleased with the functionality, although I did find that our more recent system version has the WITH DATA option, which is an obvious advantage.
Everything is working, but at times I receive this error:
SQL Error [42710]: [SQL0601] NEW_PKG_SHEETS_DATA in QTEMP type *FILE already exists.
The meaning of the error is obvious, but the timing is not. When I started today, I could run the query multiple times, and the error didn't occur. It seemed as if the system was cleaning up and deleting it, which is just what I was looking for. But then the error started and now it's happening with more frequency.
If I make strategic use of DROP TABLE, this resolves the error, unless the table doesn't exist, in which case I get another error. I can also disconnect/reconnect to the server from my SQL dev client, as I would expect, since that would definitely drop the session.
This IBM article about DGTTs speaks much of sessions, but not many specifics. And this article is possibly the longest command syntax I've yet encountered in the IBM documentation. I got through it, but it didn't answer the question of what decided when a DGTT is deleted.
So I would like to ask:
What are the boundaries of a session..?
I'm thinking this is probably defined by the environment in my SQL client..?
I guess the best/safest thing to do is use DROP TABLE as needed..?
Does any one have any tips, tricks, or pointers they could share..?
Below is the SQL that I'm developing. For brevity, I've excluded chunks of the WITH-AS and SELECT statements:
DROP TABLE SESSION.NEW_PKG_SHEETS ;
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
AS ( WITH FIRSTDAY AS (SELECT (YEAR(CURDATE() - 4 MONTHS) * 10000) +
(MONTH(CURDATE() - 4 MONTHS) * 100) AS DATEISO
FROM SYSIBM.SYSDUMMY1
-- <VARIETY OF ADDITIONAL CTE CLAUSES>
-- <SELECT STATEMENT BELOW IS A BIT LONGER>
SELECT DAACCT AS DAACCT,
DAIDAT AS DAIDAT,
DAINV# AS DAINV,
CAST(DAITEM AS NUMERIC(6)) AS DAPACK,
CAST(0 AS NUMERIC(14)) AS UPCNUM,
DAQTY AS DAQTY
FROM DAILYTRANS
AND DAIDAT >= (SELECT DATEISO+000 FROM FIRSTDAY) -- 1ST DAY FOUR MONTHS AGO
AND DAIDAT <= (SELECT DATEISO+399 FROM FIRSTDAY) -- LAST DAY OF LAST MONTH
) WITH DATA ;
DROP TABLE SESSION.NEW_PKG_SHEETS ;
The DGTT will only get cleaned up automatically by Db2 when the connection ends successfully (connect reset or the equivalent, depending on whatever interface to Db2 is being used).
For both Db2 for i and Db2-LUW, consider using the WITH REPLACE clause for the DECLARE GLOBAL TEMPORARY TABLE statement. That will ensure you don't need to explicitly drop the DGTT if the session remains open but the code needs the table to be replaced at next execution whether or not the DGTT already exists.
Using that WITH REPLACE clause means you do not need to worry about issuing a DROP statement for the DGTT, unless you really want to issue a drop.
Sometimes sessions may get re-used, or a close/disconnect might not happen or might not complete, or more likely a workstation performs a retry, and in those cases the WITH REPLACE can be essential for easily avoiding runtime errors.
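As a sketch, the declaration from the question would become something like the following (the body of the select is elided here for brevity; use the full select from the question):
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
    AS ( SELECT DAACCT, DAIDAT, DAQTY
         FROM DAILYTRANS )        -- full select as in the question
    WITH DATA
    WITH REPLACE ;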
Note that Db2 for z/OS (at v12) does not offer the WITH REPLACE clause for DGTTs, but instead has an optional ON COMMIT DROP TABLE syntax (which is not documented for Db2 for i and Db2-LUW).
I am trying to do what should be a pretty straightforward insert statement in a postgres database. It is not working, but it's also not erroring out, so I don't know how to troubleshoot.
This is the statement:
INSERT INTO my_table (col1, col2) select col1,col2 FROM my_table_temp;
There are around 200m entries in the temp table, and 50m entries in my_table. The temp table has no index or constraints, but both columns in my_table have btree indexes, and col1 has a foreign key constraint.
I have had the insert above running for about 20 days. The last time I tried a similar insert of around 50m rows, it took 3 days, so I expected this one to take a while, but not a month. Moreover, my_table isn't getting any longer. Queried 1 day apart, the following produces exactly the same number.
select count(*) from my_table;
So it isn't inserting at all. But it also didn't error out. And looking at system resource usage, it doesn't seem to be doing much of anything; the process isn't drawing resources.
Looking at other running queries, nothing else that I have permissions to view is touching either table, and I'm the only one who uses them.
I'm not sure how to troubleshoot since there's no error. It's just not doing anything. Any thoughts about things that might be going wrong, or things to check, would be very helpful.
For the sake of anyone stumbling onto this question in the future:
After a lengthy discussion (see linked discussion from the comments above), the issue turned out to be related to psycopg2 buffering the query in memory.
Another useful note: inserting into a table with indices is slow, so it can help to remove them before bulk loads, and then add them again after.
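A rough sketch of that pattern, assuming hypothetical index names (check pg_indexes for the real ones):
-- drop the indexes on the target table before the bulk load
DROP INDEX IF EXISTS my_table_col1_idx;
DROP INDEX IF EXISTS my_table_col2_idx;
-- bulk load
INSERT INTO my_table (col1, col2)
SELECT col1, col2 FROM my_table_temp;
-- re-create the indexes afterwards
CREATE INDEX my_table_col1_idx ON my_table (col1);
CREATE INDEX my_table_col2_idx ON my_table (col2);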
In my case it was a date format issue. I commented out the date attribute before inserting into the DB and it worked.
In my case it was a TRIGGER on the same table I was updating, and it failed without errors.
I deactivated the trigger and the update worked flawlessly.
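In Postgres this can be done per table; for example (the trigger name here is a placeholder):
-- temporarily disable a hypothetical trigger, run the statement, then re-enable it
ALTER TABLE my_table DISABLE TRIGGER my_table_trigger;
INSERT INTO my_table (col1, col2) SELECT col1, col2 FROM my_table_temp;
ALTER TABLE my_table ENABLE TRIGGER my_table_trigger;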
I'm trying to connect from Microsoft SQL Server to an AS/400 so I can pull data from the AS/400 and then flag the data as having been pulled.
I've successfully created an OLE DB "IBMDASQL" connection, and am able to pull some data, but I'm running into an issue when I try to pull data from a very large table.
This runs fine, and returns a count of 170 million:
select count(*)
from transactions
This query executed for 15 hours before I gave up on it. (It should return zero, since I haven't flagged anything as 'in process' yet.)
select count(*)
from transactions
where processed = 'In process'
I'm a Microsoft guy, but my AS/400 guy says that there is an index on the 'processed' column and that, locally, that query runs instantaneously.
Any thoughts on what I might be doing wrong? I found a table with only 68 records in it, and was able to run this query in about a second:
select count(*)
from smallTable
where RandomColumn = 'randomValue'
So I know that the AS/400 is at least able to understand that type of query.
I have had to fight this battle many times.
There are two ways of approaching this.
1) Stage your data from the AS400 into SQL Server, where you can optimize your indexes.
2) Ask the AS400 folks to create logical views which speed up data retrieval. Your AS400 programmer is correct that an index will help, but I forget the term they use for a "view" similar to a SQL Server view; I believe it's something like "physical" vs. "logical". Logical is what you want.
Beyond that, 170 million is a lot of records, even for a relational database like SQL Server. Have you considered running an SSIS package nightly that stages your data into your own SQL table, to see if it improves performance?
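If you do go the staging route, one simple form (with or without SSIS) is a nightly INSERT ... SELECT through the linked server; the staging table and column names below are only assumptions:
-- hypothetical staging load via the linked server used later in this thread
INSERT INTO dbo.Staging_Transactions (TransactionId, Processed)
SELECT TransactionId, Processed
FROM OPENQUERY(AS400, 'SELECT TRANSACTIONID, PROCESSED FROM MYLIB.TRANSACTIONS');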
I would suggest this approach for good performance. I assume you have at least SQL Server 2005; I haven't tested this yet, but here is a tip:
Let the AS400 perform the select natively by creating stored procedures on the AS400.
Open an AS400 session
Launch STRSQL
Create an AS400 stored procedure like this to get the recordset:
CREATE PROCEDURE MYSELECT (IN PARAM CHAR(10))
LANGUAGE SQL
DYNAMIC RESULT SETS 1
BEGIN
DECLARE C1 CURSOR FOR SELECT * FROM MYLIB.MYFILE WHERE MYFIELD=PARAM;
OPEN C1;
RETURN;
END
Create an AS400 stored procedure to update the recordset:
CREATE PROCEDURE MYUPDATE (IN PARAM CHAR(10))
LANGUAGE SQL
RESULT SETS 0
BEGIN
UPDATE MYLIB.MYFILE SET MYFIELD='newvalue' WHERE MYFIELD=PARAM;
END
Call those AS400 stored procedures from SQL Server:
declare @myParam char(10)
set @myParam = 'In process'
-- get the recordset
EXEC ('CALL NAME_AS400.MYLIB.MYSELECT(?) ', @myParam) AT AS400 -- AS400 = name of the linked server
-- update
EXEC ('CALL NAME_AS400.MYLIB.MYUPDATE(?) ', @myParam) AT AS400
Hope it helps
I recommend following the suggestions in the IBM Redbook SQL Performance Diagnosis on IBM DB2 Universal Database for iSeries to determine what's really happening.
IBM technical support can also be extremely helpful in diagnosing issues such as these. Don't be afraid to get in touch with them as the software support is generally included as part of the maintenance contract and there is no charge to talk to them.
I've seen OLE DB connections eat up 100% CPU for hours, while the same query run through Visual Explain (query analyzer) estimates mere seconds to execute.
We found that running the query like this performed as expected:
SELECT *
FROM OpenQuery( LinkedServer,
'select count(*)
from transactions
where processed = ''In process''')
GO
Could this be a collation problem? Your WHERE clause is testing a text field, and if the collations of the two servers don't match, the clause will be applied client-side rather than server-side, so you are first pulling all 170 million records down to the client and then applying the WHERE clause there.
Based on the past interactions I have had, the query should take about the same amount of time no matter how you access the data. Another thought would be if you could create a view on the table to get the data you need or use a stored procedure.
We have a stored procedure that ran fine until 10 minutes ago, and now it just hangs after you call it.
Observations:
Copying the code into a query window yields the query result in 1 second
SP takes > 2.5 minutes until I cancel it
Activity Monitor shows it's not being blocked by anything, it's just doing a SELECT.
Running sp_recompile on the SP doesn't help
Dropping and recreating the SP doesn't help
Setting LOCK_TIMEOUT to 1 second does not help
What else can be going on?
UPDATE: I'm guessing it had to do with parameter sniffing. I used Adam Machanic's routine to find out which subquery was hanging. I found problems with the query plan thanks to the hint by Martin Smith. I learned about EXEC ... WITH RECOMPILE, OPTION (RECOMPILE) for subqueries within the SP, and OPTION (OPTIMIZE FOR (@parameter = 1)) in order to attack parameter sniffing. I still don't know what was wrong in this particular case, but I came out of this battle seasoned and much better armed. I know what to do next time. So here are the points!
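For reference, the hints mentioned above look roughly like this; the procedure, table, and parameter names are placeholders, not the actual objects from this question:
CREATE PROCEDURE dbo.GetOrders @CustomerId int
AS
BEGIN
    -- fresh plan for this statement on every execution
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- or: build the plan for a representative value instead of the sniffed one
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR (@CustomerId = 1));
END
GO
-- procedure-level recompile at call time
EXEC dbo.GetOrders @CustomerId = 42 WITH RECOMPILE;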
I think this is related to parameter sniffing and the need to copy your input parameters to local variables within the SP. Adding WITH RECOMPILE causes the execution plan to be recreated and eliminates much of the benefit of having an SP. We were using WITH RECOMPILE on many reports in an attempt to eliminate this hanging issue, and it occasionally resulted in hanging SPs that may have been related to other locks and/or transactions accessing the same tables simultaneously. See this link for more details:
Parameter Sniffing (or Spoofing) in SQL Server. Then change your SPs to the following to fix this:
CREATE PROCEDURE [dbo].[SPNAME] @p1 int, @p2 int
AS
DECLARE @localp1 int, @localp2 int
SET @localp1 = @p1
SET @localp2 = @p2
-- then reference @localp1 and @localp2 (not @p1/@p2) in the queries inside the procedure
Run Adam Machanic's excellent sp_WhoIsActive stored proc while your query is running. It'll give you the wait information - meaning, what the stored proc is waiting on - plus things like the execution plan:
http://www.brentozar.com/archive/2010/09/sql-server-dba-scripts-how-to-find-slow-sql-server-queries/
If you want the outer command (like a calling stored procedure's full text), use the @get_outer_command = 1 parameter as well.
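A typical call while the proc is hanging might look like this:
EXEC sp_WhoIsActive
    @get_outer_command = 1,  -- include the calling batch/procedure text
    @get_plans = 1;          -- include the plan of the running statement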
First things first.
Please check whether there are any uncommitted transactions: a BEGIN TRANSACTION without a matching COMMIT TRANSACTION.
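A quick way to check from the suspect connection, or for the current database as a whole:
-- number of open transactions on the current connection
SELECT @@TRANCOUNT AS open_transactions;
-- oldest active transaction in the current database
DBCC OPENTRAN;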
Thanks for all comments.
I still haven't found the answer, but I will post the progress here.
I failed to reproduce the problem before, but today I chanced upon another stored procedure with the same problem. Again the same symptoms appeared:
Hanging piece of query runs fine and quick (3 secs) in normal query window (hanging piece identified with sp_whoisactive)
No locks, according to Activity Monitor SPID is doing SELECT
Stored procedure runs for over 6 hours without response
Parameters passed to SP and variables declared in window are the same
Using the above hints, I found the SP execution plan, and it showed nothing out of the ordinary (to me, at least). Creating a new stored procedure with the same contents did not solve the problem either. So I kept stripping the SP down to less and less content until I encountered a UDF call to another database. When I removed that (replacing the call with the inline contents of the function, a CASE statement), it ran fine again.
So this COULD have been the problem, but I am not very certain, as last time the problem disappeared by itself and I also changed a lot of other things while stripping this SP.
When we add new data, sometimes the execution plan becomes invalid or out of date, and then the stored procedure starts going into this limbo phase. Run the following commands on your database:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
They will flush the cache, and the execution plan will be rebuilt the next time you run the stored proc.
I think I had the same problem. I removed my parameters from the subqueries. It ran fine after that. Not sure if this is possible in your script but that is what solved it for me.
Brent Ozar's answer might work, but by default it returns only the active command text. For example, it returns WAITFOR DELAY '00:00:05' for a procedure like:
CREATE PROCEDURE spGetChangeNotifications
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@actionType TINYINT;
WHILE @actionType IS NULL
BEGIN
WAITFOR DELAY '00:00:05';
SELECT TOP 1
@actionType = [ActionType]
FROM
TableChangeNotifications;
END;
SELECT
TOP 1000 [RecordID], [Component], [TableName], [ActionType], [Key1], [Key2], [Key3]
FROM
TableChangeNotifications;
END;
Here is how it looks:
Thus, check the @get_outer_command parameter as described here.
Also, try this one instead (a slightly modified query from the MS docs):
DECLARE
@sessions TABLE
(
[SPID] INT,
STATUS VARCHAR(MAX),
[Login] VARCHAR(MAX),
[HostName] VARCHAR(MAX),
[BlkBy] VARCHAR(MAX),
[DBName] VARCHAR(MAX),
[Command] VARCHAR(MAX),
[CPUTime] INT,
[DiskIO] INT,
[LastBatch] VARCHAR(MAX),
[ProgramName] VARCHAR(MAX),
[SPID_1] INT,
[REQUESTID] INT
);
INSERT INTO @sessions
EXEC sp_who2;
SELECT
[req].[session_id],
[A].[Login] AS 'login',
[A].[HostName] AS 'hostname',
[req].[start_time],
[cpu_time] AS 'cpu_time_ms',
OBJECT_NAME([st].[objectid], [st].[dbid]) AS 'object_name',
SUBSTRING(REPLACE(REPLACE(SUBSTRING([ST].text, ([req].[statement_start_offset] / 2) + 1, ((CASE [statement_end_offset]
WHEN -1
THEN DATALENGTH([ST].text)
ELSE [req].[statement_end_offset]
END - [req].[statement_start_offset]) / 2) + 1), CHAR(10), ' '), CHAR(13), ' '), 1, 512) AS [statement_text],
[ST].text AS 'full_query_text'
FROM
sys.dm_exec_requests AS req
CROSS APPLY
sys.dm_exec_sql_text(req.sql_handle) AS ST
LEFT JOIN @sessions AS A
ON A.SPID = req.session_id
ORDER BY
[cpu_time] DESC;
Here is how it looks:
Of course, it's also possible to modify the code from Brent Ozar's answer so that it selects the full query text. Nearly the same technique is used there (the link is to the code as of 18.07.2020, so it might change over time):
I had the same problem today, and I don't know what caused it, but I found a solution. I took the input parameter and saved it into a new local variable, i.e.
declare @parameter2 as x = @parameter   -- where x is the parameter's data type
Then I changed the references to the parameter in the queries from @parameter to @parameter2.
I use both Firebird embedded and Firebird Server, and from time to time I need to reindex the tables using a procedure like the following:
CREATE PROCEDURE MAINTENANCE_SELECTIVITY
AS
DECLARE VARIABLE S VARCHAR(200);
BEGIN
FOR select RDB$INDEX_NAME FROM RDB$INDICES INTO :S DO
BEGIN
S = 'SET statistics INDEX ' || s || ';';
EXECUTE STATEMENT :s;
END
SUSPEND;
END
I guess this is normal using embedded, but is it really needed using a server? Is there a way to configure the server to do it automatically when required or periodically?
First, let me point out that I'm no Firebird expert, so I'm answering on the basis of how SQL Server works.
In that case, the answer is both yes, and no.
The indexes are of course updated on SQL Server, in the sense that if you insert a new row, all indexes for that table will contain that row, so it will be found. So basically, you don't need to keep reindexing the tables for that part to work. That's the "no" part.
The problem, however, is not with the index, but with the statistics. You're saying that you need to reindex the tables, but then you show code that manipulates statistics, and that's why I'm answering.
The short answer is that statistics slowly go out of whack as time goes by. They might not deteriorate to a point where they're unusable, but they will deteriorate from the perfect level they're at when you recreate/recalculate them. That's the "yes" part.
The main problem with stale statistics is that if the distribution of the keys in the indexes changes drastically, the statistics might not pick that up right away, and thus the query optimizer will pick the wrong indexes, based on the old, stale, statistics data it has on hand.
For instance, let's say one of your indexes has statistics that says that the keys are clumped together in one end of the value space (for instance, int-column with lots of 0's and 1's). Then you insert lots and lots of rows with values that make this index contain values spread out over the entire spectrum.
If you now do a query that uses a join from another table, on a column with low selectivity (also lots of 0's and 1's) against the table with this index of yours, the query optimizer might deduce that this index is good, since it will fetch many rows that will be used at the same time (they're on the same data page).
However, since the data has changed, it'll jump all over the index to find the relevant pieces, and thus not be so good after all.
After recalculating the statistics, the query optimizer might see that this index is sub-optimal for this query, and pick another index instead, which is more suited.
Basically, you need to recalculate the statistics periodically if your data is in flux. If your data rarely changes, you probably don't need to do it very often, but I would still add a maintenance job with some regularity that does this.
As for whether or not it is possible to ask Firebird to do it on its own, then again, I'm on thin ice, but I suspect there is a way. In SQL Server you can set up maintenance jobs that do this on a schedule, and at the very least you should be able to kick off a batch file from the Windows scheduler to do something similar.
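For the SQL Server side of that comparison, the building block of such a maintenance job would be something like the following (the table name is a placeholder):
-- refresh statistics for one table, or for every table in the database
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
EXEC sp_updatestats;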
That does not reindex; it recomputes the weights for indexes, which the optimizer uses to select the most optimal index. You don't need to do that unless the index size changes a lot. If you create the index before you add data, you do need to do the recalculation.
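In Firebird, that recalculation for a single index is just the statement the procedure builds dynamically; for example (the index name is hypothetical):
SET STATISTICS INDEX IDX_CUSTOMER_NAME;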
Embedded and Server should have exactly the same functionality, apart from the process model.
I wanted to update this answer for newer Firebird versions. Here is the updated DSQL:
SET TERM ^ ;
CREATE OR ALTER PROCEDURE NEW_PROCEDURE
AS
DECLARE VARIABLE S VARCHAR(300);
begin
FOR select 'SET statistics INDEX ' || RDB$INDEX_NAME || ';'
FROM RDB$INDICES
WHERE RDB$INDEX_NAME <> 'PRIMARY' INTO :S
DO BEGIN
EXECUTE STATEMENT :s;
END
end^
SET TERM ; ^
GRANT EXECUTE ON PROCEDURE NEW_PROCEDURE TO SYSDBA;
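To run it, for example from isql or a scheduled job, something like this should work:
EXECUTE PROCEDURE NEW_PROCEDURE;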