Could you tell me why this query works in pgAdmin, but doesn't with software using ODBC:
CREATE TEMP TABLE temp296 WITH (OIDS) ON COMMIT DROP AS
SELECT age_group AS a,male AS m,mode AS t,AVG(speed) AS avg_speed
FROM person JOIN info ON person.ppid=info.ppid
WHERE info.mode=2
GROUP BY age_group,male,mode;
SELECT age_group,male,mode,
CASE
WHEN age_group=1 AND male=0 THEN (info_dist_km/(SELECT avg_speed FROM temp296 WHERE a=1 AND m=0))*60
ELSE 0
END AS info_durn_min
FROM person JOIN info ON person.ppid=info.ppid
WHERE info.mode IN (7) AND info.info_dist_km>2;
I got "42P01: ERROR: relation "temp296" does not exist".
I have also tried with "BEGIN; [...] COMMIT;" and got "HY010: The cursor is open".
PostgreSQL 9.0.10, compiled by Visual C++ build 1500, 64-bit
psqlODBC 09.01.0200
Windows 7 x64
I think the reason it did not work for you is that, by default, ODBC works in autocommit mode. If you executed your statements serially, the very first statement
CREATE TEMP TABLE temp296 ON COMMIT DROP ... ;
must have autocommitted after finishing, and thus dropped your temp table.
Unfortunately, ODBC does not support directly using statements like BEGIN TRANSACTION; ... COMMIT; to handle transactions.
Instead, you can disable auto-commit using the SQLSetConnectAttr function like this:
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, 0);
But, after you do that, you must remember to commit any change by using SQLEndTran like this:
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
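Putting the two calls together, a minimal sketch of the flow (error handling omitted; hdbc/hstmt are assumed to be an already-connected connection handle and an allocated statement handle, and the question's SQL is abbreviated):
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, 0);
/* both statements now run inside the same transaction */
SQLExecDirect(hstmt, (SQLCHAR *)"CREATE TEMP TABLE temp296 ON COMMIT DROP AS SELECT ...", SQL_NTS);
SQLExecDirect(hstmt, (SQLCHAR *)"SELECT ... FROM person JOIN info ON person.ppid=info.ppid ...", SQL_NTS);
/* fetch the second result set here and close its cursor, then commit;
   the temp table is dropped at the commit */
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);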
While the WITH approach has worked for you as a workaround, it is worth noting that using transactions appropriately is faster than running in auto-commit mode.
For example, if you need to insert many rows into a table (thousands or millions), using transactions can be hundreds or even thousands of times faster than autocommit.
It is not uncommon for temporary tables to be unavailable via SQLPrepare/SQLExecute in ODBC, i.e., on prepared statements; MS SQL Server behaves this way, for example. The solution is usually to use SQLExecDirect.
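For example, a rough sketch of the unprepared route (hstmt again assumed to be an allocated statement handle):
/* instead of SQLPrepare(hstmt, sql, SQL_NTS); followed by SQLExecute(hstmt); */
SQLExecDirect(hstmt, (SQLCHAR *)"SELECT avg_speed FROM temp296 WHERE a=1 AND m=0", SQL_NTS);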
When are DB2 declared global temporary tables 'cleaned up' and automatically deleted by the system...? This is for DB2 on AS400 v7r3m0, with DBeaver 5.2.5 as the dev client, and MS-Access 2007 for packaged apps for the end-users.
Today I started experimenting with a DGTT, thanks to this answer. So far I'm pleased with the functionality, although I did find our more recent system version has the WITH DATA option, which is an obvious advantage.
Everything is working, but at times I receive this error:
SQL Error [42710]: [SQL0601] NEW_PKG_SHEETS_DATA in QTEMP type *FILE already exists.
The meaning of the error is obvious, but the timing is not. When I started today, I could run the query multiple times, and the error didn't occur. It seemed as if the system was cleaning up and deleting it, which is just what I was looking for. But then the error started and now it's happening with more frequency.
If I make strategic use of DROP TABLE, this resolves the error, unless the table doesn't exist, in which case I get another error. I can also disconnect/reconnect to the server from my SQL dev client, as I would expect, since that would definitely drop the session.
This IBM article about DGTTs speaks much of sessions, but not many specifics. And this article is possibly the longest command syntax I've yet encountered in the IBM documentation. I got through it, but it didn't answer the question of what decided when a DGTT is deleted.
So I would like to ask:
What are the boundaries of a session..?
I'm thinking this is probably defined by the environment in my SQL client..?
I guess the best/safest thing to do is use DROP TABLE as needed..?
Does any one have any tips, tricks, or pointers they could share..?
Below is the SQL that I'm developing. For brevity, I've excluded chunks of the WITH-AS and SELECT statements:
DROP TABLE SESSION.NEW_PKG_SHEETS ;
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
AS ( WITH FIRSTDAY AS (SELECT (YEAR(CURDATE() - 4 MONTHS) * 10000) +
(MONTH(CURDATE() - 4 MONTHS) * 100) AS DATEISO
FROM SYSIBM.SYSDUMMY1
-- <VARIETY OF ADDITIONAL CTE CLAUSES>
-- <SELECT STATEMENT BELOW IS A BIT LONGER>
SELECT DAACCT AS DAACCT,
DAIDAT AS DAIDAT,
DAINV# AS DAINV,
CAST(DAITEM AS NUMERIC(6)) AS DAPACK,
CAST(0 AS NUMERIC(14)) AS UPCNUM,
DAQTY AS DAQTY
FROM DAILYTRANS
AND DAIDAT >= (SELECT DATEISO+000 FROM FIRSTDAY) -- 1ST DAY FOUR MONTHS AGO
AND DAIDAT <= (SELECT DATEISO+399 FROM FIRSTDAY) -- LAST DAY OF LAST MONTH
) WITH DATA ;
DROP TABLE SESSION.NEW_PKG_SHEETS ;
The DGTT will only get cleaned up automatically by Db2 when the connection ends successfully (connect reset or equivalent, according to whatever interface to Db2 is being used).
For both Db2 for i and Db2-LUW, consider using the WITH REPLACE clause for the DECLARE GLOBAL TEMPORARY TABLE statement. That will ensure you don't need to explicitly drop the DGTT if the session remains open but the code needs the table to be replaced at next execution whether or not the DGTT already exists.
Using that WITH REPLACE clause means you do not need to worry about issuing a DROP statement for the DGTT, unless you really want to issue a drop.
Sometimes sessions may get re-used, or a close/disconnect might not happen or might not complete, or more likely a workstation performs a retry, and in those cases the WITH REPLACE can be essential for easily avoiding runtime errors.
Note that Db2 for z/OS (at v12) does not offer the WITH REPLACE clause for DGTT, but instead has an optional ON COMMIT DROP TABLE syntax (which is not documented for Db2 for i and Db2-LUW).
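For Db2 for i, applied to the table in the question, a sketch might look like this (the AS-query body stands in for whatever your real definition is):
DECLARE GLOBAL TEMPORARY TABLE SESSION.NEW_PKG_SHEETS_DATA
  AS ( ... your existing query ... ) WITH DATA
  WITH REPLACE ;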
I am calling update statements one after the other from a servlet to DB2. I am getting error SQLSTATE 40001, reason code 68, which I found is due to deadlock timeout.
How can I resolve this issue?
Can it be resolved by setting query timeout?
If yes then how to use it with update statements in servlet or where to use it?
The reason code 68 already tells you this is due to a lock timeout (deadlock is reason code 2). It could be due to other users running queries at the same time that use the same data you are accessing, or to your own multiple updates.
Begin by running db2pd -db locktest -locks show detail (substituting your database name for locktest) from a db2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the ID numbers you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order – meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
Addendum: if your servlet or another guilty application is using select statements found to be involved in the deadlock, you can try appending WITH UR to the select statements if accuracy of the newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my select statements were returning so much data, it locked the tables long enough to interfere with other SQL statements)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
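For example, a select written that way might look like the sketch below (table and column names are purely illustrative):
SELECT ORDER_ID, STATUS
FROM MYSCHEMA.ORDERS
WHERE STATUS = 'OPEN'
FOR READ ONLY WITH UR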
I'm trying to connect from Microsoft SQL Server to an AS/400 so I can pull data from the AS/400 and then flag the data as being pulled.
I've successfully created an OLE DB "IBMDASQL" connection, and am able to pull some data, but I'm running into an issue when I try to pull data from a very large table.
This runs fine, and returns a count of 170 million:
select count(*)
from transactions
This query executed for 15 hours before I gave up on it. (It should return zero since I haven't flagged anything as 'in process' yet)
select count(*)
from transactions
where processed = 'In process'
I'm a Microsoft guy, but my AS/400 guy says that there is an index on the 'processed' column and that locally, that query runs instantaneously.
Any thoughts on what I might be doing wrong? I found a table with only 68 records in it, and was able to run this query in about a second:
select count(*)
from smallTable
where RandomColumn = 'randomValue'
So I know that the AS/400 is at least able to understand that type of query.
I have had to fight this battle many times.
There are two ways of approaching this.
1) Stage your data from the AS400 into SQL server where you can optimize your indexes
2) Ask the AS400 folks to create logical views which speed up data retrieval. Your AS400 programmer is correct that an index will help, but I forget the term they use for a "view" similar to a SQL Server view; I believe it's something like "physical" vs. "logical". Logical is what you want.
Third, 170 million is a lot of records, even for a relational database like SQL Server. Have you considered running an SSIS package nightly that stages your data into your own SQL table to see if it improves performance?
I would suggest this way to get good performance. I suppose you have at least SQL Server 2005; I haven't tested this yet, but here is a tip.
Let the AS400 perform the select natively by creating stored procedures on the AS400:
Open an AS400 session
Launch STRSQL
Create an AS400 stored procedure in this way to get the recordset
CREATE PROCEDURE MYSELECT (IN PARAM CHAR(10))
LANGUAGE SQL
DYNAMIC RESULT SETS 1
BEGIN
DECLARE C1 CURSOR FOR SELECT * FROM MYLIB.MYFILE WHERE MYFIELD=PARAM;
OPEN C1;
RETURN;
END
Create an AS400 stored procedure to update the recordset
CREATE PROCEDURE MYUPDATE (IN PARAM CHAR(10))
LANGUAGE SQL
RESULT SETS 0
BEGIN
UPDATE MYLIB.MYFILE SET MYFIELD='newvalue' WHERE MYFIELD=PARAM;
END
Call those AS400 SPs from SQL Server:
declare @myParam char(10)
set @myParam = 'In process'
-- get the recordset
EXEC ('CALL NAME_AS400.MYLIB.MYSELECT(?) ', @myParam) AT AS400 -- AS400 = name of the linked server
-- update
EXEC ('CALL NAME_AS400.MYLIB.MYUPDATE(?) ', @myParam) AT AS400
Hope it helps
I recommend following the suggestions in the IBM Redbook SQL Performance Diagnosis on IBM DB2 Universal Database for iSeries to determine what's really happening.
IBM technical support can also be extremely helpful in diagnosing issues such as these. Don't be afraid to get in touch with them as the software support is generally included as part of the maintenance contract and there is no charge to talk to them.
I've seen OLE DB connections eat up 100% CPU for hours, and when the same query is run through Visual Explain (query analyzer), it estimates mere seconds to execute.
We found that running the query like this performed as expected:
SELECT *
FROM OpenQuery( LinkedServer,
'select count(*)
from transactions
where processed = ''In process''')
GO
Could this be a collation problem? Your WHERE clause is testing a text field, and if the collations of the two servers don't match, the clause will be applied client-side rather than server-side, so you are first pulling all 170 million records down to the client and then applying the WHERE clause there.
Based on the past interactions I have had, the query should take about the same amount of time no matter how you access the data. Another thought would be to create a view on the table to get the data you need, or to use a stored procedure.
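For instance, a sketch of such a view on the AS/400 side (library and view names are illustrative):
CREATE VIEW MYLIB.TRANS_IN_PROCESS AS
  SELECT *
  FROM transactions
  WHERE processed = 'In process'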
In my Perl script, I use DBD::Sybase (via DBI module) to connect to a SQL Server 2008. The base program as below runs without problem:
use DBI;
# assign values to $host, $usr, $pwd
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
$update->execute(100, 'apple');
$dbh->do("END TRAN tr1");
however, if I insert one more prepare statement right before the existing prepare statement, to have the program look like:
...
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
...
and the rest is all the same, then when I run it, I got:
DBD::Sybase::db do failed: Server message number=3902 severity=16 state=1 line=1 server=xxx text=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
So it looks like the additional prepare statement somehow disrupted the entire transaction flow. I had been running the same code via the DBD::ODBC driver with no problem against SQL Server 2005. (But my firm upgraded to 2008 and I had to use DBD::Sybase to get around some other problems.)
Any help / suggestion on how to resolve this issue would be much appreciated. In particular, using a different db handle for the other prepare is not a desired solution since that will defeat the purpose of having them in a single transaction.
UPDATE: It turns out that if I execute the additional insert at least once, then the program again runs fine. So it looks like every prepared statement needs to be run under Sybase. But that isn't a requirement with ODBC and isn't a reasonable requirement in general. Any way to get around it?
You are learning Perl AND Sybase basics and drawing several incorrect conclusions.
Forget about what it does under ODBC for a moment. ODBC most probably has AUTOCOMMIT turned on, and thus you have no transaction control whatsoever. (Why anyone would use ODBC when the DBD:: supports DB-Lib and CT-Lib is beyond me, but that's a separate story.)
Re: "So looks like every prepared statement needs to be run under Sybase."
Rawheiser is correct. What exactly do you expect to achieve by preparing a batch but performing a Do instead? Where else do you expect to execute the batch prepared under Sybase, other than under Sybase?
Do vs prepare/execute are quite different. prepare/execute for Sybase works just fine in millions of programs. You just have to learn what it does, not what you think it should do. prepare lets you load a batch, a block of commands terminated by GO in the normal Sybase sense. Execute executes the prepared batch (supplies the GO and sends the batch to the server), and captures whatever is returned (according to whatever array/variables you have set).
Do is immediate, single command, with no prepare. A prepare+execute combined.
Performing only single-statement do's, and only dynamic SQL, simply because that's all that you could get to work, is very limiting and quite unnecessary.
You currently have:
Prepare:
UPDATE
Execute (100)
ExecuteImmediate(Do):
COMMIT TRAN
So of course, there is no BEGIN TRAN. (The first "do" executed, the BEGIN TRAN is gone)
I think what you want (intended originally) is this. Forget the 'do':
Prepare:
BEGIN TRAN
UPDATE
COMMIT TRAN
Execute (100)
Then change it to:
BEGIN TRAN
INSERT
UPDATE
COMMIT TRAN
Execute (100)
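A minimal DBD::Sybase sketch of that shape might look like the following (values are inlined purely for illustration, since placeholders may not be usable across a multi-statement batch):
my $sth = $dbh->prepare(q{
    BEGIN TRAN
    INSERT INTO mytable (name, qty) VALUES ('apple', 100)
    UPDATE mytable SET qty = 100 WHERE name = 'apple'
    COMMIT TRAN
});
$sth->execute;   # the whole batch is sent to the server as one unit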
Your $update and $insert will confuse you (you're executing a multi-statement batch, right? Not an isolated single command in the middle of a prepared batch). If you get rid of them, and think in terms of $execute [whatever you have prepared in the batch], it might help you to understand the problem better.
Do not form conclusions until you have all the above working as intended.
And read up on BEGIN/COMMIT TRAN.
Last, what exactly is an "END TRAN"? I do not think the code block you have posted is real.
Don't dynamically create SQL; it is dangerous (SQL injection).
You should be able to prepare multiple inserts/updates, and your link to the DBI documentation does not say you cannot; it says some drivers may not be able to tell you much about a statement which is ONLY prepared.
I'd post a failing example with error to the dbi-users list for comment as the DBD::Sybase maintainer hangs out there (see dbi.perl.org).
It turns out that DBI's prepare method is not quite portable across various database drivers, as noted here. For the Sybase driver, it is most likely that prepare is not working as intended. One way to tell is that after running prepare, the variable $insert->{NUM_OF_FIELDS} is undefined.
To get around the problem, do one of the following:
1) Do not prepare anything. Just dynamically construct the statement as a text string and run $dbh->do($stmt), or
2) Run finish on all outstanding statement handles (under that database handle) before running COMMIT TRAN. I personally much prefer this approach.
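A sketch of option 2, based on the code in the question (using COMMIT TRAN rather than the END TRAN from the original):
$dbh->do("BEGIN TRAN tr1");
$update->execute(100, 'apple');
# release any pending state on every outstanding statement handle first
$insert->finish;
$update->finish;
$dbh->do("COMMIT TRAN tr1");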
I have written a DB2 query to do the following:
Create a temp table
Select from a monster query / insert into the temp table
Select from the temp table / delete from old table
Select from the temp table / insert into a different table
In MSSQL, I am allowed to run the commands one after another as one long query. Failing that, I can delimit them with 'GO' commands. When I attempt this in DB2, I get the error:
DB2CLI.DLL: ERROR [42601] [IBM][CLI Driver][DB2] SQL0199N The use of the reserved
word "GO" following "" is not valid. Expected tokens may include: "".
SQLSTATE=42601
What can I use to delimit these instructions without the temp table going out of scope?
GO is something that is used in MSSQL Studio; I have my own app for running updates into live and use "GO" to break the statements apart.
Does DB2 support the semi-colon (;)? This is a standard delimiter in many SQL implementations.
Have you tried using just a semi-colon instead of "GO"?
This link suggests that the semi-colon should work for DB2 - http://www.scribd.com/doc/16640/IBM-DB2
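For instance, the four steps from the question might be scripted roughly like this (table names and column lists are illustrative, and this assumes your client splits the script on semicolons):
DECLARE GLOBAL TEMPORARY TABLE SESSION.MY_TEMP ( id INT, val VARCHAR(50) ) ON COMMIT PRESERVE ROWS;
INSERT INTO SESSION.MY_TEMP SELECT id, val FROM ... ;   -- the "monster query"
DELETE FROM old_table WHERE id IN (SELECT id FROM SESSION.MY_TEMP);
INSERT INTO different_table SELECT id, val FROM SESSION.MY_TEMP;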
I would try wrapping what you are looking to do in BEGIN and END to set the scope.
GO is not a SQL command; it's not even a T-SQL command. It is an instruction for the parser. I don't know DB2, but I would imagine that GO is not necessary.
From Devx.com Tips
Although GO is not a T-SQL statement, it is often used in T-SQL code and unless you know what it is it can be a mystery. So what is its purpose? Well, it causes all statements from the beginning of the script or the last GO statement (whichever is closer) to be compiled into one execution plan and sent to the server independent of any other batches.