SQL: OPENQUERY Not returning all rows - tsql

I have the following query, which runs against a linked server I have to talk to.
SELECT
*
FROM
OPENQUERY(DWH_LINK, 'SELECT * FROM TABLEA ')
It will typically return most of the data, but some rows are missing.
The linked server points at an Oracle database (via an Oracle client).
Is this a problem anyone has encountered with OPENQUERY?

I had exactly the same problem.
The root cause is that you've set up your linked server using ODBC instead of OLE DB.
Here's how I fixed it:
Delete the linked server from SQL Server
Right click on the "Linked Servers" folder and select "New Linked Server..."
Linked Server: enter anything; this will be the name of your new linked server
Provider: Select "Oracle Provider for OLE DB"
Product Name: enter "Oracle" (without the double quotes)
Data Source: enter the alias from your TNSNAMES.ORA file. In my case, it was "ABC.WORLD" (without the double quotes)
Provider String: leave it blank
Location: leave it blank
Catalog: leave it blank
Now go to the "Security" tab, and click the last radio button that says "Be made using this security context:" and enter the username & password for your connection
That should be it!
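If you prefer scripting the same setup instead of using the GUI, here is a minimal T-SQL sketch of those steps. The server name, TNS alias, and credentials are placeholders from the example above, so adjust them for your environment:
EXEC sp_addlinkedserver
    @server     = N'DWH_LINK',          -- name of the new linked server
    @srvproduct = N'Oracle',            -- "Product Name"
    @provider   = N'OraOLEDB.Oracle',   -- Oracle Provider for OLE DB
    @datasrc    = N'ABC.WORLD';         -- TNS alias from TNSNAMES.ORA

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'DWH_LINK',
    @useself     = 'FALSE',
    @locallogin  = NULL,                -- applies to all local logins
    @rmtuser     = N'oracle_user',      -- hypothetical Oracle credentials
    @rmtpassword = N'oracle_password';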

This seems to be related to the underlying provider's capabilities; others have run into this and similar size/row limitations. One possible work-around is an iterative/looping query with some filtering built in to pull back a certain number of rows per pass. With Oracle, I think this means using ROWNUM (I'm not very familiar with Oracle).
So something like
--Not tested SQL, just winging it syntax-wise. Note that in Oracle a bare
--"WHERE rownum BETWEEN 500 AND 1000" returns no rows, because ROWNUM is assigned
--before the outer filter, so the paging needs a subquery wrapper:
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM (SELECT t.*, ROWNUM rn FROM TABLEA t) WHERE rn BETWEEN 1 AND 500')
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM (SELECT t.*, ROWNUM rn FROM TABLEA t) WHERE rn BETWEEN 501 AND 1000')
SELECT * FROM OPENQUERY(DWH_LINK, 'SELECT * FROM (SELECT t.*, ROWNUM rn FROM TABLEA t) WHERE rn ...')
BOL:
link
This is subject to the capabilities of the OLE DB provider. Although the query may return multiple result sets, OPENQUERY returns only the first one.

I had this same problem using the Oracle 10 instant client and ODBC. I used this client as I am connecting to an Oracle 10gR2 database. I opened a ticket with Microsoft support and they suggested using the Oracle 11 instant client. Surprise! Uninstalling the 10g instant client, installing the 11g instant client and rebooting resolved the issue.
Ken

I had exactly the same problem with SQL Server 2014 getting data from SQL Server 2000 through OPENQUERY. Because of an ODBC compatibility problem, I had to keep the generic Microsoft OLE DB Provider for ODBC Drivers. Moreover, the problem only occurred with a non-admin SQL account.
So finally, the solution I found was to add SET ROWCOUNT 0:
SELECT * FROM OPENQUERY(DWH_LINK, 'SET ROWCOUNT 0 SELECT * FROM TABLEA ')
It seems ROWCOUNT may have been changed somewhere along the way in the SQL procedure (or for this user's session), so setting it to 0 forces it to return all rows.

Related

How can I see the sql statement of a view (db resides on AWS)

I've just installed the VS Code extension (Oracle Developer Tools for VS Code (SQL and PLSQL)) and successfully connected to the DB.
The db resides on AWS.
I can connect to the DB and just wanted to test it by opening an existing view.
But it only lets me "describe" the view, so I can see the columns, but I need to edit the query statement.
What's missing? Or is the problem the AWS part?
I usually use SQL Developer, but I'm also interested in backing up the work via git commits, and I like the way the "Git Graph" extension presents the changes.
DDL view_name
Or
SELECT
text_vc
FROM
dba_views
WHERE
owner = :schema AND
view_name = :view_name;
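If text_vc is not available (as far as I recall it was only added in Oracle 12c) or you don't have access to the DBA views, the same lookup against all_views using the LONG text column should work as a fallback:
SELECT
text
FROM
all_views
WHERE
owner = :schema AND
view_name = :view_name;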
With help from someone in the Oracle community I managed to get it working.
Basic query is:
select
dbms_metadata.get_ddl('VIEW', 'VIEW_NAME', 'VIEW_OWNER')
from
dual;
So, in my case it is:
select
dbms_metadata.get_ddl('VIEW', 'ALL_DATA_WAREHOUSE_BOSTON', 'WHB')
from
dual;
Owner is the name you fill in when connecting to the database, i.e. the username from the username/password pair.
If you are not sure who the owner of the view is, check it with this query:
select owner from ALL_VIEWS where VIEW_NAME ='ALL_DATA_WAREHOUSE_BOSTON';

Column alias querying IBM DB2 using Oracle SQL developer

I'm connected to an IBM DB2 database using Oracle SQL Developer and I'm querying several tables in order to perform an automated extraction of data. The issue here is that I can't set aliases for my results. I tried a lot of variants like adding quotes ("") ([]) ('') and it's not working. I saw several tutorials and everyone uses "AS" only, but for me it's not working. Any recommendations? Thanks!
Image as example here: https://i.stack.imgur.com/5NrED.png
My code is:
SELECT
"A"."TC_SHIPMENT_ID" AS SHIPMENT_ID,
"A"."CLAIM_ID",
B.DESCRIPTION CLAIM_CLASSIFICATION,
C.DESCRIPTION CLAIM_CATEGORY,
D.DESCRIPTION CLAIM_TYPE,
F.DESCRIPTION CLAIM_STATUS
FROM CLAIMS A
INNER JOIN CLAIM_CLASSIFICATION B ON A.CLAIM_CLASSIFICATION = B.CLAIM_CLASSIFICATION
INNER JOIN CLAIM_CATEGORY C ON A.CLAIM_CATEGORY = C.CLAIM_CATEGORY
INNER JOIN CLAIM_TYPE D ON A.CLAIM_TYPE = D.CLAIM_TYPE
INNER JOIN CLAIM_STATUS F ON A.CLAIM_STATUS = F.CLAIM_STATUS;
TL;DR: append the connection attribute(s) to the database name, preceded by a colon and terminated by a semicolon.
When creating a new DB2-connection: On the dialog box for 'New /Select Database Connection', click the DB2 tab, and on the field labelled 'Database' you enter your database-name followed by a colon, followed by your property=value (connection attribute), followed by a semicolon.
When you want to alter the properties of an existing DB2 connection, right click that DB2-connection icon and choose properties, and adjust the database name in the same pattern as above, then test and save.
For example, in my case the database name is SAMPLE and if I want the application to show the correlation-ID names from my queries then I use for the database-name:
SAMPLE:useJDBC4ColumnNameAndLabelSemantics=No;
The same labels for result sets as given in my queries then appear in the Query Result pane of Oracle SQL Developer.
Tested with DB2 v11.1.2.2 with db2jcc4.jar and Oracle SQL Developer 17.2.0.188
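For reference, the same attribute syntax should also work when building the full JDBC URL by hand; the host, port, and database name below are placeholders:
jdbc:db2://dbhost:50000/SAMPLE:useJDBC4ColumnNameAndLabelSemantics=No;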

Jasper server community edition installation issues for Postgres

I installed the WAR file distribution using the install scripts in buildomatic. The installation is successful, but when I boot the Tomcat server it shows some database exceptions:
https://gist.github.com/shruti-palshikar/5ae801674dbd2a537518
I checked that the latest Postgres driver exists in tomcat/lib.
I also checked that the database 'jasperserver' has all the necessary tables.
However, these tables are empty. Does anyone know which script loads data into the tables?
Any help is appreciated.
The actual error from PostgreSQL is:
relation "jiresourcefolder" does not exist
The query seems to be:
select this_.id as id5_0_, this_.version as version5_0_, this_.uri as uri5_0_, this_.hidden as hidden5_0_, this_.name as name5_0_, this_.label as label5_0_, this_.description as descript7_5_0_, this_.parent_folder as parent8_5_0_, this_.creation_date as creation9_5_0_, this_.update_date as update10_5_0_
from JIResourceFolder this_ where (this_.uri=?)
Typical ugly framework-generated SQL.
There are only two possibilities:
There is no table "jiresourcefolder", "JIResourceFolder" or any other variation in capitals.
The table was created with quotes to preserve its case and the query is not using quotes.
The following will work:
CREATE TABLE JiReSoUrCeFoLdEr ...
SELECT * FROM jiresourcefolder...
SELECT * FROM JIRESOURCEFOLDER...
SELECT * FROM JIresourceFolder...
Any unquoted table (or column) names are internally mapped to lower-case, so they will all match.
If however you quote a created table:
CREATE TABLE "JIResourceFolder"
SELECT * FROM "JIResourceFolder" -- works
SELECT * FROM JIResourceFolder -- doesn't
Check your database schema and see if you have this table and whether it is all lower-case. Then, check the documentation for your java framework(s) and see if there is some flag that controls quoting of database tables. It seems likely that the flag is set in one place and not in another.
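A quick way to see how the name was actually stored (and therefore whether it was created quoted) is to look it up case-insensitively in the PostgreSQL catalog; a minimal sketch:
-- If tablename comes back mixed-case, the table was created with quotes
SELECT schemaname, tablename
FROM pg_catalog.pg_tables
WHERE lower(tablename) = 'jiresourcefolder';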
I just had the same issue in Jasper Studio.
My problem was that the wrong Data Adapter (one for a DB that did not have such a table) was assigned to the report.
I had to switch to the Design window and select the right Data Adapter in the upper right of that window, right beside "Settings".

SQL Server OpenQuery() behaving differently than a direct query from TOAD

The following query works efficiently when run directly against Oracle 11 using TOAD (with native Oracle drivers)
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date('12/31/9999','mm/dd/yyyy')
and rgn_nm = 'Boston'
) ...
;
The exact same query "never" returns if passed from SQL Server 2008 to the same Oracle database via openquery(). SQL Server has a link to the Oracle database using an Oracle Provider OLE DB driver.
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
The query doesn't return in a reasonable amount of time, and the user kills the query. I don't know if it would eventually return with the correct result.
This result where the direct TOAD query works efficiently and the openquery() version "never" returns is reproducible.
A small modification to the openquery() gives the correct efficient result: Change eff_endt to trunc(eff_endt).
That is well and good, but it doesn't seem like the change should be necessary.
openquery() is supposed to be a pass-through, so how can there be a difference between the TOAD and openquery() behavior?
The reason we care is because we frequently develop complex queries with TOAD directly accessing Oracle. Once we have the query functioning and optimized, we convert it to an openquery() string for use in a SQL Server application. It is extremely aggravating to have a query suddenly fail with openquery() when we know it worked as a direct query. Then we have to search for a work-around through trial and error.
I would like to see the Oracle trace files for the two scenarios, but the Oracle server is within another organization, and we are not getting cooperation from the Oracle DBAs.
Does anyone know of any driver, or TOAD, or ??? issues that could account for the discrepancy? Is there any way to eliminate the problem such that both methods always give the same result?
I know you asked this a while ago but I just came across your question.
I agree, they should be the same. Obviously there is a difference. We need to find out where the difference is.
I am thinking out loud as I type...
What happens if you specify just a few columns instead of select * from openquery?
How many rows are supposed to be returned?
What if, in the oracle select, you limit the returned rows?
How quickly does the openquery timeout?
Are TOAD and SS on the same machine? Are you RDPing into the SS and running toad from there?
Are they using the same drivers? including bit? (32/64) version?
Are they using the same account on oracle?
It is interesting that using the trunc() makes a difference. I assume [eff_endt] is one of the returned fields?
I am wondering if SS is getting all the rows back but it is choking on doing the date conversions. The date type in oracle may need to be converted to a ss date type before ss shows it to you.
What if you insert the rows from the openquery into a table where the date field is just a (n)varchar (a rough table sketch follows the query below)? I am thinking ss might just dump the date it is getting back from oracle into that text field without trying to convert it.
something like:
insert into mytable(f1,f2,f3,datetimeX)
select f1,f2,f3,datetimeX from openquery( servername, '
select f1,f2,f3,datetimeX from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
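A rough sketch of the staging table that insert assumes; the column names and lengths are made up, the point is simply that the date column is plain text so ss never has to convert it while fetching from oracle:
-- hypothetical staging table: datetimeX stays as text, no date conversion
create table mytable (
    f1        nvarchar(100),
    f2        nvarchar(100),
    f3        nvarchar(100),
    datetimeX nvarchar(50)
);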
What if toad or ss is modifying the query statement before sending it to oracle? You could fire up Wireshark and see what toad and ss are actually sending.
I would be very curious if you get this resolved. I link ss to oracle often and have not run into this issue.
Here are basic things you can check for to see what the database is doing after it receives the query. First, check that the execution plans are the same in TOAD as when the query runs using openquery. You could plan the query yourself in TOAD using:
explain plan set statement_id = 'openquery_test' for <your query here>;
select *
from table(dbms_xplan.display(statement_id => 'openquery_test'));
then have someone initiate the query using openquery() and have someone with permission to view the v$ tables run:
select sql_id from v$session where username = '<user running the query>';
(If there's more than one connection with the same user, you'll have to find an additional attribute to isolate the row representing the session running the query.)
select *
from table(dbms_xplan.display_cursor('<value from query above>'));
If those look the same then I'd move on to checking database waits and see what it's stuck on.
select se.username
, sw.event
, sw.p1text
, sw.p2text
, sw.p3text
, sw.wait_time_micro/1000000 as seconds_in_wait
, sw.state
, sw.time_since_last_wait_micro/1000000 as seconds_since_last_wait
from v$session se
inner join
v$session_wait sw
on se.sid = sw.sid
where se.username = '<user running the query>'
;
(again, if there's more than one session with the same username, you'll need another attribute to whittle it down to the one you're interested in.)
If the plans are different, then you need to find out why, or if they're the same, look into what it's waiting on (e.g. SQL*Net message to client?) and why.
I noticed a difference using OLEDB through MS Access (2013) connecting to Oracle 10g & 11g tables, in that it did not always recognize indexes or primary keys on the Oracle tables properly. The same query through an MS Access 2000 database (using odbc) worked fine / had no problem with indexes & keys. The only way I found to fix the OLEDB version was to include all of the key fields in the SELECT -- which was not a satisfying answer, but it's all I could find. This might be an option to try through SSMS / OpenQuery(...) as well.
Besides that... you can try some alternatives to OPENQUERY (both sketched below), such as:
4-part names: SELECT ... FROM Server..Schema.Table
Execute AT: EXEC ('select ...') AT LinkedServerName
But as for why the OLEDB provider works differently than the native Oracle Provider -- the providers are not identical, and the native provider would be more likely to pave-over Oracle quirks than the more generic OLEDB provider would.
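Hedged sketches of both alternatives, reusing DWH_LINK and TABLEA from the question; the schema name is a placeholder:
-- Four-part name (the catalog part is omitted for Oracle, hence the double dot)
SELECT * FROM DWH_LINK..SCHEMA_NAME.TABLEA;

-- EXECUTE ... AT sends the statement to Oracle as written
-- (requires the linked server's "RPC Out" option to be enabled)
EXEC ('SELECT * FROM TABLEA WHERE rownum <= 10') AT DWH_LINK;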

Transaction context in use by another session

I have a table called MyTable on which I have defined a trigger, like so:
CREATE TRIGGER dbo.trg_Ins_MyTable
ON dbo.MyTable
FOR INSERT
AS
BEGIN
SET NOCOUNT ON;
insert SomeLinkedSrv.Catalog.dbo.OtherTable
(MyTableId, IsProcessing, ModifiedOn)
values (-1, 0, GETUTCDATE())
END
GO
Whenever I try to insert a row in MyTable, I get this error message:
Msg 3910, Level 16, State 2, Line 1
Transaction context in use by another session.
I have SomeLinkedSrv properly defined as a linked server (for example, select * from SomeLinkedSrv.Catalog.dbo.OtherTable works just fine).
How can I avoid the error and successfully insert record+execute the trigger?
Loopback linked servers can't be used in a distributed transaction if MARS is enabled.
Loopback linked servers cannot be used in a distributed transaction.
Trying a distributed query against a loopback linked server from
within a distributed transaction causes an error, such as error 3910:
"[Microsoft][ODBC SQL Server Driver][SQL Server]Transaction context in
use by another session." This restriction does not apply when an
INSERT...EXECUTE statement, issued by a connection that does not have
multiple active result sets (MARS) enabled, executes against a
loopback linked server. Note that the restriction still applies when
MARS is enabled on a connection.
http://msdn.microsoft.com/en-us/library/ms188716(SQL.105).aspx
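One quick way to confirm that a linked server really is a loopback (i.e. it points back at the same instance) is to compare its data source with the local server name; a minimal check using sys.servers:
-- Linked servers whose data_source matches @@SERVERNAME are loopbacks
SELECT s.name, s.data_source, @@SERVERNAME AS local_server
FROM sys.servers AS s
WHERE s.is_linked = 1;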
I solved it.
I was using a linked server to call a second procedure, and inside that procedure I was using the same linked server again.
It's very easy once you know the restrictions of linked servers.
I resolved it by removing the linked-server reference used inside the stored procedure, since the stored procedure itself was being called over that same linked server. It wasn't working in the DEV environment.
One cause of this situation is a trigger that writes to a table in a linked-server database (the SQL Server version involved can also play a role). To avoid this error during query execution, we can temporarily disable the relevant triggers, run the update, and re-enable them afterwards, guarding each step with a database-name check. Here is an example:
-- @PersonId / @PersonIdRight are assumed to be declared and set earlier
Select * From People Where PersonId In (@PersonId, @PersonIdRight)

IF 'DOUBLE' = DB_NAME()
    ALTER TABLE [dbo].[PeopleSites] DISABLE TRIGGER [PeopleSites_ENTDB_UPDATE]

Update PeopleSites Set PersonId = @PersonIdRight Where PersonId = @PersonId

IF 'DOUBLE' = DB_NAME()
    ALTER TABLE [dbo].[PeopleSites] ENABLE TRIGGER [PeopleSites_ENTDB_UPDATE]

Select * From PeopleSites Where PersonId In (@PersonId, @PersonIdRight)
I also got the same error in our DEV environment; moving the linked databases to another SQL instance resolved the issue. In our production environment these databases are already on separate instances.
In my case I was using SQL Server 2005 and got "transaction context in use by another session" when running INSERT...EXEC over a linked server. The fix for me was to patch from SP2 build 3161 to SP3. SP2 Cumulative Update 5 is supposed to fix it as well.
https://support.microsoft.com/en-us/kb/947486
When the remote database sits on the same server, configure the linked server without specifying the database server IP/hostname and port. Just the database name should be sufficient.
I was getting the same "transaction context in use by another session" error when trying to run an UPDATE query:
BEGIN TRAN
--ROLLBACK TRAN
--COMMIT TRAN
UPDATE did
SET did.IsProcessed = 0,
did.ProcessingLockID = NULL
FROM [proddb\production].DLP.dbo.tbl_DLPID did (NOLOCK)
WHERE did.dlpid IN ('bunch of GUIDs')
--WHERE did.DLPID IN (SELECT DLPID FROM #TableWithData)
However, I didn't realize I was already running this on the DLP database on the ProdDb\Production server. Once I removed the "[proddb\production].DLP.dbo." prefix from the query (as sketched below), it worked fine.
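For illustration, the same statement with the four-part prefix dropped (the session was already connected to that DLP database) would look roughly like this:
BEGIN TRAN
--ROLLBACK TRAN
--COMMIT TRAN

-- same update, but the table is referenced locally instead of via the linked server
UPDATE did
SET did.IsProcessed = 0,
    did.ProcessingLockID = NULL
FROM dbo.tbl_DLPID did
WHERE did.dlpid IN ('bunch of GUIDs')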