How to Fix Postgres Numeric Automatically Becoming 6 Decimal Places When Using INSERT INTO OPENQUERY from SQL Server - postgresql

I ran into an issue where Postgres decimal places automatically become 6 places when I try to insert data from SQL Server into Postgres using OPENQUERY.
I have searched many references that suggest using CAST or CONVERT to limit the decimal places on the SQL Server side. Everything works fine when I just run the SELECT on the SQL Server side (the value is 0.001), but whenever I run a query like the one below, the value in Postgres becomes 0.001000 (for example in the 'Rounding' column).
For example:
INSERT INTO OPENQUERY(RND,
'SELECT
name,
rounding
FROM test.public.product_uom')
SELECT
UoMID,
0.001
FROM dbo.tUoM
WHERE UoMID IN ('YEAR', 'ZAK');
The expected result I would like is for Rounding to keep the same value, 0.001, after the INSERT from SQL Server into Postgres. Any help or suggestions will be appreciated, and thanks in advance.
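One thing worth trying, as a hedged sketch rather than a confirmed fix: in Postgres, a column declared as plain numeric (no precision or scale) keeps whatever scale the incoming value carries, so if the provider ships the literal across as a decimal with scale 6, Postgres will faithfully store 0.001000. Pinning the column to an explicit scale makes Postgres normalize the value on insert; the precision of 18 below is an arbitrary assumption.
-- Run on the Postgres side. numeric(18,3) forces every stored value
-- to exactly 3 decimal places, so an incoming 0.001000 becomes 0.001.
ALTER TABLE public.product_uom
ALTER COLUMN rounding TYPE numeric(18,3);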

Related

PostgreSQL in list, mixing string and int

We are using PostgreSQL 11 and have a query against the Redmine database. It is a query that works fine in MySQL 8, but on PostgreSQL we get an error.
SELECT fixed_version_id
FROM issues WHERE
((issues.fixed_version_id IN ('current_version','2')));
ERROR: invalid input syntax for integer: "current_version"
LINE 1: ...d FROM issues WHERE ((issues.fixed_version_id IN ('current_v...
I understand that fixed_version_id is an int and that I am querying strings. However, in other SQL databases like MySQL 8 you can do this and it actually returns values, but in PostgreSQL we get an error. Not sure if we have it set up wrong or if this is just the way PostgreSQL works?
Any help would be most appreciated thank you.
We ran the query above and were expecting not to get an error.
SQL is a strongly typed language (it seems MySQL does not adhere to the standard). The only correction is to use the correct type, in this case integer. But you can CAST the integer to text and compare.
SQL Standard:
WHERE ((cast (issues.fixed_version_id as text) IN ('current_version','2')));
Postgresql extension:
WHERE ((issues.fixed_version_id::text IN ('current_version','2')));
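A self-contained way to see the behaviour, using a throwaway table standing in for the real Redmine schema:
-- hypothetical data; the real issues table has many more columns
CREATE TEMP TABLE issues (fixed_version_id integer);
INSERT INTO issues VALUES (2), (3);
SELECT fixed_version_id
FROM issues
WHERE fixed_version_id::text IN ('current_version', '2');
-- returns 2; 'current_version' can never match, since an integer
-- rendered as text is always a string of digits
Note that once the column is cast to text, an index on fixed_version_id will not be used for this predicate.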

PostGIS ST_X() precision behaviour

We are investigating using PostGIS to perform some spatial filtering of data that has been gathered from a roving GPS engine. We have defined some start and end points that we use in our processing, with the following table structure:
CREATE TABLE IF NOT EXISTS tracksegments
(
idtracksegments bigserial NOT NULL,
name text,
approxstartpoint geometry,
approxendpoint geometry,
maxpoints integer
);
If the data in this table is queried:
SELECT ST_AsText(approxstartpoint) FROM tracksegments
we get ...
POINT(-3.4525845 58.5133318)
Note that the Long/Lat points are given to 7 decimal places.
To get just the longitude element, we tried:
SELECT ST_X(approxstartpoint) AS long FROM tracksegments
we get ...
-3.45
We need much more precision than the 2 decimal places that are returned. We've searched the documentation and there does not appear to be a way to set the level of precision. Any help would be appreciated.
Vance
Your problem is definitely client related. Your client is most likely truncating double precision values for some reason. As ST_AsText returns a text value, it does not get affected by this behaviour.
ST_X does not truncate the coordinate's precision like that, e.g.
SELECT ST_X('POINT(-3.4525845 58.5133318)');
st_x
------------
-3.4525845
(1 row)
Tested with psql on PostgreSQL 9.5 + PostGIS 2.2 and on PostgreSQL 12.3 + PostGIS 3.0, and with pgAdmin III.
Note: PostgreSQL 9.5 is a pretty old release! Besides the fact that it will reach EOL next January, you're missing really kickass features in the newer releases. I sincerely recommend planning a system upgrade as soon as possible.
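If the client cannot easily be fixed, one possible server-side workaround (my assumption, not part of the original answer) is to do the conversion to text in the query itself, so the client never receives a double precision value it could truncate:
SELECT ST_X(approxstartpoint)::text AS long
FROM tracksegments;
-- the value is rendered to text inside Postgres, so its full
-- precision survives whatever formatting the client applies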

Import interval column from teradata into SQL Server

I am trying to import a table from Teradata into SQL Server, using a linked server. This table has a column with data type interval time to minute that is causing the issue. The value in the column is 0:18.
In SQL Server, I do something like this :
select <mycol>
into <SQLTable>
from openquery(<TERADATADB>,
'select *
from <TDBD>.<TDTable>') T
I get a result which is a binary(6), and the value is 0x000000000000, i.e. six zero bytes.
I would expect 0x303A31380000, which is the binary equivalent of '0:18' as returned by the cast below when run in SQL Server:
select cast('0:18' as binary(6))
Do you have any idea why this happens? Should I reinstall my SQL Server?
I have SQL Server 2014 Express installed on my Surface Pro, Windows 10, 64 bit.
Thanks for your help.
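Since the provider apparently cannot map Teradata's INTERVAL type, one untested workaround is to cast the interval to a character type inside the pass-through query, so that only plain text crosses the linked server. The placeholders are the ones from the question:
select mycol
into <SQLTable>
from openquery(<TERADATADB>,
'select cast(<mycol> as varchar(10)) as mycol
from <TDBD>.<TDTable>') T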

SQL Server OpenQuery() behaving differently than a direct query from TOAD

The following query works efficiently when run directly against Oracle 11 using TOAD (with native Oracle drivers)
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date('12/31/9999','mm/dd/yyyy')
and rgn_nm = 'Boston'
) ...
;
The exact same query "never" returns if passed from SQL Server 2008 to the same Oracle database via openquery(). SQL Server has a link to the Oracle database using an Oracle Provider OLE DB driver.
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
The query doesn't return in a reasonable amount of time, and the user kills the query. I don't know if it would eventually return with the correct result.
This result where the direct TOAD query works efficiently and the openquery() version "never" returns is reproducible.
A small modification to the openquery() gives the correct efficient result: Change eff_endt to trunc(eff_endt).
That is well and good, but it doesn't seem like the change should be necessary.
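For reference, the working variant changes only that one expression inside the openquery() string, everything else staying as above:
where trunc(eff_endt) = to_date(''12/31/9999'',''mm/dd/yyyy'')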
openquery() is supposed to be pass through, so how can there be a difference between the TOAD and openquery() behavior?
The reason we care is because we frequently develop complex queries with TOAD directly accessing Oracle. Once we have the query functioning and optimized, we convert it to an openquery() string for use in a SQL Server application. It is extremely aggravating to have a query suddenly fail with openquery() when we know it worked as a direct query. Then we have to search for a work-around through trial and error.
I would like to see the Oracle trace files for the two scenarios, but the Oracle server is within another organization, and we are not getting cooperation from the Oracle DBAs.
Does anyone know of any driver, or TOAD, or ??? issues that could account for the discrepancy? Is there any way to eliminate the problem such that both methods always give the same result?
I know you asked this a while ago but I just came across your question.
I agree, they should be the same. Obviously there is a difference. We need to find out where the difference is.
I am thinking out loud as I type...
What happens if you specify just a few columns instead of select * from openquery?
How many rows are supposed to be returned?
What if, in the oracle select, you limit the returned rows?
How quickly does the openquery timeout?
Are TOAD and SS on the same machine? Are you RDPing into the SS and running toad from there?
Are they using the same drivers? including bit? (32/64) version?
Are they using the same account on oracle?
It is interesting that using the trunc() makes a difference. I assume [eff_endt] is one of the returned fields?
I am wondering if SS is getting all the rows back but it is choking on doing the date conversions. The date type in oracle may need to be converted to a ss date type before ss shows it to you.
What if you insert the rows from the openquery into a table where the date field is just a (n)varchar? I am thinking SS might just dump the date it is getting back from Oracle into that text field without trying to convert it.
something like:
insert into mytable(f1,f2,f3,datetimeX)
select f1,f2,f3,datetimeX from openquery( servername, '
select f1,f2,f3,datetimeX from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
What if TOAD or SS is modifying the query statement before sending it to Oracle? You could fire up Wireshark and see what TOAD and SS are actually sending.
I would be very curious if you get this resolved. I link ss to oracle often and have not run into this issue.
Here are basic things you can check for to see what the database is doing after it receives the query. First, check that the execution plans are the same in TOAD as when the query runs using openquery. You could plan the query yourself in TOAD using:
explain plan set statement_id = 'openquery_test' for <your query here>;
select *
from table(dbms_xplan.display(statement_id => 'openquery_test'));
then have someone initiate the query using openquery() and have someone with permissions to view v$ tables run:
select sql_id from v$session where username = '<user running the query>';
(If there's more than one connection with the same user, you'll have to find an additional attribute to isolate the row representing the session running the query.)
select *
from table(dbms_xplan.display_cursor('<value from query above>'));
If those look the same then I'd move on to checking database waits and see what it's stuck on.
select se.username
, sw.event
, sw.p1text
, sw.p2text
, sw.p3text
, sw.wait_time_micro/1000000 as seconds_in_wait
, sw.state
, sw.time_since_last_wait_micro/1000000 as seconds_since_last_wait
from v$session se
inner join
v$session_wait sw
on se.sid = sw.sid
where se.username = '<user running the query>'
;
(again, if there's more than one session with the same username, you'll need another attribute to whittle it down to the one you're interested in.)
If the plans are different, then you need to find out why; if they're the same, look into what it's waiting on (e.g. SQL*Net message to client?) and why.
I noticed a difference using OLEDB through MS Access (2013) connecting to Oracle 10g & 11g tables, in that it did not always recognize indexes or primary keys on the Oracle tables properly. The same query through an MS Access 2000 database (using odbc) worked fine / had no problem with indexes & keys. The only way I found to fix the OLEDB version was to include all of the key fields in the SELECT -- which was not a satisfying answer, but it's all I could find. This might be an option to try through SSMS / OpenQuery(...) as well.
Besides that... you can try some alternatives to OPENQUERY, such as:
4-part names: SELECT ... FROM Server..Schema.Table
Execute AT: EXEC ('select...') at linked server
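A minimal sketch of both alternatives, assuming a linked server named ORCL and an Oracle schema RPT_OWNER (both hypothetical names):
-- 4-part name; the catalog part is empty for Oracle, hence the double dot
SELECT ocd FROM ORCL..RPT_OWNER.RPTOFC;
-- EXECUTE ... AT sends the string to Oracle verbatim
-- (requires the "RPC Out" option to be enabled on the linked server)
EXEC ('select ocd from rptofc where rgn_nm = ''Boston''') AT ORCL;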
But as for why the OLEDB provider works differently than the native Oracle Provider -- the providers are not identical, and the native provider would be more likely to pave-over Oracle quirks than the more generic OLEDB provider would.

Why won't this SQL Server CE command work?

I have the command:
INSERT INTO tbl_media
(DateAdded) VALUES (GetDate())
SELECT CAST(@@Identity AS int)
It works fine against a standard SQL Server database, but against a CE database I get the following error:
SQL Execution Error.
Executed SQL statement...
Error Source: SQL Server Compact ADO.NET Data Provider
Error Message: There was an error parsing the query. [Token line number = 2, Token line offset = 31, Token in error = )]
Shame the error isn't more useful. Does anyone know what could be going on?
Cheers
UPDATE:
After much messing around with the Visual Studio editor (rubbish), I downloaded dataport and read the MSDN. It seems there are 2 problems...
1) SELECT CAST(@@Identity AS int) is not valid SQL;
SELECT @@Identity is.
2) SQL CE does not like it when I put these two commands together:
INSERT INTO tbl_media
(DateAdded) VALUES (getdate())
SELECT @@Identity
If I do the insert and select at different times, it works. So how do I get round this? I can't do them at different times; I need to know the ID of the objects as I create them!
UPDATE 2:
According to the very helpful Erik E, you can't run 2 statements at the same time. So the following parses as correct but won't work:
INSERT INTO tbl_media
(DateAdded) VALUES (getdate());
SELECT @@Identity;
So what I really want to know is: how do I guarantee that identities won't get mixed up when adding records?
I.e. what if someone creates a record while someone else is getting the identity for one they have just inserted?
You have an extra ). I don't know if that will fix your error, but look at VALUES: you have
VALUES (GETDATE())) <-- one ) extra.
Change it to this:
INSERT INTO tbl_media(DateAdded)
VALUES (GetDate())
SELECT CAST(@@Identity AS int)
You can only run a single SQL statement per ExecuteNonQuery call, so you must split it into 2 calls.
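On the concurrency worry from the question: @@IDENTITY is scoped to the connection, so another user inserting at the same time cannot change the value your connection sees. A sketch of the two-call pattern, with each statement issued as its own command (my reading of the fix, not verified against the CE documentation):
-- Call 1: its own ExecuteNonQuery
INSERT INTO tbl_media (DateAdded) VALUES (GETDATE());
-- Call 2: a separate ExecuteScalar on the SAME open connection
SELECT @@Identity;
-- per-connection scoping means concurrent inserts from other
-- connections cannot leak into this result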