What is the type that SQL Server assigns to the numeric literal 2., i.e. 2 followed by a dot?
I was curious because:
select convert(varchar(50), 2.)
union all
select convert(varchar(50), 2.0)
returns:
2
2.0
which made me ask: what's the difference between 2. and 2.0, type-wise?
SQL Server seems to assign types to numeric literals based on the number itself, by finding the minimal storage type that can hold it. A value of 1222333 is stored as int, while 1152921504606846975 is stored as bigint.
Thanks.
Edit: I also want to add why this is so important. In SQL Server 2008 R2, select 2/5 returns 0 while select 2./5 returns 0.4, due to the way SQL Server types these literals. In Oracle and Access, select 2/5 (Oracle: select 2/5 from dual) returns 0.4, which is the way it should be. I wonder if they fixed this behaviour in SQL Server 2012; I would be surprised if they did.
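A quick demo of the difference (a minimal sketch; the result comments reflect the behaviour described above on SQL Server 2008 R2):
select 2/5    -- 0: both operands are int, so integer division applies
select 2./5   -- 0.4: 2. is numeric, so decimal division applies
select cast(2 as numeric(10, 4)) / 5   -- 0.4000: the usual explicit workaround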
This script might answer my question. The type of 2. is numeric(1, 0).
create table dbo.test_type (field sql_variant)
go
delete from dbo.test_type
go
INSERT INTO dbo.test_type VALUES (2.);
INSERT INTO dbo.test_type VALUES (2.0);
SELECT field
     , sql_variant_property(field, 'BaseType')  AS BaseType
     , sql_variant_property(field, 'Precision') AS Precision
     , sql_variant_property(field, 'Scale')     AS Scale
FROM dbo.test_type;
It returns:
field   BaseType   Precision   Scale
2       numeric    1           0
2.0     numeric    2           1
This is why converting 2.0 to varchar yields 2.0: SQL Server records the precision and scale of the literal.
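The same properties can be checked without the helper table by wrapping the literal in sql_variant directly (a minimal sketch using the same built-in function):
SELECT sql_variant_property(CAST(2. AS sql_variant), 'BaseType')  AS BaseType
     , sql_variant_property(CAST(2. AS sql_variant), 'Precision') AS Precision
     , sql_variant_property(CAST(2. AS sql_variant), 'Scale')     AS Scale;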
Related
The dump function in Oracle displays the internal representation of data:
DUMP returns a VARCHAR2 value containing the data type code, length in bytes, and internal representation of expr
For example:
SELECT DUMP(cast(1 as number))
FROM DUAL;
DUMP(CAST(1ASNUMBER))
--------------------------------------------------------------------------------
Typ=2 Len=2: 193,2
SELECT DUMP(cast(1.000001 as number))
FROM DUAL;
DUMP(CAST(1.000001ASNUMBER))
--------------------------------------------------------------------------------
Typ=2 Len=5: 193,2,1,1,2
It shows that the first value (1) uses 2 bytes of storage, while the second (1.000001) uses 5 bytes.
I suppose the closest function in PostgreSQL is pg_typeof, but it returns only the type name, without any information about byte usage:
SELECT pg_typeof(33);
 pg_typeof
-----------
 integer
(1 row)
Does anybody know if there is an equivalent function in PostgreSQL?
I don't speak PostgreSQL.
However, the Oracle functionality page says that there's Orafce, which
implements in Postgres some of the functions from the Oracle database that are missing (or behaving differently)
It, furthermore, mentions the dump function:
dump (anyexpr [, int]): Returns a text value that includes the datatype code, the length in bytes, and the internal representation of the expression
One of the examples looks like this:
postgres=# select pg_catalog.dump('Pavel Stehule',10);
dump
-------------------------------------------------------------------------
Typ=25 Len=17: 68,0,0,0,80,97,118,101,108,32,83,116,101,104,117,108,101
(1 row)
To me, it looks like Oracle's dump:
SQL> select dump('Pavel Stehule') result from dual;
RESULT
--------------------------------------------------------------
Typ=96 Len=13: 80,97,118,101,108,32,83,116,101,104,117,108,101
I presume you'll have to visit GitHub and install the package to see whether you can use it or not.
It is not a complete equivalent, but if you want to figure out the byte values used to encode a string in PostgreSQL, you can simply cast the value to bytea, which will give you the bytes in hexadecimal:
SELECT CAST ('schön' AS bytea);
This will work for strings, but not for numbers.
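For instance, assuming a UTF-8 server encoding (the exact bytes depend on the database encoding), the two-byte sequence c3 b6 for ö is visible in the output:
SELECT CAST ('schön' AS bytea);
     bytea
----------------
 \x736368c3b66e
(1 row)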
Good day!
I need to export/import data from SQL Server 2019 to AWS RDS running PostgreSQL 13.3
It's just a few hundred rows from a handful of tables.
This is my first ever encounter with Postgres, so I decided to simply script the data as "INSERT ... SELECT" statements, as I would with SQL Server... and I've looked into AWS Glue and RDS S3 Import - it all seems way too much for what I need.
I am using DBeaver v21 for all of this, as I have easy access to both source and destination DBs.
This I tested with success:
CREATE TABLE public.invoices (
invoiceno int8 NOT NULL GENERATED BY DEFAULT AS IDENTITY,
terminalid int4 NOT NULL,
invoicedate timestamp NOT NULL,
description varchar(100) NOT null
);
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
SELECT 7 as invoiceno , 5 as terminalid , '2018-10-24 21:29:00' as invoicedate , N'Coffe and cookie' as description
-- Updated Rows 1
-- No problem here
I scripted the rest of the data with UNION ALL, like so (shortened example):
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
SELECT 7 as invoiceno , 5 as terminalid , '2018-10-24 21:29:00' as invoicedate , N'Coffe and cookie' as description
UNION ALL
SELECT 1000, 5 , '2018-10-24 21:29:00' , N'Tea and crumpets'
and now I get:
SQL Error [42804]: ERROR: column "invoicedate" is of type timestamp without time zone but expression is of type text
Hint: You will need to rewrite or cast the expression.
Position: 118
I do see in the message that it can be "fixed" with a CAST (or rewrite!)....
but how come Postgres can convert one row implicitly, yet two rows are impossible?
Why does this fail when more than one row is being inserted? It clearly knows how to convert text -> timestamp...
I tried using VALUES, CTE, derived tables with no success.
As I have to spend more time with Postgres, I really would like to understand what's going on here. Is my syntax wrong (it works fine in SQL Server)? Is DBeaver messing something up with my data?
Any suggestions would be appreciated.
Thank you
'2018-10-24 21:29:00' is a string value, and Postgres is a bit more picky about correct data types than SQL Server.
You need to specify the value as a proper timestamp constant:
timestamp '2018-10-24 21:29:00'
Note that you can write that in a bit more compact form using a values clause:
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
values
(7, 5, timestamp '2018-10-24 21:29:00', 'Coffe and cookie'),
(1000, 5 , timestamp '2018-10-24 21:29:00' , 'Tea and crumpets');
The reason for this behaviour is the order of compilation.
When you use a UNION (much as with a view), the queries inside it are compiled first, and the column types (and names) are taken from its first part (the first SELECT command).
So you end up with text instead of timestamp, and that doesn't match the type of the target column.
The SQL Server compiler is a little bit smarter :-).
In the first example you have a simple INSERT INTO ... SELECT ..., and the compiler immediately expects a timestamp type, so it doesn't raise a compilation error (though an error can still occur at execution time if the data doesn't pass the rules of automatic conversion).
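A sketch illustrating that rule, built from the query in the question (untested): casting the literal in the first SELECT should be enough, because the union takes its column types from the first branch, and the unknown literal in the second branch is then resolved against that type.
INSERT INTO public.invoices(InvoiceNo,TerminalID,InvoiceDate,Description)
SELECT 7, 5, CAST('2018-10-24 21:29:00' AS timestamp), N'Coffe and cookie'
UNION ALL
SELECT 1000, 5, '2018-10-24 21:29:00', N'Tea and crumpets';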
I found an issue where decimal values automatically gain six decimal places when inserting data from SQL Server into Postgres using OPENQUERY.
I have searched many references that suggest using CAST or CONVERT to limit the decimal places on the SQL Server side. Everything works fine when I just run the SELECT on the SQL Server side (it returns 0.001), but whenever I run a query like the one below, in Postgres the 'Rounding' value becomes 0.001000.
For example:
INSERT INTO OPENQUERY(RND,
'SELECT
name,
rounding
FROM test.public.product_uom')
SELECT
UoMID,
0.001
FROM dbo.tUoM
WHERE UoMID IN ('YEAR', 'ZAK');
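For reference, the CAST attempt described above was presumably something along these lines (a hypothetical reconstruction - the question doesn't show the exact query, and decimal(18, 3) is a guessed target type):
INSERT INTO OPENQUERY(RND,
    'SELECT
        name,
        rounding
    FROM test.public.product_uom')
SELECT
    UoMID,
    CAST(0.001 AS decimal(18, 3))   -- hypothetical: pin the scale to 3 places
FROM dbo.tUoM
WHERE UoMID IN ('YEAR', 'ZAK');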
The expected result is that Rounding keeps the same value, 0.001, when inserted from SQL Server into Postgres. Any help or suggestions will be appreciated; thanks in advance.
I'm trying to pull data for certain dates out of a staging table where the offshore developers imported everything in the file, so I need to filter out the "non-data" rows and convert the remaining strings to datetime.
Which should be simple enough but... I keep getting this error:
The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
I've taken the query and pulled it apart, made sure there are no invalid strings left and even tried a few different configurations of the query. Here's what I've got now:
SELECT *
FROM
(
select cdt = CAST(cmplt_date as DateTime), *
from stage_hist
WHERE cmplt_date NOT LIKE '(%'
AND ltrim(rtrim(cmplt_date)) NOT LIKE ''
AND cmplt_date NOT LIKE '--%'
) f
WHERE f.cdt BETWEEN '2017-09-01' AND '2017-10-01'
To make sure the conversion is working, I can run the inner query on its own, and the cast works for all rows: I get a valid data set and no errors.
The BETWEEN clause must be throwing the error then, right? But I've cast both strings I use for it successfully, and I've even taken a value out of the table and run a test query with it, which also works successfully:
select 1
WHERE CAST(' 2017-09-26' as DateTime) BETWEEN '2017-09-01' AND '2017-10-01'
So if all the casts work individually, how come I'm getting an out-of-range error when running the real query?
I am guessing that this is due to the fact that your cmplt_date field contains values which are not valid dates. Yes, I know you are filtering them with a WHERE clause, but note that the logical processing order of the SELECT statement is not always the actual execution order. This means that the SQL engine may start performing your CAST operation before it has finished the filtering.
You are using SQL Server 2012, so you can just add TRY_CAST:
SELECT *
FROM
(
select cdt = TRY_CAST(cmplt_date as DateTime), *
from stage_hist
WHERE cmplt_date NOT LIKE '(%'
AND ltrim(rtrim(cmplt_date)) NOT LIKE ''
AND cmplt_date NOT LIKE '--%'
) f
WHERE f.cdt BETWEEN '2017-09-01' AND '2017-10-01'
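A minimal sketch of why this helps: TRY_CAST returns NULL instead of raising an error when a value cannot be converted, and NULL never satisfies the BETWEEN predicate, so the unconvertible rows simply drop out of the result:
SELECT TRY_CAST(' 2017-09-26' AS datetime)   -- 2017-09-26 00:00:00.000
SELECT TRY_CAST('(not a date)' AS datetime)  -- NULL, no error raised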
Below is code that I built from an example I found online. I can't find the link, but the code is referenced in the answers to this Stack Overflow question: Passing multiple values for a single parameter in Reporting Services.
Here is the SQL code I am working with right now within my stored procedure. It was a long procedure, so I trimmed it down to just the section I am working on, and added the DECLARE and SET for @EMPLOYEES (which is passed as a parameter from SSRS) to make the code snippet run.
DECLARE @EMPLOYEES varchar(8000)
-- EMPLOYEES is a comma separated list of EMPLOYEE IDs
-- from the SSRS report parameters. Each ID is 12 characters,
-- and there are 806 employees to choose from; when all are
-- selected, the comma separated string grows to 11,193
-- characters, much longer than 8000.
SET @EMPLOYEES = 'EMP000000001,EMP000000002,EMP000000003'
CREATE TABLE #EMPLOYEEIDS
(
    EMPLOYEEID varchar(100) NOT NULL
)
DECLARE @CharIndex AS int
DECLARE @Piece AS varchar(100)
-- Fill the #EMPLOYEEIDS table with the comma separated employee IDs
SELECT @CharIndex = 1
WHILE @CharIndex > 0 AND LEN(@EMPLOYEES) > 0
BEGIN
    SELECT @CharIndex = CHARINDEX(',', @EMPLOYEES)
    IF @CharIndex > 0
        SELECT @Piece = LEFT(@EMPLOYEES, @CharIndex - 1)
    ELSE
        SELECT @Piece = @EMPLOYEES
    INSERT INTO #EMPLOYEEIDS (EMPLOYEEID) VALUES (@Piece)
    SELECT @EMPLOYEES = RIGHT(@EMPLOYEES, LEN(@EMPLOYEES) - @CharIndex)
END
SELECT * FROM #EMPLOYEEIDS
DROP TABLE #EMPLOYEEIDS
I had 6 sets of multi-value parameters, and all of them worked fine until I found that the reports were missing much of the employee data. It turned out that the VARCHAR(8000) overflowed when all employees were selected in the report parameters (there are over 800 of them). The report would run, SQL would happily truncate the string to 8000 characters, and a quarter of the IDs were never parsed.
So I tried switching the VARCHAR to a TEXT field, and none of the parsing functions work when the parameter is set up as TEXT. I get errors like the following:
Msg 8116, Level 16, State 2, Procedure usp_QualityMonitoring_AllProfiles_SelectWithParameters, Line 89
Argument data type text is invalid for argument 1 of left function.
This is understandable; I know that many functions that work with VARCHAR do not work with TEXT. So SQL Server truncates everything after 8000 characters when I use a VARCHAR, and the procedure won't even run if I switch it to TEXT.
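The silent truncation side of this is easy to reproduce (a minimal sketch):
DECLARE @s varchar(8000)
SET @s = REPLICATE('X', 6000) + REPLICATE('Y', 6000)
SELECT LEN(@s)   -- 8000: everything past 8000 characters is silently dropped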
What other options do I have to pass multi-valued parameters from SSRS to a SQL Server stored procedure that can support this many options?
OR is there a way to fix the code in the stored procedure to parse through TEXT instead of VARCHAR? (A sketch of one possible approach follows the version note below.)
Note: I originally thought the SQL Server running the Stored Proc was 2005, but I have determined that it is not:
SELECT @@VERSION
-- Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
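One possible direction for the TEXT route mentioned above: SQL Server 2000 does not allow local variables of type text, but stored procedure parameters can be text, and SUBSTRING and PATINDEX (unlike LEFT and CHARINDEX) do accept text arguments. A rough, untested sketch under those assumptions - the procedure name and body are hypothetical:
CREATE PROCEDURE usp_ParseEmployeeIDs
    @EMPLOYEES text   -- parameters, unlike local variables, may be text
AS
BEGIN
    CREATE TABLE #EMPLOYEEIDS (EMPLOYEEID varchar(100) NOT NULL)
    DECLARE @Pos int, @Next int, @Len int
    SELECT @Pos = 1, @Len = DATALENGTH(@EMPLOYEES)
    WHILE @Pos <= @Len
    BEGIN
        -- PATINDEX works on the varchar slice; search within an 8000-char window
        SET @Next = PATINDEX('%,%', SUBSTRING(@EMPLOYEES, @Pos, 8000))
        IF @Next = 0
        BEGIN
            -- no comma left: the remainder is the last ID
            INSERT INTO #EMPLOYEEIDS VALUES (SUBSTRING(@EMPLOYEES, @Pos, @Len - @Pos + 1))
            SET @Pos = @Len + 1
        END
        ELSE
        BEGIN
            INSERT INTO #EMPLOYEEIDS VALUES (SUBSTRING(@EMPLOYEES, @Pos, @Next - 1))
            SET @Pos = @Pos + @Next
        END
    END
    SELECT * FROM #EMPLOYEEIDS
    DROP TABLE #EMPLOYEEIDS
END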