Column type in PostgreSQL for floating-point numbers and integers

I have a column in Postgres where the data looks something like this:
1.8,
3.4,
7,
1.2,
3
So it contains floating-point numbers as well as integers...
What would be the right type for this kind of column?
The numeric data type?

Here is a similar question: Data types to store Integer and Float values in SQL Server
Numeric should work!
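A minimal sketch of that (the table and column names here are invented for illustration):

CREATE TABLE readings (
    id    serial PRIMARY KEY,
    value numeric  -- exact; accepts 1.8 and 7 alike, with no float rounding
);

INSERT INTO readings (value) VALUES (1.8), (3.4), (7), (1.2), (3);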
Another option is to use a VARCHAR column, and store a character representation of the value.
But it seems that you would need some kind of indicator as to which type of value was stored, and there are several drawbacks to this approach. One big drawback is that it would allow "invalid" values to be stored.
Another approach would be to use two columns, one INTEGER and the other FLOAT, specify a precedence, and allow a NULL value in the INTEGER column to indicate that the value is stored in the FLOAT column.
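A sketch of that two-column layout (names invented; the CHECK constraint is one way to enforce that exactly one of the two is set):

CREATE TABLE mixed_values (
    id        serial PRIMARY KEY,
    int_val   integer,           -- populated when the value is an integer
    float_val double precision,  -- populated otherwise
    CHECK ((int_val IS NULL) <> (float_val IS NULL))
);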
For all SQL Server data types, look here: Data types to store Integer and Float values in SQL Server

Related

DECIMAL Types in tsql

I have a table with about 200 million rows and multiple columns of data type DECIMAL(p,s) with varying precision/scale.
Now, as far as I understand, DECIMAL(p,s) is a fixed-size column, with a size depending on the precision, see:
https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql?view=sql-server-ver16
Now, when altering the table and changing a column from DECIMAL(15,2) to DECIMAL(19,6), for example, I would have expected there to be almost no work to do on the side of the SQL Server, as the bytes required to store the value are the same - yet the ALTER itself takes a long time. So what exactly does the server do when I execute the ALTER statement?
Also, is there any benefit (other than having constraints on a column) to storing a DECIMAL(15,2) instead of, for example, a DECIMAL(19,2)? It seems to me the storage requirements would be the same, but I could store larger values in the latter.
Thanks in advance!
The precision and scale of a decimal / numeric type matter considerably.
As far as SQL Server is concerned, decimal(15,2) is a different data type from decimal(19,6), and is stored differently. You therefore cannot assume that just because the overall storage requirements do not change, nothing else does.
SQL Server stores decimal data types in byte-reversed (little-endian) format, with the scale being the first incrementing value. Changing the definition can therefore require the data to be rewritten; SQL Server will use an internal worktable to safely convert the data and update the values on every page.
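A hedged sketch of the kind of change being discussed (the table and column names here are invented for illustration):

ALTER TABLE dbo.BigTable
    ALTER COLUMN Amount DECIMAL(19,6) NOT NULL;  -- was DECIMAL(15,2): both need 9 bytes on disk,
                                                 -- yet every row may still be rewritten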

How does Oracle store integers? [duplicate]

This question already has answers here: Oracle NUMBER(p) storage size? (3 answers)
Closed 1 year ago.
I have been tasked with converting some Oracle DBs to PostgreSQL, using the AWS Schema Conversion Tool. Oracle only has the Number[(precision[,scale])] type. It would seem that Oracle stores integers using fixed-point. A lot of ID columns are defined as Number(19,0), which is being converted to Numeric(19,0) by SCT.
I've even seen one (so far) simple Number which is converted to Double Precision.
Postgres has proper scalar integer types like bigint.
At first blush it seems that storing integers as fixed-point numbers would be grossly inefficient in both storage and time compared to simple integers.
Am I missing something? Does Oracle store them as efficient scalar ints under the covers?
Out of interest, what's the best type for a simple ID column in Oracle?
Oracle's number data type is a variable-length data type, which means that the value 1 uses less storage (and memory) than 123456789.
In theory number(19,0) should be mapped to bigint; however, Oracle's number(19,0) allows storing a value like 9999999999999999999, which would exceed the range of a bigint in Postgres.
The biggest value a bigint can store is 9223372036854775807 - if that is enough for you, then stick with bigint.
If you really need higher values, you will have to bite the bullet and use numeric in Postgres.
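A minimal sketch of that trade-off on the Postgres side (the table names are invented):

-- If every ID fits below 9223372036854775807, the native integer type is the efficient choice:
CREATE TABLE orders (
    id bigint PRIMARY KEY
);

-- If the full number(19,0) range must survive the migration, fall back to numeric:
CREATE TABLE orders_wide (
    id numeric(19,0) PRIMARY KEY
);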

PostgreSQL jsonb vs datetime

I need to store two dates valid_from, and valid_to.
Is it better to use two datetime fields, like valid_from:datetime and valid_to:datetime?
Or would it be better to store the data in a jsonb field validity: {"from": "2001-01-01", "to": "2001-02-02"}?
There are many more reads than writes to the database.
DB: PostgreSQL 9.4
You can use the daterange type.
For example:
'[2001-01-01, 2001-02-02]'::daterange means from 2001-01-01 to 2001-02-02, bounds inclusive.
'(2001-01-01, 2001-02-05)'::daterange means from 2001-01-01 to 2001-02-05, bounds exclusive.
Also:
Special values like infinity can be used.
lower(anyrange) => lower bound of range
And many other things, like the overlap operator; see the docs ;-)
Range Type
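For instance, a containment check with the range operator <@ might look like this (the table and column names are invented):

SELECT *
FROM   subscriptions
WHERE  '2001-01-15'::date <@ validity;  -- validity is a daterange column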
Use two timestamp columns (there is no datetime type in Postgres).
They can efficiently be indexed and they protect you from invalid timestamp values - nothing prevents you from storing "2019-02-31 28:99:00" in a JSON value.
If you very often need to use those two values to check whether another timestamp value lies in between, you could also consider a range type that stores both values in a single column.
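A minimal sketch of the two-column variant, with a CHECK constraint as one way to keep the pair consistent (table and column names are invented):

CREATE TABLE promotions (
    id         serial PRIMARY KEY,
    valid_from timestamp NOT NULL,
    valid_to   timestamp NOT NULL,
    CHECK (valid_from <= valid_to)  -- reject reversed intervals
);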

PostgreSQL queries treat int as string data types

I store the following rows in my table ('DataScreen') under a JSONB column ('Results')
{"Id":11,"Product":"Google Chrome","Handle":3091,"Description":"Google Chrome"}
{"Id":111,"Product":"Microsoft Sql","Handle":3092,"Description":"Microsoft Sql"}
{"Id":22,"Product":"Microsoft OneNote","Handle":3093,"Description":"Microsoft OneNote"}
{"Id":222,"Product":"Microsoft OneDrive","Handle":3094,"Description":"Microsoft OneDrive"}
Here, in these JSON objects, "Id" and "Handle" are integer properties and the others are string properties.
When I query my table like below
Select Results->>'Id' From DataScreen
order by Results->>'Id' ASC
I get improper results because PostgreSQL treats everything as a text column and hence does the ordering according to the text, not as integers.
Hence it gives the result as
11,111,22,222
instead of
11,22,111,222.
I don't want to use explicit casting to retrieve it, like below,
Select Results->>'Id' From DataScreen order by CAST(Results->>'Id' AS INT) ASC
because I will not be sure of the data type of the column, since the JSON structure is dynamic and the keys and values may change next time; the same could then happen with another JSON that has integer and string keys.
I want something so that Integers in Json structure of JSONB column are treated as integers only and not as texts (string).
How do I write my query so that Id And Handle are retrieved as Integer Values and not as strings , without explicit casting?
I think your assumptions about the id field don't make sense. You said:
(a) Either id contains integers only or
(b) it contains strings and integers.
I'd say,
If (a) then numerical ordering is correct.
If (b) then lexical ordering is correct.
But if (a) for some time and then (b) then the correct order changes, too. And that doesn't make sense. Imagine:
For the current database you expect the order 11,22,111,222. Then you add a row
{"Id":"aa","Product":"Microsoft OneDrive","Handle":3095,"Description":"Microsoft OneDrive"}
and suddenly the correct order of the other rows changes to 11,111,22,222,aa. That sudden change is what bothers me.
So I would either expect a lexical ordering ab initio, or restrict my id field to integers and use explicit casting.
Every other option I can think of is just not practical. You could, for example, create a custom < and > implementation for your id field which results in 11,22,111,222,aa ("order all integers by numerical value and all strings by lexical order, and put all integers before the strings").
But that is a lot of work (it involves a custom data type, a custom cast function and a custom operator function) and yields some counterintuitive results, e.g. 11,22,111,222,0a,1a,2a,aa (note the position of 0a and so on; they come after 222).
Hope that helps ;)
If Id is always an integer, you can cast it in the select list and just use ORDER BY 1:
select (Results->>'Id')::int From DataScreen order by 1 ASC
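Another option worth knowing: -> (unlike ->>) returns jsonb rather than text, and jsonb comparison sorts numbers by their numeric value (values of different JSON types sort into separate groups), so ordering by the jsonb value avoids the cast. A minimal sketch:

Select Results->>'Id' From DataScreen
order by Results->'Id' ASC  -- jsonb ordering: numbers compare numerically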

How do I determine if a Field in Salesforce.com stores integers?

I'm writing an integration between Salesforce.com and another service, and I've hit a problem with integer fields. In Salesforce.com I've defined a field of type "Number" with "Decimal Places" set to "0". In the other service, it is stored definitively as an integer. These two fields are supposed to store the same integral numeric values.
The problem arises once I store a value in the Salesforce.com variant of this field. Salesforce.com will return that same value from its Query() and QueryAll() operations with an amount of precision incorrectly appended.
As an example, if I insert the value "827" for this field in Salesforce.com, when I extract that number from Salesforce.com later, it will say the value is "827.0".
I don't want to hard-code my integration to remove these decimal values from specific fields. That is not maintainable. I want it to be smart enough to remove the decimal values from all integer fields before the rest of the integration code runs. Using the Salesforce.com SOAP API, how would I accomplish this?
I assume this will have something to do with DescribeSObject()'s "Field" property, where I can scan the metadata, but I don't see a way to extract the number of decimal places from the DescribeSObjectResult.
Ah ha! The number of decimal places is on a property called Scale on the Field object. You know you have an integer field if that's equal to "0".
Technically, sObject fields aren't integers, even if the "Decimal Places" property is set to 0. They are always Decimals with varying scale properties. This is important to remember in Apex because the methods available for Decimals aren't the same as those for Integers, and there are other potential type conversion issues (not always, but in some contexts).