Storing a decimal into pgtype.Numeric - PostgreSQL

I was working with pgtype.NumRange. The lower and upper bounds are typed pgtype.Numeric. pgtype.Numeric has the following fields:
type Numeric struct {
    Int              *big.Int
    Exp              int32
    Status           Status
    NaN              bool
    InfinityModifier InfinityModifier
}
I guess I'm supposed to be able to represent any number using the combination of the Int and Exp fields, but I don't know how, since big.Int and `int32` are integer types.
From the Postgres docs, I can see that numrange can take decimal values for both the lower and upper bounds, and that the numeric data type is a superset of the decimal type. So I know I should be able to use decimals, but how can I go about this?
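For anyone looking for a concrete starting point, here is a minimal sketch of how the two fields combine, assuming pgx's pgtype v4 (where the range type is spelled pgtype.Numrange and bound kinds such as pgtype.Inclusive exist): a Numeric holds the value Int × 10^Exp, so a decimal is stored by shifting its digits into the big integer and recording the shift in Exp. The bound values 1.5 and 2.25 are made-up examples, and the detail that Set accepts a decimal string is my recollection of v4 rather than something stated in the question.

package main

import (
    "fmt"
    "math/big"

    "github.com/jackc/pgtype"
)

func main() {
    // A pgtype.Numeric encodes the value Int * 10^Exp, so decimals are
    // expressed by shifting the digits into a big integer:
    //   1.5  -> Int = 15,  Exp = -1
    //   2.25 -> Int = 225, Exp = -2
    lower := pgtype.Numeric{Int: big.NewInt(15), Exp: -1, Status: pgtype.Present}
    upper := pgtype.Numeric{Int: big.NewInt(225), Exp: -2, Status: pgtype.Present}

    // Set can also parse a value instead of building the struct by hand
    // (in v4 it accepts Go numbers and, as far as I recall, decimal strings).
    var pi pgtype.Numeric
    if err := pi.Set("3.14"); err != nil {
        panic(err)
    }

    // The bounds then go into the range value, here [1.5, 2.25).
    nr := pgtype.Numrange{
        Lower:     lower,
        Upper:     upper,
        LowerType: pgtype.Inclusive,
        UpperType: pgtype.Exclusive,
        Status:    pgtype.Present,
    }

    fmt.Println(nr.Lower.Int, nr.Lower.Exp) // 15 -1
    fmt.Println(pi.Int, pi.Exp)             // 314 -2
}

Building the struct directly relies only on the field definitions quoted above; if your pgtype version differs, check its own Set/Scan helpers instead.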

Related

How can I declare and initialize a constant bigger than UInt64 in Swift?

I'd like to know how I can declare and initialize a constant bigger than UInt64 in Swift.
Swift's type inference doesn't seem able to handle the number below. How should I solve this issue?
let number = 11111111222222233333333344444445555555987654321 // Error: overflow
print(number, type(of: number))
Decimal is the numeric type capable of holding the largest value in Swift. However, you can't declare a Decimal literal, since integer literals are inferred to be Int and floating-point literals are inferred to be Double, so you need to initialise the Decimal from a String literal.
let number = Decimal(string: "321321321155564654646546546546554653334334")!
From the documentation of NSDecimalNumber (whose Swift counterpart is Decimal, and hence their numeric ranges are equivalent):
An instance can represent any number that can be expressed as mantissa x 10^exponent where mantissa is a decimal integer up to 38 digits long, and exponent is an integer from –128 through 127.
If you need to be able to represent arbitrary-length numbers in Swift, you need to use a 3rd party library (or create one yourself), there's no built-in type that could handle this in Swift.

Validate if big decimal is created without any precision loss

I have a method which accepts a BigDecimal. I want to make sure its decimal value has no precision loss, i.e. the input to the BigDecimal is stored exactly as it was given. My understanding is that precision loss happens when I convert a double that cannot represent the decimal value exactly into a BigDecimal, and that it is therefore recommended to create the BigDecimal from the string representation of the value. Basically, I want to verify that the BigDecimal was constructed using BigDecimal(String) for such doubles.
As I understand the docs, the double values that cause precision loss during BigDecimal conversion are the ones whose decimal form cannot be represented exactly in 64 bits; 0.1 is an example. So the string and double representations of such BigDecimals won't match. Is it enough to say that precision loss has occurred when the string and double values don't match?
Eg:
BigDecimal decimal = new BigDecimal(0.1);
System.out.println(decimal.toString()); // prints 0.1000000000000000055511151231257827021181583404541015625
System.out.println(decimal.doubleValue()); // prints 0.1
The string and double values of the BigDecimal differ, so precision loss happened.
This idea breaks down if you allow the BigDecimal to be the result of arithmetic.
If you are going to require the BigDecimal to be the direct, unmodified result of conversion of a decimal string it would be much simpler to require a String argument and convert it to BigDecimal in your method.
The following program is an attempt to implement and test your validity check. The variable third was calculated without any involvement of doubles, using only decimal strings and BigDecimal, but fails the test.
import java.math.BigDecimal;
import java.math.RoundingMode;

public strictfp class Test {
    public static void main(String[] args) {
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal("3"), 30, RoundingMode.HALF_EVEN);
        testIt(new BigDecimal("0.1"));
        testIt(new BigDecimal(0.1));
        testIt(third);
    }

    static void testIt(BigDecimal in) {
        System.out.println(in + " " + isValid(in));
    }

    static boolean isValid(BigDecimal in) {
        double d = in.doubleValue();
        String s1 = in.toString();
        String s2 = Double.toString(d);
        return s1.equals(s2);
    }
}
Output:
0.1 true
0.1000000000000000055511151231257827021181583404541015625 false
0.333333333333333333333333333333 false

How to create a uint256 in PostgreSQL

How can I create a uint256 data type in Postgres? It looks like it only supports integers of up to 8 bytes natively.
They offer decimal and numeric types with user-specified precision. For my app, the values are money, so I would assume I would use numeric over decimal, or does that not matter?
NUMERIC(precision, scale)
So would I use NUMERIC(78, 0)? (2^256 is 78 digits.) Or do I need NUMERIC(155, 0) and force it to always be >= 0 (2^512 is 155 digits, with the extra bit representing the sign)? Or should I be using decimal?
numeric(78,0) has a max value of 9.999... * 10^77 > 2^256 so that is sufficient.
You can create a domain.
CREATE DOMAIN uint_256 AS NUMERIC NOT NULL
    CHECK (VALUE >= 0 AND VALUE < 2^256)
    CHECK (SCALE(VALUE) = 0);
This creates a reusable uint_256 data type which is constrained to the 2^256 limit and also prevents rounding errors by only allowing a scale of 0 (i.e. it throws an error for fractional values). There is nothing like NULL in Solidity, so the data type should not be nullable.
Try it: dbfiddle
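As a side note in Go, the language of the main question above, math/big can be used to sanity-check the digit-count reasoning behind NUMERIC(78, 0); this is only an illustrative sketch, not part of the original answer.

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // 2^256 computed exactly with math/big; its decimal form has 78 digits,
    // which is why NUMERIC(78, 0) covers every uint256 value.
    limit := new(big.Int).Lsh(big.NewInt(1), 256)
    maxUint256 := new(big.Int).Sub(limit, big.NewInt(1)) // largest storable value

    fmt.Println(len(limit.String()), len(maxUint256.String())) // 78 78
    fmt.Println(maxUint256)
}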

Implicit type of constant in Swift tutorial

When I do the example from the tutorial, I run into an issue in the constants and variables topic.
I would appreciate it if someone could explain this example.
When you don't specify a type, a floating point number literal will be inferred to be of type Double.
Double, as its name suggests, has double the precision of Float. So when you do:
let a = 64.1
The actual value in memory may be something like 64.099999999999991. Since Double shows only 16 significant digits, it shows 64.09999999999999, rounding off the last "1".
Why does let b: Float = 64.1 show the correct number?
When you specify the type as Float, the precision decreases. Float shows only 8 significant digits: that's 64.099999, but there's a "9" straight after it, so it rounds up to 64.1.
This has nothing to do with explicitly stating the variable type. Try specifying it to be a Double:
let b: Double = 64.1
It will still show 64.09999999999999.

Number Operations and Return Types

I am confused by what is returned when performing number operations in Swift between various types. Consider the following:
var castedFoo = Float(7.0/5.0) // returns 1.39999997...
var specifiedTypeFoo:Float = 7/5.0 //returns 1.39999997...
var foo = (7/5.0) //returns 1.4
What separates the first two from the last one? They are all returning floats, so why is the value from the last one rounded? I understand that the first is cast and the second is explicitly specified to be a Float, but the last one also returns a Float value. So what makes the difference here?
According to Swift documentation,
Unless otherwise specified, the default type of a floating-point literal is the Swift standard library type Double, which represents a 64-bit floating-point number.
In other words, the literal 5.0 is of type Double.
Your first two examples set the result type to Float; your last example keeps the result a Double, because with no contextual type both literals are inferred as Double, and dividing two Doubles yields a Double. Because of that difference, the last result has higher precision.