How to avoid the automatic rounding of decimal values in ExtJS? - ExtJS 3

How do I avoid the automatic rounding of decimal values in ExtJS (Ext JS 3)? For example, if I enter 2.003 it gets rounded to 2, and sometimes if I enter 2.039 it is accepted as 2.0388888887. Please help me out.

For number fields you can use the decimalPrecision config:
The maximum precision to display after the decimal separator (defaults to 2)
Technically it is simply toFixed:
fixPrecision : function(value) {
    var nan = isNaN(value);
    if (!this.allowDecimals || this.decimalPrecision == -1 || nan || !value) {
        return nan ? '' : value;
    }
    return parseFloat(parseFloat(value).toFixed(this.decimalPrecision));
},
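If three decimal places are needed, a minimal sketch of raising the precision on the field (the surrounding field config is made up for illustration):
new Ext.form.NumberField({
    fieldLabel: 'Amount',
    allowDecimals: true,
    decimalPrecision: 3 // the default of 2 is what turns 2.003 into 2
});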

Related

How to allow only 2 digits and 1 digit after the comma (.)?

I have the problem that I can't get the proper RegExp together.
My goal is to allow up to 3 digits before the comma, and ONLY IF there is a decimal, then 1 digit after the comma. Which RegExp or RegExps do I have to use for this behavior?
Wanted allowed outcomes: 000.0, 00.0, 0.0, 000, 00, 0
That's the current code, but the problem is that it also lets 4 digits through without a decimal:
inputFormatters: [
  FilteringTextInputFormatter.allow(RegExp(r'^\d{1,3}\.?\d{0,1}')),
],
I already scrolled through these but they are not working for me:
Javascript Regex to allow only 2 digit numbers and 3 digit numbers as comma separated
Javascript regex to match only up to 11 digits, one comma, and 2 digits after it
Jquery allow only float 2 digit before and after dot
Flutter - Regex in TextFormField
Allow only two decimal number in flutter input?
Try this regex:
RegExp(r'^\d{0,3}(\.\d{1})?$')
Also, I think that by "comma" you mean the decimal point (.), and I'm assuming you want up to 3 digits before the decimal and 1 digit after it.
TextFormField(
  autovalidateMode: AutovalidateMode.always,
  validator: (value) {
    return RegExp(r'^\d{0,3}(\.\d{1})?$').hasMatch(value ?? '') ? null : 'Invalid value';
  },
)
I'm not a regex expert, so I can only suggest using this helper function:
bool match(String input) {
  final parts = input.split('.');
  if (parts.length == 1) {
    // no dot in the string:
    // valid if there are fewer than 4 digits
    return input.length < 4;
  } else if (parts.length > 2) {
    // more than one dot in the string
    return false;
  } else {
    // exactly one dot:
    // valid if there are < 4 digits before the dot and exactly 1 digit after it
    return parts[0].length < 4 && parts[1].length == 1;
  }
}
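For illustration, a few quick checks of the helper (the values are arbitrary):
void main() {
  print(match('123'));   // true  - three digits, no dot
  print(match('1234'));  // false - four digits
  print(match('123.4')); // true  - three digits, one decimal digit
  print(match('1.23'));  // false - two digits after the dot
}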
An input formatter is not the right tool here, because it formats the data visually and your desired formats include each other. You should use TextFormField with the validator property, either with @Andrej's validator above or with a RegExp.
TextFormField(
  autovalidateMode: AutovalidateMode.always,
  validator: (value) {
    return RegExp(r'^\d{1,3}(\.\d)?$').hasMatch(value ?? '') ? null : 'Invalid value';
  },
)
RegExp is working here.

Select only two decimal places without rounding up

I want to select only two decimal places without rounding up.
$d = 123000.1264
'{0:f2}' -f $d
Result: 123000,13, but I need the result 123000,12
Any ideas to solve this problem?
Thank you in advance!
[Math]::Truncate(123000.1264 * 100) / 100
does it.
123000.1264 * 100 = 12300012.64
[Math]::Truncate(12300012.64) = 12300012
12300012 / 100 = 123000.12
You should use the [decimal] type for numbers when you need to preserve the accuracy of the fractional part, e.g.
$d = [decimal]123000.1264
and then [Math]::Truncate will use its decimal overload to give a decimal, and a decimal divided by an integer (or a double) will give a decimal result.
Of course, there is more than one way to interpret "up": it could mean an increase in value (3 > -5) or an increase in magnitude (|-5| > |3|). If you need the former, then use [Math]::Floor (which converts -1.1 -> -2.0) instead of [Math]::Truncate (which converts -1.1 -> -1.0).
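Putting both points together, a short sketch (reusing $d from the question):
# keep the fractional part exact by using [decimal]
$d = [decimal]123000.1264

# truncate toward zero to two decimal places
[Math]::Truncate($d * 100) / 100      # 123000.12

# for negative values, Floor rounds toward negative infinity instead
[Math]::Floor([decimal]-1.1)          # -2
[Math]::Truncate([decimal]-1.1)       # -1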

How to create a uint256 in PostgreSQL

How can I create a uint256 data type in Postgres? It looks like it only supports up to 8 bytes for integers natively.
They offer decimal and numeric types with user-specified precision. For my app, the values are money, so I would assume I would use numeric over decimal, or does that not matter?
NUMERIC(precision, scale)
So would I use NUMERIC(78, 0)? (2^256 is 78 digits) Or do I need to do NUMERIC(155, 0) and force it to always be >= 0 (2^512, 155 digits, with the extra bit representing the sign)? OR should I be using decimal?
numeric(78,0) has a max value of 9.999... * 10^77, which is greater than 2^256, so that is sufficient.
You can create a domain.
CREATE DOMAIN uint_256 AS NUMERIC NOT NULL
    CHECK (VALUE >= 0 AND VALUE < 2^256)
    CHECK (SCALE(VALUE) = 0);
This creates a reusable uint_256 datatype that is constrained to the 2^256 limit and also prevents rounding errors by only allowing a scale of 0 (i.e. it throws an error for values with a fractional part). Since there is nothing like NULL in Solidity, the datatype should not be nullable.
Try it: dbfiddle
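For illustration, using the domain in a table could look like this (the table and column names are made up):
-- hypothetical table using the uint_256 domain
CREATE TABLE balances (
    account_id bigint PRIMARY KEY,
    balance    uint_256
);

INSERT INTO balances VALUES (1, 0);    -- ok
INSERT INTO balances VALUES (2, -1);   -- rejected: fails VALUE >= 0
INSERT INTO balances VALUES (3, 1.5);  -- rejected: fails SCALE(VALUE) = 0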

Number validation and formatting

I want to format, in real time, the number entered into a UITextField. Depending on the field, the number may be an integer or a double, may be positive or negative.
Integers are easy (see below).
Doubles should be displayed exactly as the user enters with three possible exceptions:
If the user begins with a decimal separator, or a negative sign followed by a decimal separator, insert a leading zero:
"." becomes "0."
"-." becomes "-0."
Remove any "excess" leading zeros if the user deletes a decimal point:
If the number is "0.00023" and the decimal point is deleted, the number should become "23".
Do not allow a leading zero if the next character is not a decimal separator:
"03" becomes "3".
Long story short, one and only one leading zero, no trailing zeros.
It seemed like the easiest idea was to convert the (already validated) string to a number then use format specifiers. I've scoured:
https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
and
http://www.cplusplus.com/reference/cstdio/printf/
and others but can't figure out how to format a double so that it does not add a decimal when there are no digits after it, or any trailing zeros. For example:
x = 23.0
print (String(format: "%f", x))
//output is 23.000000
//I want 23
x = 23.45
print (String(format: "%f", x))
//output is 23.450000
//I want 23.45
On How to create a string with format?, I found this gem:
var str = "\(INT_VALUE) , \(FLOAT_VALUE) , \(DOUBLE_VALUE), \(STRING_VALUE)"
print(str)
It works perfectly for integers (why I said integers are easy above), but for doubles it appends a ".0" onto the first character the user enters. (It does work perfectly in a Playground, but not in my program. Why???)
Will I have to resort to counting the number of digits before and after the decimal separator and inserting them into a format specifier? (And if so, how do I count those? I know how to create the format specifier.) Or is there a really simple way or a quick fix to use that one-liner above?
Thanks!
Turned out to be simple without using NumberFormatter (which I'm not so sure would really have accomplished what I want without a LOT more work).
let decimalSeparator = NSLocale.current.decimalSeparator! as String
var tempStr: String = textField.text ?? ""
var i: Int = tempStr.count

// remove leading zeros for positive numbers (integer or real)
if i > 1 {
    while (tempStr[0] == "0" && tempStr[1] != decimalSeparator[0]) {
        tempStr.remove(at: tempStr.startIndex)
        i = i - 1
        if i < 2 {
            break
        }
    }
}

// remove leading zeros for negative numbers (integer or real)
if i > 2 {
    while (tempStr[0] == "-" && tempStr[1] == "0") && tempStr[2] != decimalSeparator[0] {
        tempStr.remove(at: tempStr.index(tempStr.startIndex, offsetBy: 1))
        i = i - 1
        if i < 3 {
            break
        }
    }
}
Using the following extension to subscript the string:
extension String {
    subscript (i: Int) -> Character {
        return self[index(startIndex, offsetBy: i)]
    }
}
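For comparison, a minimal sketch (not part of the answer above) of how NumberFormatter could produce the same "no trailing zeros" output from a plain Double:
import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.minimumFractionDigits = 0    // 23.0 prints as "23"
formatter.maximumFractionDigits = 10   // keep up to 10 decimal places as entered
formatter.usesGroupingSeparator = false

print(formatter.string(from: NSNumber(value: 23.0)) ?? "")   // "23"
print(formatter.string(from: NSNumber(value: 23.45)) ?? "")  // "23.45" (with the locale's decimal separator)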

iphone / Objective C - Comparing doubles not working

I think I'm going insane.
"counter" and "interval" are both doubles. This is happening on accelerometer:didAccelerate at an interval of (.01) . "counter" should eventually increment to "interval". For some reason i cant get this "if" to ring true.
Am I overlooking something?
double interval = .5;
if (counter == interval) { // should eventually be .50000 == .50000
    NSLog(@"Hit!");
    [self playSound];
    counter = 0;
} else {
    counter += .01;
}
NSLog(@"%f, %f, %d", counter, interval, (counter == interval));
Don't ever compare doubles or floats with equality - they might look the same at the number of significant figures you are examining, but the computer sees more.
For this purpose, the standard library provides "epsilon" values for the different floating-point types, such as FLT_EPSILON for float and DBL_EPSILON for double. If the distance between two numbers is smaller than epsilon, you can assume the two numbers are equal.
In your case, you would use it as follow:
- (BOOL)firstDouble:(double)first isEqualTo:(double)second {
    if (fabs(first - second) < DBL_EPSILON) {
        return YES;
    } else {
        return NO;
    }
}
Or in Swift 4:
func doublesAreEqual(first: Double, second: Double) -> Bool {
    if fabs(first - second) < .ulpOfOne {
        return true
    }
    return false
}
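A quick illustration of calling the Swift version (the values are just for demonstration):
let a = 0.1 + 0.2                              // actually stored as 0.30000000000000004
print(doublesAreEqual(first: a, second: 0.3))  // true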
Some very useful links:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Interesting discussion of Unit in Last Place (ULP) and usage in Swift
Friday Q&A 2011-01-04: Practical Floating Point
In your else block, you are not adding 0.01 to counter, because that is not a representable double-precision value. You are actually adding the value:
0.01000000000000000020816681711721685132943093776702880859375
Unsurprisingly, when you repeatedly add this value, you never get 0.5 exactly.
Two options: the better one is to replace the if condition with (counter >= interval). Alternatively, you could use a small power of two for the increment instead of something that cannot be represented, for example 0.0078125.
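A minimal sketch of the first option, simulating the repeated callback (the counter/interval names come from the question; the loop is just for demonstration):
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        double counter = 0.0;
        double interval = 0.5;
        int hits = 0;

        // simulate the accelerometer callback firing repeatedly
        for (int tick = 0; tick < 200; tick++) {
            if (counter >= interval) { // tolerant of the accumulated rounding error
                hits++;
                counter = 0;
            } else {
                counter += 0.01;
            }
        }
        // counter slightly overshoots 0.5, so >= catches the crossing that == misses
        NSLog(@"hits after 200 ticks: %d", hits);
    }
    return 0;
}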